DETAILED DESCRIPTION

Features of the inventive concept and methods of accomplishing the same may be understood more readily by reference to the following detailed description of embodiments and the accompanying drawings. Hereinafter, example embodiments will be described in more detail with reference to the accompanying drawings, in which like reference numbers refer to like elements throughout. The present invention, however, may be embodied in various different forms, and should not be construed as being limited to only the illustrated embodiments herein. Rather, these embodiments are provided as examples so that this disclosure will be thorough and complete, and will fully convey the aspects and features of the present invention to those skilled in the art. Accordingly, processes, elements, and techniques that are not necessary to those having ordinary skill in the art for a complete understanding of the aspects and features of the present invention may not be described. Unless otherwise noted, like reference numerals denote like elements throughout the attached drawings and the written description, and thus, descriptions thereof will not be repeated. In the drawings, the relative sizes of elements, layers, and regions may be exaggerated for clarity.

It will be understood that, although the terms “first,” “second,” “third,” etc., may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, component, region, layer or section described below could be termed a second element, component, region, layer or section, without departing from the spirit and scope of the present invention.

Spatially relative terms, such as “beneath,” “below,” “lower,” “under,” “above,” “upper,” and the like, may be used herein for ease of explanation to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or in operation, in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “below” or “beneath” or “under” other elements or features would then be oriented “above” the other elements or features. Thus, the example terms “below” and “under” can encompass both an orientation of above and below. The device may be otherwise oriented (e.g., rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein should be interpreted accordingly.

It will be understood that when an element, layer, region, or component is referred to as being “on,” “connected to,” or “coupled to” another element, layer, region, or component, it can be directly on, connected to, or coupled to the other element, layer, region, or component, or one or more intervening elements, layers, regions, or components may be present. In addition, it will also be understood that when an element or layer is referred to as being “between” two elements or layers, it can be the only element or layer between the two elements or layers, or one or more intervening elements or layers may also be present.
In the following examples, the x-axis, the y-axis, and the z-axis are not limited to three axes of a rectangular coordinate system, and may be interpreted in a broader sense. For example, the x-axis, the y-axis, and the z-axis may be perpendicular to one another, or may represent different directions that are not perpendicular to one another.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the present invention. As used herein, the singular forms “a” and “an” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes,” and “including,” when used in this specification, specify the presence of the stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. Expressions such as “at least one of,” when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list. As used herein, the terms “substantially,” “about,” and similar terms are used as terms of approximation and not as terms of degree, and are intended to account for the inherent deviations in measured or calculated values that would be recognized by those of ordinary skill in the art. Further, the use of “may” when describing embodiments of the present invention refers to “one or more embodiments of the present invention.” As used herein, the terms “use,” “using,” and “used” may be considered synonymous with the terms “utilize,” “utilizing,” and “utilized,” respectively. Also, the term “exemplary” is intended to refer to an example or illustration. When a certain embodiment may be implemented differently, a specific process order may be performed differently from the described order. For example, two consecutively described processes may be performed substantially at the same time or performed in an order opposite to the described order.

The electronic or electric devices and/or any other relevant devices or components according to embodiments of the present invention described herein may be implemented utilizing any suitable hardware, firmware (e.g., an application-specific integrated circuit), software, or a combination of software, firmware, and hardware. For example, the various components of these devices may be formed on one integrated circuit (IC) chip or on separate IC chips. Further, the various components of these devices may be implemented on a flexible printed circuit film, a tape carrier package (TCP), a printed circuit board (PCB), or formed on one substrate. Further, the various components of these devices may be a process or thread, running on one or more processors, in one or more computing devices, executing computer program instructions and interacting with other system components for performing the various functionalities described herein. The computer program instructions are stored in a memory which may be implemented in a computing device using a standard memory device, such as, for example, a random access memory (RAM). The computer program instructions may also be stored in other non-transitory computer readable media such as, for example, a CD-ROM, flash drive, or the like.
Also, a person of skill in the art should recognize that the functionality of various computing devices may be combined or integrated into a single computing device, or the functionality of a particular computing device may be distributed across one or more other computing devices without departing from the spirit and scope of the embodiments of the present invention. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the present invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and/or the present specification, and should not be interpreted in an idealized or overly formal sense, unless expressly so defined herein. Hereinafter, the present invention will be explained in detail with reference to the accompanying drawings.

FIG. 1A is a perspective view showing a display device DD in a first operation state according to an embodiment of the present disclosure. FIG. 1B is a perspective view showing the display device DD in a second operation state according to an embodiment of the present disclosure. FIG. 1C is a perspective view showing the display device DD in a third operation state according to an embodiment of the present disclosure.

Referring to FIG. 1A, a display surface IS, in which an image IM is displayed in a first operation state of the display device DD, is substantially parallel to a surface or plane defined by a first direction axis DR1 and a second direction axis DR2. A direction that is normal to the display surface IS (i.e., a thickness direction of the display device DD) indicates a third direction axis DR3. A front surface (or an “upper surface”) is distinguished from a rear surface (or a “lower surface”) by the third direction axis DR3. However, the first to third direction axes DR1 to DR3 are relative terms to each other, and thus the first to third direction axes DR1 to DR3 may be changed to any other directions. Hereinafter, first to third directions correspond to directions respectively indicated by the first to third direction axes DR1 to DR3, and thus the first to third directions are assigned with the same reference numerals as those of the first to third direction axes DR1 to DR3.

FIGS. 1A to 1C show a foldable display device as a representative example of the flexible display device DD, but it should not be limited thereto or thereby. For example, the flexible display device DD may be a rollable display device, a bendable display device, or a flat rigid display device. The flexible display device DD according to the present embodiment may be applied to a large-sized electronic item, such as a television set, a monitor, etc., and may be applied to a small and medium-sized electronic item, such as a mobile phone, a tablet, a car navigation unit, a game unit, a smart watch, etc.

Referring to FIG. 1A, the display surface IS of the flexible display device DD may include a plurality of areas. The flexible display device DD includes a display area DD-DA in which the image IM is displayed, and a non-display area DD-NDA located next to the display area DD-DA. The image IM is not displayed in the non-display area DD-NDA. FIG. 1A shows an image of a vase as the image IM.
As an example, the display area DD-DA has a substantially quadrangular shape, and the non-display area DD-NDA surrounds the display area DD-DA, but they should not be limited thereto or thereby. That is, the shape of the display area DD-DA and the shape of the non-display area DD-NDA may be designed relative to each other.

Referring to FIGS. 1A to 1C, the display device DD is divided into a plurality of areas in accordance with the operation state thereof. The display device DD includes a bending area BA that may be bent with respect to a bending axis BX, a first non-bending area NBA1 that is not bent, and a second non-bending area NBA2 that is not bent. As shown in FIG. 1B, the display device DD may be inwardly bent such that the display surface IS of the first non-bending area NBA1 faces the display surface IS of the second non-bending area NBA2. As shown in FIG. 1C, the display device DD may be outwardly bent to allow the display surface IS to be exposed.

FIGS. 1A to 1C show only one bending area BA, but the number of bending areas BA is not limited to one. For instance, in the present embodiment, the display device DD may include a plurality of bending areas BA. In some embodiments, the display device DD may be configured to repeatedly perform only the operation modes shown in FIGS. 1A and 1B, although the present invention is not limited thereto or thereby. That is, the bending area BA may be defined to correspond to the user's operation performed on the display device DD. For instance, different from FIGS. 1B and 1C, the bending area BA may be defined to be substantially parallel to the first direction axis DR1, or may be defined in a diagonal direction. The area of the bending area BA is not fixed, and may be determined depending on a radius of curvature.

FIG. 2 is a cross-sectional view showing a display device DD according to an embodiment of the present disclosure. FIG. 2 shows the cross-section defined by the second and third directions DR2 and DR3.

Referring to FIG. 2, the display device DD includes a protective film PM, a display module DM, an optical member LM, a window WM, a first adhesive member AM1, a second adhesive member AM2, and a third adhesive member AM3. The display module DM is located between the protective film PM and the optical member LM. The optical member LM is located between the display module DM and the window WM. The first adhesive member AM1 couples the display module DM and the protective film PM. The second adhesive member AM2 couples the display module DM and the optical member LM. The third adhesive member AM3 couples the optical member LM and the window WM. If suitable, at least one of the first, second, and third adhesive members AM1, AM2, and AM3 may be omitted.

The protective film PM protects the display module DM. The protective film PM includes a first outer surface OS-L exposed to the outside and an adhesive surface adhered to the first adhesive member AM1. The protective film PM prevents external moisture from entering the display module DM and absorbs external impacts. The protective film PM may include a plastic film as a base substrate. The plastic film may include one or more of polyethersulfone (PES), polyacrylate, polyetherimide (PEI), polyethylenenaphthalate (PEN), polyethyleneterephthalate (PET), polyphenylene sulfide (PPS), polyarylate, polyimide (PI), polycarbonate (PC), poly(arylene ethersulfone), and a mixture thereof.
The material of the protective film PM is not limited to plastic resins, and may include a mixed material of an organic material and an inorganic material. The protective film PM may include a porous organic layer and an inorganic material filling pores of the organic layer. The protective film PM may further include a functional layer formed on the plastic film. The functional layer may include a resin layer, and may be formed by a coating method. In other embodiments, the protective film PM may be omitted.

The window WM protects the display module DM from the external impacts and provides an input surface to the user. The window WM provides a second outer surface OS-U exposed to the outside, and an adhesive surface adhered to the third adhesive member AM3. The display surface IS shown in FIGS. 1A to 1C may be the second outer surface OS-U. The window WM may include a plastic film. The window WM may have a multi-layer structure including a glass substrate, a plastic film, or a plastic substrate. The window WM may further include a bezel pattern. The multi-layer structure of the window WM may be formed through consecutive processes or an adhesive process using an adhesive.

The optical member LM reduces a reflectance of external light. The optical member LM may include at least a polarizing film, and may further include a retardation film. In other embodiments, the optical member LM may be omitted.

The display module DM includes a display panel DP and a touch sensing unit TS. The touch sensing unit TS is directly located on the display panel DP. In the following descriptions, the expression “a first component is directly located on a second component” means that the first and second components are formed through consecutive processes without being attached to each other by using a separate adhesive layer. According to other embodiments, other layers (e.g., an adhesive layer, a substrate, etc.) may be interposed between the display panel DP and the touch sensing unit TS.

Hereinafter, an organic light emitting display panel DP will be described as the display panel DP, but the display panel DP should not be limited to the organic light emitting display panel DP. The display panel DP may be a liquid crystal display panel, a plasma display panel, an electrophoretic display panel, a microelectromechanical system (MEMS) display panel, or an electrowetting display panel. The organic light emitting display panel DP generates the image IM (refer to FIG. 1A) corresponding to image data input thereto. The organic light emitting display panel DP includes a first display panel surface BS1-L, and a second display panel surface BS1-U facing the first display panel surface BS1-L in the thickness direction DR3.

The touch sensing unit TS obtains coordinate information of an external input. The touch sensing unit TS senses the external input in an electrostatic capacitive manner. The display module DM according to other embodiments may further include an anti-reflection layer. The anti-reflection layer may include a color filter, or a stack structure of a conductive layer, an insulating layer, and a conductive layer. The anti-reflection layer absorbs or polarizes external light to reduce the reflectance of the external light. The anti-reflection layer may be used to substitute the function of the optical member LM.
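The stacking order described with reference to FIG. 2 can be summarized in a short sketch. The following Python fragment is illustrative only: the labels mirror the reference designators above, and the omission flags follow the statements that the protective film PM and the optical member LM may be omitted; how the adhesive members collapse when a layer is omitted is an assumption made here for illustration, not something the description specifies.

```python
# Illustrative sketch of the FIG. 2 stack: window WM / adhesive AM3 / optical
# member LM / adhesive AM2 / display module DM (touch sensing unit TS directly
# on display panel DP) / adhesive AM1 / protective film PM.

def build_stack(include_protective_film=True, include_optical_member=True):
    """Return the layer sequence from the outer surface OS-U downward."""
    stack = ["WM (window)"]
    if include_optical_member:
        stack += ["AM3 (adhesive)", "LM (optical member)", "AM2 (adhesive)"]
    else:
        # Assumption: when LM is omitted, an anti-reflection layer inside DM
        # may take over its function and a single adhesive remains.
        stack += ["AM3 (adhesive)"]
    stack += ["TS (touch sensing unit, directly on DP)", "DP (display panel)"]
    if include_protective_film:
        stack += ["AM1 (adhesive)", "PM (protective film)"]
    return stack

print(build_stack())              # full stack of FIG. 2
print(build_stack(False, False))  # variant with PM and LM omitted
```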
Each of the first, second, and third adhesive members AM1, AM2, and AM3 may be, but is not limited to, an organic adhesive layer, such as an optically clear adhesive film (OCA), an optically clear resin (OCR), or a pressure sensitive adhesive film (PSA). The organic adhesive layer may include, for example, a polyurethane-based adhesive material, a polyacryl-based adhesive material, a polyester-based adhesive material, a polyepoxy-based adhesive material, or a polyvinyl acetate-based adhesive material. The display device DD may further include a frame structure for supporting the functional layers to maintain the states shown in FIGS. 1A, 1B, and 1C. The frame structure may have a joint structure or a hinge structure.

FIGS. 3A and 3B are perspective views showing a display device DD-1 according to an embodiment of the present disclosure. FIG. 3A shows the display device DD-1 in an unfolded state, and FIG. 3B shows the display device DD-1 in a bent state. The display device DD-1 includes one bending area BA and one non-bending area NBA. The non-display area DD-NDA of the display device DD-1 is bent; however, the bending area of the display device DD-1 may be changed in other embodiments.

Different from the display device DD shown in FIGS. 1A to 1C, the display device DD-1 may be fixed in one state while being operated. The display device DD-1 may be operated in the bent state as shown in FIG. 3B. The display device DD-1 may be fixed to a frame while being bent, and the frame may be coupled to a housing of an electronic device. The display device DD-1 according to the present embodiment may have substantially the same cross-sectional structure as that shown in FIG. 2. However, the non-bending area NBA and the bending area BA may have different stack structures from each other. The non-bending area NBA may have substantially the same cross-sectional structure as that shown in FIG. 2, and the bending area BA may have a cross-sectional structure different from that shown in FIG. 2. For example, the optical member LM and the window WM may be omitted from the bending area BA. That is, the optical member LM and the window WM may be located only in the non-bending area NBA. The second and third adhesive members AM2 and AM3 may be omitted from the bending area BA.

FIG. 4A is a perspective view showing a display device DD-2 according to an embodiment of the present disclosure. Referring to FIG. 4A, the display device DD-2 includes a non-bending area (or a “plan surface area”) NBA in which a main image is displayed in a front direction, and a bending area (or a “side surface area”) BA in which a sub-image is displayed in a side direction. The sub-image may include an icon for providing information. In the present embodiment, the terms “non-bending area NBA” and “bending area BA” are used herein to define the display device DD-2 configured to include plural areas with different shapes from each other. The bending area BA bent from the non-bending area NBA displays the sub-image toward a fourth direction axis DR4 crossing the first direction axis DR1, the second direction axis DR2, and the third direction axis DR3. However, the first to fourth direction axes DR1 to DR4 are relative terms to each other, and thus the first to fourth direction axes DR1 to DR4 may be changed to any other directions.

FIG. 4B is a perspective view showing a display device DD-3 according to an embodiment of the present disclosure.
Referring to FIG. 4B, the display device DD-3 includes a non-bending area NBA in which a main image is displayed in a front direction, and includes first and second bending areas BA1 and BA2 in which a sub-image is displayed in a side direction. The first bending area BA1 and the second bending area BA2 are respectively bent from opposite sides of the non-bending area NBA.

FIG. 5A is a plan view showing an organic light emitting display panel DP included in a display device according to an embodiment of the present disclosure, and FIG. 5B is a cross-sectional view showing a display module DM included in a display device according to an embodiment of the present disclosure.

Referring to FIG. 5A, the organic light emitting display panel DP includes a display area DA and a non-display area NDA when viewed in a plan view. The display area DA and the non-display area NDA of the organic light emitting display panel DP respectively correspond to the display area DD-DA and the non-display area DD-NDA of the display device DD (refer to FIG. 1A). The display area DA and the non-display area NDA of the organic light emitting display panel DP are not required to be identical to the display area DD-DA and the non-display area DD-NDA of the display device DD of FIG. 1A, and they may be changed in accordance with the structure and design of the organic light emitting display panel DP.

The organic light emitting display panel DP includes a plurality of pixels PX. An area in which the pixels PX are arranged is referred to as the display area DA. In the present embodiment, the non-display area NDA is defined along an edge of the display area DA. The organic light emitting display panel DP includes gate lines GL, data lines DL, light emitting lines EL, a control signal line SL-D, an initialization voltage line SL-Vint, a voltage line SL-VDD, and a pad part PD. Each of the gate lines GL is connected to corresponding pixels of the pixels PX, and each of the data lines DL is connected to corresponding pixels of the pixels PX. Each of the light emitting lines EL is arranged to be substantially parallel to a corresponding gate line of the gate lines GL. The control signal line SL-D applies a control signal to a gate driving circuit GDC. The initialization voltage line SL-Vint applies an initialization voltage to the pixels PX. The voltage line SL-VDD is connected to the pixels PX to apply a first voltage to the pixels PX. The voltage line SL-VDD includes a plurality of lines extending in the first direction DR1, and a plurality of lines extending in the second direction DR2. The gate driving circuit GDC is located at one side portion of the non-display area NDA, and is connected to the gate lines GL and the light emitting lines EL. Some of the gate lines GL, the data lines DL, the light emitting lines EL, the control signal line SL-D, the initialization voltage line SL-Vint, and the voltage line SL-VDD are located on the same layer, and the others are located on different layers. The pad part PD is connected to an end of the data lines DL, the control signal line SL-D, the initialization voltage line SL-Vint, and the voltage line SL-VDD.
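The row-by-row addressing implied by the crossing gate and data lines of FIG. 5A can be sketched as follows. This is a minimal illustrative model, not driver code from this disclosure; the class, the frame values, and the scan loop are assumptions introduced for clarity.

```python
# Minimal sketch of matrix addressing: each pixel PX is tied to one gate line
# GLi (row i) and one data line DLk (column k). Rows are scanned one gate
# line at a time; while GLi is active, each data line carries the data
# signal for the pixel at (i, k).

class Pixel:
    def __init__(self, gate_index, data_index):
        self.gate_index = gate_index  # i-th gate line GLi
        self.data_index = data_index  # k-th data line DLk
        self.data_voltage = 0.0

def drive_frame(pixels, frame, num_rows):
    for i in range(num_rows):          # activate gate lines in sequence
        for px in pixels:
            if px.gate_index == i:     # pixel row selected by GLi
                px.data_voltage = frame[i][px.data_index]

pixels = [Pixel(i, k) for i in range(2) for k in range(3)]
drive_frame(pixels, frame=[[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]], num_rows=2)
print([px.data_voltage for px in pixels])
```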
As shown in FIG. 5B, the organic light emitting display panel DP includes a base layer (base substrate) SUB, a circuit layer DP-CL located on the base layer SUB, an organic light emitting device layer DP-OLED located on the circuit layer DP-CL, and a thin film encapsulation layer TFE located on the organic light emitting device layer DP-OLED. The base layer SUB includes at least one plastic film. The base layer SUB may be a flexible substrate, and may include a plastic substrate, a glass substrate, a metal substrate, or an organic/inorganic-mixed material substrate. The plastic substrate includes at least one of an acryl-based resin, a methacryl-based resin, a polyisoprene-based resin, a vinyl-based resin, an epoxy-based resin, a urethane-based resin, a cellulose-based resin, a siloxane-based resin, a polyimide-based resin, a polyamide-based resin, and a perylene-based resin.

The circuit layer DP-CL includes a plurality of insulating layers, a plurality of conductive layers, and a semiconductor layer. The conductive layers of the circuit layer DP-CL may form signal lines or a control circuit of the pixel. The organic light emitting device layer DP-OLED may include organic light emitting diodes. The thin film encapsulation layer TFE encapsulates the organic light emitting device layer DP-OLED. The thin film encapsulation layer TFE includes an inorganic layer and an organic layer. The thin film encapsulation layer TFE may include at least two inorganic layers and an organic layer located between them. The inorganic layers protect the organic light emitting device layer DP-OLED from moisture and oxygen, and the organic layer protects the organic light emitting device layer DP-OLED from foreign substances such as dust. The inorganic layer may include, for example, at least one of a silicon nitride layer, a silicon oxynitride layer, and a silicon oxide layer. The organic layer may include an acryl-based organic material, but it should not be limited thereto or thereby. According to the present embodiment, the touch sensing unit TS may provide a uniform sensitivity by adjusting a thickness of the organic layer.

The touch sensing unit TS is directly located on the thin film encapsulation layer TFE, but is not limited thereto or thereby. An inorganic layer may be located on the thin film encapsulation layer TFE, and the touch sensing unit TS may be located on the inorganic layer. The inorganic layer may be a buffer layer. The inorganic layer may include at least one of a silicon nitride layer, a silicon oxynitride layer, and a silicon oxide layer, but the inorganic layer should not be limited thereto or thereby. In addition, the inorganic layer may be included in the thin film encapsulation layer TFE without being provided as a separate element.

The touch sensing unit TS includes touch sensors and touch signal lines. The touch sensors and the touch signal lines may have a single-layer structure or a multi-layer structure. The touch sensors and the touch signal lines may include indium tin oxide (ITO), indium zinc oxide (IZO), zinc oxide (ZnO), indium tin zinc oxide (ITZO), PEDOT, metal nanowires, or graphene. The touch sensors and the touch signal lines may include a metal layer such as, for example, molybdenum, silver, titanium, copper, aluminum, or an alloy thereof. The touch sensors and the touch signal lines may have the same layer structure or different layer structures. The touch sensing unit TS will be described in detail later.
FIG. 6A is an equivalent circuit diagram of a pixel PX included in a display device according to an embodiment of the present disclosure. FIG. 6A shows an i-th pixel PXi connected to a k-th data line DLk among the data lines DL.

The i-th pixel PXi includes an organic light emitting diode OLED and a pixel driving circuit controlling the organic light emitting diode OLED. The pixel driving circuit includes seven thin film transistors T1 to T7 and one storage capacitor Cst. In the present embodiment, the pixel driving circuit includes seven transistors T1 to T7 and one storage capacitor Cst, but in other embodiments, the i-th pixel PXi may include only a first transistor (or a “switching transistor”) T1, a second transistor (or a “driving transistor”) T2, and the storage capacitor Cst as the driving circuit to drive the organic light emitting diode OLED, and the pixel driving circuit may have various configurations.

The driving transistor controls a driving current applied to the organic light emitting diode OLED. An output electrode of the second transistor T2 is electrically connected to the organic light emitting diode OLED. The output electrode of the second transistor T2 directly makes contact with an anode of the organic light emitting diode OLED, or is connected to the anode of the organic light emitting diode OLED via another transistor, e.g., a sixth transistor T6. Control electrodes of respective control transistors receive respective control signals. The control signals applied to the i-th pixel PXi include an (i−1)th gate signal Si−1, an i-th gate signal Si, an (i+1)th gate signal Si+1, a data signal Dk, and an i-th light emitting control signal Ei. In the present embodiment, the control transistors include the first transistor T1 and third to seventh transistors T3 to T7. The first transistor T1 includes an input electrode connected to the k-th data line DLk, a control electrode connected to an i-th gate line GLi, and an output electrode connected to the output electrode of the second transistor T2. The first transistor T1 is turned on by the gate signal Si (hereinafter, referred to as the “i-th gate signal”) applied to the i-th gate line GLi to provide the data signal Dk applied to the k-th data line DLk to the storage capacitor Cst.
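The simplified two-transistor variant named above (switching transistor T1, driving transistor T2, storage capacitor Cst) lends itself to a small numerical sketch. The square-law transistor model and every parameter value below are textbook assumptions introduced for illustration, not values from this disclosure.

```python
# Hedged sketch of a two-transistor/one-capacitor pixel: the switching
# transistor samples the data signal Dk onto the storage capacitor Cst while
# the gate signal is active, and the driving transistor converts the stored
# voltage into an OLED current via the (assumed) square law.

K = 1e-6     # transconductance parameter (A/V^2), assumed value
V_TH = 1.0   # threshold voltage (V), assumed value

def sample_data(gate_active, v_data, v_cst):
    """Switching transistor: pass Dk to Cst only while the gate is on."""
    return v_data if gate_active else v_cst

def oled_current(v_cst):
    """Driving transistor in saturation (square-law model)."""
    v_ov = v_cst - V_TH
    return K * v_ov * v_ov if v_ov > 0 else 0.0

v_cst = sample_data(gate_active=True, v_data=4.0, v_cst=0.0)
print(f"I_OLED = {oled_current(v_cst):.2e} A")  # current set by stored data
```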
FIG. 6B is a cross-sectional view showing a portion of a display panel included in a display device according to an embodiment of the present disclosure. FIG. 6C is a cross-sectional view showing another portion of a display panel included in a display device according to an embodiment of the present disclosure. In detail, FIG. 6B shows the cross-section of the portion corresponding to the first transistor T1 of the equivalent circuit shown in FIG. 6A, and FIG. 6C shows the cross-section of the portion corresponding to the second transistor T2, the sixth transistor T6, and the organic light emitting diode OLED of the equivalent circuit shown in FIG. 6A.

Referring to FIGS. 6B and 6C, the first transistor T1, the second transistor T2, and the sixth transistor T6 are located on the base layer SUB. The first, second, and sixth transistors T1, T2, and T6 have the same structure as each other, and thus the first transistor T1 will be described in detail, and repeated details of the second and sixth transistors T2 and T6 will be omitted. An upper surface of the base layer SUB is defined by the first direction DR1 and the second direction DR2.

The first transistor T1 includes a first input electrode DE1, a first output electrode SE1, a first control electrode GE1, and a first oxide semiconductor pattern OSP1. A buffer layer BFL is located on the base layer SUB. The buffer layer BFL improves a coupling force between the base layer SUB and the conductive patterns or the semiconductor patterns. The buffer layer BFL includes an inorganic layer. In other embodiments, a barrier layer may be further located on the base layer SUB to prevent foreign substances from entering. The buffer layer BFL and the barrier layer may be selectively used or omitted. The base layer SUB may include a plastic substrate, a glass substrate, or a metal substrate. The plastic substrate includes at least one of an acryl-based resin, a methacryl-based resin, a polyisoprene-based resin, a vinyl-based resin, an epoxy-based resin, a urethane-based resin, a cellulose-based resin, a siloxane-based resin, a polyimide-based resin, a polyamide-based resin, and a perylene-based resin.

The first oxide semiconductor pattern OSP1 is located on the buffer layer BFL. The first oxide semiconductor pattern OSP1 includes indium tin oxide (ITO), indium gallium zinc oxide (IGZO), zinc oxide (ZnO), or indium zinc oxide (IZO). A first insulating layer 10 is located on the buffer layer BFL to cover the first oxide semiconductor pattern OSP1. The first control electrode GE1 is located on the first insulating layer 10, and a second insulating layer 20 is located on the first insulating layer 10 to cover the first control electrode GE1. The second insulating layer 20 provides a flat (e.g., planarized) upper surface. The second insulating layer 20 includes an organic material and/or an inorganic material. The first insulating layer 10 and the second insulating layer 20 may include an inorganic material, which may include at least one of aluminum oxide, titanium oxide, silicon oxide, silicon oxynitride, zirconium oxide, and hafnium oxide.

Meanwhile, a first contact hole CH1 and a second contact hole CH2 are defined through the first and second insulating layers 10 and 20 to respectively expose a first area and a second area of the first oxide semiconductor pattern OSP1. Each of the first and second contact holes CH1 and CH2 penetrates through the first and second insulating layers 10 and 20. The first input electrode DE1 and the first output electrode SE1 are located on the second insulating layer 20. The first input electrode DE1 and the first output electrode SE1 are respectively connected to the first area and the second area of the first oxide semiconductor pattern OSP1 through the first contact hole CH1 and the second contact hole CH2. A third insulating layer 30 is located on the second insulating layer 20 to cover the first input electrode DE1 and the first output electrode SE1. The third insulating layer 30 provides a flat upper surface. The third insulating layer 30 includes an organic material and/or an inorganic material. The third insulating layer 30 covers input electrodes and output electrodes.

FIG. 6C shows the sixth transistor T6 having substantially the same structure as the second transistor T2. However, the structure of the sixth transistor T6 may be changed. The sixth transistor T6 includes an input electrode DE6 connected to the output electrode SE2 of the second transistor T2 on the second insulating layer 20. The organic light emitting diode OLED and a pixel definition layer PDL are located on the third insulating layer 30. An anode AE is located on the third insulating layer 30.
The anode AE is connected to a sixth output electrode SE6 of the sixth transistor T6 through a seventh contact hole CH7 defined through the third insulating layer 30. The pixel definition layer PDL is provided with an opening OP defined therethrough. At least a portion of the anode AE is exposed through the opening OP of the pixel definition layer PDL.

The pixel PX is located in a pixel area of the organic light emitting display panel DP when viewed in a plan view. The pixel area includes a light emitting area PXA and a non-light emitting area NPXA next to the light emitting area PXA. The non-light emitting area NPXA is located to surround the light emitting area PXA. In the present embodiment, the light emitting area PXA is defined to correspond to the anode AE, but should not be limited thereto or thereby. The light emitting area PXA may be defined as an area in which a light is generated. The light emitting area PXA may be defined to correspond to a portion of the anode AE exposed through the opening OP.

A hole control layer HCL is commonly located in the light emitting area PXA and the non-light emitting area NPXA. Although not shown in the figures, a common layer like the hole control layer HCL may be commonly formed in the plural pixels PX. An organic light emitting layer EML is located on the hole control layer HCL. The organic light emitting layer EML is located only in an area corresponding to the opening OP. That is, the organic light emitting layer EML may be patterned into plural parts, and the parts may be respectively located in the pixels PX. An electron control layer ECL is located on the organic light emitting layer EML. A cathode CE is located on the electron control layer ECL. The cathode CE is commonly located in the pixels PX.

The thin film encapsulation layer TFE is located on the cathode CE. The thin film encapsulation layer TFE is commonly located in the pixels PX. The thin film encapsulation layer TFE includes at least one inorganic layer and at least one organic layer. The thin film encapsulation layer TFE may include a plurality of inorganic layers and a plurality of organic layers alternately stacked with the inorganic layers. In the present embodiment, the patterned organic light emitting layer EML is shown as a representative example, but the organic light emitting layer EML may be commonly located in the pixels PX. In this case, the organic light emitting layer EML may generate a white light. In addition, the organic light emitting layer EML may have a multi-layer structure. In the present embodiment, the thin film encapsulation layer TFE directly covers the cathode CE. In other embodiments, a capping layer may further cover the cathode CE, and the thin film encapsulation layer TFE may directly cover the capping layer.

FIGS. 7A to 7C are cross-sectional views showing thin film encapsulation layers included in a display device according to an embodiment of the present disclosure. Hereinafter, the thin film encapsulation layer will be described in detail with reference to FIGS. 7A to 7C. Referring to FIG. 7A, a thin film encapsulation layer TFE1 includes n inorganic thin layers IOL1 to IOLn, and a first inorganic thin layer IOL1 among the n inorganic thin layers IOL1 to IOLn is located on the cathode CE (refer to FIG. 6C). In addition, the first inorganic thin layer IOL1 may be located to directly contact the cathode CE (refer to FIG. 6C).
The first inorganic thin layer IOL1 may be referred to as a “lower inorganic thin layer,” and the inorganic thin layers other than the first inorganic thin layer IOL1 among the n inorganic thin layers IOL1 to IOLn may be referred to as “upper inorganic thin layers.” The thin film encapsulation layer TFE1 includes n−1 organic thin layers OL1 to OLn−1, and the n−1 organic thin layers OL1 to OLn−1 are alternately arranged with the n inorganic thin layers IOL1 to IOLn. The n−1 organic thin layers OL1 to OLn−1 may have a thickness that is greater than that of the n inorganic thin layers IOL1 to IOLn.

Each of the n inorganic thin layers IOL1 to IOLn may have a single-layer structure containing one type of material, or may have a multi-layer structure containing plural different types of material. Each of the n−1 organic thin layers OL1 to OLn−1 may be formed by depositing organic monomers. Each of the n−1 organic thin layers OL1 to OLn−1 may be formed by using an inkjet printing method or by coating a composition containing an acryl-based monomer. In the present embodiment, the thin film encapsulation layer TFE1 may further include an n-th organic thin layer.

Referring to FIGS. 7B and 7C, the inorganic thin layers included in each of the thin film encapsulation layers TFE2 and TFE3 may include the same inorganic material, or may include different inorganic materials from each other, and may have the same thickness or different thicknesses. The organic thin layers included in each of the thin film encapsulation layers TFE2 and TFE3 may include the same organic material, or may include different organic materials from each other, and may have the same thickness or different thicknesses.

As shown in FIG. 7B, the thin film encapsulation layer TFE2 includes the first inorganic thin layer IOL1, the first organic thin layer OL1, the second inorganic thin layer IOL2, the second organic thin layer OL2, and the third inorganic thin layer IOL3, which are sequentially stacked. The first inorganic thin layer IOL1 may have a two-layer structure. A first sub-layer S1 and a second sub-layer S2 of the first inorganic thin layer IOL1 may include different inorganic materials.

As shown in FIG. 7C, the thin film encapsulation layer TFE3 includes a first inorganic thin layer IOL10, a first organic thin layer OL1, and a second inorganic thin layer IOL20, which are sequentially stacked. The first inorganic thin layer IOL10 may have a two-layer structure. A first sub-layer S10 and a second sub-layer S20 included in the first inorganic thin layer IOL10 may include different inorganic materials. The second inorganic thin layer IOL20 may have a two-layer structure. The second inorganic thin layer IOL20 may include a first sub-layer S100 and a second sub-layer S200, which are deposited in different environments. The first sub-layer S100 may be deposited at a low power, and the second sub-layer S200 may be deposited at a high power. The first and second sub-layers S100 and S200 may include the same inorganic material.
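The alternation just described for TFE1 (n inorganic thin layers interleaved with n−1 organic thin layers, starting and ending with an inorganic layer) can be generated mechanically, as in the following sketch. Only the stated ordering is encoded; the labels are the reference designators used above.

```python
# Sketch of the TFE1 alternation: IOL1..IOLn interleaved with OL1..OLn-1.

def encapsulation_stack(n):
    assert n >= 1, "at least one inorganic layer is required"
    stack = []
    for i in range(1, n + 1):
        stack.append(f"IOL{i} (inorganic)")
        if i < n:  # one fewer organic layer than inorganic layers
            stack.append(f"OL{i} (organic)")
    return stack

print(encapsulation_stack(3))
# ['IOL1 (inorganic)', 'OL1 (organic)', 'IOL2 (inorganic)',
#  'OL2 (organic)', 'IOL3 (inorganic)'] -- the sequence of TFE2 in FIG. 7B
```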
FIG. 8A is a cross-sectional view showing a touch sensing unit included in a display device according to an embodiment of the present disclosure. FIGS. 8B to 8E are plan views showing a touch sensing unit included in a display device according to an embodiment of the present disclosure.

Referring to FIG. 8A, the touch sensing unit TS includes a first conductive pattern TS-CP1, a first insulating layer TS-IL1 (hereinafter, referred to as a “first touch insulating layer”), a second conductive pattern TS-CP2, and a second insulating layer TS-IL2 (hereinafter, referred to as a “second touch insulating layer”). The first conductive pattern TS-CP1 is directly located on the thin film encapsulation layer TFE, but it should not be limited thereto or thereby. That is, another inorganic layer (e.g., a buffer layer) may be further located between the first conductive pattern TS-CP1 and the thin film encapsulation layer TFE. If suitable, the second touch insulating layer TS-IL2 may be omitted.

A portion of the second conductive pattern TS-CP2 crosses the first conductive pattern TS-CP1. The portion of the second conductive pattern TS-CP2 is insulated from the first conductive pattern TS-CP1 while crossing the first conductive pattern TS-CP1, and the first touch insulating layer TS-IL1 is located between the first and second conductive patterns TS-CP1 and TS-CP2. Each of the first conductive pattern TS-CP1 and the second conductive pattern TS-CP2 has a single-layer structure or a multi-layer structure of plural layers stacked in the third direction DR3. Each of the first touch insulating layer TS-IL1 and the second touch insulating layer TS-IL2 includes an inorganic material or an organic material. The inorganic material includes at least one of aluminum oxide, titanium oxide, silicon oxide, silicon nitride, silicon oxynitride, zirconium oxide, and hafnium oxide. The organic material includes at least one of an acryl-based resin, a methacryl-based resin, a polyisoprene-based resin, a vinyl-based resin, an epoxy-based resin, a urethane-based resin, a cellulose-based resin, a siloxane-based resin, a polyimide-based resin, a polyamide-based resin, and a perylene-based resin.

The first touch insulating layer TS-IL1 should not be limited to a specific shape as long as the first touch insulating layer TS-IL1 insulates the first conductive pattern TS-CP1 from the second conductive pattern TS-CP2. The first touch insulating layer TS-IL1 entirely covers the thin film encapsulation layer TFE or includes a plurality of insulating patterns. The insulating patterns overlap first connection parts BR1 and second connection parts BR2 described later.

In the present embodiment, the two-layer type touch sensing unit has been described, but the touch sensing unit should not be limited to the two-layer type. A single-layer type touch sensing unit includes a conductive layer and an insulating layer covering the conductive layer. The conductive layer includes touch sensors and touch signal lines connected to the touch sensors. The single-layer type touch sensing unit obtains coordinate information using a self-capacitance method.

Referring to FIG. 8B, the touch sensing unit TS includes first touch electrodes TE1 and second touch electrodes TE2. The first touch electrodes TE1 include the first connection parts BR1, first touch sensor parts SP1 connected by the first connection parts BR1, and first touch signal lines SL1 connected to the first touch sensor parts SP1. The second touch electrodes TE2 include the second connection parts BR2, second touch sensor parts SP2 connected by the second connection parts BR2, and second touch signal lines SL2 connected to the second touch sensor parts SP2.
In addition, connection electrodes TSD may be located between the first touch electrodes TE1 and the first touch signal lines SL1, and between the second touch electrodes TE2 and the second touch signal lines SL2 connected to the second touch electrodes TE2. The connection electrodes TSD are connected to ends of the first and second touch electrodes TE1 and TE2 to transmit signals. According to another embodiment, the connection electrodes TSD may be omitted.

The first touch sensor parts SP1 are arranged in the first direction DR1, and the second touch sensor parts SP2 are arranged in the second direction DR2. The first touch sensor parts SP1 are spaced apart from the second touch sensor parts SP2. The first touch electrodes TE1 extend in the first direction DR1 and are spaced apart from each other in the second direction DR2. The second touch electrodes TE2 extend in the second direction DR2 and are spaced apart from each other in the first direction DR1. Each of the first connection parts BR1 connects two adjacent first touch sensor parts SP1 among the first touch sensor parts SP1. Each of the second connection parts BR2 connects two adjacent second touch sensor parts SP2 among the second touch sensor parts SP2. In FIG. 8B, portions of the first and second connection parts BR1 and BR2 are represented by bold dots to aid description. The touch sensing unit TS may further include touch pad parts TS-PD. Each of the first and second touch signal lines SL1 and SL2 may be connected to a corresponding touch pad part of the touch pad parts TS-PD.

The first touch sensor parts SP1 are capacitively coupled to the second touch sensor parts SP2. When touch sensing signals are applied to the first touch sensor parts SP1, capacitors are formed between the first touch sensor parts SP1 and the second touch sensor parts SP2.
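This capacitive coupling is what a conventional mutual-capacitance read-out exploits: a touch near a crossing locally reduces the coupling between the crossing electrodes. The sketch below is a generic illustration of that principle; the baseline capacitance, the threshold, and the measurement matrix are invented values, and this disclosure does not itself specify a read-out algorithm.

```python
# Illustrative mutual-capacitance read-out: driving each first touch
# electrode TE1 and sensing each second touch electrode TE2 yields one
# capacitance per crossing; a finger lowers the mutual capacitance there.

BASELINE_PF = 2.0   # untouched capacitance per crossing (pF), assumed
THRESHOLD_PF = 0.3  # minimum drop treated as a touch (pF), assumed

def find_touches(measured):
    """Return (TE1 index, TE2 index) pairs whose coupling dropped."""
    touches = []
    for i, row in enumerate(measured):   # i indexes driven TE1 lines
        for j, c in enumerate(row):      # j indexes sensed TE2 lines
            if BASELINE_PF - c >= THRESHOLD_PF:
                touches.append((i, j))
    return touches

measured = [[2.0, 2.0, 2.0],
            [2.0, 1.6, 2.0],   # finger near crossing (1, 1)
            [2.0, 2.0, 2.0]]
print(find_touches(measured))  # -> [(1, 1)]
```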
Hereinafter, the touch sensing unit TS will be described in more detail with reference to FIGS. 8C to 8E. Referring to FIG. 8C, the first conductive patterns TS-CP1 may include the second connection parts BR2. The second connection parts BR2 may be directly formed on the thin film encapsulation layer TFE by a patterning process. That is, the first conductive patterns TS-CP1 may be directly located on the thin film encapsulation layer TFE of the organic light emitting display panel DP. The first touch insulating layer TS-IL1 is located on the first conductive patterns TS-CP1. The first touch insulating layer TS-IL1 is directly located on the thin film encapsulation layer TFE of the organic light emitting display panel DP and covers the second connection parts BR2.

As shown in FIG. 8D, a plurality of contact holes CH is defined through the first touch insulating layer TS-IL1 to partially expose the second connection parts BR2. The contact holes CH are formed by a photolithography process. The second connection parts BR2 of the first conductive pattern TS-CP1 are electrically connected to the second touch sensor parts SP2 through the contact holes CH.

Referring to FIG. 8E, the second conductive patterns TS-CP2 are located on the first touch insulating layer TS-IL1. The second conductive patterns TS-CP2 include the first connection parts BR1, the first touch sensor parts SP1 connected by the first connection parts BR1, and the second touch sensor parts SP2 spaced apart from the first touch sensor parts SP1. As described above, the second touch sensor parts SP2 are electrically connected to the second connection parts BR2 of the first conductive pattern TS-CP1 through the contact holes CH defined through the first touch insulating layer TS-IL1.

FIG. 8F is a partially enlarged view showing an area A1 of FIG. 8B. FIG. 9 is a partially enlarged view showing an area A2 of FIG. 8B. Referring to FIGS. 8F and 9, each of the first touch sensor parts SP1 and the second touch sensor parts SP2 may include a plurality of mesh lines ML to define a plurality of mesh holes MH. Each mesh line ML has a line width of a few micrometers, for example. Each of the first touch sensor parts SP1 and the second touch sensor parts SP2 may have a mesh shape. Although not shown in detail, the first touch signal lines SL1 and the second touch signal lines SL2 may have the mesh shape. Each of the first touch sensor parts SP1 and the second touch sensor parts SP2 overlaps the non-light emitting area NPXA.

When viewed in a plan view, the mesh holes MH have different sizes from each other. The mesh holes MH may respectively correspond to the light emitting areas PXA in a one-to-one correspondence, but they should not be limited thereto or thereby. That is, one mesh hole may correspond to two or more light emitting areas PXA, for example. The light emitting areas PXA may have different sizes from each other when viewed in a plan view. Correspondingly, the mesh holes MH may have different sizes from each other when viewed in a plan view. For instance, the light emitting areas PXA may include a red light emitting area, a green light emitting area, and a blue light emitting area, and the light emitting areas PXA may have different sizes determined depending on their colors. However, the light emitting areas PXA may have the same size as each other, and the mesh holes MH may have the same size as each other.

Referring to FIG. 9, the first connection parts BR1 may cross the second connection parts BR2. The first connection parts BR1 and the second connection parts BR2 are insulated from each other by the first touch insulating layer TS-IL1 located between the first and second connection parts BR1 and BR2 while crossing each other. In FIG. 9, the first connection parts BR1 and the second connection parts BR2 are represented by bold lines to aid description, but they should not be limited thereto or thereby. For instance, the first connection parts BR1 may not cross the second connection parts BR2, and this structure will be described in detail later. As shown in FIG. 9, each of the first connection parts BR1 and the second connection parts BR2 may have the mesh shape. However, the second connection parts BR2 may not have the mesh shape in other embodiments.

FIGS. 10A and 10B are cross-sectional views taken along the line I-I′ of FIG. 9. Referring to FIGS. 10A and 10B, the first conductive pattern TS-CP1 has a thickness D1 that is smaller than a thickness D2 of the second conductive pattern TS-CP2. The first conductive pattern TS-CP1 shown in FIGS. 10A and 10B corresponds to the second connection part BR2. The thickness D1 of the first conductive pattern TS-CP1 may be smaller than a thickness D3 of the first touch insulating layer TS-IL1. In the present description, the term “thickness” denotes an average or general thickness of the corresponding component. As described above, the first conductive pattern TS-CP1 and the first touch insulating layer TS-IL1 may be directly located on the thin film encapsulation layer TFE of the organic light emitting display panel DP.
The first touch insulating layer TS-IL1 has a structure in which a step difference exists. In detail, the first touch insulating layer TS-IL1 is located on the first conductive pattern TS-CP1 and includes a first part IL-SUB1 spaced apart from the thin film encapsulation layer TFE, a second part IL-SUB2 making contact with the thin film encapsulation layer TFE, and a third part IL-SUB3 connecting the first part IL-SUB1 and the second part IL-SUB2. The first part IL-SUB1, the second part IL-SUB2, and the third part IL-SUB3 are integrally connected to each other. The third part IL-SUB3 may have a rectangular shape when viewed in a cross-section, but the shape of the third part IL-SUB3 should not be limited to the rectangular shape. That is, the third part IL-SUB3 may have a polygonal shape including an inclined surface when viewed in a cross-section.

FIG. 11 is a cross-sectional view showing a touch sensing unit included in a conventional display device 1000. In detail, the conventional display device 1000 shown in FIG. 11 includes a display panel 100 and a touch sensing unit 200 located on the display panel 100. The touch sensing unit 200 of the conventional display device 1000 includes a first conductive pattern 210, an insulating layer 220 covering the first conductive pattern 210, and a second conductive pattern 230 located on the insulating layer 220, and a thickness K1 of the first conductive pattern 210 is substantially equal to or slightly different from a thickness K2 of the second conductive pattern 230. In more detail, the thickness K1 of the first conductive pattern 210, a thickness K3 of the insulating layer 220, and the thickness K2 of the second conductive pattern 230 are substantially equal to each other or slightly different from each other. A step difference occurs in the insulating layer 220 due to the first conductive pattern 210 covered by the insulating layer 220, and a crack occurs at an area in which the step difference exists, as shown in the area B of FIG. 11. Due to the crack occurring in the insulating layer 220, the first conductive pattern 210 is electrically connected to a portion of the second conductive pattern 230 crossing the first conductive pattern 210, and as a result, a short (e.g., electrical short) defect of the touch sensing unit 200 is generated.

According to the display device DD of the present disclosure, the thickness D1 of the first conductive pattern TS-CP1 is smaller than that of the conventional first conductive pattern 210, and thus the step difference occurring in the first touch insulating layer TS-IL1 becomes small, thereby minimizing the occurrence of the crack in the insulating layer. To effectively achieve this effect, the thickness D1 of the first conductive pattern TS-CP1 may suitably be less than the thickness D3 of the first touch insulating layer TS-IL1. The thickness D3 of the first touch insulating layer TS-IL1 and the thickness D2 of the second conductive pattern TS-CP2 are substantially equal to each other, or are slightly different from each other. That is, only the thickness D1 of the first conductive pattern TS-CP1, among the first conductive pattern TS-CP1, the first touch insulating layer TS-IL1, and the second conductive pattern TS-CP2, is set to be relatively thin in the display device DD according to the present embodiment, and thus the first touch insulating layer TS-IL1 may be prevented from being cracked, or the crack of the first touch insulating layer TS-IL1 may be reduced or minimized.
For instance, the thickness D1 of the first conductive pattern TS-CP1 is equal to or greater than about 1800 angstroms and equal to or less than about 2100 angstroms, and the thickness D2 of the second conductive pattern TS-CP2 is equal to or greater than about 2700 angstroms and equal to or less than about 3500 angstroms. As an example, the thickness D1 of the first conductive pattern TS-CP1 may be about 1950 angstroms, and the thickness D2 of the second conductive pattern TS-CP2 may be about 3100 angstroms, but they should not be limited thereto or thereby. In the present embodiment, the thickness D3 of the first touch insulating layer TS-IL1 is equal to or greater than about 2700 angstroms and equal to or less than about 3500 angstroms. As an example, the thickness D3 of the first touch insulating layer TS-IL1 may be about 3100 angstroms, but it should not be limited thereto or thereby.
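The numerical ranges above, together with the crack-avoidance condition that D1 be smaller than D3, can be collected into a small consistency check. The function below simply encodes the stated ranges and relation; it is a sketch for the reader, not part of the disclosed device.

```python
# Consistency check for the stated thicknesses (all in angstroms):
# D1 (first conductive pattern) 1800-2100, D2 (second conductive pattern)
# 2700-3500, D3 (first touch insulating layer) 2700-3500, with D1 < D3.

RANGES = {"D1": (1800, 2100), "D2": (2700, 3500), "D3": (2700, 3500)}

def check(thicknesses):
    for name, value in thicknesses.items():
        lo, hi = RANGES[name]
        assert lo <= value <= hi, f"{name}={value} outside [{lo}, {hi}]"
    # the step difference stays small only if D1 is thinner than D3
    assert thicknesses["D1"] < thicknesses["D3"], "expected D1 < D3"
    return True

print(check({"D1": 1950, "D2": 3100, "D3": 3100}))  # example values -> True
```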
The first conductive pattern TS-CP1 may have a single-layer structure or a multi-layer structure. As shown in FIG. 15, the first conductive pattern TS-CP1 may have a triple-layer structure. The first conductive pattern TS-CP1 may include a first conductive layer CD1, a second conductive layer CD2, and a third conductive layer CD3. The first conductive layer CD1 is located on the thin film encapsulation layer TFE of the display panel DP, the second conductive layer CD2 is located on the first conductive layer CD1, and the third conductive layer CD3 is located on the second conductive layer CD2. In this case, an entire thickness D1 of the first conductive pattern TS-CP1 corresponds to a sum of a thickness G1 of the first conductive layer CD1, a thickness G2 of the second conductive layer CD2, and a thickness G3 of the third conductive layer CD3. The second conductive layer CD2 has an electrical resistivity smaller than that of each of the first conductive layer CD1 and the third conductive layer CD3. That is, the second conductive layer CD2 has a superior electrical conductivity when compared to that of each of the first conductive layer CD1 and the third conductive layer CD3. Each of the first, second, and third conductive layers CD1, CD2, and CD3 may include typical materials as long as the first, second, and third conductive layers CD1, CD2, and CD3 satisfy the above-mentioned relation. For instance, each of the first, second, and third conductive layers CD1, CD2, and CD3 may include silver, copper, aluminum, titanium, molybdenum, or an alloy thereof. As another example, each of the first and third conductive layers CD1 and CD3 may include titanium (Ti), and the second conductive layer CD2 may include aluminum (Al). The thickness G2 of the second conductive layer CD2 may be greater than the thickness G1 of the first conductive layer CD1 and the thickness G3 of the third conductive layer CD3. The thickness G2 of the second conductive layer CD2 may be, for example, six times greater than the thickness G1 of the first conductive layer CD1. The thickness G2 of the second conductive layer CD2 may be, for example, four times greater than the thickness G3 of the third conductive layer CD3. The thickness G3 of the third conductive layer CD3 may be greater than the thickness G1 of the first conductive layer CD1. The thickness G3 of the third conductive layer CD3 may be, for example, one and a half times greater than the thickness G1 of the first conductive layer CD1. In other embodiments, the thickness G3 of the third conductive layer CD3 may be, for example, one and a half times to three times greater than the thickness G1 of the first conductive layer CD1. The second conductive layer CD2 is an essential component for the first conductive pattern CP1 to serve as a conductive pattern, and the first and third conductive layers CD1 and CD3 serve as protective layers protecting the second conductive layer CD2 to obtain process stability. In detail, the first conductive layer CD1 protects the second conductive layer CD2 from defects occurring under the touch sensing unit TS, and the third conductive layer CD3 prevents the second conductive layer CD2 from being damaged during an etching process. As described above, because the first conductive layer CD1 and the third conductive layer CD3 have different purposes, the first conductive layer CD1 and the third conductive layer CD3 may suitably have different thicknesses.
That is, because the third conductive layer CD3 may suitably have a predetermined thickness or more such that the second conductive layer CD2 is protected in the etching process, only the thickness G1 of the first conductive layer CD1 is controlled to be small, and thus the entire thickness D1 of the first conductive pattern CP1 may be smaller than the entire thickness D2 of the second conductive pattern CP2. However, the thickness G1 of the first conductive layer CD1 may still suitably have a predetermined thickness or more; for example, the thickness G1 of the first conductive layer CD1 may be about 50 angstroms or more. In a case that the thickness G1 of the first conductive layer CD1 is less than about 50 angstroms, the first conductive layer CD1 might not serve as the protective layer protecting the second conductive layer CD2. For instance, the thickness G1 of the first conductive layer CD1 is in a range from about 50 angstroms to about 200 angstroms, the thickness G2 of the second conductive layer CD2 is in a range from about 1000 angstroms to about 1800 angstroms, and the thickness G3 of the third conductive layer CD3 is in a range from about 250 angstroms to about 350 angstroms. In detail, the thickness G1 of the first conductive layer CD1 may be about 150 angstroms, the thickness G2 of the second conductive layer CD2 may be about 1500 angstroms, and the thickness G3 of the third conductive layer CD3 may be about 300 angstroms. However, the thicknesses G1, G2, and G3 of the first, second, and third conductive layers CD1, CD2, and CD3 should not be limited thereto or thereby. FIG. 16 is a cross-sectional view showing a touch sensing unit included in a display device according to an embodiment of the present disclosure. Referring to FIG. 16, the second conductive pattern TS-CP2 may have a multi-layer structure similar to that of the first conductive pattern TS-CP1, but it should not be limited thereto or thereby. The second conductive pattern TS-CP2 may have a single-layer structure if the second conductive pattern TS-CP2 is thicker than the first conductive pattern TS-CP1. The second conductive pattern TS-CP2 may have a triple-layer structure. The second conductive pattern TS-CP2 may include a fourth conductive layer CD4, a fifth conductive layer CD5, and a sixth conductive layer CD6. The fourth conductive layer CD4 is located on the first touch insulating layer TS-IL1, the fifth conductive layer CD5 is located on the fourth conductive layer CD4, and the sixth conductive layer CD6 is located on the fifth conductive layer CD5. In this case, the entire thickness D2 of the second conductive pattern TS-CP2 corresponds to a sum of a thickness G4 of the fourth conductive layer CD4, a thickness G5 of the fifth conductive layer CD5, and a thickness G6 of the sixth conductive layer CD6. The fifth conductive layer CD5 has an electrical resistivity smaller than that of each of the fourth conductive layer CD4 and the sixth conductive layer CD6. That is, the fifth conductive layer CD5 has a superior electrical conductivity when compared to that of each of the fourth conductive layer CD4 and the sixth conductive layer CD6. Each of the fourth, fifth, and sixth conductive layers CD4, CD5, and CD6 may include typical materials as long as the fourth, fifth, and sixth conductive layers CD4, CD5, and CD6 satisfy the above-mentioned relation. For instance, each of the fourth, fifth, and sixth conductive layers CD4, CD5, and CD6 may include silver, copper, aluminum, titanium, molybdenum, or an alloy thereof.
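Returning briefly to the first conductive pattern TS-CP1: as a quick consistency check, the example Ti/Al/Ti thicknesses quoted above (G1 of about 150 angstroms, G2 of about 1500 angstroms, G3 of about 300 angstroms) sum to an overall pattern thickness inside the D1 range given earlier. The snippet below is an illustrative sketch only, using the example figures from the text.

# Example Ti/Al/Ti stack for the first conductive pattern TS-CP1 (angstroms).
G1, G2, G3 = 150, 1500, 300   # CD1 (Ti), CD2 (Al), CD3 (Ti) example values

D1 = G1 + G2 + G3             # entire thickness of TS-CP1
assert 1800 <= D1 <= 2100     # lands inside the D1 range quoted earlier
assert G2 > G1 and G2 > G3    # the low-resistivity Al core is the thickest layer
assert G3 > G1                # etch-protection layer thicker than the bottom layer
print(D1)                     # -> 1950, matching the example D1 value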
As another example of the materials of the fourth, fifth, and sixth conductive layers, each of the fourth and sixth conductive layers CD4 and CD6 may include titanium (Ti), and the fifth conductive layer CD5 may include aluminum (Al). The thickness G5 of the fifth conductive layer CD5 may be greater than the thickness G4 of the fourth conductive layer CD4 and the thickness G6 of the sixth conductive layer CD6. The thickness G5 of the fifth conductive layer CD5 may be, for example, four times greater than the thickness G4 of the fourth conductive layer CD4. The thickness G5 of the fifth conductive layer CD5 may be, for example, four times greater than the thickness G6 of the sixth conductive layer CD6. The fifth conductive layer CD5 is an essential component for the second conductive pattern TS-CP2 to serve as a conductive pattern, and the fourth and sixth conductive layers CD4 and CD6 serve as protective layers protecting the fifth conductive layer CD5 to obtain process stability. In detail, the fourth conductive layer CD4 protects the fifth conductive layer CD5 from defects occurring under the second conductive pattern CP2, and the sixth conductive layer CD6 prevents the fifth conductive layer CD5 from being damaged during an etching process. The thickness G2 of the second conductive layer CD2 may be smaller than the thickness G5 of the fifth conductive layer CD5. That is, the entire thickness of the first conductive pattern TS-CP1 may become smaller than that of the second conductive pattern TS-CP2 by allowing the thickness G2 of the second conductive layer CD2 to be smaller than the thickness G5 of the fifth conductive layer CD5. The thickness G1 of the first conductive layer CD1 may be smaller than the thickness G4 of the fourth conductive layer CD4. That is, the entire thickness of the first conductive pattern TS-CP1 may become smaller than that of the second conductive pattern TS-CP2 by allowing the thickness G1 of the first conductive layer CD1 to be smaller than the thickness G4 of the fourth conductive layer CD4, which is a lower protective layer of the second conductive pattern TS-CP2. For instance, the thickness G4 of the fourth conductive layer CD4 is in a range from about 250 angstroms to about 350 angstroms, the thickness G5 of the fifth conductive layer CD5 is in a range from about 2200 angstroms to about 2800 angstroms, and the thickness G6 of the sixth conductive layer CD6 is in a range from about 250 angstroms to about 350 angstroms. In detail, the thickness G4 of the fourth conductive layer CD4 may be about 300 angstroms, the thickness G5 of the fifth conductive layer CD5 may be about 2500 angstroms, and the thickness G6 of the sixth conductive layer CD6 may be about 300 angstroms. However, the thicknesses G4, G5, and G6 of the fourth, fifth, and sixth conductive layers CD4, CD5, and CD6 should not be limited thereto or thereby. The display device DD according to the embodiments of the present disclosure may reduce the occurrence of cracks in the insulating layer, e.g., the first touch insulating layer TS-IL1, included in the touch sensing unit TS. Consequently, the display device DD according to the embodiments of the present disclosure may reduce an electrical short defect of the touch sensing unit TS. Although the embodiments of the present invention have been described, it is understood that the present invention should not be limited to these embodiments, but various changes and modifications can be made by one of ordinary skill in the art within the spirit and scope of the present invention as defined by the claims and their functional equivalents. | 68,901 |
11861118 | DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS Embodiments of the present disclosure are directed to the following problem of current touch display panels: due to the different positions of the touch sensing electrodes distributed in the panel, the length of each touch sensing signal line in the touch display panel also differs, resulting in different impedances of the touch sensing signal lines. The aforementioned difference in impedance may cause different time delays when different touch signal lines output the touch sensing signals, thereby resulting in poor uniformity of touch sensitivity of the touch display panel. The present embodiments can remedy these defects. As shown in FIG. 1, FIG. 1 is a schematic diagram of a planar structure of a touch display panel provided by an embodiment of the present disclosure. As shown in FIG. 2, FIG. 2 is a schematic diagram of a cross-sectional structure of the touch display panel provided by the embodiment of the present disclosure. Referring to FIG. 1 and FIG. 2, a touch display panel provided by the embodiment of the present disclosure comprises a display region 10 and a fan-shaped wiring region 20 below the display region 10. The touch display panel further comprises a base substrate 31, a touch electrode layer 11 formed on the base substrate 31, and a plurality of touch signal lines 12. The touch electrode layer 11 comprises a plurality of touch sensing blocks 111 arranged in an array. A first end of each touch signal line 12 is electrically connected to a corresponding touch sensing block 111, and a second end of each touch signal line extends to the fan-shaped wiring region 20 and is electrically connected to a touch integrated circuit 21 located in the fan-shaped wiring region 20. The touch signal lines 12 comprise a first type touch signal line 121 and a second type touch signal line 122, and a distance between a first end of the first type touch signal line 121 and the touch integrated circuit 21 is less than a distance between a first end of the second type touch signal line 122 and the touch integrated circuit 21. The first type touch signal line 121 includes a single sub-signal line, and the second type touch signal line 122 includes at least two sub-signal lines. It should be noted that the number of sub-signal lines used in the second type touch signal line 122 can be determined according to the actual product size and space, which is not limited herein. However, the present disclosure preferably uses two, which can effectively reduce a resistance of the second type touch signal line 122 without occupying extra space. Specifically, continuing to refer to FIG. 2, the touch display panel provided by the embodiment of the present disclosure further comprises a thin film transistor array structure layer 32 located on a side of the base substrate 31, a planarization layer 33 covering the thin film transistor array structure layer 32, the touch signal lines 12 disposed on a surface of a side of the planarization layer 33 away from the base substrate 31, a first insulating layer 34 covering the planarization layer 33 and the touch signal lines 12, the touch electrode layer 11 disposed on a surface of a side of the first insulating layer 34 away from the base substrate 31, a second insulating layer 35 covering the touch electrode layer 11 and the first insulating layer 34, and a pixel electrode 36 disposed on a surface of a side of the second insulating layer 35 away from the base substrate 31.
The pixel electrode 36 is electrically connected to the thin film transistor array structure layer 32 through a through-hole, and the touch signal lines 12 are electrically connected to the touch electrode layer 11 through a bridge 37, wherein the bridge 37 and the pixel electrode 36 can be formed on a same layer. Preferably, the touch electrode layer 11 may also be a common electrode layer. In this case, during a touch stage, the touch electrode layer 11 receives a touch drive signal; and during a display stage, the touch electrode layer 11 receives a common voltage. Specifically, each of the touch sensing blocks 111 is electrically connected to one of the touch signal lines 12, and the touch signal lines 12 extend along a row direction. Furthermore, a number of the touch sensing blocks 111 arranged along the row direction is N, a number of the first type touch signal lines 121 corresponding to the touch sensing blocks is P, and a number of the second type touch signal lines 122 corresponding to the touch sensing blocks is Q, wherein N, P, and Q are all positive integers, and N=P+Q. In the preferred embodiment shown in FIG. 1, the number N of the touch sensing blocks 111 arranged along the row direction is 5, the number P of the first type touch signal lines 121 corresponding to the touch sensing blocks is 3 (the first type touch signal line 121 comprises a touch signal line C, a touch signal line D, and a touch signal line E), and the number Q of the second type touch signal lines 122 corresponding to the touch sensing blocks is 2 (the second type touch signal line 122 comprises a touch signal line A and a touch signal line B). Going a step further, when N=2n (that is, when the number N of the touch sensing blocks 111 arranged along the row direction is even), P=Q=n; and when N=2n+1 (that is, when the number N of the touch sensing blocks 111 arranged along the row direction is odd), P=n+1 and Q=n, where n is a positive integer. In the preferred embodiment shown in FIG. 1, the number N of the touch sensing blocks 111 arranged along the row direction is 5, and n is 2. The number P of the first type touch signal lines 121 corresponding to the touch sensing blocks is 3 (the first type touch signal line 121 comprises a touch signal line C, a touch signal line D, and a touch signal line E), and the number Q of the second type touch signal lines 122 corresponding to the touch sensing blocks is 2 (the second type touch signal line 122 comprises a touch signal line A and a touch signal line B). Specifically, each row of the touch signal lines 12 (that is, along a direction perpendicular to the touch signal lines 12) comprises a blind region in a column direction, a blind region width of the blind region is W, and a number of the touch signal lines 12 in one blind region is t, wherein t is a positive integer and t≤N. Specifically, the touch display panel comprises a pixel array layer, the pixel array layer comprises a plurality of sub-pixel units arranged in an array, and the sub-pixel units are one of a red sub-pixel unit, a green sub-pixel unit, or a blue sub-pixel unit. A vertical projection of each of the touch sensing blocks 111 on the pixel array layer covers a plurality of the sub-pixel units. Preferably, a pixel arrangement of the sub-pixel units is a diamond pixel arrangement or a 2-in-1 pixel arrangement. As shown in FIG. 3, FIG. 3 is a schematic design diagram of touch signal lines in the touch display panel provided by the embodiments of the present disclosure based on the diamond pixel arrangement.
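The N=P+Q partition described above is straightforward integer arithmetic. The sketch below is an editor's illustration, not code from the disclosure; the function name is hypothetical.

def split_signal_lines(n_blocks: int) -> tuple[int, int]:
    """Partition the N touch sensing blocks of a row into P lines routed as a
    single sub-signal line and Q lines routed as at least two sub-signal lines."""
    q = n_blocks // 2         # lines far from the touch IC: doubled sub-signal lines
    p = n_blocks - q          # lines close to the touch IC: single sub-signal line
    return p, q               # N=2n -> (n, n); N=2n+1 -> (n+1, n)

print(split_signal_lines(5))  # -> (3, 2), matching the FIG. 1 example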
In FIG. 3, when the sub-pixel units 43 use the diamond pixel arrangement, in a first region 41 of the first type touch signal line 121 close to the touch integrated circuit 21, the first type touch signal line 121 is a grid metal pattern and uses the single sub-signal line. A vertical projection of the first type touch signal line 121 on the pixel array layer is located between two adjacent rows of the sub-pixel units 43; the main purpose is to avoid the sub-pixel units 43 and thereby reduce the influence of the single sub-signal line on display light transmittance. In a second region 42 of the second type touch signal line 122 close to the touch integrated circuit 21, the second type touch signal line 122 is a grid metal pattern and uses at least two sub-signal lines. A vertical projection of the second type touch signal line 122 on the pixel array layer is located between two adjacent rows of sub-pixel units 43; the main purpose is to avoid the sub-pixel units 43 and thereby reduce the influence of the at least two sub-signal lines on display light transmittance. A unit impedance when the second type touch signal line 122 uses the at least two sub-signal lines is less than a unit impedance when the second type touch signal line 122 uses the single sub-signal line. As shown in FIG. 4, FIG. 4 is a schematic design diagram of the touch signal lines in the touch display panel provided by the embodiments of the present disclosure based on the 2-in-1 pixel arrangement. When the sub-pixel units 53 use the 2-in-1 pixel arrangement, in the first region 41 of the first type touch signal line 121 close to the touch integrated circuit 21, the first type touch signal line 121 is a grid metal pattern and uses the single sub-signal line. A vertical projection of the first type touch signal line 121 on the pixel array layer is located between two adjacent rows of the sub-pixel units 53; the main purpose is to avoid the sub-pixel units 53 and thereby reduce the influence of the single sub-signal line on display light transmittance. In the second region 42 of the second type touch signal line 122 close to the touch integrated circuit 21, the second type touch signal line 122 is a grid metal pattern and uses at least two sub-signal lines. A vertical projection of the second type touch signal line 122 on the pixel array layer is located between two adjacent rows of sub-pixel units 53; the main purpose is to avoid the sub-pixel units 53 and thereby reduce the influence of the at least two sub-signal lines on display light transmittance. A unit impedance when the second type touch signal line 122 uses the at least two sub-signal lines is less than a unit impedance when the second type touch signal line 122 uses the single sub-signal line. According to the resistance principle, a length of the second type touch signal line 122 is greater than a length of the first type touch signal line 121, so the corresponding impedance R of the second type touch signal line 122 is greater. The embodiment of the present disclosure therefore uses a differentiated design method for the touch signal lines: the touch signal lines away from the touch integrated circuit are designed as the at least two sub-signal lines, and the touch signal lines close to the touch integrated circuit are designed as the single sub-signal line, which significantly reduces the impedance of the touch signal lines away from the touch integrated circuit, resolves the difference in touch performance caused by the impedance of the touch signal lines, and improves the speed of touch driving signal transmission.
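The impedance argument follows the standard wire-resistance relation R = ρL/A: resistance grows with routed length, and two parallel sub-signal lines of equal cross-section halve it. The sketch below is a minimal illustration under assumed, hypothetical dimensions; none of the numbers come from the disclosure.

# Wire resistance R = rho * L / A; n parallel sub-lines give R / n.
RHO_CU = 1.7e-8                # copper resistivity in ohm*m (textbook value)

def line_resistance(length_m: float, area_m2: float, n_sublines: int = 1) -> float:
    """Resistance of a touch signal line routed as n parallel sub-lines."""
    return RHO_CU * length_m / area_m2 / n_sublines

area = 3e-6 * 0.3e-6                        # assumed 3 um x 0.3 um cross-section
near = line_resistance(0.05, area)          # short route, single sub-line
far_single = line_resistance(0.15, area)    # long route, single sub-line
far_double = line_resistance(0.15, area, n_sublines=2)
print(near, far_single, far_double)         # doubling narrows the gap to the near line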
Moreover, the differentiated routing design ensures touch performance without changing a size of the blind region. For specific implementations of the above operations, refer to the previous embodiments; they will not be described again here. In summary, in the touch display panel provided by the embodiment of the present disclosure, the touch signal lines close to the touch integrated circuit are disposed as the single sub-signal line, and the touch signal lines away from the touch integrated circuit are disposed as at least two sub-signal lines, which significantly reduces the impedance of the touch signal lines away from the touch integrated circuit, relieves the difference in touch performance caused by the impedance of the touch signal lines, and improves the speed of touch driving signal transmission. Moreover, this ensures touch performance without changing the size of the blind region. It should be understood that for those of ordinary skill in the art, equivalent replacements or changes can be made according to the technical solutions of the present disclosure and its inventive concept, and all such changes or replacements shall fall within the protection scope of the appended claims of the present disclosure. | 11,598 |
11861119 | DETAILED DESCRIPTION The technical solutions in some embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are merely some but not all of the embodiments of the present disclosure. All other embodiments obtained on the basis of the embodiments of the present disclosure by a person of ordinary skill in the art shall be included in the protection scope of the present disclosure. Unless the context requires otherwise, throughout the specification and the claims, the term “comprise” and other forms thereof such as the third-person singular form “comprises” and the present participle form “comprising” are construed as being open and inclusive, meaning “including, but not limited to.” In the description of the specification, the terms “one embodiment”, “some embodiments”, “exemplary embodiments”, “an example”, “a specific example” or “some examples” and the like are intended to indicate that specific features, structures, materials or characteristics related to the embodiment(s) or example(s) are included in at least one embodiment or example of the present disclosure. Schematic representations of the above terms do not necessarily refer to the same embodiment(s) or example(s). In addition, the specific features, structures, materials, or characteristics may be included in any one or more embodiments or examples in any suitable manner. In the following, the terms “first” and “second” are only used for descriptive purposes, and are not to be construed as indicating or implying relative importance or implicitly indicating the number of indicated technical features. Thus, a feature defined as “first” or “second” may explicitly or implicitly include one or more of the features. In the description of the embodiments of the present disclosure, “a plurality of/the plurality of” means two or more unless otherwise specified. In the description of some embodiments, terms such as “connected” and its derivatives may be used. For example, the term “connected” may be used in the description of some embodiments to indicate that two or more components are in direct physical contact or electrical contact with each other. The embodiments disclosed herein are not necessarily limited to the contents herein. The phrase “A and/or B” includes the following three combinations: only A, only B, and a combination of A and B. As used herein, the term “if” is optionally construed as “when” or “in a case where” or “in response to determining” or “in response to detecting”, depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” is optionally construed as “in a case where it is determined” or “in response to determining” or “in a case where [the stated condition or event] is detected” or “in response to detecting [the stated condition or event]”, depending on the context. The use of the phrase “applicable to” or “configured to” herein means an open and inclusive language, which does not exclude devices that are applicable to or configured to perform additional tasks or steps. In addition, the use of the phrase “based on” is meant to be open and inclusive, since a process, step, calculation or other action that is “based on” one or more of the stated conditions or values may, in practice, be based on additional conditions or values exceeding those stated.
Terms such as “about” or “approximately” as used herein include a stated value and an average value within an acceptable range of deviation of the particular value. The acceptable range of deviation is determined by a person of ordinary skill in the art in view of the measurement in question and the error associated with measurement of the particular quantity (i.e., the limitations of the measurement system). Exemplary embodiments are described herein with reference to cross-sectional views and/or plan views as idealized exemplary drawings. In the accompanying drawings, thicknesses of layers and regions may be exaggerated for clarity. Therefore, variations in shapes with respect to the drawings due to, for example, manufacturing techniques and/or tolerances are conceivable. Therefore, the exemplary embodiments should not be construed as being limited to the shapes of the regions shown herein, but include shape deviations due to, for example, manufacturing. For example, an etched region shown in a rectangular shape generally has a curved feature. Therefore, the regions shown in the accompanying drawings are schematic in nature, and their shapes are not intended to show the actual shapes of the regions in a device, and are not intended to limit the scope of the exemplary embodiments. A touch structure in a touch display panel is usually bonded to a touch flexible circuit board, so as to drive the touch structure and sense a touch position by using the flexible circuit board. In one implementation manner, as shown in FIGS. 1 and 2, the touch structure includes a plurality of conductive pins 21′. The touch display panel includes a protective layer 3′ disposed on a side of the touch structure. The protective layer 3′ has an opening for exposing the plurality of conductive pins 21′, so that the touch flexible circuit board 200′ can be bonded to the plurality of conductive pins of the touch structure through the opening. Here, the opening Q usually has a relatively regular shape (e.g., a rectangle as shown in FIG. 1), which may reduce the difficulty of manufacturing and forming the opening, and improve the yield of producing the touch display panel. With the development of the narrow bezel design of the touch display panel, in a process of bonding the touch flexible circuit board 200′ and the plurality of conductive pins 21′, a phenomenon of stress concentration is prone to occur at positions of the plurality of conductive pins 21′ corresponding to an edge of the touch flexible circuit board 200′ (e.g., the edge corresponding to the conductive pins 21′ in FIG. 2). Moreover, in a process of bending the touch flexible circuit board 200′, the stress concentration phenomenon may become worse, so that some of the conductive pins 21′ are broken (as shown in FIGS. 3 to 5), resulting in a failure of the sensing touch function of the touch display panel. In addition, in a process of performing a high and low temperature shock environment test (e.g., under the test conditions that the temperature is within a range of −40° C. to 80° C., the number of cycles is 100, and the time is 100 hours) on the touch display panel, more conductive pins 21′ will be broken. On this basis, some embodiments of the present disclosure provide a display panel 100. As shown in FIG. 8, the display panel 100 has a display area A and a frame area B located on a side of the display area. Here, “a side” refers to one side, two sides, or all sides of the display area A (as shown in FIG. 8), etc.
This means that the frame area B may be located on one side or two sides of the display area A, or the frame area B may also be disposed around the display area A. In some examples, as shown in FIG. 8, the frame area B includes a bonding region B1. There may be one or more bonding regions B1. Some embodiments of the present disclosure are schematically described by taking an example in which there is one bonding region B1. In some embodiments, as shown in FIGS. 8 to 10, the display panel 100 includes a display substrate 1. In some examples, as shown in FIGS. 6 and 7, the display substrate 1 may include a base 11. The base 11 may be of various types, which may be selected and set according to actual needs. For example, the base 11 may be a rigid base. The rigid base may be a glass base or a polymethyl methacrylate (PMMA) base, etc. For example, the base 11 may be a flexible base. The flexible base may be a polyethylene terephthalate (PET) base, a polyethylene naphthalate (PEN) base or a polyimide (PI) base, etc. In some examples, as shown in FIG. 6, the display substrate 1 may further include: a plurality of gate lines GL and a plurality of data lines DL that are disposed on a side of the base 11 and located in the display area A. For example, the plurality of data lines DL extend in a first direction X, and the plurality of gate lines GL extend in a second direction Y. The plurality of data lines DL are located on a side of the plurality of gate lines GL away from the base 11, and the plurality of data lines DL are insulated from the plurality of gate lines GL. For example, as shown in FIG. 6, the first direction X and the second direction Y intersect each other. This means that the plurality of gate lines GL and the plurality of data lines DL are arranged to intersect each other, so that the plurality of gate lines GL and the plurality of data lines DL may serve to define a plurality of sub-pixel regions P. Here, the included angle between the first direction X and the second direction Y may be selected and set according to actual needs. For example, the included angle between the first direction X and the second direction Y is 90° or approximately 90°. That is, the plurality of gate lines GL are perpendicular or approximately perpendicular to the plurality of data lines DL. In some examples, as shown in FIG. 6, the display substrate 1 may further include: a plurality of sub-pixels 12 disposed in the plurality of sub-pixel regions P. For example, the plurality of sub-pixels 12 and the plurality of sub-pixel regions P are arranged in one-to-one correspondence. The sub-pixel 12 may be of various structures, which may be selected and set according to actual needs. For example, as shown in FIG. 6, each sub-pixel 12 may include a pixel driving circuit 121 and a light-emitting device 122 electrically connected to the pixel driving circuit 121. The pixel driving circuit 121 is configured to provide a driving voltage to the light-emitting device 122 electrically connected thereto, so as to control a light-emitting state of the light-emitting device 122. For example, as shown in FIG. 6, sub-pixel regions P arranged in a line in the second direction Y may be referred to as a same row of sub-pixel regions P, and sub-pixel regions P arranged in a line in the first direction X may be referred to as a same column of sub-pixel regions P.
Pixel driving circuits 121 of the same row of sub-pixel regions P may be electrically connected to one gate line GL, and pixel driving circuits 121 of the same column of sub-pixel regions P may be electrically connected to one data line DL. The gate line GL may provide a scan signal to the same row of pixel driving circuits 121 electrically connected thereto, and the data line DL may provide data signals to the same column of pixel driving circuits 121 electrically connected thereto. Of course, the pixel driving circuits 121 of the same row of sub-pixel regions P may also be electrically connected to multiple gate lines GL, which is not limited in the embodiments of the present disclosure. In some examples, as shown in FIG. 6, the display substrate 1 may further include a plurality of power supply lines VL disposed on a side of the base 11 and extending in the second direction Y. The pixel driving circuits 121 of the same row of sub-pixel regions P may be electrically connected to one power supply line VL. The power supply line VL may provide a voltage signal to the same row of pixel driving circuits 121 electrically connected thereto. The pixel driving circuit 121 may be of various structures, which may be selected and set according to actual needs. For example, the pixel driving circuit 121 may be of a “2T1C”, “6T1C”, “7T1C”, “6T2C” or “7T2C” structure. Here, “T” represents a thin film transistor, and the number before “T” represents the number of thin film transistors; “C” represents a storage capacitor, and the number before “C” represents the number of storage capacitors. As shown in FIG. 6, the plurality of thin film transistors included in the pixel driving circuit 121 include one driving transistor and one switching transistor. The light-emitting device 122 may be of various structures, which may be selected and set according to actual needs. For example, as shown in FIG. 7, the light-emitting device 122 includes an anode layer 1221 disposed on a side of the pixel driving circuit 121 away from the base 11 and electrically connected to the driving transistor in the pixel driving circuit 121, and a light-emitting layer 1222 and a cathode layer 1223 that are sequentially stacked on a side of the anode layer 1221 away from the base 11. For example, the light-emitting device 122 may further include a hole injection layer and/or a hole transport layer disposed between the anode layer 1221 and the light-emitting layer 1222. The light-emitting device 122 may further include an electron transport layer and/or an electron injection layer disposed between the light-emitting layer 1222 and the cathode layer 1223. The light-emitting layer 1222 may be of various structures. For example, the light-emitting layer 1222 may be an organic light-emitting layer. In this case, the light-emitting device 122 may be referred to as an organic light-emitting diode (OLED), and the display substrate 1 may be referred to as an OLED display substrate. For another example, the light-emitting layer 1222 may be an inorganic light-emitting layer. In this case, the light-emitting device 122 may be referred to as a quantum dot light-emitting diode (QLED), and the display substrate 1 may be referred to as a QLED display substrate. Here, the plurality of light-emitting devices 122 in the plurality of sub-pixels 12 may emit light of various colors. The light of various colors may cooperate with each other to achieve display of pictures, thereby enabling the display substrate 1 to have a display function.
In some examples, as shown in FIG. 7, the display substrate 1 may further include an encapsulation layer 13 disposed on a side of the sub-pixels 12 away from the base 11. The encapsulation layer 13 is located in both the display area A and the frame area B, and covers the plurality of sub-pixels 12. In this way, the encapsulation layer 13 may serve to form a good encapsulation effect on the sub-pixels 12 to prevent external water vapor and/or oxygen from corroding the light-emitting devices 122 in the sub-pixels 12 and affecting the luminous efficiency and service life of the light-emitting devices 122. In some embodiments, as shown in FIG. 8, the display panel 100 further includes a touch structure 2 disposed on the display substrate 1. It will be noted that the plurality of light-emitting devices 122 may be top-emission-type light-emitting devices. In this case, the light emitted by the plurality of light-emitting devices 122 may exit toward a direction facing away from the base 11. The plurality of light-emitting devices 122 may also be bottom-emission-type light-emitting devices. In this case, the light emitted by the plurality of light-emitting devices 122 may exit after passing through the base 11. Some embodiments of the present disclosure are schematically described by taking an example in which the plurality of light-emitting devices 122 are top-emission-type light-emitting devices. In this case, the touch structure 2 may be disposed on a side of the encapsulation layer 13 away from the base 11. That is, the touch structure 2 may be located on a light-exit side of the display substrate 1. Of course, in a case where the display substrate 1 is a double-sided light-emitting substrate, the touch structure 2 may be disposed on only one light-exit side of the display substrate 1, or each of the two light-exit sides of the display substrate 1 may be provided with the touch structure 2. In some examples, as shown in FIG. 8, the touch structure 2 includes a plurality of conductive pins 21 located in the bonding region B1. For example, as shown in FIG. 8, each conductive pin 21 may have a strip shape. That is, a shape of an orthogonal projection of the conductive pin 21 on the base 11 may be a rectangle, and a dimension of the rectangle in the first direction X is greater or much greater than a dimension of the rectangle in the second direction Y. In this way, it is beneficial to reduce a dimension of the bonding region B1 in the second direction Y. For example, the plurality of conductive pins 21 are arranged at intervals in the second direction Y. In this way, every two adjacent conductive pins 21 may be insulated from each other to avoid short-circuiting between two adjacent conductive pins 21. A distance between every two adjacent conductive pins 21 may be, for example, the same or approximately the same. That is, the plurality of conductive pins 21 are arranged at uniform intervals. In this way, it is beneficial to simplify the wiring design of signal lines in the display panel 100. In some examples, as shown in FIG. 8, the touch structure 2 may further include: a plurality of first touch units 22 extending in the first direction X and a plurality of second touch units 23 extending in the second direction Y that are located in the display area A, and a plurality of touch signal lines 24 located in the frame area B. For example, as shown in FIG. 8, the plurality of conductive pins 21 include a plurality of first sub-conductive pins 211 and a plurality of second sub-conductive pins 212.
The plurality of touch signal lines 24 include a plurality of first sub-touch signal lines 241 and a plurality of second sub-touch signal lines 242. The plurality of first touch units 22 are electrically connected to the plurality of first sub-conductive pins 211 through the plurality of first sub-touch signal lines 241, and the plurality of second touch units 23 are electrically connected to the plurality of second sub-conductive pins 212 through the plurality of second sub-touch signal lines 242. In this way, in a case where the display panel 100 is applied in a touch display apparatus and the plurality of conductive pins 21 of the touch structure 2 are bonded to the touch flexible circuit board, the touch flexible circuit board and the touch structure 2 may serve to achieve the sensing touch function. For example, the process of sensing touch is schematically described by taking an example in which the first touch unit 22 is a driving channel (Tx) and the second touch unit 23 is a sensing channel (Rx). The touch flexible circuit board may transmit a driving signal to the first touch unit 22 through the first sub-conductive pin 211 and the first sub-touch signal line 241 in sequence, and the second touch unit 23 may feed back an attenuated electrical signal (e.g., a capacitance signal) to the touch flexible circuit board through the second sub-touch signal line 242 and the second sub-conductive pin 212 in sequence. In a case where a touch object (e.g., a finger) touches the display apparatus, a capacitance value of a coupling capacitor formed at an intersection of the first touch unit 22 and the second touch unit 23 will change. By receiving the capacitance signal and evaluating its change amount, the touch flexible circuit board may determine the touch position of the finger, thereby achieving the sensing touch function. The structure of the touch structure 2 will be schematically described below in conjunction with the accompanying drawings; a sketch of the sensing loop follows at the end of this passage. For example, as shown in FIG. 8, each first touch unit 22 may include a plurality of first touch electrodes 221, and the plurality of first touch electrodes 221 are sequentially connected in series to form an integrated structure. For example, as shown in FIG. 8, each second touch unit 23 may include a plurality of second touch electrodes 231 arranged at intervals in the second direction Y, and a plurality of conductive bridges 232. Every two adjacent second touch electrodes 231 are electrically connected through one conductive bridge 232. The conductive bridge 232 may be, for example, disposed on a side of the second touch electrode 231 away from the base 11. Here, as shown in FIGS. 8 to 10, the plurality of first touch units 22 and the plurality of second touch electrodes 231 in each second touch unit 23 may, for example, have the same material and be disposed in the same layer. It will be noted that the “same layer” mentioned herein refers to a layer structure formed by a film layer for forming specific patterns by a same film forming process and then by one patterning process using a same mask. Depending on the different specific patterns, the same patterning process may include several exposure, development or etching processes, and the specific patterns in the layer structure formed may be continuous or discontinuous, and these specific patterns may also be at different heights or have different thicknesses.
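The drive-and-sense sequence described above is the usual mutual-capacitance scan: pulse one driving channel (Tx) at a time, read every sensing channel (Rx), and flag intersections whose coupling capacitance dropped relative to an untouched baseline. The sketch below is an editor's illustration of that loop, not firmware from the disclosure; drive, sense, and the threshold value are hypothetical stand-ins.

def scan_touch(drive, sense, baseline, threshold=0.15):
    """Scan a mutual-capacitance grid and return touched (tx, rx) intersections.

    baseline[tx][rx] holds the untouched capacitance reading of each node."""
    touches = []
    for tx in range(len(baseline)):
        drive(tx)                          # pulse one driving channel (Tx)
        for rx in range(len(baseline[tx])):
            c = sense(rx)                  # read the attenuated signal on Rx
            if baseline[tx][rx] - c > threshold * baseline[tx][rx]:
                touches.append((tx, rx))   # a finger reduced the coupling here
    return touches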
With this same-layer arrangement, the plurality of first touch units 22 and the plurality of second touch electrodes 231 in each second touch unit 23 may be manufactured simultaneously in one patterning process, which is beneficial to simplify the manufacturing process of the display panel 100. The “integrated structure” mentioned herein refers to a layer structure in which the specific patterns formed are continuous and not broken. For example, the materials of the first touch unit 22 and the second touch electrode 231 may be conductive materials with high light transmittance. In this way, it may be possible to prevent the first touch unit 22 and the second touch electrode 231 from adversely affecting the light-emitting effect of the display panel 100. For example, the materials of the first touch unit 22 and the second touch electrode 231 may be indium tin oxide (ITO), indium zinc oxide (IZO) or indium gallium zinc oxide (IGZO). For example, the plurality of conductive pins 21, the plurality of touch signal lines 24, and the conductive bridges 232 may have the same material and be disposed in the same layer. In this way, the plurality of conductive pins 21, the plurality of touch signal lines 24 and the conductive bridges 232 may be manufactured simultaneously in one patterning process, which is beneficial to simplify the manufacturing process of the display panel 100. For example, the materials of the plurality of conductive pins 21, the plurality of touch signal lines 24, and the conductive bridges 232 may be metal conductive materials such as gold (Au), silver (Ag), copper (Cu), or aluminum (Al). Of course, the materials of the plurality of conductive pins 21, the plurality of touch signal lines 24, and the conductive bridges 232 may also be metal oxide conductive materials such as ITO; or, the materials of the plurality of conductive pins 21 may be metal oxide conductive materials such as ITO, while the materials of the plurality of touch signal lines 24 and the conductive bridges 232 may be the metal conductive materials. Here, the corresponding relationship between the plurality of conductive pins 21 and the plurality of touch signal lines 24 includes various types, which may be selected and set according to actual needs. For example, the plurality of conductive pins 21 and the plurality of touch signal lines 24 may be electrically connected in one-to-one correspondence. In this way, it is beneficial to avoid a situation of signal crosstalk. For another example, each conductive pin 21 may be electrically connected to multiple touch signal lines 24. For example, one first sub-conductive pin 211 may be electrically connected with two, three or four first sub-touch signal lines 241, and one second sub-conductive pin 212 may be electrically connected with two, three, or four second sub-touch signal lines 242. In this way, it is beneficial to reduce the number of conductive pins 21, increase the distance between two adjacent conductive pins 21, and avoid short-circuiting between two adjacent conductive pins 21. It will be noted that each conductive pin 21 and the touch signal line 24 electrically connected thereto may be of an integrated structure. In some embodiments, as shown in FIGS. 8 to 10, the display panel 100 further includes a protective layer 3 disposed on a side of the touch structure 2 away from the display substrate 1. In some examples, as shown in FIGS. 8 to 10, the protective layer 3 has an opening K. The opening K is located in the bonding region B1 and serves to expose the plurality of conductive pins 21.
That is, the protective layer 3 covers the plurality of first touch units 22, the plurality of second touch units 23, and the plurality of touch signal lines 24 in the touch structure 2, and further covers the ends of the conductive pins 21. In this way, the protective layer 3 may serve to protect the touch structure 2 and prevent the touch structure 2 from being damaged. In some examples, as shown in FIGS. 8, 11, and 12, the opening K includes a border K1 proximate to the display area A and extending in the arrangement direction of the plurality of conductive pins 21 (i.e., the second direction Y), and at least a portion of the border K1 has a curved shape. The opening K may have various shapes, which may be selected and set according to actual needs, as long as at least a portion of the border K1 of the opening K proximate to the display area A has the curved shape. In a case where the display panel 100 is applied in the touch display apparatus and the touch flexible circuit board is bonded to the plurality of conductive pins 21 through the opening K, most of the stress applied to the touch structure 2 by the edge of the touch flexible circuit board proximate to the display area A (the direction of this stress being perpendicular to the base 11 and away from the base 11) and the stress applied to the touch structure 2 by the protective layer 3 (the direction of this stress being perpendicular to the base 11 and toward the base 11) will be dispersed in a portion of the touch structure 2 corresponding to the vicinity of a tip of the border K1 (i.e., the portion of the border K1 closest to or farthest from a border of the bonding region B1), which is beneficial to reduce the stress to which the conductive pins 21 are subjected. Therefore, in the display panel 100 provided by some embodiments of the present disclosure, the protective layer 3 is disposed on the side of the touch structure 2 away from the display substrate 1 and provided with the opening K for exposing the plurality of conductive pins 21 of the touch structure 2 therein, and in the border K1 of the opening K that is proximate to the display area A and extends in the arrangement direction of the plurality of conductive pins 21, at least a portion is arranged in the curved shape. In this way, in the case where the display panel 100 is applied in the touch display apparatus and the plurality of conductive pins 21 are bonded to the touch flexible circuit board through the opening K, compared with arranging the border of the opening as a straight line in the above-mentioned implementation manner, it may be possible to ameliorate the dispersion of the stress applied to the touch structure 2 by using the tip of the border K1 (that is, the portion of the border K1 closest to or farthest from the border of the bonding region B1). That is, the stress applied to the touch structure 2 may be mainly dispersed in the portion corresponding to the vicinity of the tip of the border K1, which reduces the stress applied to other portions. In this way, it is beneficial to reduce the stress applied to the conductive pins 21 in the touch structure 2. Furthermore, in the process of bending the touch flexible circuit board or performing a high and low temperature shock environment test on the display panel 100, it may be possible to prevent the conductive pins 21 from being broken, to effectively avoid the failure of the sensing touch function of the display panel 100, and to improve the production yield of the display panel 100.
In addition, as the size of the frame area B of the display panel 100 decreases, the bending radius of the touch flexible circuit board will also decrease, so that the portion of the touch structure 2 corresponding to the border K1 of the opening K will be subjected to greater stress. In some embodiments of the present disclosure, at least a portion of the border K1 is arranged in the curved shape. In this way, it is beneficial to reduce the size of the frame area B of the display panel 100 while using the border K1 to effectively reduce the stress applied to the conductive pins 21. As a result, it is beneficial to achieve the narrow bezel design of the display panel 100. It will be noted that the border K1 of the opening K of the protective layer 3 may be arranged in various manners, which may be selected and set according to actual needs. The arrangement manner of the border K1 will be schematically described below in conjunction with the accompanying drawings. In some embodiments, as shown in FIGS. 11 and 12, the border K1 includes a first sub-border K11 located between any two adjacent conductive pins 21. The first sub-border K11 has a curved shape; that is, a portion of the border K1 located between any two adjacent conductive pins 21 has the curved shape. By arranging the first sub-border K11 located between any two adjacent conductive pins 21 in the border K1 to have the curved shape, in the case where the display panel 100 is applied in the touch display apparatus and the plurality of conductive pins 21 are bonded to the touch flexible circuit board through the opening K, it may be ensured that most of the stress applied to the touch structure 2 is dispersed at the region between the two adjacent conductive pins 21, which effectively reduces the stress applied to the conductive pins 21. In this way, in the process of bending the touch flexible circuit board or performing the high and low temperature shock environmental test on the display panel 100, it may be possible to further prevent the conductive pins 21 from being broken. As a result, it is beneficial to further improve the production yield of the display panel 100, and the narrow bezel design of the display panel 100 may be achieved. In some examples, as shown in FIGS. 11 and 12, a center of curvature of the first sub-border K11 is located on a side of the border K1 proximate to the display area A. That is, the first sub-border K11 may protrude in a direction in which the display area A points to the bonding region B1. In this way, on the basis of using the protective layer 3 to cover the ends of the conductive pins 21, the covering amount of the protective layer 3 in the bonding region B1 may also be ensured, which is beneficial to ensure a good protective effect of the protective layer 3 on the touch structure 2, and to avoid a situation in which the touch structure 2 (e.g., the touch signal lines 24 in the touch structure 2) is corroded by water vapor because the protective layer 3 does not form a good cover on the touch structure 2. In some examples, the curvature of each first sub-border K11 may be selected and set according to actual needs. For example, each first sub-border K11 has the same or approximately the same curvature. In this way, the shapes of all first sub-borders K11 may be consistent or approximately consistent.
Based on this, in the case where the display panel 100 is applied in the touch display apparatus and the plurality of conductive pins 21 are bonded to the touch flexible circuit board through the opening K, it is possible to make the stress applied to the portions of the touch structure 2 corresponding to all first sub-borders K11 equal or approximately equal. In this way, it is beneficial to improve the uniformity of the stress dispersion, and ensure the bonding effect between the touch flexible circuit board and the touch structure 2. Moreover, in the process of forming the protective layer 3, it is also beneficial to reduce the difficulty of manufacturing the protective layer 3. It will be noted that the curved shape includes various shapes. For example, the curved shape includes a circular arc shape, a wave shape, a partial arc shape, or a partial wave shape. In some examples, as shown in FIGS. 11 and 12, the first sub-border K11 may have, for example, the circular arc shape. The arc shape may be, for example, a semicircle shape, a superior arc shape, or an inferior arc shape. In this case, the size of the first sub-border K11 may be selected and set according to actual needs, as long as most of the stress applied to the touch structure 2 may be dispersed at the region between two adjacent conductive pins 21. For example, in the direction in which the plurality of conductive pins 21 are arranged (i.e., the second direction Y), a ratio of a dimension of each conductive pin 21 (i.e., a width of the conductive pin 21) to a radius of the first sub-border K11 is within a range of 1 to 4.67. Optionally, the width of the conductive pin 21 may be within a range of 0.075 mm to 0.15 mm, and the radius of the first sub-border K11 may be within a range of 0.032 mm to 0.075 mm. In this way, there may be a relatively suitable distance between two adjacent conductive pins 21. As a result, it may not only avoid the short-circuiting of two adjacent conductive pins 21, but also make the bonding region B1 have a small dimension in the second direction Y. For example, the width of the conductive pin 21 may be 0.075 mm, 0.08 mm, 0.093 mm, 0.11 mm, 0.13 mm, or 0.15 mm. Accordingly, the radius of the first sub-border K11 may be 0.032 mm, 0.040 mm, 0.053 mm, 0.061 mm, 0.069 mm, or 0.075 mm. In some embodiments, as shown in FIGS. 11 and 12, the border K1 may further include second sub-borders K12 overlapping with the conductive pins 21. Each second sub-border K12 has a straight-line shape; that is, a portion of the border K1 that overlaps with each conductive pin 21 has the straight-line shape. By arranging the shape of the second sub-border K12 overlapping with each conductive pin 21 in the border K1 to be a straight line, on the basis of using the first sub-border K11 to disperse most of the stress applied to the touch structure 2 at the region between every two adjacent conductive pins 21, it may be ensured that the rest of the stress is evenly dispersed, so as to avoid the stress being totally concentrated at the conductive pins 21. As a result, it may be possible to further ensure the improvement effect on the stress applied to the conductive pins 21, and to further avoid the situation in which the conductive pins 21 are broken, which is beneficial to further improve the production yield of the display panel 100 and to achieve the narrow bezel design of the display panel 100. In some examples, as shown in FIGS. 11 and 12, the lengths of the second sub-borders K12 are equal or approximately equal.
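Incidentally, the endpoints of the width and radius ranges quoted above reproduce the stated ratio bounds of about 1 to 4.67. The check below is an editor's illustration, not part of the disclosure.

# Conductive pin width W and first sub-border radius R, both in mm.
W_MIN, W_MAX = 0.075, 0.15
R_MIN, R_MAX = 0.032, 0.075

print(W_MIN / R_MAX)   # -> 1.0, the lower bound of the stated ratio range
print(W_MAX / R_MIN)   # -> ~4.69, approximately the stated upper bound of 4.67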
Keeping the lengths of the second sub-borders K12 equal may not only further reduce the difficulty of manufacturing the protective layer 3, but may also further ensure the uniformity of dispersion of the small part of the stress dispersed in the conductive pins 21, ensuring the improvement effect on the stress applied to the conductive pins 21. In some examples, as shown in FIG. 11, the first sub-borders K11 and the second sub-borders K12 are alternately arranged and connected in series in sequence. In this way, it is beneficial to reduce the complexity of the shape of the border K1, thereby reducing the difficulty of manufacturing the protective layer 3. In some embodiments, as shown in FIG. 12, the first sub-borders K11 and the second sub-borders K12 are alternately arranged. In this case, the border K1 may further include a plurality of third sub-borders K13, each third sub-border K13 connecting a respective first sub-border K11 and a second sub-border K12 that are adjacent. For example, the third sub-border K13 is located between two adjacent conductive pins 21. In some examples, as shown in FIG. 12, the third sub-border K13 has a curved shape, and a center of curvature of the third sub-border K13 is located on a side of the border K1 away from the display area A. That is, the third sub-border K13 may protrude in a direction in which the bonding region B1 points to the display area A. By arranging the third sub-border K13 between the first sub-border K11 and the second sub-border K12, it may be possible to make the line between the first sub-border K11 and the second sub-border K12 smoothly transitioned, which avoids forming a tip that would adversely affect the dispersion of the stress applied to the touch structure 2. In some examples, as shown in FIG. 12, the third sub-border K13 may have a circular arc shape. The circular arc shape may be, for example, a semicircle shape, a superior arc shape, or an inferior arc shape. In some examples, a radius of the third sub-border K13 may be within a range of 0.025 mm to 0.05 mm. In this way, it may be possible to ensure that the line between the first sub-border K11 and the second sub-border K12 is smoothly transitioned. For example, the radius of the third sub-border K13 may be 0.025 mm, 0.03 mm, 0.0441 mm, 0.046 mm, or 0.05 mm. In some embodiments, as shown in FIGS. 9 and 10, the display panel 100 may further include: a base 4 disposed on a side of the touch structure 2 proximate to the display substrate 1, and a connection layer 5 connecting the display substrate 1 and the base 4. This means that the display panel 100 may be an on-cell touch display panel. In some examples, the base 4 and the connection layer 5 may be formed of materials with high light transmittance, so as not to impede normal light emission of the display panel 100. For example, the base 4 may be formed of cyclo olefin polymer (COP). For example, the connection layer 5 may be an optically clear adhesive (OCA). In this case, the base 4 and the display substrate 1 may be bonded together using the OCA. Some embodiments of the present disclosure provide a display apparatus 1000. As shown in FIGS. 14 to 16, the display apparatus 1000 includes: the display panel 100 and a flexible circuit board 200 bonded to the plurality of conductive pins 21 of the display panel 100. A portion of the flexible circuit board 200 is disposed on a non-light-exit side of the display panel 100 after being bent. In some embodiments, the flexible circuit board 200 may have the same structure and function as the above-mentioned touch flexible circuit board.
In some examples, the flexible circuit board 200 may also be arranged corresponding to a plurality of bonding regions B1. The display apparatus 1000 provided by some embodiments of the present disclosure has the same display panel 100 as in some of the above embodiments, and the beneficial effects that may be achieved are the same as the beneficial effects that can be achieved by the display panel 100, which will not be repeated here. In some embodiments, as shown in FIGS. 15 and 16, the flexible circuit board 200 includes a plurality of gold fingers 6 bonded to the plurality of conductive pins 21. The plurality of conductive pins 21 and the plurality of gold fingers 6 may be arranged in one-to-one correspondence. In some examples, as shown in FIG. 14, in a case where the flexible circuit board 200 is bonded to the plurality of conductive pins 21, a minimum distance between a border 61 of the plurality of gold fingers 6, which is parallel to a direction in which the plurality of gold fingers are arranged (i.e., the second direction Y) and proximate to the display area A of the display panel 100, and the border K1 of the opening K of the protective layer 3 of the display panel 100 is within a range of 0 mm to 0.30 mm (i.e., 0.15 mm ± 0.15 mm). For example, the minimum distance may be a distance between the border 61 of the gold fingers 6 and the second sub-border K12 of the border K1. By setting the distance between the border 61 of the gold fingers 6 and the second sub-border K12 of the border K1 in this way, it may be possible to ensure that the plurality of gold fingers 6 have a large directly facing area with the corresponding conductive pins 21 within an error range of the bonding process, which may ensure that the plurality of gold fingers 6 have a large conduction area to achieve a good signal transmission effect between the gold fingers 6 and the conductive pins 21. In some embodiments, as shown in FIGS. 15 and 16, the display apparatus 1000 may further include an anisotropic conductive adhesive (e.g., an anisotropic conductive film (ACF)) 7 disposed between the plurality of conductive pins 21 and the plurality of gold fingers 6. The ACF 7 is located in the opening K of the protective layer 3. In a process of sensing touch, the flexible circuit board 200 may transmit the driving signal to the first touch unit 22 through the gold finger 6, the ACF 7, and the conductive pin 21 in sequence, and receive the attenuated capacitance signal through the conductive pin 21, the ACF 7, and the gold finger 6 in sequence. In some embodiments, as shown in FIG. 16, the display apparatus 1000 may further include an optical clear film (OCF) 300 disposed on a side of the protective layer 3 away from the display substrate 1. The OCF 300 has a good explosion-proof (anti-shatter) function. Here, the OCF 300 has a high light transmittance, which may avoid adversely affecting the light emission of the display apparatus 1000. In some embodiments, the display apparatus 1000 is any product or component having a display function, such as a mobile phone, a tablet computer, a television, a display, a notebook computer, a digital frame, or a navigator. Some embodiments of the present disclosure provide a method for manufacturing the display panel. As shown in FIG. 13, the manufacturing method includes S100 to S400. In S100, a display substrate 1 is provided. The display substrate 1 has a display area A and a frame area B located on a side of the display area A, and the frame area B includes a bonding region B1.
Here, for the structure of the display substrate 1, reference may be made to the schematic description in some of the above embodiments, which will not be repeated here. In S200, a touch structure 2 is formed on a light-exit side of the display substrate 1. The touch structure 2 includes a plurality of conductive pins 21 located in the bonding region B1. In some examples, in S200, forming the touch structure 2 on the display substrate 1 may include, for example, S210 to S270. In S210, a base 4 is provided. For example, a division of the display area A, the frame area B, and the bonding region B1 in the display substrate 1 is the same as that of the base 4. A material of the base 4 may be the same as the material provided in some of the above embodiments. In S220, a first conductive film is formed on a side of the base 4. For example, the first conductive film is located in both the display area A and the frame area B. In S230, a portion of the first conductive film located in the display area A is patterned to form a plurality of first touch units 22 and a plurality of second touch electrodes 231 of each second touch unit 23. In S240, an insulating layer is formed on a side of the plurality of first touch units 22 and the plurality of second touch electrodes 231 of each second touch unit 23 away from the base 4. The insulating layer has a plurality of vias. In S250, a second conductive film is formed on a side of the insulating layer away from the base 4. For example, the second conductive film is located in the display area A and the frame area B. In S260, a portion of the second conductive film located in the display area A is patterned to form a plurality of conductive bridges 232; simultaneously, a portion of the second conductive film located in the frame area B and a portion of the first conductive film located in the frame area B are processed, so as to form a plurality of touch signal lines 24 located in the frame area B and a plurality of conductive pins 21 located in the bonding region B1, thereby obtaining the touch structure 2. In S270, a connection layer 5 is provided to connect the base 4 and the display substrate 1 together, so that the touch structure 2 is located on a side of the base 4 away from the display substrate 1. In S300, a protective film is formed on a side of the touch structure 2 away from the display substrate 1. For example, a material of the protective film may be a resin material. For example, the protective film may be formed on the side of the touch structure 2 away from the display substrate 1 by using a coating process. The protective film is located in the display area A and the frame area B to cover the touch structure 2. In S400, the protective film is patterned to form a protective layer 3. The protective layer 3 has an opening K. The opening K is located in the bonding region B1 and serves to expose the plurality of conductive pins 21. The opening K includes a border K1 proximate to the display area A and extending in a direction in which the plurality of conductive pins 21 are arranged, and at least a portion of the border K1 has a curved shape. In some examples, in S400, patterning the protective film to form the protective layer 3 may include, for example, S410 to S440. In S410, photoresist is coated on a side of the protective film away from the display substrate 1 to form a photoresist layer.
In S420, a mask is disposed above a side of the photoresist layer away from the display substrate 1; the photoresist layer is exposed, and then the mask is removed; afterwards, the exposed photoresist layer is developed, and a portion corresponding to the opening to be formed is removed to obtain a patterned photoresist layer. In S440, the protective film is patterned by using the patterned photoresist layer as a mask, and a portion of the protective film corresponding to the opening to be formed is removed to form the opening and to obtain the protective layer 3. The beneficial effects that may be achieved by the method for manufacturing the display panel provided by some embodiments of the present disclosure are the same as the beneficial effects that may be achieved by the display panel 100 provided in some of the embodiments, which will not be repeated here. The foregoing descriptions are merely specific implementations of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any person skilled in the art could conceive of changes or replacements within the technical scope of the present disclosure, which shall be included in the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
DETAILED DESCRIPTION

Exemplary aspects (embodiments) for embodying the present disclosure are described below in greater detail with reference to the accompanying drawings. The contents described in the embodiments are not intended to limit the present disclosure. Components described below include components easily conceivable by those skilled in the art and components substantially identical therewith. Furthermore, the components described below may be appropriately combined. What is disclosed herein is given by way of example only, and appropriate modifications made without departing from the spirit of the present disclosure and easily conceivable by those skilled in the art naturally fall within the scope of the disclosure. To make the explanation more specific, the drawings may possibly illustrate the width, the thickness, the shape, and other elements of each component more schematically than the actual aspect. These elements, however, are given by way of example only and are not intended to limit interpretation of the present disclosure. In the present specification and the drawings, components similar to those previously described with reference to previous drawings are denoted by like reference numerals, and detailed explanation thereof may be appropriately omitted. To describe an aspect where a first structure is disposed on a second structure in the present specification and the claims, the term "on" includes both of the following cases unless otherwise noted: a case where the first structure is disposed directly on the second structure in contact with the second structure, and a case where the first structure is disposed on the second structure with another structure interposed therebetween.

First Embodiment

FIG. 1 is a plan view of a display device according to a first embodiment. FIG. 2 is a sectional view along line II-II′ of FIG. 1. As illustrated in FIGS. 1 and 2, a display device 1 according to the present embodiment has a detection region FA and a frame region GA provided outside the detection region FA. The detection region FA is a region for detecting recesses and protrusions on the surface of an object to be detected, such as a finger Fg, in contact with or in proximity to a cover member 80. In the display device 1 according to the present embodiment, a display region of a display panel 30 is identical or substantially identical with the detection region FA of a detecting device 100. As a result, the display device 1 can detect a fingerprint on the whole display region. The shape of the display region and the detection region FA is a rectangle, for example. As illustrated in FIG. 2, the display device 1 includes the display panel 30 and the detecting device 100. The detecting device 100 includes a sensor 10 and the cover member 80. The cover member 80 is a plate-like member having a first surface 80a and a second surface 80b opposite to the first surface 80a. The first surface 80a of the cover member 80 serves not only as a detection surface for detecting recesses and protrusions on the surface of the finger Fg or the like in contact with or in proximity to the cover member 80 but also as a display surface on which an observer visually recognizes an image on the display panel 30. The sensor 10 and the display panel 30 are provided on the second surface 80b of the cover member 80. The cover member 80 is a member for protecting the sensor 10 and the display panel 30, and covers the sensor 10 and the display panel 30. The cover member 80 is a glass or resin substrate, for example.
The cover member 80, the sensor 10, and the display panel 30 are not limited to the configuration having a rectangular shape in planar view and may have other shapes, such as circular and elliptic shapes and an irregular shape obtained by removing part of these outer shapes. The cover member 80 is not limited to a plate shape. If the display region and the detection region FA have a curved surface, or if the frame region GA has a curved surface curved toward the display panel 30, for example, the cover member 80 may have a curved surface. In this case, the display device 1 is a curved surface display with a fingerprint detection function and can detect a fingerprint on the curved surface of the curved surface display. The detecting device 100 is not limited to the configuration of being stacked on the display panel 30 and may be configured as a single fingerprint detecting device without providing the display panel 30, which will be described later. In the present specification, "planar view" refers to a positional relation viewed from a direction perpendicular to a first surface 101a of a substrate 101 illustrated in FIG. 3, which will be described later. The direction perpendicular to the first surface 101a is the "normal direction (third direction Dz) of the substrate 101". As illustrated in FIGS. 1 and 2, the second surface 80b of the cover member 80 is provided with a decorative layer 81 in the frame region GA. The decorative layer 81 is a colored layer having light transmittance lower than that of the cover member 80. The decorative layer 81 can prevent wiring, circuits, and other components provided overlapping the frame region GA from being visually recognized by the observer. While the decorative layer 81 is provided on the second surface 80b in the example illustrated in FIG. 2, it may be provided on the first surface 80a. The decorative layer 81 is not limited to a single layer and may be composed of a plurality of layers. The sensor 10 is a detector for detecting recesses and protrusions on the surface of the finger Fg or the like in contact with or in proximity to the first surface 80a of the cover member 80. As illustrated in FIG. 2, the sensor 10 is provided between the cover member 80 and the display panel 30. The sensor 10 overlaps the detection region FA and part of the frame region GA when viewed from a direction perpendicular to the first surface 80a (normal direction). The sensor 10 is coupled to a wiring substrate 76 in the frame region GA. The wiring substrate 76 is provided with a detection IC (not illustrated) for controlling detection operations of the sensor 10. The wiring substrate 76 is a flexible printed circuit board, for example. A first surface of the sensor 10 is bonded to the second surface 80b of the cover member 80 with an adhesive layer 71 interposed therebetween. A second surface of the sensor 10 is bonded to a polarizing plate 35 of the display panel 30 with an adhesive layer 72 interposed therebetween. The adhesive layers 71 and 72 are made of translucent adhesive or resin and allow visible light to pass therethrough. The display panel 30 includes a pixel substrate 30A, a counter substrate 30B, a polarizing plate 34, and the polarizing plate 35. The polarizing plate 34 is provided under the pixel substrate 30A. The polarizing plate 35 is provided on the counter substrate 30B. The pixel substrate 30A is coupled to a display IC (not illustrated) for controlling display operations of the display panel 30 through a wiring substrate 75.
The display panel 30 according to the present embodiment is a liquid crystal panel including liquid crystal display elements serving as a display functional layer. The display panel 30 is not limited thereto and may be an organic light-emitting diode (OLED) display panel including OLEDs as electroluminescence (EL) elements for display elements, or a display panel including electrophoretic elements for display elements, for example. The detection IC and the display IC described above may be provided on a control substrate outside the module. Alternatively, the detection IC may be provided on the substrate 101 (refer to FIG. 3) of the sensor 10. The display IC may be provided on the pixel substrate 30A. The following describes the configuration of the detecting device 100 in greater detail. FIG. 3 is a plan view illustrating an example of the configuration of the detecting device according to the first embodiment. As illustrated in FIG. 3, the detecting device 100 includes the substrate 101 and the sensor 10 provided on the first surface 101a of the substrate 101. The sensor 10 includes a plurality of drive electrodes Tx and a plurality of detection electrodes Rx. The substrate 101 is a translucent glass substrate that can allow visible light to pass therethrough. The substrate 101 may be a translucent resin substrate or resin film made of resin, such as polyimide. The sensor 10 is a translucent sensor. The drive electrodes Tx and the detection electrodes Rx are provided in the detection region FA. The drive electrodes Tx are disposed side by side in a first direction Dx. The drive electrodes Tx extend in a second direction Dy. The detection electrodes Rx are disposed side by side in the second direction Dy. The detection electrodes Rx extend in the first direction Dx. As described above, the detection electrodes Rx extend in a direction intersecting the extending direction of the drive electrodes Tx. Each detection electrode Rx is coupled to the wiring substrate 76 provided in the frame region GA of the substrate 101 through frame wiring (not illustrated) and a detection electrode selection circuit 14. The wiring substrate 76 is coupled to a side of the substrate 101 provided with the detection electrode selection circuits 14. The first direction Dx is one direction in a plane parallel to the substrate 101 and is a direction parallel to one side of the detection region FA, for example. The second direction Dy is one direction in the plane parallel to the substrate 101 and is a direction orthogonal to the first direction Dx. The second direction Dy may intersect the first direction Dx without being orthogonal thereto. The third direction Dz is a direction orthogonal to the first direction Dx and the second direction Dy, and is a direction vertical to the first surface 101a of the substrate 101. The drive electrodes Tx are formed in a rectangular shape, and the detection electrodes Rx are formed in a zigzag-line shape. The configuration is not limited thereto, and the shape and the arrangement pitch of the drive electrodes Tx and the detection electrodes Rx can be appropriately changed. The drive electrodes Tx are made of a translucent conductive material, such as indium tin oxide (ITO). The detection electrodes Rx are made of a metal material, such as aluminum or an aluminum alloy. Alternatively, the drive electrodes Tx may be made of a metal material, and the detection electrodes Rx may be formed of ITO. The drive electrodes Tx and the detection electrodes Rx may be made of the same material.
An insulating layer (not illustrated) is interposed between the drive electrodes Tx and the detection electrodes Rx. Capacitance is formed at the intersections of the detection electrodes Rx and the drive electrodes Tx. The sensor 10 performs touch detection and fingerprint detection based on a change in capacitance between the detection electrodes Rx and the drive electrodes Tx. The sensor 10 performs a fingerprint detection operation of a mutual capacitive system by code division multiplexing drive (hereinafter referred to as CDM drive). Specifically, a drive electrode selection circuit 15 simultaneously selects a plurality of drive electrodes Tx. The drive electrode selection circuit 15 supplies drive signals VTP, the phase of which is determined based on a predetermined code, to the selected drive electrodes Tx. The detection electrodes Rx output detection signals Vdet corresponding to a change in capacitance due to the recesses and protrusions on the surface of a finger or the like in contact with or in proximity to the sensor 10. As a result, the sensor 10 performs fingerprint detection. When the drive electrode selection circuit 15 performs touch detection, it may perform touch detection by sequentially driving a plurality of drive electrodes Tx in a time-division manner or by sequentially selecting and driving each drive electrode block including a plurality of drive electrodes Tx. While various circuits, such as the detection electrode selection circuits 14 and the drive electrode selection circuits 15, are provided in the frame region GA of the substrate 101 in FIG. 3, this configuration is given by way of example only. At least part of the various circuits may be included in the detection IC mounted on the wiring substrate 76. FIG. 4 is a block diagram illustrating an example of the configuration of the detecting device according to the first embodiment. As illustrated in FIG. 4, the detecting device 100 includes the sensor 10, a detection controller 11, the drive electrode selection circuit 15, the detection electrode selection circuit 14, and a detection circuit 40. The detection controller 11 is a circuit that controls detection operations of the sensor 10. The drive electrode selection circuit 15 is a circuit that supplies the drive signals VTP for detection to the drive electrodes Tx of the sensor 10 based on control signals supplied from the detection controller 11. The detection electrode selection circuit 14 selects the detection electrodes Rx of the sensor 10 and couples them to the detection circuit 40 based on control signals supplied from the detection controller 11. The detection circuit 40 is a circuit that detects the shape of a fingerprint by detecting the recesses and protrusions on the surface of a finger or the like in contact with or in proximity to the first surface 80a of the cover member 80, based on the control signals supplied from the detection controller 11 and the detection signals Vdet output from the detection electrodes Rx. The detection circuit 40 includes a detection signal amplifier 42, an A/D converter 43, a signal processor 44, a coordinate extractor 45, a synthesizer 46, and a detection timing controller 47. The detection timing controller 47 controls the detection signal amplifier 42, the A/D converter 43, the signal processor 44, the coordinate extractor 45, and the synthesizer 46 such that they operate synchronously with one another based on the control signals supplied from the detection controller 11.
The detection signals Vdet are supplied from the sensor 10 to the detection signal amplifier 42 of the detection circuit 40. The detection signal amplifier 42 amplifies the detection signals Vdet. The A/D converter 43 converts analog signals output from the detection signal amplifier 42 into digital signals. The signal processor 44 is a logic circuit that detects whether a finger is in contact with or in proximity to the sensor 10 based on the output signals from the A/D converter 43. The signal processor 44 performs processing of extracting a signal (absolute value |ΔV|) of the difference between the detection signals due to the finger. The signal processor 44 compares the absolute value |ΔV| with a predetermined threshold voltage. If the absolute value |ΔV| is lower than the threshold voltage, the signal processor 44 determines that the object to be detected is in a non-contact state or is sufficiently far away from the detection position to be regarded as being in a non-contact state. By contrast, if the absolute value |ΔV| is equal to or higher than the threshold voltage, the signal processor 44 determines that the object to be detected is in a contact state or is in sufficient proximity to the detection position to be regarded as being substantially in a contact state. As described above, the detection circuit 40 can detect contact or proximity of the object to be detected. More specifically, the signal processor 44 calculates a plurality of detection signals Vdet supplied from the sensor 10 based on a predetermined code, and performs decoding on the calculated detection signals Vdet based on the predetermined code in CDM drive. An example of the operations in CDM drive will be described later in greater detail. The coordinate extractor 45 is a logic circuit that calculates, when the signal processor 44 detects contact or proximity of a finger, the detection coordinates of the finger. The coordinate extractor 45 outputs the detection coordinates to the synthesizer 46. The synthesizer 46 combines the detection signals Vdet output from the sensor 10, thereby generating two-dimensional information indicating the object to be detected in contact with or in proximity to the sensor 10. The synthesizer 46 outputs the two-dimensional information as output Vout from the detection circuit 40. Alternatively, the synthesizer 46 may generate an image based on the two-dimensional information and output the image information as the output Vout. The detection IC described above functions as the detection circuit 40 illustrated in FIG. 4. Part of the functions of the detection circuit 40 may be included in the display IC described above or be provided as functions of an external micro-processing unit (MPU). The following describes CDM drive performed by the detecting device 100. FIG. 5 is a diagram for explaining an example of an operation in code division multiplexing drive. To simplify the explanation, FIG. 5 illustrates an example of the operation in CDM drive performed on four drive electrodes Tx-1, Tx-2, Tx-3, and Tx-4. As illustrated in FIG. 5, the drive electrode selection circuit 15 (refer to FIG. 4) simultaneously selects the four drive electrodes Tx-1, Tx-2, Tx-3, and Tx-4 of a drive electrode block BK. The drive electrode selection circuit 15 supplies the drive signals VTP, the phase of which is determined based on a predetermined code, to the drive electrodes Tx. The predetermined code is defined by a square matrix H in Expression (1), for example.
The order of the square matrix H is four, which is equal to the number of the drive electrodes Tx-1, Tx-2, Tx-3, and Tx-4. The predetermined code is based on a square matrix H the elements of which are either "1" or "−1" (or either "1" or "0") and any two different rows of which are orthogonal to each other. The predetermined code is based on a Hadamard matrix, for example. The drive electrode selection circuit 15 applies the drive signals VTP such that the phase of AC rectangular waves corresponding to the element "1" is opposite to the phase of AC rectangular waves corresponding to the element "−1", based on the square matrix H in Expression (1). In other words, the element "−1" of the square matrix H in Expression (1) is an element for supplying the drive signal VTP determined to have a phase different from that of the element "1".

H = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix} \qquad (1)

FIG. 5 illustrates a case where an external proximity object CQ, such as the finger Fg, is present on a portion at which the drive electrode Tx-2 out of the drive electrodes Tx-1, Tx-2, Tx-3, and Tx-4 intersects the detection electrode Rx, for example. A voltage difference due to the external proximity object CQ is generated by mutual induction (the voltage difference is 20%, for example, so that the contribution of the intersection is 0.8 instead of 1) at the intersection portion of the drive electrode Tx-2 and the detection electrode Rx to which the external proximity object CQ is in proximity. In the example illustrated in FIG. 5, a signal obtained by integrating the detection signal corresponding to the element "1" and the detection signal corresponding to the element "−1" is output as the detection signal Vdet from the detection electrode Rx. In a first period of time, the drive electrode selection circuit 15 supplies positive-polarity drive signals VTP to the drive electrodes Tx-1, Tx-2, Tx-3, and Tx-4 corresponding to the elements (1, 1, 1, 1) in the first row of the square matrix H (first code). The detection signal Vdet output from the detection electrode Rx and detected by the detection circuit 40 in the first period of time is calculated by: (1) + (0.8) + (1) + (1) = 3.8. In a second period of time, the drive electrode selection circuit 15 supplies positive-polarity drive signals VTP to the drive electrodes Tx-1 and Tx-3 and supplies negative-polarity drive signals VTP to the drive electrodes Tx-2 and Tx-4, corresponding to the elements (1, −1, 1, −1) in the second row of the square matrix H (second code). The detection signal Vdet output from the detection electrode Rx and detected by the detection circuit 40 in the second period of time is calculated by: (1) + (−0.8) + (1) + (−1) = 0.2. Similarly, the detection signal Vdet in a third period of time (third code) is calculated by: (1) + (0.8) + (−1) + (−1) = −0.2. The detection signal Vdet in a fourth period of time (fourth code) is calculated by: (1) + (−0.8) + (−1) + (1) = 0.2. The signal processor 44 multiplies the detection signals Vdet (Vdet = (3.8, 0.2, −0.2, 0.2)) output from the detection electrode Rx and detected in each period of time by the square matrix H in Expression (1), thereby performing decoding. As a result, the signal processor 44 derives "4.0, 3.2, 4.0, 4.0" as a decoded signal. The detection circuit 40 can detect that the external proximity object CQ, such as the finger Fg, is in contact with the position of the drive electrode Tx-2 in relation to the detection electrode Rx based on the decoded signal.
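As an editorial illustration (not part of the patent disclosure; the array and variable names are hypothetical), the worked example above can be reproduced with NumPy. Because H·H = 4I for the order-4 Hadamard matrix of Expression (1), decoding recovers four times each intersection's contribution, and the reduced value 3.2 locates the finger at the drive electrode Tx-2:

    # Editorial sketch: reproduce the CDM encode/decode example above.
    import numpy as np

    H = np.array([[1,  1,  1,  1],
                  [1, -1,  1, -1],
                  [1,  1, -1, -1],
                  [1, -1, -1,  1]])

    # Per-intersection contributions on one detection electrode Rx: the finger
    # over Tx-2 attenuates its contribution from 1.0 to 0.8 (a 20% difference).
    contrib = np.array([1.0, 0.8, 1.0, 1.0])

    vdet = H @ contrib   # detection signals over the four code periods
    decoded = H @ vdet   # decoding: H @ H == 4 * I, so decoded == 4 * contrib

    print(vdet)          # [ 3.8  0.2 -0.2  0.2]
    print(decoded)       # [ 4.   3.2  4.   4. ] -> the dip at index 1 is Tx-2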
As described above, the detecting device 100 can detect whether the external proximity object CQ is in contact with the intersections of the drive electrodes Tx and the detection electrodes Rx. By making the pitch between the intersections of the drive electrodes Tx and the detection electrodes Rx as small as possible, the detecting device 100 can detect the recesses and protrusions (e.g., a fingerprint) on the surface of the external proximity object CQ. The coordinate extractor 45 outputs the touch panel coordinates or the decoded signal as the output Vout. While FIG. 5 illustrates the example of the operation in CDM drive performed on four drive electrodes Tx, CDM drive may be performed on five or more drive electrodes Tx. In this case, the predetermined code is defined by a square matrix the order of which corresponds to the number of drive electrodes Tx. The order of the matrix included in the predetermined code is not necessarily equal to the number of drive electrodes Tx included in one drive electrode block BK. FIG. 6 is a diagram for explaining a coupling configuration of a plurality of drive electrodes. To simplify the explanation, FIG. 6 illustrates four drive electrodes Tx-1, Tx-2, Tx-3, and Tx-4 and four detection electrodes Rx-1, Rx-2, Rx-3, and Rx-4. As illustrated in FIG. 6, the detecting device 100 further includes a drive signal supply circuit 20, first switch elements SW1H and SW1L, second switch elements SW2, and wiring 21, 22, and 23. The drive signal supply circuit 20 is a circuit that supplies the drive signals VTP to the drive electrodes Tx. The drive signal supply circuit 20 may be included in the detection IC described above or be provided on the substrate 101. The drive signal supply circuit 20 includes a first voltage signal supplier 20a and a second voltage signal supplier 20b. The first voltage signal supplier 20a is a circuit that supplies the drive electrodes Tx with first voltage signals VH having positive polarity corresponding to the element "1" of the square matrix H. The second voltage signal supplier 20b is a circuit that supplies the drive electrodes Tx with second voltage signals VL having negative polarity corresponding to the element "−1" of the square matrix H. The first voltage signal VH and the second voltage signal VL are voltage signals having different electric potentials. The first voltage signal VH and the second voltage signal VL are alternately repeated corresponding to the predetermined code, thereby forming the drive signal VTP. The first switch elements SW1H and SW1L and the second switch elements SW2 are included in the drive electrode selection circuit 15 (refer to FIG. 4). The second switch elements SW2 and the wiring 23 are disposed at a first end of the drive electrodes Tx in the extending direction (second direction Dy). The first switch elements SW1H and SW1L and the wiring 21 and 22 are disposed at a second end of the drive electrodes Tx in the extending direction (second direction Dy). In the following description, the first end of the drive electrodes Tx in the extending direction (second direction Dy) is referred to as the "left end", and the second end is referred to as the "right end", with reference to FIG. 6 and other drawings. The first switch elements SW1H and SW1L may be simply referred to as the first switch elements SW1 when they need not be distinguished from each other. First ends of the first switch elements SW1H and SW1L are coupled to the right end of one corresponding drive electrode Tx.
A second end of each first switch element SW1H is coupled to the wiring 21. A second end of each first switch element SW1L is coupled to the wiring 22. When the first switch element SW1H is turned on (coupled state), the drive signal supply circuit 20 supplies the first voltage signals VH to the corresponding drive electrode Tx (the drive electrodes Tx-1 and Tx-3 in FIG. 6) through the wiring 21 and the first switch element SW1H. When the first switch element SW1L is turned on (coupled state), the drive signal supply circuit 20 supplies the second voltage signals VL to the corresponding drive electrode Tx (the drive electrodes Tx-2 and Tx-4 in FIG. 6) through the wiring 22 and the first switch element SW1L. To facilitate the reader's understanding, the drive electrodes Tx supplied with the first voltage signals VH and the drive electrodes Tx at an intermediate potential VI (refer to FIG. 9) are hatched to be distinguished from the drive electrodes Tx supplied with the second voltage signals VL in FIG. 6 and other drawings. The first switch element SW1H and the first switch element SW1L perform operations opposite to each other. When the first switch element SW1H is turned on, the first switch element SW1L is turned off (decoupled state). When the first switch element SW1H is turned off, the first switch element SW1L is turned on. As described above, the drive signals VTP (the first voltage signals VH or the second voltage signals VL) corresponding to the predetermined code are supplied by the operations of the first switch elements SW1H and SW1L. Let us focus on the drive electrodes Tx-1 and Tx-2 disposed side by side out of the drive electrodes Tx. In the example illustrated in FIG. 6, the drive signal supply circuit 20 supplies the first voltage signals VH to one of the drive electrode Tx-1 (first drive electrode) and the drive electrode Tx-2 (second drive electrode). The drive signal supply circuit 20 supplies the second voltage signals VL having an electric potential different from that of the first voltage signals VH to the other of the drive electrode Tx-1 (first drive electrode) and the drive electrode Tx-2 (second drive electrode). First ends of the second switch elements SW2 are coupled to the common wiring 23. Second ends of the second switch elements SW2 are coupled to the left ends of the respective drive electrodes Tx. When the second switch elements SW2 are turned on, the drive electrodes Tx are coupled to one another through the wiring 23 and the second switch elements SW2. In other words, when the second switch elements SW2 are turned on, the drive electrodes Tx are short-circuited together. The second switch elements SW2 switch between coupling and decoupling of at least two drive electrodes Tx (e.g., the drive electrodes Tx-1 and Tx-2) supplied with the first voltage signals VH and the second voltage signals VL having different electric potentials. FIG. 7 is a timing waveform chart for explaining a method for driving the drive electrodes. FIG. 8 is a timing waveform chart for explaining an operation of switching the first switch elements and the second switch elements. FIG. 9 is a diagram for explaining the method for driving the drive electrodes in a first period, a second period, and a third period. FIGS. 7 to 9 illustrate an example where code inversion drive is performed in one code period (a period for performing drive corresponding to the elements in a predetermined row of the square matrix H). As illustrated in FIG. 7, for example, the detecting device 100 has a first period TSp, a second period TSi, and a third period TSm.
The first period TSp is a period for supplying the drive electrodes Tx with the drive signals VTP, the phase of which is determined based on the predetermined code. The third period TSm is a code inversion period for supplying the drive electrodes Tx with the drive signals VTP, the phase of which is determined based on a code obtained by inverting the polarity of the predetermined code. The second period TSi is a transition period provided between the first period TSp and the third period TSm. As illustrated in FIG. 7, the periods are repeatedly arranged in the order of the first period TSp, the second period TSi, the third period TSm, the second period TSi, the first period TSp, and so on. In the first period TSp, as illustrated in FIGS. 7 and 9, the drive signal supply circuit 20 supplies the drive electrodes Tx with the drive signals VTP having a phase corresponding to the elements (1, −1, 1, −1) in the second row of the square matrix H. Specifically, the drive electrodes Tx-1 and Tx-3 are supplied with the first voltage signals VH, and the drive electrodes Tx-2 and Tx-4 are supplied with the second voltage signals VL. In the first period TSp, one of the first switch elements SW1H and SW1L is turned on, and the other is turned off, for each of the drive electrodes Tx based on the elements (1, −1, 1, −1). In the first period TSp, all the second switch elements SW2 are turned off, and the left ends of the drive electrodes Tx are decoupled from one another. In the second period TSi, all the first switch elements SW1 are turned off, and the drive electrodes Tx are decoupled from the drive signal supply circuit 20. In other words, when the period is switched from the first period TSp to the second period TSi, supplying the first voltage signals VH to the drive electrodes Tx-1 and Tx-3 is stopped, and supplying the second voltage signals VL to the drive electrodes Tx-2 and Tx-4 is stopped. In addition to turning off the first switch elements SW1, the drive signal supply circuit 20 may also stop supplying electric potential to the wiring 21 and 22. In the second period TSi, all the second switch elements SW2 are turned on, and the drive electrodes Tx are coupled to one another through the wiring 23 and the second switch elements SW2. More specifically, as illustrated in FIG. 8, all the first switch elements SW1 are turned off at time t1, and supplying the drive signals VTP (the first voltage signals VH and the second voltage signals VL) to the drive electrodes Tx is stopped. After a predetermined period of time has elapsed, all the second switch elements SW2 are turned on at time t2, and the left ends of the drive electrodes Tx are coupled to one another. As described above, the drive electrodes Tx-1 and Tx-3 supplied with the first voltage signals VH and the drive electrodes Tx-2 and Tx-4 supplied with the second voltage signals VL are coupled. As a result, the electric potential of the drive electrodes Tx is the intermediate potential VI in the second period TSi. The intermediate potential VI is an electric potential between the first voltage signal VH and the second voltage signal VL and is ideally expressed by: VI = (VH + VL)/2. In the third period TSm, code inversion drive is performed, and the drive electrodes Tx are driven based on the elements (−1, 1, −1, 1) obtained by inverting the polarity of the elements (1, −1, 1, −1) in the second row of the square matrix H described above. In other words, the drive electrodes Tx-1 and Tx-3 are supplied with the second voltage signals VL, and the drive electrodes Tx-2 and Tx-4 are supplied with the first voltage signals VH.
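As an editorial aid (not part of the patent disclosure; the voltage and capacitance values are hypothetical), the following sketch models the second period TSi described above as charge sharing between equal electrode capacitances, which yields VI = (VH + VL)/2 when the numbers of electrodes at VH and at VL are equal, and it also estimates why stepping through VI reduces dissipation:

    # Editorial sketch: the intermediate potential VI by charge sharing, and an
    # idealized dissipation comparison. All numeric values are hypothetical.
    VH, VL = 3.0, 0.0   # first/second voltage signal levels (volts)
    C = 1e-12           # per-electrode capacitance (farads)

    def shared_potential(potentials):
        """Potential after shorting equal-capacitance electrodes together:
        charge is conserved, so the result is the average potential."""
        return sum(potentials) / len(potentials)

    # Code row (1, -1, 1, -1): Tx-1 and Tx-3 at VH, Tx-2 and Tx-4 at VL.
    vi = shared_potential([VH, VL, VH, VL])
    print(vi)  # 1.5 == (VH + VL) / 2; unequal counts give a weighted average

    # Charging a capacitance through a resistive switch dissipates C*dV^2/2,
    # independent of the switch resistance.
    dv = VH - VL
    e_direct = 0.5 * C * dv ** 2                # VL -> VH in one full swing
    e_two_step = 2 * (0.5 * C * (dv / 2) ** 2)  # VL -> VI (sharing) -> VH
    print(e_two_step / e_direct)                # 0.5: roughly half the loss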
In the third period TSm, one of the first switch elements SW1H and SW1L is turned on, and the other is turned off, for each of the drive electrodes Tx based on the inverted elements (−1, 1, −1, 1). All the second switch elements SW2 are turned off, and the left ends of the drive electrodes Tx are decoupled from one another. More specifically, as illustrated in FIG. 8, all the second switch elements SW2 are turned off at time t3, and the left ends of the drive electrodes Tx are decoupled from one another. After a predetermined period of time has elapsed, all the first switch elements SW1 are turned on at time t4, and the drive signals VTP (the first voltage signals VH and the second voltage signals VL) are supplied to the drive electrodes Tx. Subsequently, drive is repeatedly performed in the order of the third period TSm, the second period TSi, the first period TSp, the second period TSi, the third period TSm, and so on, as illustrated in FIG. 7. The electric potential of the drive signals VTP supplied to the drive electrodes Tx repeatedly changes in the order of the first voltage signal VH, the intermediate potential VI, the second voltage signal VL, the intermediate potential VI, the first voltage signal VH, the intermediate potential VI, and so on. While FIGS. 7 to 9 illustrate an example where the intermediate potential VI is formed by the operations of the second switch elements SW2 in code inversion drive, the detecting device 100 may perform the operations of the second period TSi when shifting from the drive corresponding to the elements in the second row of the square matrix H to the drive corresponding to the elements in the third row of the square matrix H. While FIGS. 7 to 9 illustrate a case where the number of drive electrodes Tx supplied with the first voltage signals VH is equal to the number of drive electrodes Tx supplied with the second voltage signals VL to simplify the explanation, the present embodiment is not limited thereto. The number of drive electrodes Tx supplied with the first voltage signals VH may be different from the number of drive electrodes Tx supplied with the second voltage signals VL. In other words, the value of the intermediate potential VI varies depending on the number of drive electrodes Tx supplied with the first voltage signals VH and the number of drive electrodes Tx supplied with the second voltage signals VL. While FIGS. 6 to 9 illustrate an example where the drive signals VTP (the first voltage signals VH or the second voltage signals VL) are supplied to the four drive electrodes Tx-1, Tx-2, Tx-3, and Tx-4 based on a predetermined code, the present embodiment is not limited thereto. The drive signals VTP may be supplied to all the drive electrodes Tx in the detection region FA. As described above, the detecting device 100 according to the present embodiment includes a plurality of drive electrodes Tx, a plurality of detection electrodes Rx, the drive signal supply circuit 20, and a plurality of second switch elements SW2 (switch elements). The drive electrodes Tx are arrayed in the first direction Dx. The detection electrodes Rx are arrayed in the second direction Dy intersecting the first direction Dx. The drive signal supply circuit 20 supplies the drive signals VTP to the drive electrodes Tx. The second switch elements SW2 switch between coupling and decoupling of the drive electrodes Tx. The drive electrodes Tx include at least the drive electrode Tx-1 (first drive electrode) and the drive electrode Tx-2 (second drive electrode) disposed side by side in the first direction Dx.
The drive signal supply circuit 20 supplies the first voltage signal VH to one of the drive electrode Tx-1 and the drive electrode Tx-2 and supplies the second voltage signal VL having an electric potential different from that of the first voltage signal VH to the other of the drive electrode Tx-1 and the drive electrode Tx-2. The second switch elements SW2 switch between coupling and decoupling of at least the drive electrode Tx-1 and the drive electrode Tx-2. In the detecting device 100 according to the present embodiment, the electric potential of the drive electrodes Tx shifts from the first voltage signal VH to the second voltage signal VL through the intermediate potential VI, or from the second voltage signal VL to the first voltage signal VH through the intermediate potential VI, by the operations of the second switch elements SW2 in the second period TSi. As a result, the drive signal supply circuit 20 can make the amplitude of the drive signals VTP supplied to the drive electrodes Tx (the potential difference between the intermediate potential VI and the first voltage signal VH or between the intermediate potential VI and the second voltage signal VL) smaller than the amplitude in a drive scheme that shifts the electric potential between the first voltage signal VH and the second voltage signal VL without passing through the intermediate potential VI. Consequently, the detecting device 100 can reduce power consumption.

Second Embodiment

FIG. 10 is a diagram for explaining the coupling configuration of a plurality of drive electrodes and a plurality of detection electrodes in the detecting device according to a second embodiment. In the following description, the same components as those described in the embodiment above are denoted by like reference numerals, and overlapping explanation thereof is omitted. As illustrated in FIG. 10, the detection electrode selection circuit 14 in a detecting device 100A according to the second embodiment includes third switch elements SW3 corresponding to the respective detection electrodes Rx-1, Rx-2, Rx-3, and Rx-4. The detection electrode selection circuit 14 switches the coupling state between the detection electrodes Rx and the detection circuit 40 by the operations of the third switch elements SW3. When the third switch elements SW3 are turned on, the detection electrodes Rx are coupled to the detection circuit 40 and output the detection signals Vdet to the detection circuit 40. When the third switch elements SW3 are turned off, the detection electrodes Rx are decoupled from the detection circuit 40. In this state, the detection electrodes Rx are not coupled to anything and are in a floating state. FIG. 11 is a timing waveform chart for explaining the method for driving the drive electrodes and the coupling configuration of the detection electrodes. FIG. 12 is a timing waveform chart for explaining an operation of switching the first switch elements, the second switch elements, and the third switch elements. As illustrated in FIG. 11, the operations of the first switch elements SW1 and the second switch elements SW2 are the same as those in the first embodiment (FIGS. 6 to 9) described above, and the waveforms of the drive signals VTP supplied to the drive electrodes Tx are the same as those in the first embodiment (FIGS. 6 to 9). The detection electrode selection circuit 14 turns on the third switch elements SW3 in the first period TSp and the third period TSm (refer to FIG. 10). As a result, the detection electrodes Rx are coupled to the detection circuit 40 and output the detection signals Vdet based on the predetermined code to the detection circuit 40.
The detection electrode selection circuit 14 turns off the third switch elements SW3 in the second period TSi. As a result, the detection electrodes Rx are decoupled from the detection circuit 40 and are in a floating state. More specifically, as illustrated in FIG. 12, all the first switch elements SW1 are turned off at time t1, and supplying the drive signals VTP (the first voltage signals VH and the second voltage signals VL) to the drive electrodes Tx is stopped. After a predetermined period of time has elapsed, the third switch elements SW3 are turned off at time t11 before time t2, and the detection electrodes Rx are decoupled from the detection circuit 40 (floating state). Subsequently, all the second switch elements SW2 are turned on at time t2, and the left ends of the drive electrodes Tx are coupled to one another. All the second switch elements SW2 are turned off at time t3, and the left ends of the drive electrodes Tx are decoupled from one another. After a predetermined period of time has elapsed, the third switch elements SW3 are turned on at time t31 before time t4, and the detection electrodes Rx are coupled to the detection circuit 40. Subsequently, all the first switch elements SW1 are turned on at time t4, and the drive signals VTP (the first voltage signals VH and the second voltage signals VL) are supplied to the drive electrodes Tx. As described above, the detection electrodes Rx are brought into a floating state in the second period TSi by the operations of the third switch elements SW3. More specifically, times t2 and t3, at which the second switch elements SW2 are switched between turned on and off, fall within the period of time in which the third switch elements SW3 are turned off; that is, they are positioned between time t11 and time t31, during which the detection electrodes Rx are in a floating state. This mechanism can prevent noise due to the on-off operations of the second switch elements SW2 from being superimposed on the detection signals Vdet output from the detection electrodes Rx. Consequently, the detecting device 100A can suppress reduction in detection accuracy. While the detection electrode selection circuit 14 turns on all the third switch elements SW3 in FIG. 10, the present embodiment is not limited thereto. The detection electrode selection circuit 14 may turn on part of the third switch elements SW3 and turn off the other part of the third switch elements SW3, thereby coupling the selected detection electrodes Rx to the detection circuit 40 in the first period TSp and the third period TSm. The configuration of the second embodiment can be combined with the configurations of embodiments and modifications described later.

Third Embodiment

FIG. 13 is a diagram for explaining the coupling configuration of a plurality of drive electrodes according to a third embodiment. As illustrated in FIG. 13, a detecting device 100B of the third embodiment includes a plurality of second switch elements SW2a and SW2b and a plurality of wires 23a and 23b. The second switch elements SW2a and SW2b and the wires 23a and 23b are coupled to the left ends and the right ends of the drive electrodes Tx. More specifically, the second switch elements SW2a and the wires 23a are coupled to the left ends of the respective drive electrodes Tx. The second switch elements SW2b and the wires 23b are coupled to the right ends of the respective drive electrodes Tx. First ends of the second switch elements SW2b are coupled to the right ends of the respective drive electrodes Tx, and second ends of the second switch elements SW2b are coupled to the common wire 23b.
The second switch elements SW2a coupled to the left ends of the respective drive electrodes Tx and the second switch elements SW2b coupled to the right ends of the respective drive electrodes Tx are synchronously switched between turned on and off. Two second switch elements SW2a and SW2b are coupled to one drive electrode Tx. This configuration can reduce the total resistance of the second switch elements SW2a and SW2b compared with the first and the second embodiments described above. If the period is switched from the first period TSp to the second period TSi, for example, when the drive electrodes Tx-1 and Tx-3 supplied with the first voltage signals VH and the drive electrodes Tx-2 and Tx-4 supplied with the second voltage signals VL are coupled, electric charges move at both ends of the drive electrodes Tx. Consequently, the present embodiment can shorten the time required for the transition from the first voltage signal VH to the intermediate potential VI or the transition from the second voltage signal VL to the intermediate potential VI.

First Modification of the Third Embodiment

FIG. 14 is a diagram for explaining the coupling configuration of a plurality of drive electrodes according to a first modification of the third embodiment. While the two second switch elements SW2a and SW2b in the third embodiment described above are coupled to both ends of each drive electrode Tx, the present embodiment is not limited thereto. In a detecting device 100C according to the first modification of the third embodiment, the second switch elements SW2b are each provided between two drive electrodes Tx disposed side by side, as illustrated in FIG. 14. The following describes the coupling configuration of the drive electrodes Tx-1 and Tx-2 out of the drive electrodes Tx, for example. The left ends of the drive electrodes Tx-1 and Tx-2 are coupled to the respective second switch elements SW2a and the wire 23a. At the right ends of the drive electrodes Tx-1 and Tx-2, a first end of the second switch element SW2b is coupled to the drive electrode Tx-1 through a contact portion CH, and a second end of the second switch element SW2b is coupled to the drive electrode Tx-2 through a contact portion CH. As described above, the second switch elements SW2a and SW2b of the detecting device 100C can be coupled at any desired positions. The detecting device 100C can suppress an increase in the number of wires in the frame region GA compared with the third embodiment described above, in which a plurality of second switch elements SW2a and SW2b are coupled to one drive electrode Tx. The detecting device 100C does not necessarily have the configuration where two second switch elements SW2a and SW2b are provided for one drive electrode Tx; three or more second switch elements may be provided.

Second Modification of the Third Embodiment

FIG. 15 is a diagram for explaining the coupling configuration of a plurality of drive electrodes according to a second modification of the third embodiment. In a detecting device 100D according to the second modification of the third embodiment, the second switch elements SW2a provided at the left ends of the drive electrodes Tx are each provided between two drive electrodes Tx disposed side by side, as illustrated in FIG. 15. More specifically, the following describes the coupling configuration of the drive electrodes Tx-1 and Tx-2.
At the left ends of the drive electrodes Tx-1 and Tx-2, a first end of the second switch element SW2a is coupled to the drive electrode Tx-1 through a contact portion CH, and a second end of the second switch element SW2a is coupled to the drive electrode Tx-2 through a contact portion CH. In other words, the drive electrode selection circuit 15 is not provided in the frame region GA at the left ends of the drive electrodes Tx. Consequently, the detecting device 100D can suppress an increase in the number of switch elements and wires in the frame region GA at the left ends of the drive electrodes Tx compared with the embodiments described above. The configuration illustrated in the second modification of the third embodiment is applied to a detecting device 100H (refer to FIG. 22) of a sixth embodiment, which will be described later.

Fourth Embodiment

FIG. 16 is a diagram for explaining the coupling configuration of a plurality of drive electrodes according to a fourth embodiment in the first period. FIG. 17 is a diagram for explaining the coupling configuration of the drive electrodes according to the fourth embodiment in the second period. In a detecting device 100E according to the fourth embodiment, the first switch elements SW1H and SW1L and the wiring 21 and 22 are provided to both the left ends and the right ends of the drive electrodes Tx, as illustrated in FIGS. 16 and 17. The first switch elements SW1H and SW1L at the left ends of the drive electrodes Tx and the first switch elements SW1H and SW1L at the right ends of the drive electrodes Tx are controlled so as to be synchronously switched between turned on and off. With this configuration, the drive signal supply circuit 20 (refer to FIG. 6) is coupled to the left ends and the right ends of the drive electrodes Tx through the first switch elements SW1H and SW1L and the wiring 21 and 22. Thus, the drive signal supply circuit 20 (refer to FIG. 6) supplies the drive signals VTP to the first ends (left ends) and the second ends (right ends) of the drive electrodes Tx in the extending direction. Consequently, the detecting device 100E can shorten the time required for transition between the first voltage signal VH, the intermediate potential VI, and the second voltage signal VL compared with the embodiments described above. Two second switch elements SW2a and SW2b are provided to one drive electrode Tx. The second switch elements SW2a and SW2b are each provided between two drive electrodes Tx disposed side by side in the first direction Dx, at the left end and the right end of the drive electrode Tx. The configuration is not limited thereto, and the second switch elements SW2a and SW2b may employ a configuration in which both ends of one corresponding drive electrode Tx are coupled to each other, like the third embodiment illustrated in FIG. 13. The detecting device 100E according to the fourth embodiment performs fingerprint detection by driving part of the drive electrodes Tx (the drive electrodes Tx-3 to Tx-6 in FIGS. 16 and 17) disposed in an active region AA out of the drive electrodes Tx (the drive electrodes Tx-1 to Tx-8 in FIGS. 16 and 17). Specifically, as illustrated in FIG. 16, the drive signal supply circuit 20 supplies the drive signals VTP having a phase corresponding to the square matrix H to the drive electrodes Tx-3, Tx-4, Tx-5, and Tx-6 in the active region AA in the first period TSp.
In the drive electrodes Tx-1, Tx-2, Tx-7, and Tx-8 in a region other than the active region AA, all the first switch elements SW1H and SW1L and the second switch elements SW2a and SW2b are turned off, thereby bringing the drive electrodes Tx-1, Tx-2, Tx-7, and Tx-8 into a floating state. The drive electrodes Tx in a region other than the active region AA are not necessarily in a floating state and may be coupled to a predetermined reference potential (e.g., the ground potential). Next, as illustrated in FIG. 17, all the first switch elements SW1H and SW1L are turned off in the second period TSi, and the drive electrodes Tx in the active region AA are decoupled from the drive signal supply circuit 20. In the second period TSi, the second switch elements SW2a and SW2b that couple the drive electrodes Tx in the active region AA are turned on, and the drive electrodes Tx-3, Tx-4, Tx-5, and Tx-6 in the active region AA are coupled through the second switch elements SW2a and SW2b. As a result, the drive electrodes Tx-3 and Tx-5 supplied with the first voltage signals VH and the drive electrodes Tx-4 and Tx-6 supplied with the second voltage signals VL are coupled in the active region AA. Consequently, the electric potential of the drive electrodes Tx in the active region AA is the intermediate potential VI in the second period TSi. Subsequently, the operations in the third period TSm, the second period TSi, the first period TSp, and so on are repeatedly performed on the drive electrodes Tx in the active region AA. The detecting device 100E of the fourth embodiment performs CDM drive on only the drive electrodes Tx in the active region AA in the detection region FA. Consequently, the detecting device 100E can reduce power consumption and shorten the time required to scan the drive electrodes Tx compared with a case where fingerprint detection is performed on the whole detection region FA. The active region AA may be a region specified in advance as a fingerprint detection region, that is, a fixed region. Alternatively, the active region AA may be a region specified based on the position of the finger Fg detected by driving all the drive electrodes Tx in the detection region FA in a time-division manner and performing touch detection (detection of the coordinates of the finger position in the detection region FA).

Third Modification of the Fourth Embodiment

FIG. 18 is a diagram for explaining the coupling configuration of the drive electrodes according to a third modification of the fourth embodiment. In a detecting device 100F according to the third modification of the fourth embodiment, the drive electrodes Tx are each divided into a plurality of parts by a slit SL, as illustrated in FIG. 18. The drive signal supply circuit 20 can independently supply the drive signals VTP to the drive electrodes Tx on the left side of the slits SL and the drive electrodes Tx on the right side of the slits SL. In other words, the first switch elements SW1H and SW1L and the second switch elements SW2a coupled to the left ends of the drive electrodes Tx, and the first switch elements SW1H and SW1L and the second switch elements SW2b coupled to the right ends of the drive electrodes Tx, are controlled so as to be independently switched between turned on and off. In the example illustrated in FIG. 18, fingerprint detection is performed by the aforementioned CDM drive on the drive electrodes Tx-3, Tx-4, Tx-5, and Tx-6 in an active region AA2 on the right side of the slits SL out of the drive electrodes Tx-3, Tx-4, Tx-5, and Tx-6.
Third Modification of the Fourth Embodiment

FIG. 18 is a diagram for explaining the coupling configuration of the drive electrodes according to a third modification of the fourth embodiment. In a detecting device 100F according to the third modification of the fourth embodiment, the drive electrodes Tx are each divided into a plurality of parts by a slit SL as illustrated in FIG. 18. The drive signal supply circuit 20 can independently supply the drive signals VTP to the drive electrodes Tx on the left side of the slits SL and the drive electrodes Tx on the right side of the slits SL. In other words, the first switch elements SW1H and SW1L and the second switch elements SW2a coupled to the left ends of the drive electrodes Tx, and the first switch elements SW1H and SW1L and the second switch elements SW2b coupled to the right ends of the drive electrodes Tx, are controlled so as to be independently switched on and off. In the example illustrated in FIG. 18, fingerprint detection is performed by the aforementioned CDM drive on the drive electrodes Tx-3, Tx-4, Tx-5, and Tx-6 in an active region AA2 on the right side of the slits SL.

In the drive electrodes Tx-3, Tx-4, Tx-5, and Tx-6 in an active region AA1 on the left side of the slits SL, all the first switch elements SW1H and SW1L and the second switch elements SW2a are turned off, thereby bringing the drive electrodes Tx-3, Tx-4, Tx-5, and Tx-6 into a floating state. In this case, the detection electrodes Rx-3 and Rx-4 overlapping the active region AA2 output the detection signals Vdet. The detection electrodes Rx-1 and Rx-2 overlapping the active region AA1 do not output the detection signals Vdet.

The detecting device 100F according to the third modification of the fourth embodiment can make the area of the active region AA2 on which CDM drive is performed smaller than that of the fourth embodiment described above. In the detecting device 100F, the area of the drive electrode Tx supplied with the drive signals VTP is smaller (the length in the second direction Dy is shorter) than that of the fourth embodiment. Consequently, the detecting device 100F can suppress an increase in the time required for transitions among the first voltage signal VH, the intermediate potential VI, and the second voltage signal VL.

Fifth Embodiment

FIG. 19 is a plan view illustrating an example of the configuration of the detecting device according to a fifth embodiment. As illustrated in FIG. 19, a detecting device 100G according to the fifth embodiment further includes a touch detection electrode TE in addition to the drive electrodes Tx and the detection electrodes Rx. The touch detection electrode TE is provided in a frame shape surrounding the drive electrodes Tx and the detection electrodes Rx. The drive electrodes Tx and the detection electrodes Rx are provided in a detection region FA1. The touch detection electrode TE is provided in a detection region FA2 outside the detection region FA1 (on the outer periphery of the substrate 101). The touch detection electrode TE can detect contact or proximity of the finger Fg with or to the detection regions FA1 and FA2 by, for example, self-capacitance touch detection. The touch detection electrode TE may be directly coupled to the detection IC (not illustrated), bypassing the drive electrode selection circuit 15 and the detection electrode selection circuit 14. Alternatively, the drive electrode selection circuit 15 and the detection electrode selection circuit 14 may also be used for touch detection performed by the touch detection electrode TE. The touch detection electrode TE does not necessarily have one continuous frame shape and may be divided into a plurality of parts disposed in the detection region FA2.

When a user brings the finger Fg close to the detection region FA1 to perform fingerprint detection, at least part of the finger Fg overlaps (abuts on) the detection region FA2; the detecting device 100G according to the present embodiment is sized accordingly with respect to the finger Fg of the user. With the touch detection electrode TE, the detecting device 100G substantially need not perform fingerprint detection drive except when the touch detection electrode TE detects a touch. Consequently, the configuration according to the present embodiment performs normal fingerprint detection drive in a touch detection period, while, in a period other than the touch detection period, the configuration does not perform fingerprint detection drive (an idling mode) or performs it intermittently in a cycle much longer than that of fingerprint detection drive in the touch detection period.
FIG. 20 is a diagram for explaining a method for driving the detecting device according to the fifth embodiment. As illustrated in FIG. 20, the detecting device 100G has a fingerprint detection mode (normal detection mode) and an idling mode. In the fingerprint detection mode, the detection controller 11 (refer to FIG. 4) performs the CDM drive described above on the drive electrodes Tx and the detection electrodes Rx to detect a fingerprint of the finger Fg. If no fingerprint is detected in a predetermined period of time (time t21), the detection controller 11 stops CDM drive and shifts to the idling mode.

In the idling mode, the detecting device 100G does not detect a fingerprint of the finger Fg. Specifically, in the idling mode, the detection controller 11 supplies drive signals VID to the touch detection electrode TE in a predetermined cycle to detect a touch, that is, contact or proximity of the finger Fg. The cycle of a touch detection period TSf for performing touch detection is set longer than the cycle of detection in the fingerprint detection mode. In the idling mode, the detection controller 11 also performs the CDM drive described above on the drive electrodes Tx and the detection electrodes Rx in a predetermined cycle. As a result, the detection controller 11 acquires base line signals for fingerprint detection in a state where the finger Fg is not present. The detection controller 11 compares the newly acquired base line signals with the previous base line signals. If these base line signals are different, the detection controller 11 updates the base line signals with the newly acquired ones. If contact or proximity of the finger Fg is detected in the touch detection period TSf (time t22), the detection controller 11 shifts from the idling mode to the fingerprint detection mode.

The present embodiment performs the same drive on the drive electrodes Tx and the detection electrodes Rx in the detection region FA1 in the fingerprint detection mode and the idling mode; that is, it repeatedly performs the operations in the first period TSp, the second period TSi, the third period TSm, the second period TSi, the first period TSp, . . . as described above. While FIG. 20 illustrates a case where the detecting device 100G alternately performs touch detection (touch detection period TSf) by the touch detection electrode TE and acquisition of the base line signals by the drive electrodes Tx and the detection electrodes Rx in the idling mode, the present embodiment is not limited thereto. In the period of the idling mode, for example, the detecting device 100G simply needs to acquire the base line signals by the drive electrodes Tx and the detection electrodes Rx at least once. After acquiring the base line signals, the detecting device 100G may repeatedly perform touch detection by the touch detection electrode TE in a plurality of predetermined cycles.
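As an illustration only, the two modes and the base line update can be modeled as a small state machine. The class and method names, the timeout policy, and the signal representation below are assumptions of this sketch, not the disclosed implementation; only the mode transitions at times t21 and t22 and the baseline-refresh behavior come from the text.

```python
# Toy state machine for the fifth embodiment's fingerprint/idling modes.
from dataclasses import dataclass, field

@dataclass
class DetectionController:
    mode: str = "fingerprint"
    baseline: list = field(default_factory=list)
    idle_timeout: int = 3       # CDM frames without a finger before idling
    _misses: int = 0

    def on_cdm_frame(self, signals, finger_present):
        if self.mode == "fingerprint":
            if finger_present:
                self._misses = 0             # keep detecting the fingerprint
            else:
                self._misses += 1
                if self._misses >= self.idle_timeout:
                    self.mode = "idling"     # time t21: stop CDM, start idling
        else:
            # Idling-mode CDM frame: refresh the base line if it drifted.
            if signals != self.baseline:
                self.baseline = list(signals)

    def on_touch_period_tsf(self, touch_detected):
        if self.mode == "idling" and touch_detected:
            self.mode = "fingerprint"        # time t22: resume CDM drive
            self._misses = 0

ctrl = DetectionController()
for _ in range(3):
    ctrl.on_cdm_frame(signals=[0.0], finger_present=False)
print(ctrl.mode)                             # "idling"
ctrl.on_cdm_frame(signals=[0.1], finger_present=False)  # baseline refresh
ctrl.on_touch_period_tsf(touch_detected=True)
print(ctrl.mode)                             # "fingerprint"
```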
Fourth Modification of the Fifth Embodiment

FIG. 21 is a diagram for explaining the method for driving the detecting device according to a fourth modification of the fifth embodiment. As illustrated in FIG. 21, the detecting device 100G according to the fourth modification of the fifth embodiment differs from the fifth embodiment described above in the operations in the fingerprint detection mode (normal detection mode). Specifically, the fingerprint detection mode has no second period TSi, and the first periods TSp and the third periods TSm are arranged alternately. In other words, in the fingerprint detection mode, the detecting device 100G does not perform the operation in the second period TSi and alternately supplies the first voltage signals VH and the second voltage signals VL to the drive electrodes Tx, not through the intermediate potential VI, based on the predetermined code (square matrix H). In the idling mode, the detecting device 100G repeatedly performs the operations in the first period TSp, the second period TSi, the third period TSm, the second period TSi, the first period TSp, . . . as described above. In other words, in the idling mode, the detecting device 100G alternately supplies the first voltage signals VH and the second voltage signals VL to the drive electrodes Tx through the intermediate potential VI based on the predetermined code (square matrix H).

The fourth modification can increase the speed of scanning the drive electrodes Tx in the fingerprint detection mode and shorten the time required for fingerprint detection. By contrast, the fourth modification performs the power-reducing drive described above in the idling mode, which has less restriction on the scanning speed. As described above, the detecting device 100G can switch the system of CDM drive on the drive electrodes Tx depending on the required characteristics (increase in scanning speed or reduction in power consumption). While FIG. 21 illustrates an example that switches the system of CDM drive on the drive electrodes Tx between the fingerprint detection mode and the idling mode, the present modification is not limited thereto. The fourth modification, for example, may switch the system of CDM drive between fingerprint detection on the whole detection region FA and fingerprint detection on the partial active region AA. Alternatively, the fourth modification may switch the system of CDM drive based on detection conditions, such as the resolution of detection.
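A minimal sketch of this mode-dependent sequencing, assuming illustrative mode names; only the presence or absence of the second period TSi per mode is taken from the text.

```python
# Fingerprint mode omits TSi for scan speed; idling mode keeps TSi so each
# VH <-> VL transition passes through the intermediate potential VI,
# reducing the charge moved per transition and hence power.
def period_sequence(mode):
    if mode == "fingerprint":
        return ["TSp", "TSm"]                  # fast: VH <-> VL directly
    if mode == "idling":
        return ["TSp", "TSi", "TSm", "TSi"]    # low power: via VI each time
    raise ValueError(f"unknown mode: {mode}")

print(period_sequence("fingerprint") * 2)  # TSp, TSm, TSp, TSm, ...
print(period_sequence("idling"))
```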
Sixth Embodiment

FIG. 22 is a plan view of an example of the configuration of the detecting device according to a sixth embodiment. A detecting device 100H of the sixth embodiment is different from the first to the fifth embodiments described above in the arrangement relation of the drive electrodes Tx and the detection electrodes Rx with the peripheral circuits and the wiring substrate 76. Specifically, as illustrated in FIG. 22, the drive electrode selection circuit 15 and the drive electrodes Tx are disposed side by side in the second direction Dy in which the drive electrodes Tx extend. The wiring substrate 76 is coupled to the side of the frame region GA provided with the drive electrode selection circuit 15. A plurality of detection electrode selection circuits 14 are disposed in a manner sandwiching the drive electrodes Tx in the first direction Dx. The drive electrode selection circuit 15 includes a shift register circuit 151 and a buffer circuit 152. The shift register circuit 151 selects a plurality of drive electrodes Tx based on a predetermined code. The buffer circuit 152 amplifies the drive signals VTP and supplies them to the selected drive electrodes Tx. A plurality of power supply lines PL supply electric power to the buffer circuit 152 from the outside. The power supply lines PL, for example, supply electric power to both ends and the center part of the buffer circuit 152 in the first direction Dx. The sixth embodiment has little variation in the distance between the drive electrodes Tx and the drive electrode selection circuit 15. This configuration reduces the difference in resistance between the wires (not illustrated) that couple the drive electrodes Tx and the drive electrode selection circuit 15 and suppresses variation in the voltage of the drive signals VTP.

Seventh Embodiment

FIG. 23 is a sectional view of a schematic sectional configuration of the detecting device according to a seventh embodiment. As illustrated in FIG. 23, a detecting device 100I according to the seventh embodiment does not include the display panel 30 (refer to FIG. 2) and is provided as the detecting device 100I alone. The substrate 101, the drive electrodes Tx, the detection electrodes Rx, and other components of the detecting device 100I may be made of non-translucent material. The drive electrodes Tx and the detection electrodes Rx, for example, may be made of metal material. This configuration can increase the flexibility in arrangement of the switch elements, such as the second switch elements SW2. The detecting device 100I does not necessarily include the cover member 80. In this case, the detecting device 100I may have a configuration in which a protective film (insulating film) that covers the drive electrodes Tx and the detection electrodes Rx is provided instead of the cover member 80.

While exemplary embodiments according to the present disclosure have been described, the embodiments are not intended to limit the disclosure. The contents disclosed in the embodiments are given by way of example only, and various changes may be made without departing from the spirit of the present disclosure. Appropriate modifications made without departing from the spirit of the present disclosure naturally fall within the technical scope of the present disclosure.
11861121

DETAILED DESCRIPTION

FIG. 1 is a block diagram illustrating an electronic device 101 in a network environment 100 according to certain embodiments. Referring to FIG. 1, the electronic device 101 in the network environment 100 may be in communication with an electronic device 102 via a first network 198 (e.g., a short-range wireless communication network), or an electronic device 104 or a server 108 via a second network 199 (e.g., a long-range wireless communication network). The electronic device 101 may communicate with the electronic device 104 via the server 108. According to an embodiment, the electronic device 101 may include a processor 120, memory 130, an input device 150, a sound output device 155, a display device 160, an audio module 170, a sensor module 176, an interface 177, a haptic module 179, a camera module 180, a power management module 188, a battery 189, a communication module 190, a subscriber identification module (SIM) 196, or an antenna module 197. In some embodiments, at least one (e.g., the display device 160 or the camera module 180) of the components may be omitted from the electronic device 101, or one or more other components may be provided in the electronic device 101. In some embodiments, some of the components may be implemented as single integrated circuitry. For example, the sensor module 176 (e.g., a fingerprint sensor, an iris sensor, or an illuminance sensor) may be implemented as embedded in the display device 160 (e.g., a display).

The processor 120 may execute, for example, software (e.g., a program 140) to control at least one other component (e.g., a hardware or software component) of the electronic device 101 coupled with the processor 120, and may perform various data processing or computation. According to one embodiment, as at least part of the data processing or computation, the processor 120 may load a command or data received from another component (e.g., the sensor module 176 or the communication module 190) in volatile memory 132, process the command or the data stored in the volatile memory 132, and store resulting data in non-volatile memory 134. The processor 120 may include a main processor 121 (e.g., a central processing unit (CPU) or an application processor (AP)), and an auxiliary processor 123 (e.g., a graphics processing unit (GPU), an image signal processor (ISP), a sensor hub processor, or a communication processor (CP)) that is operable independently from, or in conjunction with, the main processor 121. In operation, the auxiliary processor 123 may be adapted to consume less power than the main processor 121, or to be specific to a particular function, according to another embodiment. Further, the auxiliary processor 123 may be implemented as separate from, or as part of, the main processor 121. The auxiliary processor 123 may control at least some of the functions or states related to at least one component (e.g., the display device 160, the sensor module 176, or the communication module 190) of the electronic device 101 while the main processor 121 is in an inactive (e.g., sleep) state, or together with the main processor 121 while the main processor 121 is in an active state (e.g., executing an application). Alternatively, the auxiliary processor 123 (e.g., an image signal processor or a communication processor) may be implemented as part of another component (e.g., the camera module 180 or the communication module 190) functionally related to the auxiliary processor 123.
The memory 130 may store various data used by at least one component (e.g., the processor 120 or the sensor module 176) of the electronic device 101. The various data may include, for example, software (e.g., the program 140) and input data or output data for a command related thereto. The memory 130 may include the volatile memory 132 or the non-volatile memory 134. The program 140 may be stored in the memory 130 as software, and may include, for example, an operating system (OS) 142, middleware 144, or an application 146.

The input device 150 may receive a command or data to be used by another component (e.g., the processor 120) of the electronic device 101, from the outside (e.g., a user) of the electronic device 101. The input device 150 may include, for example, a microphone, a mouse, a keyboard, or a digital pen (e.g., a stylus pen).

The sound output device 155 may output sound signals to the outside of the electronic device 101. The sound output device 155 may include, for example, a speaker or a receiver. The speaker may be used for general purposes, such as playing multimedia or playing a recording, and the receiver may be used for processing incoming calls. In an alternate embodiment, the receiver may be implemented as separate from, or as part of, the speaker.

The display device 160 may display information to the outside (e.g., a user) of the electronic device 101. The display device 160 may include, for example, a display, a hologram device, or a projector and control circuitry to control a corresponding one of the display, hologram device, and projector. According to an embodiment, the display device 160 may include touch circuitry adapted to detect a touch, or sensor circuitry (e.g., a pressure sensor) adapted to measure the intensity of force incurred by the touch.

The audio module 170 may convert a sound into an electrical signal and vice versa. In operation, the audio module 170 may obtain the sound via the input device 150, or output the sound via the sound output device 155 or a headphone of an external electronic device (e.g., an electronic device 102) directly (e.g., wiredly) or wirelessly coupled with the electronic device 101.

The sensor module 176 may detect an operational state (e.g., power or temperature) of the electronic device 101 or an environmental state (e.g., a state of a user) external to the electronic device 101, and then generate an electrical signal or data value corresponding to the detected state. According to an embodiment, the sensor module 176 may include, for example, a gesture sensor, a gyro sensor, an atmospheric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor.

The interface 177 may support one or more specified protocols to be used for the electronic device 101 to be coupled with the external electronic device (e.g., the electronic device 102) directly (e.g., wiredly) or wirelessly. According to an embodiment, the interface 177 may include, for example, a high definition multimedia interface (HDMI), a universal serial bus (USB) interface, a secure digital (SD) card interface, or an audio interface.

A connecting terminal 178 may include a connector via which the electronic device 101 may be physically connected with the external electronic device (e.g., the electronic device 102).
According to an embodiment, the connecting terminal 178 may include, for example, an HDMI connector, a USB connector, an SD card connector, or an audio connector (e.g., a headphone connector).

The haptic module 179 may convert an electrical signal into a mechanical stimulus (e.g., a vibration or a movement) or an electrical stimulus which may be recognized by a user via his or her tactile sensation or kinesthetic sensation. According to an embodiment, the haptic module 179 may include, for example, a motor, a piezoelectric element, or an electric stimulator.

The camera module 180 may capture a still image or moving images. The camera module 180 may include one or more lenses, image sensors, image signal processors, or flashes.

The power management module 188 may manage power supplied to the electronic device 101. The power management module 188 may be implemented as at least part of, for example, a power management integrated circuit (PMIC).

The battery 189 may supply power to at least one component of the electronic device 101. The battery 189 may include, for example, a primary cell which is not rechargeable, a secondary cell which is rechargeable, or a fuel cell.

The communication module 190 may support establishing a direct (e.g., wired) communication channel or a wireless communication channel between the electronic device 101 and the external electronic device (e.g., the electronic device 102, the electronic device 104, or the server 108) and performing communication via the established communication channel. The communication module 190 may include one or more communication processors that are operable independently from the processor 120 (e.g., the application processor (AP)) and support a direct (e.g., wired) communication or a wireless communication. According to an embodiment, the communication module 190 may include a wireless communication module 192 (e.g., a cellular communication module, a short-range wireless communication module, or a global navigation satellite system (GNSS) communication module) or a wired communication module 194 (e.g., a local area network (LAN) communication module or a power line communication (PLC) module). Here, one of these communication modules may communicate with the external electronic device via the first network 198 (e.g., a short-range communication network, such as Bluetooth™, wireless-fidelity (Wi-Fi) direct, or infrared data association (IrDA)) or the second network 199 (e.g., a long-range communication network, such as a cellular network, the Internet, or a computer network (e.g., a LAN or wide area network (WAN))). These various types of communication modules may be implemented as a single component (e.g., a single chip), or may be implemented as multiple components (e.g., multiple chips) separate from each other. The wireless communication module 192 may identify and authenticate the electronic device 101 in a communication network, such as the first network 198 or the second network 199, using subscriber information (e.g., an international mobile subscriber identity (IMSI)) stored in the subscriber identification module 196.

The antenna module 197 may transmit or receive a signal or power to or from the outside (e.g., the external electronic device) of the electronic device 101. The antenna module 197 may include an antenna including a radiating element composed of a conductive material or a conductive pattern formed in or on a substrate (e.g., a PCB). According to an embodiment, the antenna module 197 may include a plurality of antennas.
In such a case, at least one antenna appropriate for a communication scheme used in the communication network, such as the first network 198 or the second network 199, may be selected, for example, by the communication module 190 (e.g., the wireless communication module 192) from the plurality of antennas. The signal or the power may then be transmitted or received between the communication module 190 and the external electronic device via the selected at least one antenna. According to an embodiment, another component (e.g., a radio frequency integrated circuit (RFIC)) other than the radiating element may be additionally formed as part of the antenna module 197.

At least some of the above-described components may be coupled mutually and communicate signals (e.g., commands or data) therebetween via an inter-peripheral communication scheme (e.g., a bus, general purpose input and output (GPIO), serial peripheral interface (SPI), or mobile industry processor interface (MIPI)).

According to an embodiment, commands or data may be transmitted or received between the electronic device 101 and the external electronic device 104 via the server 108 coupled with the second network 199. Each of the electronic devices 102 and 104 may be a device of a same type as, or a different type from, the electronic device 101. According to an embodiment, all or some of the operations to be executed at the electronic device 101 may be executed at one or more of the external electronic devices 102, 104, or 108. For example, if the electronic device 101 should perform a function or a service automatically, or in response to a request from a user or another device, the electronic device 101, instead of, or in addition to, executing the function or the service, may request the one or more external electronic devices to perform at least part of the function or the service. The one or more external electronic devices receiving the request may perform the at least part of the function or the service requested, or an additional function or an additional service related to the request, and transfer an outcome of the performing to the electronic device 101. The electronic device 101 may provide the outcome, with or without further processing of the outcome, as at least part of a reply to the request. To that end, cloud computing, distributed computing, or client-server computing technology may be used, for example.
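The request/outcome offloading pattern described above can be sketched generically as follows; the endpoint URL, payload shape, and helper name are hypothetical assumptions of this sketch and are not drawn from the disclosure.

```python
# Generic sketch: run a function locally, or ask an external electronic
# device (or server) to perform it and post-process the returned outcome.
import json
from typing import Optional
from urllib import request

def execute_function(payload: dict, offload_url: Optional[str] = None) -> dict:
    if offload_url is None:
        # Execute the function locally on the device itself.
        return {"result": payload["x"] * 2, "where": "local"}
    # Request that the external device perform at least part of the function.
    req = request.Request(
        offload_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        outcome = json.load(resp)
    outcome["where"] = "remote"  # optional further processing of the outcome
    return outcome

print(execute_function({"x": 21}))  # {'result': 42, 'where': 'local'}
```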
The electronic device according to certain embodiments may be one of various types of electronic devices. The electronic devices may include, for example, a portable communication device (e.g., a smartphone), a computer device, a portable multimedia device, a portable medical device, a camera, a wearable device, or a home appliance. According to an embodiment of the disclosure, the electronic devices are not limited to those described above.

It should be appreciated that certain embodiments of the disclosure and the terms used therein are not intended to limit the technological features set forth herein to particular embodiments and include various changes, equivalents, or replacements for a corresponding embodiment. With regard to the description of the drawings, similar reference numerals may be used to refer to similar or related elements. It is to be understood that a singular form of a noun corresponding to an item may include one or more of the things, unless the relevant context clearly indicates otherwise. As used herein, each of such phrases as “A or B,” “at least one of A and B,” “at least one of A or B,” “A, B, or C,” “at least one of A, B, and C,” and “at least one of A, B, or C,” may include any one of, or all possible combinations of, the items enumerated together in a corresponding one of the phrases. As used herein, such terms as “1st” and “2nd,” or “first” and “second,” may be used to simply distinguish a corresponding component from another, and do not limit the components in other aspects (e.g., importance or order). It is to be understood that if an element (e.g., a first element) is referred to, with or without the term “operatively” or “communicatively,” as “coupled with,” “coupled to,” “connected with,” or “connected to” another element (e.g., a second element), it means that the element may be coupled with the other element directly (e.g., wiredly), wirelessly, or via a third element.

As used herein, the term “module” may include a unit implemented in hardware, software, or firmware, and may interchangeably be used with other terms, for example, “logic,” “logic block,” “part,” or “circuitry.” A module may be a single integral component, or a minimum unit or part thereof, adapted to perform one or more functions. For example, according to an embodiment, the module may be implemented in a form of an application-specific integrated circuit (ASIC).

Certain embodiments as set forth herein may be implemented as software (e.g., the program 140) including one or more instructions that are stored in a storage medium (e.g., internal memory 136 or external memory 138) that is readable by a machine (e.g., the electronic device 101). For example, a processor (e.g., the processor 120) of the machine (e.g., the electronic device 101) may invoke at least one of the one or more instructions stored in the storage medium, and execute it, with or without using one or more other components under the control of the processor. This allows the machine to be operated to perform at least one function according to the at least one instruction invoked. The one or more instructions may include code generated by a compiler or code executable by an interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. Here, the term “non-transitory” simply means that the storage medium is a tangible device and does not include a signal (e.g., an electromagnetic wave), but this term does not differentiate between where data is semi-permanently stored in the storage medium and where the data is temporarily stored in the storage medium.

According to an embodiment, a method according to certain embodiments of the disclosure may be included and provided in a computer program product. The computer program product may be traded as a product between a seller and a buyer. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., compact disc read only memory (CD-ROM)), or be distributed (e.g., downloaded or uploaded) online via an application store (e.g., PlayStore™), or between two user devices (e.g., smart phones) directly. If distributed online, at least part of the computer program product may be temporarily generated or at least temporarily stored in the machine-readable storage medium, such as memory of the manufacturer's server, a server of the application store, or a relay server.
According to certain embodiments, each component (e.g., a module or a program) of the above-described components may include a single entity or multiple entities. According to certain embodiments, one or more of the above-described components may be omitted, or one or more other components may be added. Alternatively or additionally, a plurality of components (e.g., modules or programs) may be integrated into a single component. In such a case, according to certain embodiments, the integrated component may still perform one or more functions of each of the plurality of components in the same or similar manner as they are performed by a corresponding one of the plurality of components before the integration. According to certain embodiments, operations performed by the module, the program, or another component may be carried out sequentially, in parallel, repeatedly, or heuristically, or one or more of the operations may be executed in a different order or omitted, or one or more other operations may be added.

FIG. 2A is a perspective view illustrating an electronic device 200 according to certain embodiments. FIG. 2B is a perspective view illustrating the electronic device of FIG. 2A viewed from the rear side.

Referring to FIG. 2A and FIG. 2B, the electronic device 200 according to certain embodiments may include a housing 210 including a first surface (or a front surface) 210A, a second surface (or a rear surface) 210B, and a side surface (or a side wall) 210C surrounding a space between the first surface 210A and the second surface 210B. In another embodiment (not illustrated), the term “housing” may refer to a structure forming a part of the first surface 210A, the second surface 210B, and the side surface 210C of FIG. 2A. According to an embodiment, at least a part of the first surface 210A may be provided by a substantially transparent front plate 202 (e.g., a glass plate or a polymer plate including various coating layers). According to an embodiment, the front plate 202 may include a curved portion extending seamlessly while being curved from the first surface 210A toward a rear plate 211 in at least one side edge portion.

According to certain embodiments, the second surface 210B may be provided by a substantially opaque rear plate 211. The rear plate 211 may be made of, for example, coated or colored glass, ceramic, a polymer, or metal (e.g., aluminum, stainless steel (STS), or magnesium), or a combination of two or more of these materials. According to an embodiment, the rear plate 211 may include a curved portion extending seamlessly while being curved from the second surface 210B toward the front plate 202 in at least one side edge portion. According to certain embodiments, the side surface 210C may be provided by a side bezel structure (or “side member or side wall”) 218 coupled to the front plate 202 and the rear plate 211 and including metal and/or a polymer. In some embodiments, the rear plate 211 and the side bezel structure 218 may be configured in an integral structure and may include the same material (e.g., a metal material such as aluminum).

According to an embodiment, the electronic device 200 may include at least one of a display 201, an audio module 203, a sensor module (not illustrated), a camera module 205, a key input device 217, and a connector hole 208. In some embodiments, in the electronic device 200, at least one of the components (e.g., the key input devices 217) may be omitted, or other components may be additionally included. For example, the electronic device 200 may further include a sensor module (not illustrated).
For example, in an area provided by the front plate 202, a sensor such as a proximity sensor or an illuminance sensor may be integrated with the display 201 or may be disposed at a position adjacent to the display 201. In some embodiments, the electronic device 200 may further include a light-emitting element, and the light-emitting element may be disposed at a position adjacent to the display 201 in the area provided by the front plate 202. For example, the light-emitting element may provide the status information of the electronic device 200 in an optical form. In another example, the light-emitting element may provide a light source that is interlocked with the operation of the camera module 205. The light-emitting element may include, for example, an LED, an IR LED, and/or a xenon lamp.

For example, the display 201 may be exposed through a substantial portion of the front plate 202. In some embodiments, the edge of the display 201 may be configured to have substantially the same shape as the shape of the outline (e.g., a curved surface) of the front plate 202 adjacent thereto. In another embodiment (not illustrated), the distance between the outline of the display 201 and the outline of the front plate 202 may be substantially constant in order to enlarge the exposed area of the display 201. In another embodiment (not illustrated), the electronic device may have a recess or an opening provided in a portion of the screen display area of the display 201 and include another electronic component, such as the camera module 205, a proximity sensor (not illustrated), or an illuminance sensor (not illustrated), aligned with the recess or the opening. In another embodiment (not illustrated), the rear surface of the screen display area of the display 201 may include at least one of the camera modules 212 and 213, a fingerprint sensor 216, and a flash 206. In another embodiment (not illustrated), the display 201 may be coupled to or disposed adjacent to a touch-sensitive circuit, a pressure sensor capable of measuring a touch intensity (pressure), and/or a digitizer configured to detect a magnetic-field-type stylus pen.

The audio module 203 may include a microphone hole and a speaker hole. A microphone may be disposed in the microphone hole to acquire external sound, and in some embodiments, multiple microphones may be disposed therein to detect the direction of sound. In some embodiments, the speaker hole and the microphone hole may be implemented as a single hole 203, or a speaker (e.g., a piezo speaker) may be included without a speaker hole. The speaker hole may include an external speaker hole and a call receiver hole.

The electronic device 200 may generate an electrical signal or a data value corresponding to an internal operating state or an external environmental state by including a sensor module (not illustrated). The sensor module may further include, for example, a proximity sensor disposed on the first surface 210A of the housing 210, a fingerprint sensor integrated with or disposed adjacent to the display 201, and/or a biometric sensor (e.g., an HRM sensor) disposed on the second surface 210B of the housing 210. According to an embodiment, the electronic device 200 may further include at least one of sensor modules (not illustrated in the drawings), such as a gesture sensor, a gyro sensor, an atmospheric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a color sensor, an infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor.
The camera modules 205, 212, 213, and 206 may include a first camera device 205 disposed on the first surface 210A of the electronic device 200, second camera devices 212 and 213 disposed on the second surface 210B thereof, and/or a flash 206. Each of the camera modules 205, 212, and 213 may include one or more lenses, an image sensor, and/or an image signal processor. The flash 206 may include, for example, a light-emitting diode or a xenon lamp. In some embodiments, two or more lenses (e.g., an infrared camera lens, a wide-angle lens, and a telephoto lens) and image sensors may be disposed on one face of the electronic device 200.

The key input devices 217 may be disposed on the side surface 210C of the housing 210. In another embodiment, the electronic device 200 may not include some or all of the above-mentioned key input devices 217, and a key input device 217 which is not included in the above-mentioned key input devices may be implemented in another form, such as a soft key, on the display 201. In some embodiments, the key input devices may include at least a part of a fingerprint sensor 216 disposed on the second surface 210B of the housing 210.

The connector hole 208 may accommodate a connector configured to transmit and receive power and/or data to and from an external electronic device, and/or a connector configured to transmit and receive an audio signal to and from an external electronic device. For example, the connector hole 208 may include a USB connector or an earphone jack.

FIG. 3 is a front view illustrating a front surface of an electronic device 300 according to certain embodiments. Referring to FIG. 3, a display 301 (e.g., the display 201 in FIG. 2A) may be disposed on a front surface 300A of the electronic device 300 (e.g., the electronic device 200 in FIG. 2A). As shown, the front surface 300A of the electronic device 300 may be divided into a view area VA corresponding to the display 301 and a non-view area N-VA surrounding an outer circumferential surface of the view area VA. According to an embodiment, the view area VA may be divided into an active area AA and an edge area provided along an outer circumferential surface AA-L of the active area AA. The active area AA may refer to an area in which the display 301 is activated and data is displayed for viewing. The edge area may refer to an area (or a “non-active area”) which is positioned between the active area AA and the non-view area N-VA and in which data is not displayed; the edge area may be viewed as a dark (black) area when viewed from the outside of the electronic device 300. According to an embodiment (not illustrated), a touch electrode pattern is disposed in an area corresponding to a partial area of the edge area and the active area AA, and multiple pads may be disposed in an area corresponding to a partial area of the edge area; the detailed description therefor will be made later.

FIG. 4 is a cross-sectional view illustrating an outer edge area of an electronic device 400 according to an embodiment. Referring to FIG. 4, the electronic device 400 may include a window 410, a substrate 420, a touch sensing unit 430, 440, and 445, a polarizing layer 450, and/or a display panel 460. According to an embodiment, the window 410 may be disposed on the outermost side (or the uppermost end), the polarizing layer 450 and the substrate 420 on which the touch sensing unit 430, 440, and 445 is disposed may be respectively arranged under the window 410, and the display panel 460 may be disposed under the substrate 420.
According to an embodiment, the window 410 may be made of a transparent material and may include an opaque printed layer 411 disposed along an outer edge of a rear surface (or a lower end). The window 410 may be divided into a view area VA (e.g., the view area VA in FIG. 3) and a non-view area N-VA (e.g., the non-view area N-VA in FIG. 3) by the printed layer 411. For example, an inner area of the window 410, in which the printed layer 411 is not disposed on the rear surface of the window, may be divided into the view area VA, and a peripheral area of the window 410, in which the printed layer 411 is disposed on the rear surface of the window, may be divided into the non-view area N-VA. According to an embodiment, the view area VA may be configured in a rectangular shape, and the non-view area N-VA may be configured in a band shape surrounding an outer edge of the view area VA.

According to an embodiment, a partial area of the display panel 460 may be viewed from the outside of the electronic device 400 through the window 410, and data or an image may be displayed in the area viewed from the outside of the electronic device 400. For example, the display panel 460 may include a thin film transistor liquid crystal display (TFT-LCD), but is not limited thereto. According to an embodiment, the display panel 460 may include an organic light emitting diode (OLED) or an active organic light-emitting diode (e.g., an active matrix organic light emitting diode (AMOLED)). According to an embodiment, an area of the display panel 460 corresponding to the view area VA may be viewed from the outside of the electronic device 400 through the window 410, but an area of the display panel 460 corresponding to the non-view area N-VA is not visible from the outside of the electronic device 400 because it is covered by the printed layer 411.

The area of the display panel 460 corresponding to the view area VA may be divided into an active area AA and an edge area (or a “non-active area”) according to whether a pixel is present. For example, an area in which a pixel is disposed, among the area of the display panel 460 corresponding to the view area VA, may be divided into the active area AA, and the active area AA may display data according to the operation of the display panel 460. In another embodiment, an area in which a pixel is not disposed may be divided into the edge area, and the edge area may be viewed as a dark area when viewed from the outside of the electronic device 400 regardless of the operation of the display panel 460. According to an embodiment, the active area AA may be configured in a rectangular shape, and the edge area may be configured in a band shape disposed along the outer circumferential surface of the active area AA.

According to an embodiment, the substrate 420 may be positioned on the display panel 460, and the substrate 420 and the display panel 460 may be integrally attached to each other by a transparent adhesive 401 (e.g., an optically clear adhesive (OCA)). The substrate 420 may include a material having rigidity (e.g., glass and plastic), a material not having elasticity, a material having elasticity so as to be curved, bent, or folded, or a film made of a flexible material. For example, the substrate 420 may include at least one of polycarbonate (PC), polyethylene terephthalate (PET), cyclo olefin polymer (COP), cyclo olefin copolymer (COC), polyimide (PI), a polymer compound, or olefin, but is not limited thereto.
According to an embodiment (not illustrated), a color filter may be provided in an area of the substrate 420 corresponding to the display panel 460. According to an embodiment, the polarizing layer 450 may be positioned between a rear surface (or lower end) of the window 410 and an upper end of the substrate 420 so as to obtain linearly polarized light from light output from the display panel 460. The polarizing layer 450 may be integrally attached to the window 410 and/or the substrate 420 by the transparent adhesive (OCA) 401. According to an embodiment (not illustrated), the polarizing layer 450 may be coated with a coating film for reducing the reflectance or suppressing light scattering or surface reflection. For example, the coating film may include at least one of anti-reflective (AR), low-reflective (LR), anti-glare (AG), or hard coat (HC) coatings, but is not limited thereto.

According to an embodiment, a touch sensing unit configured to sense a touch by a pen or a hand of a user may be provided on the substrate 420, wherein the touch sensing unit may include a touch electrode pattern 430, a trace wire 440, and a pad 445. According to an embodiment, the touch electrode pattern 430 may be provided in an area 421 and 422 on the substrate 420 corresponding to the view area VA of the window 410, and the trace wire 440 may be provided in an area 423 of the substrate 420 corresponding to the non-view area N-VA of the window 410. The plurality of pads 445 may be disposed in the area 422 of the substrate 420 corresponding to the edge area of the view area VA. The touch electrode pattern 430 provided in the area 421 and 422 on the substrate 420 corresponding to the view area VA may be electrically connected to the trace wire 440 provided in the area 423 on the substrate 420 corresponding to the non-view area N-VA via the pad 445. In the drawings, it is illustrated that the touch sensing unit is provided on the substrate 420, but the touch sensing unit may be provided at a side surface or a lower end of the substrate 420 in some embodiments. The detailed description of each element of the touch sensing unit will be made later.

FIG. 5 is a cross-sectional view illustrating an outer edge area of an electronic device 500 according to another embodiment. Referring to FIG. 5, the electronic device 500 may include a window 510, a substrate 520, a touch sensing unit 530, 540, and 545, a polarizing layer 550, and/or a display panel 560. At least one of the components of the electronic device 500 may be identical or similar to at least one of the components of the electronic device (e.g., the electronic device 400 in FIG. 4) in FIG. 4, and a redundant description will be omitted below for simplicity. Unlike the electronic device (e.g., the electronic device 400 in FIG. 4) in FIG. 4, in which the substrate (e.g., the substrate 420 in FIG. 4), the polarizing layer (e.g., the polarizing layer 450 in FIG. 4), and the window (e.g., the window 410 in FIG. 4) are stacked in order on the display panel (e.g., the display panel 460 in FIG. 4), in the electronic device 500 according to another embodiment the polarizing layer 550, the substrate 520, and the window 510 may be stacked in order on the display panel 560. According to an embodiment, the substrate 520 may be positioned between the window 510 and an upper end of the polarizing layer 550, the window 510 and the substrate 520 may be integrally attached to each other by a transparent adhesive (OCA) 501, and the substrate 520 and the polarizing layer 550 may be integrally attached to each other by the transparent adhesive (OCA) 501.
The touch sensing unit configured to sense a touch of a user may be provided on the substrate 520, and the touch sensing unit may include a touch electrode pattern 530, a trace wire 540, and a pad 545. According to an embodiment, the touch electrode pattern 530 may be provided in an area 521 on the substrate 520 corresponding to the view area VA of the window 510, and the trace wire 540 may be provided in an area 523 of the substrate 520 corresponding to the non-view area N-VA of the window 510. A plurality of pads 545 may be arranged in an area 522 of the substrate 520 corresponding to the edge area of the view area VA. The touch electrode pattern 530 provided in the area 521 on the substrate 520 corresponding to the view area VA may be electrically connected to the trace wire 540 provided in the area 523 on the substrate 520 corresponding to the non-view area N-VA via the pad 545. In FIG. 5, it is illustrated that the touch sensing unit is provided on the substrate 520, but it should be obvious to those skilled in the art that the touch sensing unit can be provided at a side surface or a lower end of the substrate 520 in some embodiments. According to an embodiment, the display panel 560 may be positioned under the polarizing layer 550, and the polarizing layer 550 may be integrally attached to the display panel 560 by a transparent adhesive 501.

FIG. 6 is a cross-sectional view illustrating a structure of a touch electrode pattern 630 disposed on a substrate 620 of an electronic device according to certain embodiments. As shown, the touch electrode pattern 630 may be provided at the substrate 620 of the electronic device (e.g., the electronic device 400 in FIG. 4 or the electronic device 500 in FIG. 5). According to an embodiment, the touch electrode pattern 630 may have a low resistance value and a high conductivity, and a metal material having a flexible property may be deposited or printed on the substrate 620. For example, the touch electrode pattern 630 may be made of gold, silver, aluminum, copper, neodymium, molybdenum, nickel, or an alloy thereof, but is not limited thereto. The touch electrode pattern 630 may include a driving electrode pattern (Tx) 631 and a sensing electrode pattern (Rx) 632, and the driving electrode pattern 631 and the sensing electrode pattern 632 may include a metal mesh pattern having a lattice structure. According to an embodiment, the driving electrode pattern 631 and the sensing electrode pattern 632 are provided on different layers so that, when a material (e.g., a hand of a user or a pen) having a capacitance comes into contact therewith, the contact position may be determined.

According to an embodiment, the electronic device may further include an insulating layer 640, and the insulating layer 640 is configured to surround the driving electrode pattern 631 and the sensing electrode pattern 632 so as to insulate them. For example, the insulating layer 640 may include a first insulating layer 641 positioned between the driving electrode pattern 631 and the sensing electrode pattern 632, a second insulating layer 642 positioned on the sensing electrode pattern 632, and/or a third insulating layer 643 positioned under (or at a rear surface of) the driving electrode pattern 631. In the drawings, only the embodiment in which the sensing electrode pattern 632 is positioned above the driving electrode pattern 631 is illustrated, but in some embodiments (not illustrated), the driving electrode pattern 631 can be positioned above the sensing electrode pattern 632.
The driving electrode pattern 631 and the sensing electrode pattern 632 may be electrically connected to a trace wire (e.g., the trace wire 440 in FIG. 4) via pads (e.g., the pad 445 in FIG. 4), respectively, and the detailed description thereof will be made later.

FIG. 7 is a view illustrating an electrical connection relation between a trace wire and a touch electrode pattern of an electronic device according to certain embodiments. According to an embodiment, a touch electrode pattern 730 (e.g., the touch electrode pattern 430 in FIG. 4), pads 744 and 745 (e.g., the pad 445 in FIG. 4), and a trace wire 740 (e.g., the trace wire 440 in FIG. 4) may be arranged on a substrate 720 (e.g., the substrate 420 in FIG. 4) of an electronic device 700 (e.g., the electronic device 400 in FIG. 4). According to an embodiment, the touch electrode pattern 730 may be disposed at a view area VA (e.g., the view area VA in FIG. 4) of a window (e.g., the window 410 in FIG. 4), an active area (e.g., the active area in FIG. 4) of a display panel (e.g., the display panel 460 in FIG. 4), or an area 721 and 722 (e.g., 421 and 422 in FIG. 4) on the substrate 720 corresponding to an edge area (e.g., the edge area in FIG. 4). The trace wire 740 may be arranged in an area 723 (e.g., 423 in FIG. 4) on the substrate 720 corresponding to a non-view area N-VA (e.g., the non-view area N-VA in FIG. 4) of the window, and the plurality of pads 744 and 745 may be arranged in the area 722 (e.g., 422 in FIG. 4) on the substrate 720 corresponding to the edge area (e.g., the edge area in FIG. 4) of the display panel. The touch electrode pattern 730 and the trace wire 740 may be electrically connected via the plurality of pads 744 and 745.

According to an embodiment, the touch electrode pattern 730 may include a driving electrode pattern (Tx) 731 and a sensing electrode pattern (Rx) 732, and the driving electrode pattern 731 and the sensing electrode pattern 732 may be provided on separate layers. In an example, the sensing electrode pattern 732 may be positioned above the driving electrode pattern 731, and in another example, the driving electrode pattern 731 may be positioned above the sensing electrode pattern 732. In some embodiments, the driving electrode pattern 731 and the sensing electrode pattern 732 may be arranged to intersect with each other when viewed from above the substrate 720. The driving electrode pattern 731 and the sensing electrode pattern 732 may include a metal mesh pattern having a lattice structure, and in an example, the driving electrode pattern 731 and the sensing electrode pattern 732 may include a metal mesh pattern having the same width, height, and/or pitch. Here, the metal mesh pattern may indicate a pattern provided in a mesh shape, and a plurality of quadrangular or rhombic lattices may be provided within the metal mesh pattern due to the structure of the mesh shape.

The area 722 on the substrate corresponding to the edge area of the display panel may include a first edge 722a provided along one side surface of the area 721 on the substrate corresponding to the active area of the display panel, a second edge 722b provided along one side surface in a direction opposite to the first edge 722a, a third edge 722c provided along one side surface in a direction perpendicular to the first edge 722a and the second edge 722b, and a fourth edge 722d provided along one side surface in a direction opposite to the third edge 722c and perpendicular to the first edge 722a and the second edge 722b. The plurality of pads 744 and 745 may be arranged at the first edge 722a, the second edge 722b, the third edge 722c, and the fourth edge 722d, respectively, to form a row.
According to an embodiment, the trace wire 740 may include a driving trace wire (Tx trace) 741 electrically connected to the driving electrode pattern 731 and a sensing trace wire (Rx trace) 742 electrically connected to the sensing electrode pattern 732. Via the electrical connection relation described above, a touch IC (not illustrated) may apply an electrical signal to the driving electrode pattern 731 or may receive an electrical signal from the sensing electrode pattern 732 so as to recognize a touch position according to an input (e.g., a pen or a hand of a user). In an example, the sensing trace wire 742 may be disposed in an area adjacent to the first edge 722a and the second edge 722b in the area 723 on the substrate 720 corresponding to the non-view area of the window. The sensing trace wire 742 may be electrically connected to the sensing electrode pattern 732 via the pads 744 disposed at the first edge 722a and the second edge 722b. In another example, the driving trace wire 741 may be disposed in an area adjacent to the third edge 722c and the fourth edge 722d in the area 723 on the substrate 720 corresponding to the non-view area of the window and may be electrically connected to the driving electrode pattern 731 via the pads 745 disposed at the third edge 722c and the fourth edge 722d.

According to an embodiment (area A indicated by a dotted box in the upper left corner of FIG. 7), the first edge 722a and the second edge 722b are areas in which the sensing trace wire 742 and the sensing electrode pattern 732 are electrically connected to each other, but the driving electrode pattern 731 is positioned under (or above) the sensing electrode pattern 732, so that a coupling capacitance ΔCm may be produced between the sensing trace wire 742 and the driving electrode pattern 731. According to another embodiment (area B indicated by a dotted box in FIG. 7), the third edge 722c and the fourth edge 722d are likewise areas in which the driving trace wire 741 and the driving electrode pattern 731 are electrically connected to each other, but the sensing electrode pattern 732 is positioned above (or under) the driving electrode pattern 731, so that a coupling capacitance may be produced between the driving trace wire 741 and the sensing electrode pattern 732. The coupling capacitance produced between the sensing trace wire 742 and the driving electrode pattern 731, or between the driving trace wire 741 and the sensing electrode pattern 732, may cause a malfunction (e.g., a ghost touch) of the electronic device, and thus the present disclosure provides an electronic device capable of minimizing the coupling capacitance described above to prevent a malfunction, as explained hereinafter. Note that a ghost touch means that a touch unintended by a user is recognized; it may also be referred to by other names.
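As an illustration of why such coupling matters, the following toy computation (not from the disclosure; the grid size, capacitance values, and threshold are arbitrary assumptions) shows how a parasitic offset on one sensing channel survives baseline subtraction and is flagged along the whole channel, i.e., as ghost touches.

```python
# The touch IC estimates the mutual capacitance Cm at each Tx/Rx crossing
# and flags crossings whose |Cm - baseline| exceeds a threshold. A parasitic
# coupling between a sensing trace wire and the driving electrode pattern
# biases every reading on that Rx channel even with no finger present.
import numpy as np

TXS, RXS = 8, 4
CM0 = 1.0          # nominal mutual capacitance per crossing (arbitrary units)
THRESHOLD = 0.05   # detection threshold on |Cm - baseline|

baseline = np.full((TXS, RXS), CM0)

frame = baseline.copy()
frame[3, 1] -= 0.20   # a real finger locally reduces Cm at (Tx-4, Rx-2)
frame[:, 0] += 0.08   # parasitic trace coupling biases all of channel Rx-1

touches = np.argwhere(np.abs(frame - baseline) > THRESHOLD)
print(touches)  # every crossing on Rx-1 is flagged: ghost touches
```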
Referring to FIG. 8A and FIG. 8B, an electronic device 800 according to an embodiment may include a substrate 820, a touch electrode pattern 830 including a driving electrode pattern 831 and a sensing electrode pattern 832, a trace wire including a driving trace wire 841 and a sensing trace wire 842, and/or pads 844 and 845 configured to electrically connect the touch electrode pattern 830 and the trace wire. At least one of the components of the electronic device 800 may be identical or similar to at least one of the components of the electronic device (e.g., the electronic device 700 in FIG. 7) in FIG. 7, and a redundant description will be omitted for clarity and simplicity.

According to an embodiment, the driving electrode pattern 831 and the sensing electrode pattern 832 including a metal mesh pattern may be arranged to intersect with each other in areas 821 and 822 on the substrate 820 corresponding to a view area (e.g., the view area VA in FIG. 4) of a window. The areas 821 and 822 on the substrate 820 corresponding to the view area of the window may be divided into an area 821 corresponding to an active area (the active area in FIG. 4) of a display panel (e.g., the display panel 460 in FIG. 4) and an area 822 corresponding to an edge area (the edge area in FIG. 4) of the display panel. The area 822 corresponding to the edge area of the display panel may include a first edge 822a provided along one side surface of the area 821 corresponding to the active area of the display panel, a second edge (e.g., the second edge 722b in FIG. 7) provided along one side surface in a direction opposite to the first edge 822a, a third edge 822c provided along one side surface in a direction perpendicular to the first edge 822a and the second edge, and a fourth edge (e.g., the fourth edge 722d in FIG. 7) provided along one side surface in a direction opposite to the third edge 822c.

Referring to FIG. 8A, according to an embodiment, the plurality of pads 844 may be arranged to form a row at the first edge 822a. The sensing trace wire 842 arranged in the area 823 on the substrate 820 corresponding to a non-view area (e.g., the non-view area N-VA in FIG. 4) of the window may be electrically connected to the sensing electrode pattern 832 via the plurality of pads 844 arranged at the first edge 822a.

According to an embodiment, the disconnection part 851 configured to disconnect a connection line of the driving electrode pattern 831 may be provided at a partial area of the first edge 822a. The disconnection part 851 may disconnect the driving electrode pattern 831 arranged at a partial area of the first edge 822a so as to divide the driving electrode pattern 831 into a driving area 831a and a dummy area 831b. That is, the electronic device 800 may minimize, via the disconnection part 851, a coupling capacitance (e.g., the coupling capacitance ΔCm in FIG. 7) unintentionally incurred between the sensing trace wire 842 and the driving electrode pattern 831 disposed under (or above) the sensing electrode pattern 832. Although not illustrated in the drawings, the disconnection part 851 may also be provided at a partial area of the second edge to minimize the coupling capacitance between the sensing trace wire 842 and the driving electrode pattern 831, which may occur at the second edge.

According to an embodiment, the disconnection part 851 may be provided to be spaced a predetermined distance α apart from an outer circumferential surface of the area 821 corresponding to the active area of the display panel.
When the disconnection part 851 is provided at a position closer than the predetermined distance α with respect to the outer circumferential surface of the area 821 corresponding to the active area of the display panel, a touch input to the edge of the area 821 corresponding to the active area AA of the display panel may fail to be sensed. In contrast, when the disconnection part 851 is provided at a position farther than the predetermined distance α, the coupling capacitance incurred between the sensing trace wire 842 and the driving electrode pattern 831 cannot be prevented. Thus, the disconnection part 851 is preferably provided to be spaced the predetermined distance α apart from the outer circumferential surface of the area 821 corresponding to the active area of the display panel. In an example, the disconnection part 851 may be provided in an area spaced 300 μm apart from the outer circumferential surface of the area 821 corresponding to the active area of the display panel, but is not limited thereto.

Referring to FIG. 8B, the plurality of pads 845 may also be arranged to form a row at the third edge 822c, and the driving trace wire 841 disposed in the area 823 on the substrate 820 corresponding to the non-view area (e.g., the non-view area N-VA in FIG. 4) of the window may be electrically connected to the driving electrode pattern 831 via the plurality of pads 845 at the third edge 822c. Since, at the third edge 822c, a coupling capacitance may unintentionally occur between the driving trace wire 841 and the sensing electrode pattern 832 disposed above (or under) the driving electrode pattern 831, the disconnection part 852 configured to disconnect the connection line of the sensing electrode pattern 832 may be provided at a partial area of the third edge 822c. The disconnection part 852 may disconnect the sensing electrode pattern 832 disposed at a partial area of the third edge 822c so as to divide the sensing electrode pattern 832 into a sensing area 832a and a dummy area 832b, and as a result, the coupling capacitance (e.g., the coupling capacitance ΔCm in FIG. 7) incurred between the driving trace wire 841 and the sensing electrode pattern 832 may be minimized. Although not illustrated in the drawings, the disconnection part 852 may also be provided at a partial area of the fourth edge so as to minimize the coupling capacitance between the driving trace wire 841 and the sensing electrode pattern 832, which may occur at the fourth edge. According to an embodiment, the disconnection part 852 may be provided to be spaced the predetermined distance α apart from an outer circumferential surface of the area 821 corresponding to the active area of the display panel, and a redundant description will be omitted.

That is, the electronic device 800 according to an embodiment may minimize the coupling capacitance incurred between the driving electrode pattern 831 and the sensing trace wire 842 or between the sensing electrode pattern 832 and the driving trace wire 841 via the disconnection part 851 provided at at least a partial area of the driving electrode pattern 831 disposed at the first edge 822a and/or the second edge and the disconnection part 852 provided at at least a partial area of the sensing electrode pattern 832 disposed at the third edge 822c and/or the fourth edge.
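The placement rule above can be condensed into a small helper for illustration. This is a hypothetical sketch: the function name, the tolerance band, and the diagnostic strings are assumptions, while the tradeoff it encodes (too close loses edge sensing, too far leaves the coupling unsuppressed) and the 300 μm example value come from the description above.

```python
# Hypothetical helper encoding the placement rule for the disconnection
# part: it should sit at roughly the predetermined distance α from the
# active-area boundary. Closer sacrifices edge touch sensing; farther
# leaves the trace-to-electrode coupling capacitance unsuppressed.

ALPHA_UM = 300.0      # example value of the predetermined distance α, in μm
TOLERANCE_UM = 10.0   # assumed manufacturing tolerance, not from the patent

def classify_disconnection_position(distance_um: float) -> str:
    """Classify a disconnection-part position by its distance from the
    outer circumferential surface of the active area."""
    if distance_um < ALPHA_UM - TOLERANCE_UM:
        return "too close: edge of the active area may lose touch sensing"
    if distance_um > ALPHA_UM + TOLERANCE_UM:
        return "too far: trace-to-electrode coupling is not suppressed"
    return "ok: spaced about α from the active-area boundary"

for d in (250.0, 300.0, 420.0):
    print(f"{d:.0f} μm -> {classify_disconnection_position(d)}")
```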
FIG. 9A is a view illustrating a state in which only a sensing electrode pattern 932 is disposed in a partial area of a substrate area 922a corresponding to an edge area of a display panel. FIG. 9B is a view illustrating a state in which only a driving electrode pattern 931 is disposed in a partial area of a substrate area 922c corresponding to an edge area of a display panel.

Referring to FIG. 9A and FIG. 9B, an electronic device 900 according to another embodiment may include a substrate 920, a touch electrode pattern 930 including a driving electrode pattern 931 and a sensing electrode pattern 932, a trace wire including a driving trace wire 941 and a sensing trace wire 942, and/or pads 944 and 945 configured to electrically connect the trace wire and the touch electrode pattern 930. At least one of the components of the electronic device 900 may be identical or similar to at least one of the components of the electronic device (e.g., the electronic device 700 in FIG. 7) in FIG. 7 and the electronic device (e.g., the electronic device 800 in FIG. 8A and FIG. 8B) in FIG. 8A or FIG. 8B, and a redundant description will be omitted.

According to an embodiment, the sensing electrode pattern 932 and the driving electrode pattern 931 including a metal mesh pattern may be provided in areas 921 and 922 on the substrate 920 corresponding to a view area (e.g., the view area VA in FIG. 4) of a window. The driving electrode pattern 931 and the sensing electrode pattern 932 may be provided on separate layers, and the driving electrode pattern 931 and the sensing electrode pattern 932 may be arranged to intersect with each other when viewed from above the substrate 920. The areas 921 and 922 on the substrate 920 corresponding to the view area of the window may be divided into an area 921 corresponding to an active area (the active area in FIG. 4) of a display panel (e.g., the display panel 460 in FIG. 4) and an area 922 corresponding to an edge area (the edge area in FIG. 4) of the display panel. The area 922 corresponding to the edge area of the display panel may include a first edge 922a provided along one side surface of the area 921 corresponding to the active area of the display panel, a second edge (e.g., the second edge 722b in FIG. 7) provided along one side surface in a direction opposite to the first edge 922a, a third edge 922c provided along one side surface in a direction perpendicular to the first edge 922a and the second edge, and a fourth edge (e.g., the fourth edge 722d in FIG. 7) provided along one side surface in a direction opposite to the third edge 922c.

Referring to FIG. 9A, the plurality of pads 944 may be arranged to form a row at the first edge 922a. The sensing trace wire 942 arranged in the area 923 on the substrate 920 corresponding to a non-view area (e.g., the non-view area N-VA in FIG. 4) of the window may be electrically connected to the sensing electrode pattern 932 via the plurality of pads 944 arranged at the first edge 922a. At the first edge 922a, an unintended coupling capacitance (e.g., the coupling capacitance ΔCm in FIG. 7) may occur between the sensing trace wire 942 and the driving electrode pattern 931 positioned under (or above) the sensing electrode pattern 932. The coupling capacitance incurred between the sensing trace wire 942 and the driving electrode pattern 931 may cause a malfunction such as a ghost touch.
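For a rough sense of scale of this unintended coupling, a parallel-plate estimate is instructive. All dimensions and the permittivity in the sketch below are illustrative assumptions, not values from this disclosure; only the formula C = ε0·εr·A/d is standard physics.

```python
# Rough parallel-plate estimate of the unintended coupling capacitance
# between a trace wire and the electrode pattern on the other layer.
# Geometry and permittivity below are assumptions for illustration only.

EPS0 = 8.854e-12        # vacuum permittivity, F/m
EPS_R = 3.5             # assumed relative permittivity of the insulator
OVERLAP_LEN_M = 5e-3    # assumed 5 mm of trace running over the pattern
OVERLAP_WID_M = 20e-6   # assumed 20 μm effective overlap width
GAP_M = 2e-6            # assumed 2 μm insulator thickness between layers

def coupling_capacitance(area_m2: float, gap_m: float, eps_r: float) -> float:
    """Parallel-plate capacitance C = ε0·εr·A / d."""
    return EPS0 * eps_r * area_m2 / gap_m

c = coupling_capacitance(OVERLAP_LEN_M * OVERLAP_WID_M, GAP_M, EPS_R)
print(f"estimated coupling capacitance: {c * 1e12:.2f} pF")
# ~1.55 pF for these numbers — comparable to a pF-scale mutual
# capacitance, which is why the overlapping pattern is disconnected
# or removed in the embodiments described here.
```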
The electronic device 900 according to an embodiment may arrange only the sensing electrode pattern 932, without the driving electrode pattern 931, at a partial area of the first edge 922a so as to minimize the coupling capacitance incurred between the sensing trace wire 942 and the driving electrode pattern 931. According to an embodiment, the first edge 922a may be divided into an area in which both the driving electrode pattern 931 and the sensing electrode pattern 932 are arranged and an area in which only the sensing electrode pattern 932 is arranged without the driving electrode pattern 931. For example, both the driving electrode pattern 931 and the sensing electrode pattern 932 may be arranged in an area 9221 of the first edge 922a positioned within a predetermined distance α from an outer circumferential surface of the area 921 corresponding to the active area of the display panel. In another example, only the sensing electrode pattern 932 may be arranged, without the driving electrode pattern 931, in an area 9222 of the first edge 922a distanced, by the predetermined distance α or more, from the outer circumferential surface of the area 921 corresponding to the active area of the display panel.

When only the sensing electrode pattern 932 is also disposed in the area 9221 positioned within the predetermined distance α from the outer circumferential surface of the area 921 corresponding to the active area of the display panel, a touch input to the edge of the area 921 corresponding to the active area AA of the display panel may fail to be sensed. In contrast, when both the driving electrode pattern 931 and the sensing electrode pattern 932 are arranged in the area 9222 distanced, by the predetermined distance α or more, from the outer circumferential surface of the area 921 corresponding to the active area of the display panel, the coupling capacitance incurred between the sensing trace wire 942 and the driving electrode pattern 931 cannot be prevented. Accordingly, in the first edge 922a, only the sensing electrode pattern 932 is preferably disposed in the area 9222 distanced, by the predetermined distance α or more, from the outer circumferential surface of the area 921 corresponding to the active area of the display panel, and both the driving electrode pattern 931 and the sensing electrode pattern 932 are preferably disposed in the area 9221 positioned within the predetermined distance α. In an example, the predetermined distance α may be 300 μm, but is not limited thereto.

Although not illustrated in the drawings, also at the second edge, both the driving electrode pattern 931 and the sensing electrode pattern 932 may be arranged in the area 9221 positioned within the predetermined distance α from the outer circumferential surface of the area 921 corresponding to the active area of the display panel, and only the sensing electrode pattern 932 may be disposed in the area 9222 distanced by the predetermined distance α or more therefrom, thereby minimizing the coupling capacitance incurred between the sensing trace wire 942 and the driving electrode pattern 931.

Referring to FIG. 9B, the plurality of pads 945 may also be arranged to form a row at the third edge 922c. The driving trace wire 941 disposed in the area 923 on the substrate 920 corresponding to a non-view area (e.g., the non-view area N-VA in FIG. 4) of the window may be electrically connected to the driving electrode pattern 931 via the plurality of pads 945 arranged at the third edge 922c.
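The first- and second-edge layout rule described above can be condensed into a small hypothetical helper; the function name and return labels are assumptions. The third and fourth edges, discussed next, follow the mirrored rule in which only the driving electrode pattern is retained beyond α.

```python
# Hypothetical encoding of the second embodiment's layout rule for the
# first and second edges: within α of the active area both electrode
# layers remain (area 9221); beyond α only the sensing electrode pattern
# remains (area 9222).

ALPHA_UM = 300.0  # example predetermined distance α from the description

def patterns_at_first_edge(distance_from_active_um: float) -> set[str]:
    """Return which electrode layers are present at a point on the first
    (or second) edge, given its distance from the active-area boundary."""
    if distance_from_active_um < ALPHA_UM:
        return {"driving (Tx)", "sensing (Rx)"}  # area 9221: both layers
    return {"sensing (Rx)"}                      # area 9222: Rx only

print(patterns_at_first_edge(100.0))  # both layers -> edge touches sensed
print(patterns_at_first_edge(500.0))  # Rx only -> Tx coupling removed
```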
Also, at the third edge 922c, an unintended coupling capacitance (e.g., the coupling capacitance ΔCm in FIG. 7) may occur between the driving trace wire 941 and the sensing electrode pattern 932 positioned above (or under) the driving electrode pattern 931. The coupling capacitance incurred between the driving trace wire 941 and the sensing electrode pattern 932 may cause a malfunction such as a ghost touch. In the electronic device 900 according to an embodiment, only the driving electrode pattern 931 may be disposed, without the sensing electrode pattern 932, at a partial area of the third edge 922c so as to minimize the coupling capacitance incurred between the driving trace wire 941 and the sensing electrode pattern 932. The third edge 922c may be divided into an area in which both the driving electrode pattern 931 and the sensing electrode pattern 932 are arranged and an area in which only the driving electrode pattern 931 is disposed without the sensing electrode pattern 932. In an example, both the driving electrode pattern 931 and the sensing electrode pattern 932 may be arranged in an area 9223 of the third edge 922c positioned within a predetermined distance α from the outer circumferential surface of the area 921 corresponding to the active area of the display panel. In another example, only the driving electrode pattern 931 may be disposed, without the sensing electrode pattern 932, in an area 9224 of the third edge 922c distanced, by the predetermined distance α or more, from the outer circumferential surface of the area 921 corresponding to the active area of the display panel, and a redundant description will be omitted.

Although not illustrated in the drawings, also at the fourth edge, both the driving electrode pattern 931 and the sensing electrode pattern 932 may be arranged in the area 9223 positioned within the predetermined distance α from the outer circumferential surface of the area 921 corresponding to the active area of the display panel, and only the driving electrode pattern 931 may be disposed in the area 9224 distanced by the predetermined distance α or more therefrom, thereby minimizing the coupling capacitance incurred between the driving trace wire 941 and the sensing electrode pattern 932.

That is, in the electronic device 900 according to an embodiment, only the sensing electrode pattern 932 may be disposed, without the driving electrode pattern 931, in an area (e.g., a partial area of the first edge 922a and the second edge) adjacent to the sensing trace wire 942, and only the driving electrode pattern 931 may be disposed, without the sensing electrode pattern 932, in an area (e.g., a partial area of the third edge 922c and the fourth edge) adjacent to the driving trace wire 941, thereby preventing a malfunction such as a ghost touch from occurring.

FIG. 10A is a view illustrating a metal pattern 960 and a dummy pattern 970 arranged in a lattice configured by the sensing electrode pattern 932. FIG. 10B is a view illustrating a metal pattern 961 and a dummy pattern 971 arranged in a lattice configured by the driving electrode pattern 931.

Referring to FIG. 10A, the electronic device 900 according to an embodiment may further include the metal pattern 960 and the dummy pattern 970. As shown, only the sensing electrode pattern 932 may be disposed, without the driving electrode pattern 931, at the partial area 9222 of the first edge 922a of the electronic device 900 according to an embodiment, and the metal pattern 960 and the dummy pattern 970 may be provided on the partial area 9222 of the first edge 922a, in which only the sensing electrode pattern 932 is disposed.
The sensing electrode pattern 932 may include a metal mesh pattern having a lattice structure, and a plurality of quadrangular or rhombic lattices may be provided by the sensing electrode pattern 932. The plurality of lattices provided by the sensing electrode pattern 932 may be divided into a lattice 932a provided in the sensing electrode pattern 932 electrically connected to the sensing trace wire 942 and a lattice 932b provided between the plurality of sensing electrode patterns 932. According to an embodiment, the metal pattern 960 may be disposed between the lattices 932a provided in the sensing electrode pattern 932, and the dummy pattern 970 may be disposed at the lattice 932b provided between the plurality of sensing electrode patterns 932. In an example, the metal pattern 960 and the dummy pattern 970 may be made of a metal material identical to that of the sensing electrode pattern 932, but in some embodiments, the metal pattern 960 and the dummy pattern 970 may be made of a metal material different from that of the sensing electrode pattern 932. In an example, the metal pattern 960 and the dummy pattern 970 may be configured in a shape of filling an inner space of the lattices 932a and 932b.

The metal pattern 960 may be disposed between the lattices 932a provided in the sensing electrode pattern 932 so as to reduce a resistance value of the sensing electrode pattern 932. As the resistance value of the sensing electrode pattern 932 is reduced, the touch recognition performance of the touch electrode pattern may be improved, and as a result, the electronic device 900 may accurately recognize a hovering touch input (e.g., a gesture of a user). In another example, the metal pattern 960 may fill an inner space of the lattice 932a provided in the sensing electrode pattern 932, and the dummy pattern 970 may fill an inner space of the lattice 932b provided between the plurality of sensing electrode patterns 932, thereby preventing the sensing electrode pattern 932 from being viewed from the outside of the electronic device 900. Although not illustrated in the drawings, it is obvious that the metal pattern 960 and the dummy pattern 970 can be arranged in an area of the second edge in which only the sensing electrode pattern 932 is disposed.

Referring to FIG. 10B, as illustrated in FIG. 9B, only the driving electrode pattern 931 may be disposed, without the sensing electrode pattern 932, at the partial area 9224 of the third edge 922c. The driving electrode pattern 931 may include a metal mesh pattern having a lattice structure, so that a plurality of quadrangular or rhombic lattices may be provided at the partial area 9224 of the third edge 922c. The plurality of lattices provided by the driving electrode pattern 931 may be divided into a lattice 931a provided in the driving electrode pattern 931 electrically connected to the driving trace wire 941 and a lattice 931b provided between the plurality of driving electrode patterns 931, and the metal pattern 961 and the dummy pattern 971 may be arranged in the plurality of lattices 931a and 931b. In an example, the metal pattern 961 and the dummy pattern 971 may be made of a metal material identical to that of the driving electrode pattern 931, but are not limited thereto. The metal pattern 961 may be disposed between the lattices 931a provided in the driving electrode pattern 931 to reduce the resistance value of the driving electrode pattern 931, and as a result, the touch recognition performance of the driving electrode pattern 931 may be improved (e.g., hovering touch input recognition and the like).
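A back-of-the-envelope way to see this resistance reduction is to treat the metal-pattern fill as a conduction path in parallel with a mesh-line segment. The resistivity and all dimensions in the sketch below are assumptions for illustration (only the 4 μm line width and 700 nm thickness echo values used elsewhere in this document); the formula R = ρL/A is standard.

```python
# Rough illustration of why filling the lattice openings with a metal
# pattern lowers the electrode resistance: the fill conducts in parallel
# with the mesh line. Dimensions and resistivity are assumptions.

RHO_OHM_M = 2.0e-8                  # assumed metal resistivity, Ω·m
SEG_LEN_M = 100e-6                  # one lattice segment, assumed 100 μm
MESH_W_M, MESH_T_M = 4e-6, 0.7e-6   # mesh line: 4 μm wide, 700 nm thick
FILL_W_M, FILL_T_M = 40e-6, 0.7e-6  # assumed effective fill cross-section

def wire_resistance(length_m: float, width_m: float, thick_m: float) -> float:
    """R = ρ·L / A for a rectangular conductor."""
    return RHO_OHM_M * length_m / (width_m * thick_m)

r_mesh = wire_resistance(SEG_LEN_M, MESH_W_M, MESH_T_M)
r_fill = wire_resistance(SEG_LEN_M, FILL_W_M, FILL_T_M)
r_combined = 1.0 / (1.0 / r_mesh + 1.0 / r_fill)  # parallel combination

print(f"mesh segment alone     : {r_mesh:.3f} Ω")
print(f"with metal-pattern fill: {r_combined:.3f} Ω")
```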
According to an embodiment, the metal pattern 961 and the dummy pattern 971 may be configured in a shape of filling the inner space of the lattices 931a and 931b. The metal pattern 961 and the dummy pattern 971 may be arranged to fill the inner space of the lattices 931a and 931b so as to prevent the driving electrode pattern 931 disposed on the area 922 on the substrate 920 corresponding to the edge area of the display from being viewed from the outside of the electronic device 900. Although not illustrated in the drawings, also in an area of the fourth edge in which only the driving electrode pattern 931 is provided, it is obvious that the metal pattern 961 and the dummy pattern 971 can be arranged between the lattices provided by the driving electrode pattern 931.

According to certain embodiments of the disclosure, an electronic device (e.g., the electronic device 400 in FIG. 4) may include a display panel (e.g., the display panel 460 in FIG. 4) including an active area in which data is displayed and an edge area which is disposed along an outer circumferential surface of the active area, a substrate (e.g., the substrate 420 in FIG. 4) which is positioned on the display panel and includes a first area (e.g., 421 in FIG. 4) corresponding to the active area and a second area (e.g., 422 in FIG. 4) corresponding to the edge area, a metal mesh electrode pattern (e.g., the touch electrode pattern 430 in FIG. 4) which is disposed on the first area and the second area, and a plurality of trace wires (e.g., the trace wire 440 in FIG. 4) which are electrically connected to the metal mesh electrode pattern, wherein the metal mesh electrode pattern includes a first electrode pattern (e.g., the driving electrode pattern 631 in FIG. 6) which has a lattice structure and a second electrode pattern (e.g., the sensing electrode pattern 632 in FIG. 6) which is positioned above the first electrode pattern and has a lattice structure, the first and second electrode patterns being disposed to intersect with each other, and a disconnection part (e.g., the disconnection parts 851 and 852 in FIG. 8A and FIG. 8B) may be disposed at at least a partial area of the metal mesh electrode pattern disposed on the second area.

According to an embodiment, the disconnection part may be disposed at a position spaced a predetermined distance (e.g., α in FIG. 8A and FIG. 8B) apart from an outer circumferential surface of the first area.

According to an embodiment, the second area (e.g., 722 in FIG. 7) may include a first edge (e.g., the first edge 722a in FIG. 7) provided along one side surface of the first area, a second edge (e.g., the second edge 722b in FIG. 7) provided in a direction opposite to the first edge, a third edge (e.g., the third edge 722c in FIG. 7) provided in a direction perpendicular to the first edge and the second edge, and a fourth edge (e.g., the fourth edge 722d in FIG. 7) provided in a direction opposite to the third edge.

According to an embodiment, the disconnection part (e.g., the disconnection part 851 in FIG. 8A) may be provided at at least a partial area of the first electrode pattern provided on the first edge or the second edge to disconnect the first electrode pattern.

According to an embodiment, the disconnection part (e.g., the disconnection part 852 in FIG. 8B) may be provided at at least a partial area of the second electrode pattern provided on the third edge or the fourth edge to disconnect the second electrode pattern.

According to an embodiment, a polarizing layer (e.g., the polarizing layer 450 in FIG. 4) positioned above the metal mesh electrode pattern may be further included.
According to an embodiment, a polarizing layer (e.g., the polarizing layer 550 in FIG. 5) positioned between the substrate and the display panel may be further included.

According to an embodiment, a first insulating layer (e.g., the first insulating layer 641 in FIG. 6) positioned between the first electrode pattern and the second electrode pattern may be further included.

According to an embodiment, a second insulating layer (e.g., the second insulating layer 642 in FIG. 6) positioned above the second electrode pattern and a third insulating layer (e.g., the third insulating layer 643 in FIG. 6) positioned between the first electrode pattern and the substrate may be further included.

According to certain embodiments of the disclosure, an electronic device (e.g., the electronic device 400 in FIG. 4) may include a display panel (e.g., the display panel 460 in FIG. 4) including an active area in which data is displayed and an edge area which is disposed along an outer circumferential surface of the active area, a substrate (e.g., the substrate 420 in FIG. 4) which is positioned on the display panel and includes a first area (e.g., 421 in FIG. 4) corresponding to the active area and a second area (e.g., 422 in FIG. 4) corresponding to the edge area, a metal mesh electrode pattern (e.g., the touch electrode pattern 430 in FIG. 4) which is disposed on the first area and the second area, and a plurality of trace wires (e.g., the trace wire 440 in FIG. 4) positioned on the substrate and electrically connected to the metal mesh electrode pattern, wherein the metal mesh electrode pattern includes a first electrode pattern (e.g., the driving electrode pattern 631 in FIG. 6) which has a lattice structure and a second electrode pattern (e.g., the sensing electrode pattern 632 in FIG. 6) which is positioned above the first electrode pattern and has a lattice structure, the first and second electrode patterns being disposed to intersect with each other, the first electrode pattern and the second electrode pattern are provided at the first area, and only one of the first electrode pattern and the second electrode pattern is provided at at least a partial area (e.g., 9222 in FIG. 9A and 9224 in FIG. 9B) of the second area.

According to an embodiment, the first electrode pattern and the second electrode pattern may be provided in an area (e.g., 9221 in FIG. 9A and 9223 in FIG. 9B) of the second area positioned within a predetermined distance from an outer circumferential surface of the first area, and only one of the first electrode pattern and the second electrode pattern may be provided in an area (e.g., 9222 in FIG. 9A and 9224 in FIG. 9B) of the second area distanced, by the predetermined distance or more, from the outer circumferential surface of the first area.

According to an embodiment, the second area (e.g., 722 in FIG. 7) may include a first edge (e.g., the first edge 722a in FIG. 7) provided along one side surface of the first area, a second edge (e.g., the second edge 722b in FIG. 7) provided in a direction opposite to the first edge, a third edge (e.g., the third edge 722c in FIG. 7) provided in a direction perpendicular to the first edge and the second edge, and a fourth edge (e.g., the fourth edge 722d in FIG. 7) provided in a direction opposite to the third edge.

According to an embodiment, only the first electrode pattern (e.g., the sensing electrode pattern 932 in FIG. 9A) may be provided at at least a partial area (e.g., 9222 in FIG. 9A) of the first edge or the second edge.
According to an embodiment, a first metal pattern (e.g., the metal pattern 960 in FIG. 10A) and a first dummy pattern (e.g., the dummy pattern 970 in FIG. 10A) positioned on an area of the substrate in which only the first electrode pattern is disposed may be further included, wherein the first metal pattern and the first dummy pattern are arranged between lattices (e.g., 932a and 932b in FIG. 10A) configured by the first electrode pattern.

According to an embodiment, only the second electrode pattern (e.g., the driving electrode pattern 931 in FIG. 9B) may be provided at at least a partial area (e.g., 9224 in FIG. 9B) of the third edge or the fourth edge.

According to an embodiment, a second metal pattern (e.g., the metal pattern 961 in FIG. 10B) and a second dummy pattern (e.g., the dummy pattern 971 in FIG. 10B) positioned on an area of the substrate in which only the second electrode pattern is disposed may be further included, and the second metal pattern and the second dummy pattern may be arranged between the lattices (e.g., 931a and 931b in FIG. 10B) provided by the second electrode pattern.

According to an embodiment, a polarizing layer (e.g., the polarizing layer 450 in FIG. 4) positioned above the metal mesh electrode pattern may be further included.

According to an embodiment, a polarizing layer (e.g., the polarizing layer 550 in FIG. 5) positioned between the substrate and the display panel may be further included.

According to an embodiment, a first insulating layer (e.g., the first insulating layer 641 in FIG. 6) positioned between the first electrode pattern and the second electrode pattern may be further included.

According to an embodiment, a second insulating layer (e.g., the second insulating layer 642 in FIG. 6) positioned above the second electrode pattern and a third insulating layer (e.g., the third insulating layer 643 in FIG. 6) positioned between the first electrode pattern and the substrate may be further included.

In specific embodiments of the disclosure described above, a component included in the disclosure is expressed as singular or plural according to the presented specific embodiment. However, the singular or plural expression is chosen appropriately for the situation presented, for convenience of explanation, so that the disclosure is not limited to singular or plural components, and a component expressed as plural may be configured as singular, or a component expressed as singular may be configured as plural.

Meanwhile, although specific embodiments have been described in the detailed description of the disclosure, various modifications are possible without departing from the scope of the disclosure. Therefore, the scope of the disclosure should not be limited to the described embodiments and should be defined not only by the claims described below but also by equivalents of those claims.
11861122

DETAILED DESCRIPTION

The disclosure will now be described more specifically with reference to the following embodiments. It is to be noted that the following descriptions of some embodiments are presented herein for purposes of illustration and description only. It is not intended to be exhaustive or to be limited to the precise form disclosed.

The present disclosure provides, inter alia, a mesh structure, an electronic device, and a method of fabricating a mesh structure that substantially obviate one or more of the problems due to limitations and disadvantages of the related art. In one aspect, the mesh structure in an electronic device includes a first insulating layer; one or more mesh lines on a first side of the first insulating layer; and one or more protruding structures on a second side of the first insulating layer, the second side being opposite to the first side. Optionally, an orthographic projection of a respective protruding structure on a projection plane containing a surface of the first insulating layer at least partially overlaps with an orthographic projection of a respective mesh line on the projection plane. Optionally, a refractive index of the one or more protruding structures is greater than a refractive index of the first insulating layer.

The inventors of the present disclosure discover that, surprisingly and unexpectedly, light transmittance through the one or more mesh lines (for example, in an electronic device) can be significantly increased by a mesh structure having the intricate structure provided by the present disclosure. In one example, the light transmittance through the one or more mesh lines can be enhanced by more than 10% (e.g., more than 12%, more than 14%, more than 16%). Moreover, the significant increase in the light transmittance through the one or more mesh lines is observed throughout the visible light spectrum.

FIG. 1A is a perspective view of a portion of a mesh structure in some embodiments according to the present disclosure. Referring to FIG. 1A, the mesh structure in some embodiments includes a first insulating layer IN1; one or more mesh lines ML on a first side S1 of the first insulating layer IN1; and one or more protruding structures PDS on a second side S2 of the first insulating layer IN1, the second side S2 being opposite to the first side S1.

FIG. 1B is a plan view of a portion of a mesh structure in some embodiments according to the present disclosure. Referring to FIG. 1A and FIG. 1B, in some embodiments, an orthographic projection of the one or more protruding structures PDS on a projection plane PP containing a surface of the first insulating layer IN1 at least partially overlaps with an orthographic projection of the one or more mesh lines ML on the projection plane PP. Optionally, the orthographic projection of the one or more mesh lines ML on the projection plane PP substantially (e.g., at least 75%, at least 80%, at least 85%, at least 90%, at least 95%, at least 99%, or 100%) covers the orthographic projection of the one or more protruding structures PDS on the projection plane PP. Optionally, the orthographic projection of the one or more mesh lines ML on the projection plane PP completely covers the orthographic projection of the one or more protruding structures PDS on the projection plane PP.

FIG. 1C is a cross-sectional view along an A-A′ line in FIG. 1B.
Referring to FIG. 1C, in some embodiments, an orthographic projection of a respective protruding structure RPD on a projection plane PP containing a surface of the first insulating layer IN1 at least partially overlaps with an orthographic projection of a respective mesh line RML on the projection plane PP. Optionally, the orthographic projection of the respective mesh line RML on the projection plane PP substantially (e.g., at least 75%, at least 80%, at least 85%, at least 90%, at least 95%, at least 99%, or 100%) covers the orthographic projection of the respective protruding structure RPD on the projection plane PP. Optionally, the orthographic projection of the respective mesh line RML on the projection plane PP completely covers the orthographic projection of the respective protruding structure RPD on the projection plane PP.

In some embodiments, a refractive index of the one or more protruding structures is greater than a refractive index of the first insulating layer. Optionally, the refractive index of the one or more protruding structures is greater than the refractive index of the first insulating layer by at least 0.01, e.g., at least 0.02, at least 0.03, at least 0.04, at least 0.05, at least 0.06, at least 0.07, at least 0.08, at least 0.09, at least 0.10, at least 0.11, at least 0.12, at least 0.13, at least 0.14, at least 0.15, at least 0.16, at least 0.17, at least 0.18, or at least 0.20.

In some embodiments, referring to FIG. 1B and FIG. 1C, a cross-section width csw of a cross-section of the respective protruding structure RPD along a plane ISP intersecting the respective protruding structure RPD and the respective mesh line RML and perpendicular to a longitudinal direction Dlg of the respective mesh line RML decreases along a protruding direction Dpd from the first side S1 to the second side S2.

FIG. 1D is a plan view of a portion of a mesh structure in some embodiments according to the present disclosure. FIG. 1E is a cross-sectional view along a B-B′ line in FIG. 1D. Referring to FIG. 1D and FIG. 1E, in some embodiments, the orthographic projection of the respective mesh line RML on the projection plane PP partially overlaps with the orthographic projection of the respective protruding structure RPD on the projection plane PP. A first central line Pcl1 of an orthographic projection of the respective protruding structure RPD on the projection plane PP and a second central line Pcl2 of an orthographic projection of the respective mesh line RML on the projection plane PP are spaced apart from each other by a distance d. In FIG. 1B and FIG. 1C, the first central line and the second central line overlap with each other, and the distance d equals zero. In FIG. 1D and FIG. 1E, the distance d is greater than zero, e.g., the first central line Pcl1 and the second central line Pcl2 are offset from each other. Optionally, the first central line Pcl1 and the second central line Pcl2 are substantially parallel to the longitudinal direction Dlg. As used herein, the term “substantially parallel” means that an angle is in the range of 0 degrees to approximately 45 degrees, e.g., 0 degrees to approximately 5 degrees, 0 degrees to approximately 10 degrees, 0 degrees to approximately 15 degrees, 0 degrees to approximately 20 degrees, 0 degrees to approximately 25 degrees, or 0 degrees to approximately 30 degrees. Optionally, the first central line Pcl1 and the second central line Pcl2 are parallel to the longitudinal direction Dlg.
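The refractive-index step between the protruding structure and the first insulating layer is what bends light at the tapering, lens-like surface. As a purely illustrative aid (the ray-optics framing here is an interpretation, not the disclosure's own analysis), the sketch below applies Snell's law at that interface using the example indices that appear later in this document, 1.65 for the protruding structure and 1.53 for the insulating layer.

```python
# Illustrative Snell's-law check at the interface between a protruding
# structure (n ≈ 1.65 in this document's examples) and the first
# insulating layer (n ≈ 1.53): the index step bends rays at the curved
# surface, the kind of lensing that can steer light around the opaque
# mesh line. This mechanism framing is an interpretation for intuition.

import math

N_PROTRUSION = 1.65  # refractive index of the protruding structure
N_INSULATOR = 1.53   # refractive index of the first insulating layer

def refraction_angle_deg(incidence_deg: float) -> float:
    """Exit angle for a ray crossing from the protrusion into the
    insulator, via n1·sin(θ1) = n2·sin(θ2)."""
    s = N_PROTRUSION * math.sin(math.radians(incidence_deg)) / N_INSULATOR
    if s > 1.0:
        return float("nan")  # total internal reflection past ~68°
    return math.degrees(math.asin(s))

for theta in (10.0, 30.0, 50.0):
    print(f"{theta:.0f}° in -> {refraction_angle_deg(theta):.1f}° out")
```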
Referring to FIG. 1D and FIG. 1E, in some embodiments, the mesh structure further includes a second insulating layer IN2 in direct contact with the one or more protruding structures (e.g., the respective protruding structure RPD). Optionally, the mesh structure further includes an optical clear adhesive layer OCA adhering the first insulating layer IN1 and the second insulating layer IN2 together.

In one example, the respective mesh line RML has a line width lw of 4 μm and a thickness tml of 700 nm. The first insulating layer IN1 has a refractive index of 1.53. The optical clear adhesive layer OCA has a refractive index of 1.6. The second insulating layer IN2 has a refractive index of 1.53. The cross-section of the respective protruding structure RPD along the plane ISP has a half elliptical shape, with a short axis diameter (equivalent to a maximum value of the cross-section width csw) of 4 μm and a long axis radius (equivalent to a thickness tpd of the respective protruding structure RPD along the protruding direction Dpd) of 4 μm. The respective protruding structure RPD has a refractive index of 1.65. FIG. 2 illustrates a correlation between light transmittance and a distance between a first central line and a second central line.

Referring to FIG. 2, the curve A represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are absent; the curve B represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are present and the distance d is zero; the curve C represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are present and the distance d is 1 μm; the curve D represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are present and the distance d is 2 μm; and the curve E represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are present and the distance d is 3 μm.

As compared to the light transmittance in the curve A, the light transmittance in the curve B is significantly enhanced, e.g., by 16% at a wavelength around 0.56 μm. The light transmittance in the curve E is lower than the light transmittance in the curve A, wherein the one or more protruding structures are absent from the mesh structure, indicating that the distance by which the first central line Pcl1 and the second central line Pcl2 are offset from each other should be maintained in a certain range in order to effectively enhance light transmittance.

In some embodiments, a first central line Pcl1 of an orthographic projection of the respective protruding structure RPD on the projection plane PP and a second central line Pcl2 of an orthographic projection of the respective mesh line RML on the projection plane PP are spaced apart from each other by a distance in a range of 0 to 75% (e.g., 0 to 5%, 5% to 10%, 10% to 15%, 15% to 20%, 20% to 25%, 25% to 30%, 30% to 35%, 35% to 40%, 40% to 45%, 45% to 50%, 50% to 55%, 55% to 60%, 60% to 65%, or 65% to 70%) of a line width lw of the respective mesh line RML.
Optionally, the first central line Pcl1 of an orthographic projection of the respective protruding structure RPD on the projection plane PP and the second central line Pcl2 of an orthographic projection of the respective mesh line RML on the projection plane PP are spaced apart by a distance in a range of 0 to 5% of a line width lw of the respective mesh line RML.

In some embodiments, a first central line Pcl1 of an orthographic projection of the respective protruding structure RPD on the projection plane PP and a second central line Pcl2 of an orthographic projection of the respective mesh line RML on the projection plane PP are spaced apart from each other by a distance in a range of 0 to 75% (e.g., 0 to 5%, 5% to 10%, 10% to 15%, 15% to 20%, 20% to 25%, 25% to 30%, 30% to 35%, 35% to 40%, 40% to 45%, 45% to 50%, 50% to 55%, 55% to 60%, 60% to 65%, or 65% to 70%) of a maximum value of the cross-section width csw. Optionally, the first central line Pcl1 of an orthographic projection of the respective protruding structure RPD on the projection plane PP and the second central line Pcl2 of an orthographic projection of the respective mesh line RML on the projection plane PP are spaced apart by a distance in a range of 0 to 5% of a maximum value of the cross-section width csw.

In some embodiments, the first central line Pcl1 of an orthographic projection of the respective protruding structure RPD on the projection plane PP and the second central line Pcl2 of an orthographic projection of the respective mesh line RML on the projection plane PP are spaced apart from each other by a distance no more than 3.0 μm, e.g., no more than 2.5 μm, no more than 2.0 μm, no more than 1.5 μm, no more than 1.0 μm, no more than 0.5 μm, no more than 0.4 μm, no more than 0.3 μm, no more than 0.2 μm, or no more than 0.1 μm.

In one example, the respective mesh line RML has a line width lw of 4 μm and a thickness tml of 700 nm. The first insulating layer IN1 has a refractive index of 1.53. The optical clear adhesive layer OCA has a refractive index of 1.6. The second insulating layer IN2 has a refractive index of 1.53. The cross-section of the respective protruding structure RPD along the plane ISP has a half elliptical shape, with a long axis radius (equivalent to a thickness tpd of the respective protruding structure RPD along the protruding direction Dpd) of 4 μm. The respective protruding structure RPD has a refractive index of 1.65. FIG. 3 illustrates a correlation between light transmittance and a maximum value of the cross-section width.
Referring to FIG. 3, the curve A represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are absent; the curve B represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are present and the cross-section of the respective protruding structure RPD along the plane ISP has a short axis diameter (equivalent to a maximum value of the cross-section width csw) of 4 μm; the curve C represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are present and the cross-section of the respective protruding structure RPD along the plane ISP has a short axis diameter of 3 μm; the curve D represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are present and the cross-section of the respective protruding structure RPD along the plane ISP has a short axis diameter of 2 μm; the curve E represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are present and the cross-section of the respective protruding structure RPD along the plane ISP has a short axis diameter of 1 μm; and the curve F represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are present and the cross-section of the respective protruding structure RPD along the plane ISP has a short axis diameter of 0.5 μm.

As shown in FIG. 3, as compared to a mesh structure in which the one or more protruding structures are absent (the curve A), the light transmittance of the mesh structure having the one or more protruding structures can be enhanced at various values of the short axis diameter (e.g., from 0.5 μm to 4 μm; the curve B to the curve F).

In some embodiments, a maximum value of the cross-section width csw is in a range of 12.5% to 150% (e.g., 12.5% to 25.0%, 25.0% to 50.0%, 50.0% to 75.0%, 75.0% to 100.0%, 100.0% to 125.0%, or 125.0% to 150.0%) of a thickness tpd of the respective protruding structure RPD along the protruding direction Dpd. Optionally, the maximum value of the cross-section width csw is in a range of 75% to 125% (e.g., 75.0% to 100.0%, or 100.0% to 125.0%) of the thickness tpd of the respective protruding structure RPD along the protruding direction Dpd.

In some embodiments, a maximum value of the cross-section width csw is in a range of 12.5% to 150% (e.g., 12.5% to 25.0%, 25.0% to 50.0%, 50.0% to 75.0%, 75.0% to 100.0%, 100.0% to 125.0%, or 125.0% to 150.0%) of a line width lw of the respective mesh line RML. Optionally, the maximum value of the cross-section width csw is in a range of 75% to 125% (e.g., 75.0% to 100.0%, or 100.0% to 125.0%) of the line width lw of the respective mesh line RML.

In one example, the respective mesh line RML has a line width lw of 4 μm and a thickness tml of 700 nm. The first insulating layer IN1 has a refractive index of 1.53. The optical clear adhesive layer OCA has a refractive index of 1.6. The second insulating layer IN2 has a refractive index of 1.53.
The cross-section of the respective protruding structure RPD along the plane ISP has a half elliptical shape, with a short axis diameter (equivalent to a maximum value of the cross-section width csw) of 4 μm. The respective protruding structure RPD has a refractive index of 1.65. FIG. 4 illustrates a correlation between light transmittance and a thickness of the respective protruding structure along the protruding direction.

Referring to FIG. 4, the curve A represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are absent; the curve B represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are present and the cross-section of the respective protruding structure RPD along the plane ISP has a long axis radius (equivalent to a thickness tpd of the respective protruding structure RPD along the protruding direction Dpd) of 4 μm; the curve C represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are present and the cross-section of the respective protruding structure RPD along the plane ISP has a long axis radius of 3 μm; the curve D represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are present and the cross-section of the respective protruding structure RPD along the plane ISP has a long axis radius of 2 μm; the curve E represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are present and the cross-section of the respective protruding structure RPD along the plane ISP has a long axis radius of 1 μm; and the curve F represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are present and the cross-section of the respective protruding structure RPD along the plane ISP has a long axis radius of 0.5 μm.

The light transmittance in the curve E is lower than the light transmittance in the curve A, wherein the one or more protruding structures are absent from the mesh structure, indicating that the long axis radius of the half elliptical shape (equivalent to a thickness tpd of the respective protruding structure RPD along the protruding direction Dpd) should be maintained in a certain range in order to effectively enhance light transmittance.

In some embodiments, a thickness of the respective protruding structure RPD along the protruding direction Dpd is in a range of 25% to 175% (e.g., 25.0% to 50.0%, 50.0% to 75.0%, 75.0% to 100.0%, 100.0% to 125.0%, 125.0% to 150.0%, or 150.0% to 175.0%) of a maximum value of the cross-section width csw. Optionally, the thickness of the respective protruding structure RPD along the protruding direction Dpd is in a range of 75% to 125% (e.g., 75.0% to 100.0%, or 100.0% to 125.0%) of the maximum value of the cross-section width csw.

In some embodiments, a thickness of the respective protruding structure RPD along the protruding direction Dpd is in a range of 25% to 175% (e.g., 25.0% to 50.0%, 50.0% to 75.0%, 75.0% to 100.0%, 100.0% to 125.0%, 125.0% to 150.0%, or 150.0% to 175.0%) of a line width lw of the respective mesh line RML.
Optionally, the thickness of the respective protruding structure RPD along the protruding direction Dpd is in a range of 75% to 125% (e.g., 75.0% to 100.0%, or 100.0% to 125.0%) of the line width lw of the respective mesh line RML.

Referring to FIG. 1D and FIG. 1E, in some embodiments, the mesh structure further includes a second insulating layer IN2 in direct contact with the one or more protruding structures (e.g., the respective protruding structure RPD). Optionally, the mesh structure further includes an optical clear adhesive layer OCA adhering the first insulating layer IN1 and the second insulating layer IN2 together.

In one example, the respective mesh line RML has a line width lw of 4 μm and a thickness tml of 700 nm. The first insulating layer IN1 has a refractive index of 1.53. The optical clear adhesive layer OCA has a refractive index of 1.6. The cross-section of the respective protruding structure RPD along the plane ISP has a half elliptical shape, with a short axis diameter (equivalent to a maximum value of the cross-section width csw) of 4 μm and a long axis radius (equivalent to a thickness tpd of the respective protruding structure RPD along the protruding direction Dpd) of 4 μm. The respective protruding structure RPD has a refractive index of 1.65. The first central line Pcl1 of an orthographic projection of the respective protruding structure RPD on the projection plane PP and the second central line Pcl2 of an orthographic projection of the respective mesh line RML on the projection plane PP are spaced apart by a distance of zero. FIG. 5 illustrates a correlation between light transmittance and a refractive index of a second insulating layer.

Referring to FIG. 5, the curve A represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are absent; the curve B represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are present and the second insulating layer IN2 is made of a cyclic olefin copolymer material having a refractive index of 1.53; the curve C represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are present and the second insulating layer IN2 is made of silicon oxide having a refractive index of 1.45; and the curve D represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are present and the second insulating layer IN2 is made of silicon nitride having a refractive index of 2.0.

As shown in FIG. 5, when the refractive index of the second insulating layer IN2 is similar to the refractive index of the respective protruding structure RPD (e.g., the curve B and the curve C), the light transmittance of the mesh structure can be significantly enhanced as compared to the mesh structure in which the one or more protruding structures are absent. When the difference between the refractive index of the second insulating layer IN2 and the refractive index of the respective protruding structure RPD exceeds a certain value (e.g., the curve D), the light transmittance of the mesh structure follows the same trend of variation across the range of wavelengths, however, with a far more intense resonance of the oscillation peak.
This phenomenon is caused by a stronger Fabry-Perot cavity truncation effect due to the large difference between the refractive index of the second insulating layer IN2 and the refractive index of the respective protruding structure RPD.

In one example, the respective mesh line RML has a line width lw of 4 μm and a thickness tml of 700 nm. The first insulating layer IN1 has a refractive index of 1.53. The second insulating layer IN2 has a refractive index of 1.53. The optical clear adhesive layer OCA has a refractive index of 1.6. The cross-section of the respective protruding structure RPD along the plane ISP has a half elliptical shape, with a short axis diameter (equivalent to a maximum value of the cross-section width csw) of 4 μm and a long axis radius (equivalent to a thickness tpd of the respective protruding structure RPD along the protruding direction Dpd) of 4 μm. The first central line Pcl1 of an orthographic projection of the respective protruding structure RPD on the projection plane PP and the second central line Pcl2 of an orthographic projection of the respective mesh line RML on the projection plane PP are spaced apart by a distance of zero. FIG. 6 illustrates a correlation between light transmittance and a refractive index of a respective protruding structure.

Referring to FIG. 6, the curve A represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are absent; the curve B represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are present and the respective protruding structure RPD has a refractive index of 1.65; and the curve C represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are present and the respective protruding structure RPD is made of silicon nitride having a refractive index of 2.0.

As shown in FIG. 6, when the refractive index of the respective protruding structure RPD is similar to the refractive index of the second insulating layer IN2 (e.g., the curve B), the light transmittance of the mesh structure can be significantly enhanced as compared to the mesh structure in which the one or more protruding structures are absent. When the difference between the refractive index of the respective protruding structure RPD and the refractive index of the second insulating layer IN2 exceeds a certain value (e.g., the curve C), the light transmittance of the mesh structure follows the same trend of variation across the range of wavelengths, however, with a far more intense resonance of the oscillation peak. This phenomenon is caused by a stronger Fabry-Perot cavity truncation effect due to the large difference between the refractive index of the second insulating layer IN2 and the refractive index of the respective protruding structure RPD.

In some embodiments, a difference between a refractive index of the second insulating layer and the refractive index of the one or more protruding structures is equal to or less than 0.2, e.g., less than 0.15, less than 0.10, or less than 0.05. The present disclosure may be implemented with various appropriate shapes for the respective protruding structure RPD.
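Before the shape variations are listed, one way to see why keeping this index difference at or below roughly 0.2 keeps the resonance weak is the normal-incidence Fresnel reflectance of the interface. The sketch below evaluates the standard formula R = ((n1 − n2)/(n1 + n2))² for the index pairs discussed above; the comparison itself is an illustration, not a computation taken from this disclosure.

```python
# Normal-incidence Fresnel reflectance for the protruding-structure /
# second-insulating-layer interface, using the index pairs discussed
# above. A small index difference gives a weakly reflecting interface,
# consistent with the weaker Fabry-Perot oscillation observed when the
# two refractive indices are matched.

def fresnel_reflectance(n1: float, n2: float) -> float:
    """Power reflectance at normal incidence for a planar interface:
    R = ((n1 - n2) / (n1 + n2))**2."""
    return ((n1 - n2) / (n1 + n2)) ** 2

pairs = {
    "protrusion 1.65 vs COC 1.53 ": (1.65, 1.53),
    "protrusion 1.65 vs SiO2 1.45": (1.65, 1.45),
    "protrusion 1.65 vs SiNx 2.00": (1.65, 2.00),
}
for label, (n1, n2) in pairs.items():
    print(f"{label}: R = {fresnel_reflectance(n1, n2) * 100:.2f}%")
# Δn = 0.12 -> ~0.14%, Δn = 0.20 -> ~0.42%, Δn = 0.35 -> ~0.92%:
# the larger index step reflects several times more power, feeding
# the stronger cavity resonance described above.
```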
The present disclosure may be implemented with various appropriate shapes for the respective protruding structure RPD. Examples of appropriate shapes include a truncated ellipsoidal shape, a truncated cone shape, a truncated pyramid shape, a pyramid shape, a polygonal pyramid shape, a lens shape, a cone shape, a polygonal cone shape, a half sphere shape, and so on. Optionally, the respective protruding structure RPD has a truncated ellipsoidal shape, e.g., a half ellipsoidal shape. The cross-section of the respective protruding structure RPD along the plane ISP may have various appropriate shapes, including a truncated elliptical shape (e.g., a half elliptical shape), a trapezoidal shape, and a truncated circle (e.g., a half circle). In one example, the respective mesh line RML has a line width lw of 4 μm and a thickness tml of 700 nm. The first insulating layer IN1 has a refractive index of 1.53. The optical clear adhesive layer OCA has a refractive index of 1.6. The second insulating layer IN2 has a refractive index of 1.53. The respective protruding structure RPD has a refractive index of 1.65. The first central line Pcl1 of an orthographic projection of the respective protruding structure RPD on the projection plane PP and the second central line Pcl2 of an orthographic projection of the respective mesh line RML on the projection plane PP are spaced apart by a distance of zero. FIG. 7 illustrates a correlation between light transmittance and a shape of the respective protruding structure RPD. Referring to FIG. 7, the curve A represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are absent; the curve B represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are present and the respective protruding structure RPD has a truncated ellipsoidal shape and the cross-section of the respective protruding structure RPD along the plane ISP has a half elliptical shape, with a short axis diameter (equivalent to a maximum value of the cross-section width csw) of 4 μm and a long axis radius (equivalent to a thickness tpd of the respective protruding structure RPD along the protruding direction Dpd) of 4 μm; the curve C represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are present and the respective protruding structure RPD has a truncated pyramid shape, with a maximum value of the cross-section width csw of 4 μm, a thickness tpd of the respective protruding structure RPD along the protruding direction Dpd of 1 μm, and a minimum value of the cross-section width of 1 μm; and the curve D represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are present and the respective protruding structure RPD has a pyramid shape, with a maximum value of the cross-section width csw of 4 μm, a thickness tpd of the respective protruding structure RPD along the protruding direction Dpd of 1 μm, and a minimum value of the cross-section width of 0 μm. As shown in FIG. 7, when the respective protruding structure RPD has a truncated ellipsoidal shape, the light transmittance of the mesh structure can be significantly enhanced as compared to the mesh structure in which the one or more protruding structures are absent.
When the respective protruding structure RPD has a shape other than the truncated ellipsoidal shape, the light transmittance of the mesh structure can still be enhanced, however, with a far more intense resonance of the oscillation peak. This phenomenon is caused by a stronger Fabry-Perot cavity truncation effect due to a surface that is not continuously smoothly transitioned, e.g., not continuously smoothly curved. For example, a surface of a truncated pyramid shape or a pyramid shape includes several side surfaces. Adjacent side surfaces form an edge dividing two adjacent side surfaces. Thus, in a truncated pyramid shape or a pyramid shape, a curvature of the shape undergoes an abrupt change in at least one region of the surface. In some embodiments, the respective protruding structure RPD has a continuously curved protruding surface. Optionally, the respective protruding structure RPD has a continuously smoothly curved protruding surface, e.g., a paraboloid surface. Optionally, a curvature of the continuously smoothly curved protruding surface is either constant or only undergoes gradual changes throughout the surface. Referring to FIG. 1E, in some embodiments, the second insulating layer IN2 is between the one or more protruding structures (e.g., including the respective protruding structure RPD) and the first insulating layer IN1. FIG. 8 is a cross-sectional view of a mesh structure in some embodiments according to the present disclosure. Referring to FIG. 8, in some embodiments, the one or more protruding structures (e.g., including the respective protruding structure RPD) are between the second insulating layer IN2 and the first insulating layer IN1. In some embodiments, referring to FIG. 1B and FIG. 8, a cross-section width csw of a cross-section of the respective protruding structure RPD along a plane ISP intersecting the respective protruding structure RPD and the respective mesh line RML and perpendicular to a longitudinal direction Dlg of the respective mesh line RML decreases along a protruding direction Dpd from the second side S2 to the first side S1.
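For the half-elliptical cross-section used in the worked examples here, this monotonic decrease can be written in closed form. The sketch below is illustrative only (Python and the function name are assumed conveniences, not part of the disclosure): with a maximum cross-section width csw_max at the base and a thickness tpd along the protruding direction Dpd, the width at height z is csw(z) = csw_max · sqrt(1 − (z / tpd)²), falling from csw_max at the base to zero at the apex.

```python
import math

def csw(z_um, csw_max_um=4.0, tpd_um=4.0):
    """Cross-section width of a half-elliptical profile at height z
    above the base: the width of an ellipse with short-axis diameter
    csw_max_um and long-axis radius tpd_um, sliced at height z."""
    if not 0.0 <= z_um <= tpd_um:
        raise ValueError("z must lie within the protrusion height")
    return csw_max_um * math.sqrt(1.0 - (z_um / tpd_um) ** 2)

# Width decreases monotonically along the protruding direction Dpd.
heights = [0.0, 1.0, 2.0, 3.0, 4.0]
widths = [csw(z) for z in heights]
print([f"{w:.2f}" for w in widths])   # ['4.00', '3.87', '3.46', '2.65', '0.00']
assert all(a >= b for a, b in zip(widths, widths[1:]))
```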
In one example, the respective mesh line RML has a line width lw of 4 μm and a thickness tml of 700 nm. The first insulating layer IN1 has a refractive index of 1.53. The optical clear adhesive layer OCA has a refractive index of 1.6. The second insulating layer IN2 has a refractive index of 1.53. The cross-section of the respective protruding structure RPD along the plane ISP has a half elliptical shape, with a short axis diameter (equivalent to a maximum value of the cross-section width csw) of 4 μm and a long axis radius (equivalent to a thickness tpd of the respective protruding structure RPD along the protruding direction Dpd) of 4 μm. The respective protruding structure RPD has a refractive index of 1.65. The first central line Pcl1 of an orthographic projection of the respective protruding structure RPD on the projection plane PP and the second central line Pcl2 of an orthographic projection of the respective mesh line RML on the projection plane PP are spaced apart by a distance of zero. FIG. 9 illustrates a correlation between light transmittance and a position of a respective protruding structure. Referring to FIG. 9, the curve A represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are absent; the curve B represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are present and the second insulating layer IN2 is between the one or more protruding structures (e.g., including the respective protruding structure RPD) and the first insulating layer IN1; and the curve C represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are present and the one or more protruding structures (e.g., including the respective protruding structure RPD) are between the second insulating layer IN2 and the first insulating layer IN1. Referring to FIG. 9, as compared to the mesh structure in which the one or more protruding structures are absent, the light transmittance of the mesh structure can be significantly enhanced when the one or more protruding structures are present and the second insulating layer IN2 is between the one or more protruding structures and the first insulating layer IN1, or when the one or more protruding structures are between the second insulating layer IN2 and the first insulating layer IN1. In one example, the respective mesh line RML has a line width lw of 4 μm and a thickness tml of 700 nm. The first insulating layer IN1 has a refractive index of 1.53. The optical clear adhesive layer OCA has a refractive index of 1.6. The second insulating layer IN2 has a refractive index of 1.53. The respective protruding structure RPD has a refractive index of 1.65. The first central line Pcl1 of an orthographic projection of the respective protruding structure RPD on the projection plane PP and the second central line Pcl2 of an orthographic projection of the respective mesh line RML on the projection plane PP are spaced apart by a distance of zero. FIG. 10 illustrates a correlation between light transmittance and a shape and a position of a respective protruding structure. Referring to FIG. 10, the curve A represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are absent; the curve B represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are present, the respective protruding structure RPD has a truncated ellipsoidal shape and the cross-section of the respective protruding structure RPD along the plane ISP has a half elliptical shape, with a short axis diameter (equivalent to a maximum value of the cross-section width csw) of 4 μm and a long axis radius (equivalent to a thickness tpd of the respective protruding structure RPD along the protruding direction Dpd) of 4 μm, and the second insulating layer IN2 is between the one or more protruding structures and the first insulating layer IN1.
The curve C represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are present and the respective protruding structure RPD has a truncated pyramid shape, with a maximum value of the cross-section width csw of 4 μm, a thickness tpd of the respective protruding structure RPD along the protruding direction Dpd of 1 μm, and a minimum value of the cross-section width of 1 μm, and the one or more protruding structures are between the second insulating layer IN2 and the first insulating layer IN1. The curve D represents light transmittance of the mesh structure for light having a wavelength in a range of 0.40 μm to 0.68 μm when the one or more protruding structures are present and the respective protruding structure RPD has a pyramid shape, with a maximum value of the cross-section width csw of 4 μm, a thickness tpd of the respective protruding structure RPD along the protruding direction Dpd of 1 μm, and a minimum value of the cross-section width of 0 μm, and the one or more protruding structures are between the second insulating layer IN2 and the first insulating layer IN1. As shown in FIG. 10, when the respective protruding structure RPD has a truncated ellipsoidal shape, the light transmittance of the mesh structure can be significantly enhanced as compared to the mesh structure in which the one or more protruding structures are absent. When the respective protruding structure RPD has a shape other than the truncated ellipsoidal shape, the light transmittance of the mesh structure can still be enhanced, however, with a far more intense resonance of the oscillation peak. This phenomenon is caused by a stronger Fabry-Perot cavity truncation effect due to a surface that is not continuously smoothly transitioned, e.g., not continuously smoothly curved. For example, a surface of a truncated pyramid shape or a pyramid shape includes several side surfaces. Adjacent side surfaces form an edge dividing two adjacent side surfaces. Thus, in a truncated pyramid shape or a pyramid shape, a curvature of the shape undergoes an abrupt change in at least one region of the surface. Referring to FIG. 1A and FIG. 1B, in some embodiments, the one or more protruding structures PDS include a plurality of protrusions spaced apart from each other. An orthographic projection of a respective one of the plurality of protrusions on the projection plane PP at least partially overlaps with the orthographic projection of a portion of the one or more mesh lines ML on the projection plane PP. FIG. 11 is a perspective view of a portion of a mesh structure in some embodiments according to the present disclosure. Referring to FIG. 11, in some embodiments, the one or more protruding structures PDS include a continuous protruding ridge CPR. An orthographic projection of the continuous protruding ridge CPR on the projection plane PP at least partially overlaps with the orthographic projection of the respective mesh line RML on the projection plane PP. In another aspect, the present disclosure provides an electronic device. The electronic device includes a mesh structure described herein or fabricated by a method described herein, and a semiconductor component. Examples of appropriate electronic devices include, but are not limited to, a touch control structure, a display apparatus, a computer, a tablet, a media player, a cellular phone, a gaming device, a television, and a monitor.
In another aspect, the present disclosure provides a touch control structure. The touch control structure includes a mesh structure described herein or fabricated by a method described herein. In some embodiments, the one or more mesh lines are one or more mesh lines of touch electrodes of the touch control structure. FIG. 12A is a schematic diagram illustrating one or more mesh lines in a touch control structure in some embodiments according to the present disclosure. FIG. 12B is a schematic diagram illustrating one or more protruding structures in a touch control structure in some embodiments according to the present disclosure. FIG. 12C is a schematic diagram illustrating one or more mesh lines and one or more protruding structures in a touch control structure in some embodiments according to the present disclosure. Referring to FIG. 12A to FIG. 12C, in some embodiments, the one or more protruding structures PDS include a plurality of protrusions. The plurality of protrusions are arranged in an array. As shown in FIG. 12C, an orthographic projection of a respective protruding structure on a base substrate at least partially overlaps with the orthographic projection of a portion of the one or more mesh lines ML on the base substrate. FIG. 12D is a schematic diagram illustrating a continuous protruding ridge in a touch control structure in some embodiments according to the present disclosure. FIG. 12E is a schematic diagram illustrating one or more mesh lines and one or more protruding structures in a touch control structure in some embodiments according to the present disclosure. Referring to FIG. 12A and FIG. 12D to FIG. 12E, in some embodiments, the one or more protruding structures include a continuous protruding ridge CPR. As shown in FIG. 12E, an orthographic projection of the continuous protruding ridge CPR on a base substrate at least partially overlaps with the orthographic projection of a respective mesh line RML on the base substrate. In another aspect, the present disclosure provides a display panel. In some embodiments, the display panel includes light emitting elements; and the mesh structure described herein or fabricated by a method described herein. The one or more mesh lines are on a side of the one or more protruding structures away from the light emitting elements. The mesh structure provided by the present disclosure significantly enhances light transmittance of the display panel, as discussed above. In another aspect, the present disclosure provides a display apparatus. In some embodiments, the display apparatus includes a display panel having the mesh structure described herein or fabricated by a method described herein; and an integrated circuit connected to the display panel. Examples of appropriate display apparatuses include, but are not limited to, an electronic paper, a mobile phone, a tablet computer, a television, a monitor, a notebook computer, a digital album, a GPS, etc. Optionally, the display apparatus is an organic light emitting diode display apparatus. Optionally, the display apparatus is a liquid crystal display apparatus. In another aspect, the present disclosure provides a method of enhancing light transmittance in a display panel having a mesh structure comprising one or more mesh lines.
In some embodiments, the method of enhancing light transmittance includes providing the one or more mesh lines on a first side of a first insulating layer; providing one or more protruding structures on a second side of the first insulating layer, the second side being opposite to the first side; and diffracting light emitted from light emitting elements of the display panel by the one or more protruding structures to enhance light transmittance of the display panel. Optionally, an orthographic projection of a respective protruding structure on a projection plane containing a surface of the first insulating layer at least partially overlaps with an orthographic projection of a respective mesh line on the projection plane. Optionally, a refractive index of the one or more protruding structures is greater than a refractive index of the first insulating layer. Optionally, a cross-section width of a cross-section of the respective protruding structure along a plane intersecting the respective protruding structure and the respective mesh line and perpendicular to a longitudinal direction of the respective mesh line decreases along a protruding direction from the first side to the second side. In another aspect, the present disclosure provides a method of fabricating a mesh structure. In some embodiments, the method includes forming one or more mesh lines on a first side of a first insulating layer; and forming one or more protruding structures on a second side of the first insulating layer, the second side being opposite to the first side. Optionally, an orthographic projection of a respective protruding structure on a projection plane containing a surface of the first insulating layer at least partially overlaps with an orthographic projection of a respective mesh line on the projection plane. Optionally, a refractive index of the one or more protruding structures is greater than a refractive index of the first insulating layer. Optionally, a cross-section width of a cross-section of the respective protruding structure along a plane intersecting the respective protruding structure and the respective mesh line and perpendicular to a longitudinal direction of the respective mesh line decreases along a protruding direction from the first side to the second side. Optionally, a first central line of an orthographic projection of the respective protruding structure on the projection plane and a second central line of an orthographic projection of the respective mesh line on the projection plane are spaced apart from each other by a distance in a range of 0 to 75% of a line width of the respective mesh line. Optionally, a first central line of an orthographic projection of the respective protruding structure on the projection plane and a second central line of an orthographic projection of the respective mesh line on the projection plane are spaced apart from each other by a distance in a range of 0 to 75% of a maximum value of the cross-section width. Optionally, the first central line and the second central line are substantially parallel to the longitudinal direction. Optionally, a maximum value of the cross-section width is in a range of 12.5% to 150% of a thickness of the respective protruding structure along the protruding direction. Optionally, the maximum value of the cross-section width is in a range of 75% to 125% of the thickness of the respective protruding structure along the protruding direction. 
Optionally, a thickness of the respective protruding structure along the protruding direction is in a range of 25% to 175% of a maximum value of the cross-section width. In some embodiments, the method further includes forming a second insulating layer. The second insulating layer is formed to be in direct contact with the one or more protruding structures. Optionally, a difference between a refractive index of the second insulating layer and the refractive index of the one or more protruding structures is equal to or less than 0.2. Optionally, the respective protruding structure is formed to have a continuously curved protruding surface. Optionally, the respective protruding structure is formed to have a continuously smoothly curved protruding surface. Optionally, the respective protruding structure has a truncated ellipsoidal shape. Optionally, the one or more protruding structures are formed using an optically clear material. In some embodiments, the method further includes forming a second insulating layer and forming an optical clear adhesive layer. The second insulating layer is formed to be in direct contact with the one or more protruding structures. The optical clear adhesive layer is formed to adhere the first insulating layer and the second insulating layer together. Optionally, the second insulating layer is formed between the one or more protruding structures and the first insulating layer. Optionally, the one or more protruding structures are formed between the second insulating layer and the first insulating layer. In some embodiments, forming the one or more protruding structures includes forming a plurality of protrusions. Optionally, an orthographic projection of a respective one of the plurality of protrusions on the projection plane at least partially overlaps with the orthographic projection of a portion of the one or more mesh lines on the projection plane. In some embodiments, forming the one or more protruding structures includes forming a continuous protruding ridge. Optionally, an orthographic projection of the continuous protruding ridge on the projection plane at least partially overlaps with the orthographic projection of the respective mesh line on the projection plane.
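The dimensional relationships recited in this method lend themselves to a simple design-rule check. The sketch below is a non-authoritative illustration: the function name, the parameter bundle, and the choice of Python are assumptions, and the encoded ranges are simply the optional ranges enumerated above (central-line offset at most 75% of the line width and of the maximum cross-section width, maximum cross-section width between 12.5% and 150% of the protrusion thickness, thickness between 25% and 175% of the maximum cross-section width, and an index difference of at most 0.2).

```python
def check_design_rules(line_width_um, csw_max_um, tpd_um,
                       center_offset_um, n_protrusion, n_second_ins):
    """Return a dict of rule-name -> bool for the optional ranges
    enumerated in the fabrication method above (hypothetical helper)."""
    return {
        "offset <= 75% of line width": center_offset_um <= 0.75 * line_width_um,
        "offset <= 75% of max csw": center_offset_um <= 0.75 * csw_max_um,
        "max csw in 12.5%..150% of thickness":
            0.125 * tpd_um <= csw_max_um <= 1.50 * tpd_um,
        "thickness in 25%..175% of max csw":
            0.25 * csw_max_um <= tpd_um <= 1.75 * csw_max_um,
        "index difference <= 0.2": abs(n_protrusion - n_second_ins) <= 0.2,
    }

# The worked example used throughout the disclosure: lw = 4 um,
# csw_max = 4 um, tpd = 4 um, zero offset, n = 1.65 vs. 1.53.
result = check_design_rules(4.0, 4.0, 4.0, 0.0, 1.65, 1.53)
assert all(result.values())
```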
FIG. 13 illustrates a method of fabricating a mesh structure in an electronic device in some embodiments according to the present disclosure. Referring to FIG. 13, in some embodiments, forming the one or more protruding structures includes forming a mold plate MP having a plurality of first micro-cavities MC1, the plurality of first micro-cavities MC1 having a same pattern as a plurality of protrusions of the one or more protruding structures. In some embodiments, forming the mold plate includes forming (e.g., spin-coating) a second photoresist layer PS2 on an etchable base substrate EBS; exposing (e.g., by electron beam lithography) and developing the second photoresist layer PS2 to form a pattern; etching (e.g., by deep reactive ion etching) the etchable base substrate EBS to form a plurality of micro-cavities MC in the etchable base substrate EBS, the plurality of micro-cavities MC having a same pattern as a plurality of protrusions of the one or more protruding structures; placing a fusible substrate (e.g., a glass substrate) on the etchable base substrate EBS to cover the plurality of micro-cavities MC; forming a fused substrate FS having a plurality of first protrusions PD1 by thermally fusing (optionally with a heat reflow process) the fusible substrate to render a fusible material of the fusible substrate protruding into the plurality of micro-cavities MC; separating the fused substrate FS from the etchable base substrate EBS; forming a flexible polymer material layer FML on the fused substrate FS and in direct contact with the plurality of first protrusions PD1; and separating the flexible polymer material layer FML from the fused substrate FS, thereby forming the mold plate MP having the plurality of first micro-cavities MC1. The method further includes forming a first photoresist layer PS1 on the mold plate MP, a photoresist material of the first photoresist layer PS1 filling the plurality of first micro-cavities MC1; bonding a base substrate BS to the first photoresist layer PS1 on a side of the first photoresist layer PS1 away from the mold plate MP; and separating the mold plate MP from the first photoresist layer PS1, thereby forming the one or more protruding structures PDS comprising the plurality of protrusions PD. The example illustrated in FIG. 13 is particularly suitable for forming the one or more protruding structures on a base substrate that is not highly heat-resistant, for example, a base substrate made of an organic material such as a polymer material. FIG. 14 illustrates a method of fabricating a mesh structure in an electronic device in some embodiments according to the present disclosure. FIG. 15 illustrates a method of fabricating a mesh structure in an electronic device in some embodiments according to the present disclosure. Referring to FIG. 14 and FIG. 15, in some embodiments, forming the one or more protruding structures includes forming (e.g., spin-coating) a photoresist layer PS on a base substrate BS; exposing (e.g., by electron beam lithography) and developing the photoresist layer PS to form a plurality of first protrusions PD1 comprising a photoresist material on the base substrate BS, the plurality of first protrusions PD1 having a same pattern as a plurality of protrusions of the one or more protruding structures; and heating the plurality of first protrusions PD1 followed by cooling the plurality of first protrusions PD1 to modify shapes of the plurality of first protrusions PD1, thereby forming the plurality of protrusions PD of the one or more protruding structures. In FIG. 14, during the heating and cooling processes, the plurality of first protrusions PD1 remain on top of the base substrate BS. The example illustrated in FIG. 14 is particularly suitable for forming the one or more protruding structures having a relatively small thickness. In FIG. 15, during the heating and cooling processes, the base substrate BS is flipped upside down, and the plurality of first protrusions PD1 are facing downward.
By the action of gravity, the plurality of first protrusions PD1 are elongated, thereby forming the one or more protruding structures having a relatively large thickness. The examples illustrated in FIG. 14 and FIG. 15 are particularly suitable for forming the one or more protruding structures on a base substrate that is highly heat-resistant, for example, a base substrate made of an inorganic material such as silicon oxide or a heat-resistant organic material. FIG. 16 illustrates a method of fabricating a mesh structure in an electronic device in some embodiments according to the present disclosure. Referring to FIG. 16, in some embodiments, forming the one or more protruding structures includes forming (e.g., by plasma-enhanced chemical vapor deposition) a dielectric material layer DML on a base substrate BS; forming (e.g., by spin-coating) a photoresist layer PS on a side of the dielectric material layer DML away from the base substrate BS; exposing (e.g., by electron beam lithography) and developing the photoresist layer PS; and etching (e.g., by inductively coupled plasma etching) the dielectric material layer DML to form a plurality of protrusions PD of the one or more protruding structures. The example illustrated in FIG. 16 is particularly suitable for forming the one or more protruding structures made of a rigid material, and is particularly suitable for forming the one or more protruding structures having a respective protruding structure having a truncated cone shape or a truncated pyramid shape. FIG. 17 illustrates a method of fabricating a mesh structure in an electronic device in some embodiments according to the present disclosure. Referring to FIG. 17, in some embodiments, forming the one or more protruding structures includes forming (e.g., by spin-coating) a first photoresist layer PS1 on a base substrate BS; forming (e.g., by spin-coating) a second photoresist layer PS2 on a side of the first photoresist layer PS1 away from the base substrate BS; forming (e.g., by sputtering) a lift-off material layer LOM on a side of the second photoresist layer PS2 away from the first photoresist layer PS1; exposing (e.g., by electron beam lithography) and developing the first photoresist layer PS1 and the second photoresist layer PS2, the first photoresist layer PS1 being exposed at a rate greater than an exposure rate of the second photoresist layer PS2, thereby forming a plurality of micro-cavities MC; depositing (e.g., by plasma-enhanced chemical vapor deposition) a dielectric material DM on the base substrate BS, a portion of the dielectric material DM being deposited on a remaining lift-off material layer RLO, and a portion of the dielectric material DM being deposited in the plurality of micro-cavities MC; lifting off the dielectric material deposited on the remaining lift-off material layer RLO; and removing remaining photoresist material, thereby forming a plurality of protrusions PD of the one or more protruding structures. The example illustrated in FIG. 17 is particularly suitable for forming the one or more protruding structures made of a rigid material, and is particularly suitable for forming the one or more protruding structures having a respective protruding structure having a truncated cone shape or a truncated pyramid shape. In particular, the respective protruding structure can be made to have a large difference between an area of an upper surface and an area of a lower surface, according to the method illustrated in FIG. 17.
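The disclosure pairs each fabrication route described so far with the situations it suits best (FIG. 13 for substrates with low heat resistance, FIG. 14 for thin protrusions on heat-resistant substrates, FIG. 15 for thick protrusions on heat-resistant substrates, and FIG. 16 / FIG. 17 for rigid, truncated-cone or truncated-pyramid protrusions; the nano-imprint route of FIG. 18, described next, is a further option for rigid materials). Purely as an organizational illustration, and not as part of the disclosed methods, that guidance could be captured as a small lookup; every name below is hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Recipe:
    figure: str   # which illustrated process flow
    summary: str  # one-line description of the route

def pick_recipe(substrate_heat_resistant: bool,
                thick_protrusions: bool,
                rigid_material: bool) -> Recipe:
    """Map the suitability guidance in the text to a process flow
    (hypothetical helper; the real choice involves more factors)."""
    if rigid_material:
        return Recipe("FIG. 16 / FIG. 17",
                      "deposit dielectric, pattern, then etch or lift off")
    if not substrate_heat_resistant:
        return Recipe("FIG. 13",
                      "imprint photoresist with a micro-cavity mold plate")
    if thick_protrusions:
        return Recipe("FIG. 15",
                      "reflow photoresist while inverted, elongating by gravity")
    return Recipe("FIG. 14", "reflow photoresist upright for small thickness")

print(pick_recipe(substrate_heat_resistant=False,
                  thick_protrusions=False, rigid_material=False))
```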
FIG. 18 illustrates a method of fabricating a mesh structure in an electronic device in some embodiments according to the present disclosure. Referring to FIG. 18, in some embodiments, forming the one or more protruding structures includes forming (e.g., by plasma-enhanced chemical vapor deposition) a dielectric material layer DML on a base substrate BS; forming (e.g., by spin-coating) an embossing adhesive layer EBL on a side of the dielectric material layer DML away from the base substrate BS; nano-imprinting the embossing adhesive layer to form a plurality of micro-cavities MC; removing (e.g., by inductively coupled plasma etching) an embossing adhesive material in the plurality of micro-cavities MC, thereby forming a remaining embossing adhesive material layer REB; and etching (e.g., by inductively coupled plasma etching) the dielectric material layer DML using the remaining embossing adhesive material layer REB as a mask plate, thereby forming a plurality of protrusions PD of the one or more protruding structures. The example illustrated in FIG. 18 is particularly suitable for forming the one or more protruding structures made of a rigid material, and is particularly suitable for forming the one or more protruding structures having a respective protruding structure having a truncated cone shape or a truncated pyramid shape. The fabrication process illustrated in FIG. 18 is highly efficient and suitable for large-scale manufacturing. Various appropriate materials and various appropriate fabricating methods may be used for making the one or more mesh lines. For example, a conductive material may be deposited on the substrate by a plasma-enhanced chemical vapor deposition (PECVD) process. Examples of appropriate conductive materials for making the one or more mesh lines include metallic materials such as aluminum, copper, silver, and gold; carbon nanotubes; and graphene. Optionally, the mesh line has a line width in a range of 2.0 μm to 6.0 μm, e.g., 2.0 μm to 3.0 μm, 3.0 μm to 4.0 μm, 4.0 μm to 5.0 μm, or 5.0 μm to 6.0 μm. Optionally, the mesh line has a line width of 4.0 μm. Various appropriate materials may be used for making the first insulating layer. Examples of appropriate insulating materials for making the first insulating layer include silicon oxide, silicon nitride, cyclic olefin copolymer, polyimide, and polyethylene terephthalate. Various appropriate materials may be used for making the second insulating layer. Examples of appropriate insulating materials for making the second insulating layer include silicon oxide, silicon nitride, cyclic olefin copolymer, polyimide, and polyethylene terephthalate. Various appropriate materials may be used for making the one or more protruding structures. Examples of appropriate materials for making the one or more protruding structures include silicon oxide, silicon nitride, polydimethylsiloxane, polystyrene, and photoresist materials (e.g., SU-8 photoresist). Optionally, the one or more protruding structures are formed using an optically clear insulating material. The foregoing description of the embodiments of the invention has been presented for purposes of illustration and description.
It is not intended to be exhaustive or to limit the invention to the precise form or to the exemplary embodiments disclosed. Accordingly, the foregoing description should be regarded as illustrative rather than restrictive. Obviously, many modifications and variations will be apparent to practitioners skilled in this art. The embodiments were chosen and described in order to explain the principles of the invention and its best mode of practical application, thereby enabling persons skilled in the art to understand the invention in various embodiments and with various modifications as are suited to the particular use or implementation contemplated. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents, in which all terms are meant in their broadest reasonable sense unless otherwise indicated. Therefore, the terms “the invention,” “the present invention,” or the like do not necessarily limit the claim scope to a specific embodiment, and reference to exemplary embodiments of the invention does not imply a limitation on the invention, and no such limitation is to be inferred. The invention is limited only by the spirit and scope of the appended claims. Moreover, the claims may use the terms “first,” “second,” etc., followed by a noun or element. Such terms should be understood as nomenclature and should not be construed as limiting the number of elements modified by such nomenclature unless a specific number has been given. Any advantages and benefits described may not apply to all embodiments of the invention. It should be appreciated that variations may be made in the embodiments described by persons skilled in the art without departing from the scope of the present invention as defined by the following claims. Moreover, no element or component in the present disclosure is intended to be dedicated to the public regardless of whether the element or component is explicitly recited in the following claims.
DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENTS
In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of various exemplary embodiments. It is apparent, however, that various exemplary embodiments may be practiced without these specific details or with one or more equivalent arrangements. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring various exemplary embodiments. Further, various exemplary embodiments may be different, but do not have to be exclusive. For example, specific shapes, configurations, and characteristics of an exemplary embodiment may be implemented in another exemplary embodiment without departing from the spirit and the scope of the disclosure. Unless otherwise specified, the illustrated exemplary embodiments are to be understood as providing exemplary features of varying detail of some exemplary embodiments. Therefore, unless otherwise specified, the features, components, modules, layers, films, panels, regions, aspects, etc. (hereinafter individually or collectively referred to as “elements”), of the various illustrations may be otherwise combined, separated, interchanged, and/or rearranged without departing from the spirit and the scope of the disclosure. The use of cross-hatching and/or shading in the accompanying drawings is generally provided to clarify boundaries between adjacent elements. As such, neither the presence nor the absence of cross-hatching or shading conveys or indicates any preference or requirement for particular materials, material properties, dimensions, proportions, commonalities between illustrated elements, and/or any other characteristic, attribute, property, etc., of the elements, unless specified. Further, in the accompanying drawings, the size and relative sizes of elements may be exaggerated for clarity and/or descriptive purposes. When an exemplary embodiment may be implemented differently, a specific process order may be performed differently from the described order. For example, two consecutively described processes may be performed substantially at the same time or performed in an order opposite to the described order. Also, like reference numerals denote like elements. When an element is referred to as being “on,” “connected to,” or “coupled to” another element, it may be directly on, connected to, or coupled to the other element or intervening elements may be present. When, however, an element is referred to as being “directly on,” “directly connected to,” or “directly coupled to” another element, there are no intervening elements present. To this end, the term “connected” may refer to physical, electrical, and/or fluid connection. Further, the D1-axis, the D2-axis, and the D3-axis are not limited to three axes of a rectangular coordinate system, and may be interpreted in a broader sense. For example, the D1-axis, the D2-axis, and the D3-axis may be perpendicular to one another, or may represent different directions that are not perpendicular to one another. For the purposes of this disclosure, “at least one of X, Y, and Z” and “at least one selected from the group consisting of X, Y, and Z” may be construed as X only, Y only, Z only, or any combination of two or more of X, Y, and Z, such as, for instance, XYZ, XYY, YZ, and ZZ. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
Although the terms “first,” “second,” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are used to distinguish one element from another element. Thus, a first element discussed below could be termed a second element without departing from the teachings of the disclosure. Spatially relative terms, such as “beneath,” “below,” “under,” “lower,” “above,” “upper,” “over,” “higher,” “side” (e.g., as in “sidewall”), and the like, may be used herein for descriptive purposes, and, thereby, to describe one element's relationship to another element(s) as illustrated in the drawings. Spatially relative terms are intended to encompass different orientations of an apparatus in use, operation, and/or manufacture in addition to the orientation depicted in the drawings. For example, if the apparatus in the drawings is turned over, elements described as “below” or “beneath” other elements or features would then be oriented “above” the other elements or features. Thus, the exemplary term “below” can encompass both an orientation of above and below. Furthermore, the apparatus may be otherwise oriented (e.g., rotated 90 degrees or at other orientations), and, as such, the spatially relative descriptors used herein should be interpreted accordingly. The terminology used herein is for the purpose of describing particular embodiments and is not intended to be limiting. As used herein, the singular forms, “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. Moreover, the terms “comprises,” “comprising,” “includes,” and/or “including,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, components, and/or groups thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It is also noted that, as used herein, the terms “substantially,” “about,” and other similar terms, are used as terms of approximation and not as terms of degree, and, as such, are utilized to account for inherent deviations in measured, calculated, and/or provided values that would be recognized by one of ordinary skill in the art. Various exemplary embodiments are described herein with reference to sectional and/or exploded illustrations that are schematic illustrations of idealized exemplary embodiments and/or intermediate structures. As such, variations from the shapes of the illustrations as a result, for example, of manufacturing techniques and/or tolerances, are to be expected. Thus, exemplary embodiments disclosed herein should not be construed as limited to the particular illustrated shapes of regions, but are to include deviations in shapes that result from, for instance, manufacturing. In this manner, regions illustrated in the drawings are schematic in nature and the shapes of these regions may not illustrate the actual shapes of regions of a device, and, as such, are not intended to be limiting. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains.
Terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense, unless expressly so defined herein. FIG. 1 is a perspective view illustrating a display device according to some exemplary embodiments. FIG. 2 is a plan view illustrating a display panel of the display device of FIG. 1 according to some exemplary embodiments. FIG. 3 is a plan view illustrating a touch sensor of the display device of FIG. 1 according to some exemplary embodiments. Referring to FIGS. 1 to 3, a display device may be provided in various shapes. For example, the display device may be provided in a quadrangular plate shape having two pairs of sides parallel to each other. When the display device is provided in a rectangular plate shape, any one pair of sides among the two pairs of sides may be provided longer than the other pair of sides. For illustrative and descriptive convenience, a case where the display device has a pair of long sides and a pair of short sides is provided. In this case, the extending direction of the short side is represented as a first direction DR1, the extending direction of the long side is represented as a second direction DR2, and the extending direction of a thickness is represented as a third direction DR3. The display device may include a display panel 100 provided with display elements (not shown) that display an image, and a touch sensor 200 that recognizes a touch. The display device may include a display region DA in which an image generated via the display panel 100 is displayed, a non-display region NDA provided on at least one side of the display region DA, a sensing region SA in which a touch interaction of a user on or near the touch sensor 200 and/or a pressure of the touch interaction is sensed, and a peripheral region PA provided on at least one side of the sensing region SA. The sensing region SA may overlap with the display region DA. The sensing region SA may have an area substantially equal to or larger than that of the display region DA. For convenience, the sensing region SA will be described as corresponding to the display region DA. A touch interaction may include actual contact with the display device in association with the sensing region SA, a hovering interaction over the sensing region SA, an approach of a touch interaction with the sensing region SA, and/or the like. For descriptive convenience, a touch interaction will generally be referred to as a touch. The display panel 100 may display arbitrary visual information, e.g., a text, a video, a picture, a two-dimensional or three-dimensional image, etc. Hereinafter, the arbitrary visual information is referred to as an “image”; however, the kind of the display panel 100 is not limited to ones that display images. The display panel 100 may include a first base substrate BS1 including the display region DA and the non-display region NDA. Here, the display region DA is located at a central portion of the display panel 100, and may have a relatively large area as compared with the non-display region NDA. The display region DA may have various shapes. For example, the display region DA may be provided in various shapes, such as a closed-shape polygon including linear sides; a circle, an ellipse, etc., including curved sides; and a semicircle, a semi-ellipse, etc., including linear and curved sides.
In this manner, the display region DA may include polygonal and/or free-form (or irregular) shapes (or contours). When the display region DA includes a plurality of regions, each region may also be provided in various shapes, such as a closed-shape polygon including linear sides; a circle, an ellipse, etc., including curved sides; and a semicircle, a semi-ellipse, etc., including linear and curved sides. In addition, areas of the plurality of regions may be equal to or different from one another. For convenience, a case where the display region DA is provided as one region having a quadrangular shape including linear sides is described and illustrated as an example. The non-display region NDA may be provided on at least one side of the display region DA. In some exemplary embodiments, the non-display region NDA may surround the circumference of the display region DA. In an exemplary embodiment, the non-display region NDA may include a lateral part extending in the first direction DR1 and a longitudinal part extending in the second direction DR2. The longitudinal part of the non-display region NDA may be provided as a pair of longitudinal parts spaced apart from each other along, for instance, the width direction of the display region DA. The display region DA may include a plurality of pixel regions in which a plurality of pixels PXL are provided. As will become more apparent below, a pad unit (or area) provided with pads of lines and a data driver DDV that provides a data signal to the pixels PXL are provided in the non-display region NDA. The data driver DDV may provide the data signal to the respective pixels PXL through data lines (not shown). Here, the data driver DDV may be disposed at a lateral part of the non-display region NDA, and may extend along the width direction of the non-display region NDA. For convenience, a scan driver, an emission driver, and a timing controller are not illustrated in FIG. 2, but the timing controller, the emission driver, and the scan driver may also be provided in the non-display region NDA or may be connected to the non-display region NDA. The first base substrate BS1 may be made of various materials, e.g., glass, polymer, metal, and/or the like. For instance, the first base substrate BS1 may be an insulative substrate made of a polymer organic material. The material of the insulative substrate, including the polymer organic material, may include at least one of polystyrene, polyvinyl alcohol, polymethyl methacrylate, polyethersulfone, polyacrylate, polyetherimide, polyethylene naphthalate, polyethylene terephthalate, polyphenylene sulfide, polyarylate, polyimide, polycarbonate, triacetate cellulose, and cellulose acetate propionate. However, the material constituting the first base substrate BS1 is not limited thereto or thereby. For example, the first base substrate BS1 may be made of a fiber reinforced plastic (FRP), carbon nanotubes, etc. To this end, the first base substrate BS1 may have a single-layer or multilayer configuration. In a multilayer configuration, some layers of the first base substrate BS1 may be different from other layers of the first base substrate BS1. The first base substrate BS1 may include a plurality of signal lines (not shown) connected to the plurality of pixels PXL and a plurality of thin film transistors (not shown) connected to the plurality of signal lines. For instance, the signal lines may form data lines, scan lines, emission lines, etc.
As will become more apparent below, each of the plurality of pixels PXL may be an organic light emitting device including an organic layer. However, exemplary embodiments are not limited thereto or thereby, and each of the plurality of pixels PXL may be implemented in various forms, such as a liquid crystal device, an electrophoretic device, an electrowetting device, etc. The plurality of pixels PXL may be provided in (or overlapping) the display region DA of the first base substrate BS1. Each pixel PXL may be considered a minimum unit that displays an image, and may be provided in plurality. The pixel PXL may include an organic light emitting device that emits white light and/or colored light. The pixel PXL may emit light of any one color among red, green, and blue; however, exemplary embodiments are not limited thereto or thereby. For instance, the pixel PXL may emit light of any one color among cyan, magenta, yellow, and the like. It is also contemplated that the pixel PXL may be configured to emit light of different colors. The pixel PXL may include a thin film transistor (not shown) connected to the plurality of signal lines (not shown), and the organic light emitting device connected to the thin film transistor. The pixel PXL, the plurality of signal lines, and the plurality of thin film transistors will be described later. The touch sensor 200 may be provided on a surface on which an image of the display panel 100 is displayed. In some exemplary embodiments, the touch sensor 200 may be integrally provided with the display panel 100, e.g., inside the display panel 100. For convenience, a case where the touch sensor 200 is provided on a surface (e.g., top surface) of the display panel 100 is described and illustrated. The top surface may be considered a surface furthest away from the first base substrate BS1. The touch sensor 200 may include a second base substrate BS2 including the sensing region SA and the peripheral region PA. The second base substrate BS2 may be made of an insulative material having flexibility. Here, the second base substrate BS2 may be provided in a shape substantially identical to that of the first base substrate BS1, but exemplary embodiments are not limited thereto or thereby. For instance, the second base substrate BS2 may have an area equal to or larger than that of the first base substrate BS1. The sensing region SA corresponds to the display region DA of the display panel 100, and may be provided in a shape identical to that of the display region DA, but exemplary embodiments are not limited thereto or thereby. The peripheral region PA may be disposed adjacent to the sensing region SA. Also, the peripheral region PA may correspond to the non-display region NDA of the display panel 100, and may include at least one lateral part and at least one longitudinal part. The touch sensor 200 may include a touch sensing unit (or touch sensor) provided in the sensing region SA, a line unit (or lines) provided in the peripheral region PA, and a touch sensor pad unit (or touch sensor pads) connected to the line unit. The touch sensing unit may recognize a touch event with the display device through a hand of a user or a separate input means, e.g., a stylus, etc. In some exemplary embodiments, the touch sensing unit may be driven according to a mutual capacitance method. In the mutual capacitance method, a change in capacitance, caused by an interaction between two sensing electrodes, is sensed.
In some exemplary embodiments, the touch sensing unit may be driven according to a self-capacitance method. In the self-capacitance method, when a user touches a region, a change in capacitance of a sensing electrode in the touched region is sensed using sensing electrodes arranged in a matrix shape and sensing lines connected to the respective sensing electrodes. The touch sensing unit may include a touch sensor SR provided in the sensing region SA, sensing lines SL connected to the touch sensor SR, and a touch sensor pad unit TP connected to end portions of the sensing lines SL. When a touch of a user is applied to (or with respect to) the display device, the touch sensor SR is used to sense the touch of the user and/or a pressure of the touch, and may be provided in the sensing region SA. When viewed on a plane, e.g., in a view normal to a surface of the second base substrate BS2, the touch sensor SR may correspond to the display region DA. The touch sensor SR may include a plurality of first sensing units SR1 that extend in the first direction DR1 of the second base substrate BS2 and are applied with a sensing voltage, and a plurality of second sensing units SR2 that extend in the second direction DR2 intersecting the first direction DR1. The first sensing units SR1 may be capacitively coupled to the second sensing units SR2, and the voltage of the first sensing units SR1 may be changed by the capacitive coupling. Each first sensing unit SR1 may include a plurality of first sensing electrodes SSE1 arranged in the first direction DR1 and a plurality of first bridges BR1 through which adjacent first sensing electrodes SSE1 are connected to each other. The first sensing electrodes SSE1 may be provided in various shapes, e.g., a bar shape, a polygonal shape including a quadrangular shape, such as a diamond, etc. In some exemplary embodiments, the first sensing electrodes SSE1 and the first bridges BR1 may be provided as a whole plate shape or may be provided in the shape of a mesh including fine lines. Each second sensing unit SR2 may include a plurality of second sensing electrodes SSE2 arranged in the second direction DR2 and a plurality of second bridges BR2 through which adjacent second sensing electrodes SSE2 are connected to each other. The second sensing electrodes SSE2 may be provided in various shapes, e.g., a bar shape, a polygonal shape including a quadrangular shape, such as a diamond, etc. In some exemplary embodiments, the second sensing electrodes SSE2 and the second bridges BR2 may be provided as a whole plate shape or may be provided in the shape of a mesh including fine lines. The first sensing electrodes SSE1 and the second sensing electrodes SSE2 may be alternately arranged in a matrix form on the second base substrate BS2. The first sensing electrodes SSE1 and the second sensing electrodes SSE2 may be insulated from each other. For instance, as seen in FIG. 3, the first bridges BR1 and the second bridges BR2 intersect each other; however, the first bridges BR1 and the second bridges BR2 may be insulated from each other with an insulating layer (not shown) interposed therebetween, as will become more apparent below. The first sensing unit SR1 and the second sensing unit SR2 may be provided on different layers, but exemplary embodiments are not limited thereto or thereby. In some exemplary embodiments, the first sensing electrodes SSE1 and the second sensing electrodes SSE2 may be provided on the same layer, and the first bridges BR1 and the second bridges BR2 may be provided on different layers.
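To make the mutual-capacitance principle described above concrete, the following sketch is an illustrative, non-authoritative model (Python is an assumed choice, and all names are hypothetical): the intersections of the first and second sensing units form a matrix of capacitively coupled nodes, a no-touch baseline is recorded, and a touch is reported wherever the measured mutual capacitance drops below the baseline by more than a threshold, since a nearby finger diverts field lines and reduces the coupling at that node.

```python
from typing import List, Tuple

def detect_touches(baseline: List[List[float]],
                   measured: List[List[float]],
                   threshold: float = 0.2) -> List[Tuple[int, int]]:
    """Compare a measured mutual-capacitance matrix (rows = driven
    first sensing units, columns = sensed second sensing units)
    against a no-touch baseline; a finger near a node diverts field
    lines, so the mutual capacitance there decreases."""
    touches = []
    for r, (b_row, m_row) in enumerate(zip(baseline, measured)):
        for c, (b, m) in enumerate(zip(b_row, m_row)):
            if b - m > threshold:          # significant capacitance drop
                touches.append((r, c))     # node index = (row, column)
    return touches

# 3 x 3 node example: a touch near node (1, 2) lowers that reading.
base = [[1.0] * 3 for _ in range(3)]
meas = [[1.0, 1.0, 1.0], [1.0, 1.0, 0.6], [1.0, 1.0, 1.0]]
print(detect_touches(base, meas))  # [(1, 2)]
```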
The sensing lines SL are used to connect the touch sensor SR to a driver (not shown) that drives the touch sensor SR, and may be provided in the peripheral region PA. The driver may be provided on the first base substrate BS1 of the display panel 100 or be provided at the outside, e.g., on a separate printed circuit board or the like. The driver may include a position detection circuit. The sensing lines SL may transmit a sensing input signal from the driver to the first sensing units SR1 and the second sensing units SR2, or transmit sensing output signals from the first sensing units SR1 and the second sensing units SR2 to the driver. In some exemplary embodiments, the sensing lines SL may include a plurality of first sensing lines SL1 and a plurality of second sensing lines SL2. The first sensing lines SL1 may be connected to the first sensing units SR1. Each first sensing line SL1 may be connected to a corresponding row of the first sensing units SR1. When viewed on a plane, the first sensing lines SL1 may be bent plural times in the peripheral region PA, and extend along the second direction DR2. As seen in FIG. 3, the first sensing lines SL1 may be provided in a right longitudinal part of the peripheral region PA to be connected to corresponding rows of the first sensing units SR1. In some exemplary embodiments, the first sensing lines SL1 may be provided in a left longitudinal part of the peripheral region PA and/or the right longitudinal part of the peripheral region PA. The second sensing lines SL2 may be connected to the second sensing units SR2. Each second sensing line SL2 may be connected to a corresponding column of the second sensing units SR2. When viewed on a plane, the second sensing lines SL2 may be bent plural times in the peripheral region PA, and extend along the first direction DR1. As seen in FIG. 3, the second sensing lines SL2 may be provided in a lower lateral part of the peripheral region PA to be connected to corresponding columns of the second sensing units SR2. In some exemplary embodiments, the second sensing lines SL2 may be provided in an upper lateral part of the peripheral region PA and/or the lower lateral part of the peripheral region PA. The touch sensor pad unit TP may be a component provided to transmit signals between the touch sensor SR and the driver. The touch sensor pad unit TP is provided in the peripheral region PA, and may be connected to end portions of the sensing lines SL. The touch sensor pad unit TP may be connected to pad units (not shown) of the display panel 100 through a conductive member (not shown), etc. In some exemplary embodiments, the touch sensor pad unit TP may include a first touch sensor pad unit TP1 connected to end portions of the first sensing lines SL1 and a second touch sensor pad unit TP2 connected to end portions of the second sensing lines SL2. When viewed on a plane, the first touch sensor pad unit TP1 and the second touch sensor pad unit TP2 may be provided in the peripheral region PA to be adjacent to each other and be spaced apart from each other at a certain distance. In some exemplary embodiments, the first touch sensor pad unit TP1 and the second touch sensor pad unit TP2 may be provided in the peripheral region PA to be spaced apart from each other, but exemplary embodiments are not limited thereto or thereby. For example, the first touch sensor pad unit TP1 and the second touch sensor pad unit TP2 may be implemented as one touch sensor pad unit in the peripheral region PA.
According to some exemplary embodiments, the touch sensor SR, the sensing lines SL, and the touch sensor pad unit TP may be made of a conductive material. The conductive material may include at least one of a metal, any alloy thereof, a conductive polymer, a conductive metal oxide, a nano-conductive material, and the like. In some exemplary embodiments, examples of the metal may be copper, silver, gold, platinum, palladium, nickel, tin, aluminum, cobalt, rhodium, iridium, iron, ruthenium, osmium, manganese, molybdenum, tungsten, niobium, tantalum, titanium, bismuth, antimony, lead, and the like. Examples of the conductive polymer may be polythiophene-based, polypyrrole-based, polyaniline-based, polyacetylene-based, and polyphenylene-based compounds, mixtures thereof, and the like. In particular, a PEDOT/PSS compound among the polythiophene-based compounds may be used as the conductive polymer. Examples of the conductive metal oxide may be indium tin oxide (ITO), indium zinc oxide (IZO), antimony zinc oxide (AZO), indium tin zinc oxide (ITZO), zinc oxide (ZnO), tin oxide (SnO2), and the like. In addition, examples of the nano-conductive material may be silver nanowire (AgNW), carbon nano tube (CNT), graphene, and the like. FIG.4 is a sectional view taken along sectional line I-I′ of FIG.1 according to some exemplary embodiments. Referring to FIGS.1 and 4, the display device may include a display panel 100 and a touch sensor 200. The display panel 100 may include a first base substrate BS1, a thin film transistor TFT provided on the first base substrate BS1, and a light emitting device OLED connected to the thin film transistor TFT. The touch sensor 200 may include a second base substrate BS2, and a sensing line SL and a touch sensor SR, which are provided on the second base substrate BS2. Hereinafter, the display device will be described according to a stacking order of the various elements. For convenience, the display panel 100 will be first described, and the touch sensor 200 will be then described. The first base substrate BS1 may include a display region DA and a non-display region NDA provided at a side of the display region DA. Here, the thin film transistor TFT and the light emitting device OLED may be provided in (e.g., overlapping) the display region DA, and a power line PL may be provided in the non-display region NDA. For convenience, the display region DA will be first described. A buffer layer BFL may be provided on the first base substrate BS1. The buffer layer BFL may prevent impurities from being diffused into the thin film transistor TFT provided on the first base substrate BS1, and improve the flatness of the first base substrate BS1. The buffer layer BFL may be provided as a single layer, but may be formed as a multi-layer structure including at least two layers. The buffer layer BFL may be an inorganic insulating layer made of an inorganic material. For example, the buffer layer BFL may be formed of silicon nitride, silicon oxide, silicon oxynitride, or the like. When the buffer layer BFL is provided as the multi-layer structure, the layers may be formed of the same material or different materials. The buffer layer BFL may be omitted. An active pattern ACT may be provided on the buffer layer BFL. The active pattern ACT may be formed of a semiconductor material. The active pattern ACT may include a source region, a drain region, and a channel region provided between the source region and the drain region.
The active pattern ACT may be a semiconductor pattern made of poly-silicon, amorphous silicon, semiconductor oxide, or the like. The channel region is a semiconductor pattern undoped with impurities, and may be an intrinsic semiconductor. The source region and the drain region may be semiconductor patterns doped with impurities. A gate insulating layer GI is disposed on the buffer layer BFL having the active pattern ACT provided thereon. The gate insulating layer GI may be an inorganic insulating layer including an inorganic material. Alternatively, the gate insulating layer GI may be an organic insulating layer including an organic material. A gate electrode GE may be provided on the gate insulating layer GI. The gate electrode GE may be formed to cover (or overlap) a region corresponding to the channel region of the active pattern ACT. The gate electrode GE may be made of a metal. For example, the gate electrode GE may be made of at least one of metals, such as gold (Au), silver (Ag), aluminum (Al), molybdenum (Mo), chromium (Cr), titanium (Ti), nickel (Ni), neodymium (Nd), and copper (Cu), or alloys thereof. In addition, the gate electrode GE may be formed in a single layer, but exemplary embodiments are not limited thereto or thereby. For example, the gate electrode GE may be formed as a multi-layer structure in which two or more materials among the metals and the alloys are stacked. In some exemplary embodiments, although not shown in the drawings, a gate line that provides a scan signal to the thin film transistor TFT may be provided in the same layer as the gate electrode GE and include the same material. An interlayer insulating layer ILD is provided on the gate insulating layer GI having the gate electrode GE provided thereon. The interlayer insulating layer ILD may be an inorganic insulating layer including an inorganic material. The inorganic material may include polysiloxane, silicon nitride, silicon oxide, silicon oxynitride, and the like. A source electrode SE and a drain electrode DE may be provided on the interlayer insulating layer ILD. The source electrode SE and the drain electrode DE may be connected to the source region and the drain region of the active pattern ACT through contact holes sequentially passing through the interlayer insulating layer ILD and the gate insulating layer GI, respectively. The source electrode SE and the drain electrode DE may be made of a metal. For example, the source electrode SE and the drain electrode DE may be made of at least one of metals such as gold (Au), silver (Ag), aluminum (Al), molybdenum (Mo), chromium (Cr), titanium (Ti), nickel (Ni), neodymium (Nd), and copper (Cu), or alloys thereof. In addition, the source electrode SE and the drain electrode DE may be formed in a single layer, but exemplary embodiments are not limited thereto or thereby. For example, the source electrode SE and the drain electrode DE may be formed as a multi-layer structure in which two or more materials among the metals and the alloys are stacked. In some exemplary embodiments, the thin film transistor TFT may include the active pattern ACT, the gate electrode GE, the source electrode SE, and the drain electrode DE. A case where the thin film transistor TFT is a thin film transistor having a top gate structure is illustrated as an example, but exemplary embodiments are not limited thereto or thereby. For example, the thin film transistor TFT may be a thin film transistor having a bottom gate structure, a dual gate structure, etc. 
A protective layer PSV that covers the thin film transistor TFT may be provided on the interlayer insulating layer ILD on which the source electrode SE and the drain electrode DE are provided. The protective layer PSV may be an organic insulating layer including an organic material. Examples of the organic material may be organic insulating materials including a polyacryl-based compound, a polyimide-based compound, a fluorine-based compound, such as Teflon, a benzocyclobutene-based compound, and the like. The light emitting device OLED may be provided on the protective layer PSV. The light emitting device OLED may include a first electrode AD, an emitting layer EML provided on the first electrode AD, and a second electrode CD provided on the emitting layer EML. The first electrode AD is provided on the protective layer PSV, and may be connected to the drain electrode DE through a contact hole passing through the protective layer PSV. A pixel defining layer PDL may be provided on the first electrode AD. The pixel defining layer PDL may allow a region corresponding to a light emitting region of each pixel (see, e.g., PXL ofFIG.2) to be exposed therethrough. For example, the pixel defining layer PDL may allow a top surface of the first electrode AD to be exposed therethrough, and protrude from the protective layer PSV along the circumference of each pixel PXL. The emitting layer EML may be provided on the first electrode AD exposed by the pixel defining layer PDL. The second electrode CD may be provided on the emitting layer EML. The first electrode AD may be an anode electrode and the second electrode CD may be a cathode electrode. In addition, when the light emitting device OLED is a top-emission light emitting device, the first electrode AD may be a reflective electrode, and the second electrode CD may be a transmissive electrode. As described above, when the first electrode AD is the anode electrode and the reflective electrode, the first electrode AD may include a reflective layer (not shown) and a transparent conductive layer (not shown) disposed on the top or bottom of the reflective layer. At least one of the transparent conductive layer and the reflective layer may be connected to the drain electrode DE. The pixel defining layer PDL may include an organic insulating material. For example, the pixel defining layer PDL may include at least one of polystyrene, polymethylmethacrylate (PMMA), polyacrylonitrile (PAN), polyamide (PA), polyimide (PI), polyarylether (PAE), heterocyclic polymer, parylene, epoxy, benzocyclobutene (BCB), a siloxane based resin, and a silane based resin. The emitting layer EML may have a multi-layered thin film structure including a light generation layer that generates colored light or white light. The second electrode CD may be provided on the emitting layer EML. The second electrode CD may extend from display region DA to a partial region of the non-display region NDA. An encapsulation layer SLM may be provided over the second electrode CD. The encapsulation layer SLM may prevent (or reduce) oxygen and moisture from penetrating into the light emitting device OLED. The non-display region NDA of the first base substrate BS1will now be described according to a stacking order. The buffer layer BFL, the gate insulating layer GI, and the interlayer insulating layer ILD may be sequentially provided on the first base substrate BS1. 
The power line PL for driving the thin film transistor TFT and the light emitting device OLED may be provided on the interlayer insulating layer ILD. The protective layer PSV may be provided over the power line PL. A connection line CNL may be provided on the protective layer PSV. The connection line CNL may be connected to the power line PL through a contact hole passing through the protective layer PSV. The pixel defining layer PDL may be provided on the connection line CNL. In addition, the encapsulation layer SLM may be provided over (e.g., covering) the pixel defining layer PDL. Hereinafter, the touch sensor200will be described according to a stacking order. The second base substrate BS2including a sensing region SA and a peripheral region PA may be provided on the encapsulation layer SLM. The touch sensor SR and the sensing line SL may be provided on the second base substrate BS2. The touch sensor SR may be provided in (e.g., overlapping) the sensing region SA of the second base substrate BS2, and the sensing line SL may be provided in the peripheral region PA of the second base substrate BS2. The sensing line SL may be provided on the peripheral region PA to correspond to a portion of the second electrode CD of the light emitting device OLED. For example, the sensing line SL may be disposed on the second electrode CD to cover a portion of the second electrode CD, and, thereby, overlap with a portion of the second electrode CD. An insulating layer IL may be provided over the touch sensor SR and the sensing line SL. The insulating layer IL may function to protect the touch sensor SR and the sensing line SL from the outside. In some exemplary embodiments, the portion of the second electrode CD, which overlaps with the sensing line SL, may be an interference prevention layer that prevents (or reduces) voltages applied to the display panel100from interfering with sensing inputs and/or output signals, applied to the sensing line SL. The sensing inputs and/or the output signals, applied to the sensing line SL, may be signals for sensing a touch event in the sensing region SA. In some cases, the sensing inputs and/or the output signals, applied to the sensing line SL, may be distorted while including noise due to influence of voltages applied to the display panel100, e.g., a data signal, a scan signal, an emission control signal, first and second driving voltages, and the like, to be provided to the touch sensor SR. In these cases, it may be difficult for the touch sensor200to detect an accurate touch event. According to some exemplary embodiments, however, the sensing line SL may be disposed in the peripheral region PA to overlap with a portion of the second electrode CD when viewed on a plane, so that the interference caused by the voltages applied to the display device100can be blocked by the second electrode CD. In this manner, the sensing inputs and/or the output signals, applied to the sensing line SL, are not distorted. FIG.5is an enlarged plan view of portion E1of the touch sensor ofFIG.3according to some exemplary embodiments.FIG.6is a sectional view taken along sectional lines II-II′ and III-III′ ofFIG.5according to some exemplary embodiments. Referring toFIGS.3,5, and6, first sensing lines SL1may be provided in the peripheral region PA of the second base substrate BS2, and extend toward the first touch sensor pad unit TP1along the second direction DR2of the second base substrate BS2from first sensing electrodes SSE1. 
The first sensing lines SL1may include (1-1)th to (1-8)th sensing lines SL1_1, SL1_2, SL1_3, SL1_4, SL1_5, SL1_6, SL1_7, and SL1_8. Line widths of the (1-1)th to (1-8)th sensing lines SL1_1, SL1_2, SL1_3, SL1_4, SL1_5, SL1_6, SL1_7, and SL1_8may be different from one another. For example, the line width of the (1-8)th sensing line SL1_8most adjacent to the sensing region SA may be relatively narrow, and the line width of the (1-1)th sensing line SL1_1most distant from the sensing region SA may be relatively wide. That is, the line width of each of the first sensing lines SL1may be narrowed as the first sensing line SL1becomes more adjacent to the sensing region SA. The line widths of the first sensing lines SL1are designed different from one another so as to allow line resistance values of the first sensing lines SL1to be uniform (or substantially uniform). That is, in some exemplary embodiments, the line widths of the first sensing lines SL1are designed different from one another, so that the first sensing lines SL1can be implemented to have the same (or substantially the same) resistance value. According to various exemplary embodiments, the (1-1)th sensing line SL1_1is located at an outermost portion of the peripheral region PA, which is most distant from the sensing region SA, and may be connected to the first sensing electrodes SSE1disposed on a row most distant from the first touch sensor pad unit TP1. On the other hand, the (1-8)th sensing line SL1_8is located in the peripheral region PA to be most adjacent to the sensing region SA, and may be connected to the first sensing electrodes SSE1on a row closest to the first touch sensor pad unit TP1. Therefore, the (1-1)th sensing line SL1_1may have a line length longer than that of the (1-8)th sensing line SL1_8. In general, a line resistance value is in proportion to a line length. When the (1-1)th sensing line SL1_1and the (1-8)th sensing line SL1_8have the same line width, the line resistance value of the (1-1)th sensing line SL1_1having a relatively long line length may be greater than that of the (1-8)th sensing line SL1_8having a relatively short line length. The difference in line resistance value between the (1-1)th sensing line SL1_1and the (1-8)th sensing line SL1_8may differently distort sensing input signals respectively applied to the (1-1)th sensing line SL1_1and the (1-8)th sensing line SL1_8. When a uniform sensing input signal is not provided to the whole of the sensing region SA, the touch recognition rate of the touch sensor200may be degraded. In order to prevent (or reduce) these differences, in some exemplary embodiments, the widths of the first sensing lines SL1are designed different from one another, so that the line resistance values of the first sensing lines SL1can be uniform, thereby improving the touch recognition rate of the touch sensor200. Each of the first sensing lines SL1may include a first contact hole CH1. For convenience, it is illustrated and described that each of the first sensing lines SL1includes one first contact hole CH1, but exemplary embodiments are not limited thereto or thereby. For example, the first contact hole CH1may be provided in plurality in a corresponding first sensing line SL1. For instance, a plurality of first contact holes CH1may be spaced apart along the length and/or width of a corresponding first sensing line SL1. The first contact hole CH1may have a size corresponding to the line width of a corresponding first sensing line SL1. 
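Stepping back briefly from the contact-hole sizing just introduced (its example continues below), the width-to-length rule stated in this passage can be made concrete. For a film of resistivity rho and thickness t, a line of length L and width w has resistance R = rho * L / (w * t), so equal resistance across lines of different length requires each width to scale in proportion to its length. The Python sketch below is illustrative only: the resistivity, thickness, width, and length values are assumptions rather than values from the patent, and the two printed lists mirror the uniform-width and scaled-width cases compared later as R1 and R2 in FIG.7.

    # Illustrative check: sizing line widths in proportion to line length
    # keeps R = rho * L / (w * t) uniform across all sensing lines.
    RHO = 2.0e-8   # film resistivity in ohm*m (assumed)
    T = 2.0e-7     # film thickness in m (assumed)

    def line_resistance(length, width):
        return RHO * length / (width * T)

    lengths = [0.010 + 0.001 * i for i in range(8)]   # SL1_8 ... SL1_1, in m

    # Same width for every line: resistance grows with length (curve R1).
    r1 = [line_resistance(L, 10e-6) for L in lengths]

    # Width scaled with length: resistance is uniform (curve R2).
    w0, l0 = 10e-6, lengths[0]
    r2 = [line_resistance(L, w0 * L / l0) for L in lengths]

    print([round(r) for r in r1])   # [100, 110, ..., 170] ohms, increasing
    print([round(r) for r in r2])   # [100, 100, ..., 100] ohms, uniform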
For example, the size of the first contact hole CH1 included in the (1-1)th sensing line SL1_1 having a relatively wide line width may be larger than that of the first contact hole CH1 included in the (1-8)th sensing line SL1_8 having a relatively narrow line width. The size of the first contact hole CH1 is changed to correspond to the line width of each of the first sensing lines SL1, so that contact resistances of the first sensing lines SL1 become uniform. According to some exemplary embodiments, each of the first sensing lines SL1 may be provided as a double layer, e.g., a multi-layer structure on the second base substrate BS2 so as to have a low resistance. For example, each of the first sensing lines SL1 may include a first metal layer MTL1 provided on the second base substrate BS2 and a second metal layer MTL2 connected to the first metal layer MTL1 through the first contact hole CH1. Here, the first metal layer MTL1 and the second metal layer MTL2 may have the same width when viewed on a plane. As described above, the line widths of the first sensing lines SL1 are designed to be different from one another, so that the first sensing lines SL1 can be implemented to have the same resistance. Further, the touch recognition rate of the touch sensor 200 becomes uniform in the whole region of the touch sensor 200, so that a touch event of a user can be accurately recognized. Hereinafter, the first sensing lines SL1 will be described according to a stacking order with reference to FIG.6. Referring to FIG.6, a first metal layer MTL1 may be provided on the second base substrate BS2. The first metal layer MTL1 may be made of a conductive material. The conductive material may include a metal, any alloy thereof, a conductive polymer, a conductive metal oxide, a nano-conductive material, and the like. In some exemplary embodiments, examples of the metal may be at least one of copper, silver, gold, platinum, palladium, nickel, tin, aluminum, cobalt, rhodium, iridium, iron, ruthenium, osmium, manganese, molybdenum, tungsten, niobium, tantalum, titanium, bismuth, antimony, lead, and the like. Examples of the conductive polymer may be polythiophene-based, polypyrrole-based, polyaniline-based, polyacetylene-based, and polyphenylene-based compounds, mixtures thereof, and the like. For instance, a PEDOT/PSS compound among the polythiophene-based compounds may be used as the conductive polymer. Examples of the conductive metal oxide may be indium tin oxide (ITO), indium zinc oxide (IZO), antimony zinc oxide (AZO), indium tin zinc oxide (ITZO), zinc oxide (ZnO), tin oxide (SnO2), and/or the like. In addition, examples of the nano-conductive material may be silver nanowire (AgNW), carbon nano tube (CNT), graphene, and/or the like. A first insulating layer IL1 may be provided over the first metal layer MTL1. The first insulating layer IL1 may be an inorganic insulating layer including an inorganic material or be an organic insulating layer including an organic material. Examples of the inorganic material may be inorganic insulating materials including polysiloxane, silicon nitride, silicon oxide, silicon oxynitride, and/or the like. Examples of the organic material may be organic insulating materials including a polyacryl-based compound, a polyimide-based compound, a fluorine-based compound, such as Teflon, a benzocyclobutene-based compound, and/or the like. A second metal layer MTL2 may be provided on the first insulating layer IL1. The second metal layer MTL2 may be made of the same material as the first metal layer MTL1.
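As a side note on the double-layer line just described, two metal layers tied together through contact holes behave, to first order, like resistors in parallel, which is where the lower line resistance comes from. A minimal sketch follows, with assumed per-layer resistances rather than patent values:

    # Illustrative only: stacked metal layers joined through contact holes
    # act approximately as parallel resistors, halving the resistance when
    # the two layers are identical.
    def parallel(r1, r2):
        return r1 * r2 / (r1 + r2)

    r_mtl1 = 120.0   # ohms, assumed stand-alone resistance of MTL1
    r_mtl2 = 120.0   # ohms, assumed stand-alone resistance of MTL2
    print(parallel(r_mtl1, r_mtl2))   # 60.0 ohms, half of either layer alone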
The first metal layer MTL1 and the second metal layer MTL2 may have the same width with the first insulating layer IL1 interposed therebetween. Here, the width d1 of the first metal layer MTL1 and the second metal layer MTL2 of the (1-1)th sensing line SL1_1 may be wider than the width d2 of the first metal layer MTL1 and the second metal layer MTL2 of the (1-8)th sensing line SL1_8. A second insulating layer IL2 may be provided over the second metal layer MTL2. The second insulating layer IL2 may cover the first sensing lines SL1 and allow the first sensing lines SL1 to be electrically insulated from each other. The first insulating layer IL1 and the second insulating layer IL2 may constitute the insulating layer (see, e.g., IL of FIG.4) of the touch sensor 200. FIG.7 is a graph comparing line widths and resistances of first sensing lines according to some exemplary embodiments. In FIG.7, R1 shows line resistance characteristics of 33 first sensing lines when the first sensing lines have the same line width, and R2 shows line resistance characteristics of 33 first sensing lines SL1 when the first sensing lines SL1 have different line widths. In the graph of FIG.7, number 1 on the X-axis may mean a first sensing line disposed most adjacent to the sensing region SA among the 33 first sensing lines. On the X-axis of the graph of FIG.7, as a number increases from number 1, the number may mean a first sensing line disposed more distant from the sensing region SA. Referring to FIG.7, it can be seen that, when the first sensing lines SL1 have different line widths, the line resistance values of the first sensing lines become more uniform. For example, as shown in association with R1, it can be seen that the line resistance value of each of the first sensing lines increases as the first sensing line is disposed more distant from the sensing region SA. This is because the line length of each of the first sensing lines increases as the first sensing line becomes more distant from the sensing region SA, and, hence, there occurs a difference in line resistance value between the first sensing lines. However, as shown in association with R2, it can be seen that, although each of the first sensing lines SL1 having different line widths is disposed distant from the sensing region SA, the line resistance values of the first sensing lines SL1 become more uniform as compared with R1. This is because each of the first sensing lines SL1 is designed to have a line width that is in proportion to the line length of the first sensing line SL1. FIG.8 is an enlarged plan view of portion E2 of the touch sensor of FIG.3 according to some exemplary embodiments. FIG.9 is a sectional view taken along sectional lines IV-IV′ and V-V′ of FIG.8 according to some exemplary embodiments. Referring to FIGS.3, 8, and 9, second sensing lines SL2 may be provided in the peripheral region PA of the second base substrate BS2, and extend toward the second touch sensor pad unit TP2 along the second direction DR2 of the second base substrate BS2 from second sensing electrodes SSE2. The second sensing lines SL2 may include (2-1)th to (2-5)th sensing lines SL2_1, SL2_2, SL2_3, SL2_4, and SL2_5. The (2-1)th to (2-5)th sensing lines SL2_1, SL2_2, SL2_3, SL2_4, and SL2_5 may have different line widths. For example, the line width of the (2-5)th sensing line SL2_5 most adjacent to the sensing region SA may be relatively narrow, and the line width of the (2-1)th sensing line SL2_1 may be relatively wide.
That is, the line width of each of the second sensing lines SL2 may become narrower as the second sensing line SL2 becomes more adjacent to the sensing region SA. The line widths of the second sensing lines SL2 are designed to be different from one another so as to allow line resistance values of the second sensing lines SL2 to become uniform (or substantially uniform). That is, in some exemplary embodiments, the line widths of the second sensing lines SL2 are designed to be different from one another, so that the second sensing lines SL2 can be implemented to have the same (or substantially the same) resistance. Each of the second sensing lines SL2 may include a second contact hole CH2. For convenience, it is described and illustrated that each of the second sensing lines SL2 includes one second contact hole CH2, but exemplary embodiments are not limited thereto or thereby. For example, the second contact hole CH2 may be provided in plurality in a corresponding second sensing line SL2. For instance, a plurality of second contact holes CH2 may be spaced apart along the length and/or width of a second sensing line SL2. The second contact hole CH2 may have a size corresponding to the line width of a corresponding second sensing line SL2. For example, the size of the second contact hole CH2 included in the (2-1)th sensing line SL2_1 having a relatively wide line width may be larger than that of the second contact hole CH2 included in the (2-5)th sensing line SL2_5 having a relatively narrow line width. Like the first sensing lines SL1, each of the second sensing lines SL2 may be formed as a multi-layer structure on the second base substrate BS2 so as to have a low resistance. For example, each of the second sensing lines SL2 may include a first metal layer MTL1 provided on the second base substrate BS2 and a second metal layer MTL2 connected to the first metal layer MTL1 through the second contact hole CH2 passing through a first insulating layer IL1. The first metal layer MTL1 and the second metal layer MTL2 may have the same width when viewed on a plane. The width d1 of the first metal layer MTL1 and the second metal layer MTL2 of the (2-1)th sensing line SL2_1 may be wider than the width d2 of the first metal layer MTL1 and the second metal layer MTL2 of the (2-5)th sensing line SL2_5. FIG.10 is an enlarged plan view of portion E3 of the touch sensor of FIG.3 according to some exemplary embodiments. FIG.11 is a sectional view taken along sectional line VI-VI′ of FIG.10 according to some exemplary embodiments. Referring to FIGS.3, 10, and 11, the touch sensor 200 may include a second base substrate BS2, first sensing units SR1, and second sensing units SR2 provided on the second base substrate BS2. The first sensing unit SR1 includes a first sensing electrode SSE1 and a first bridge BR1 that connects adjacent first sensing electrodes SSE1 to each other. The second sensing unit SR2 includes a second sensing electrode SSE2 and a second bridge BR2 that connects adjacent second sensing electrodes SSE2 to each other. The first sensing electrode SSE1 and the second sensing electrode SSE2 may be provided on the second base substrate BS2 and may be disposed in the same layer. In this case, the first sensing electrode SSE1 and the second sensing electrode SSE2 may be formed as independent patterns that are not connected to each other. Two first sensing electrodes SSE1 adjacent to each other may be connected to each other by the first bridge BR1 disposed in the same layer as the first sensing electrodes SSE1.
Two second sensing electrodes SSE2 adjacent to each other may be connected to each other by the second bridge BR2 through third contact holes CH3 passing through the first insulating layer IL1. A second insulating layer IL2 may be provided over the second bridge BR2 and the first insulating layer IL1. According to various exemplary embodiments, the display device can be employed in various electronic devices. For example, the display device is applicable to televisions, notebook computers, cellular phones, smart phones, smart pads, personal media players, personal digital assistants, navigational devices, various wearable devices, such as smart watches, and the like. According to various exemplary embodiments, it is possible to provide a touch sensor having a uniform (or substantially uniform) touch recognition rate. Also, according to various exemplary embodiments, it is possible to provide a display device having the touch sensor capable of providing a uniform (or substantially uniform) touch recognition rate. Although certain exemplary embodiments and implementations have been described herein, other embodiments and modifications will be apparent from this description. Accordingly, the inventive concepts are not limited to such embodiments, but rather extend to the broader scope of the presented claims and to various obvious modifications and equivalent arrangements. | 53,705 |
11861124 | DETAILED DESCRIPTION Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings. It is noted that implementation modes may be implemented in multiple different forms. Those of ordinary skill in the art may easily understand that implementation modes and contents may be transformed into various forms without departing from the spirit and scope of the present disclosure. Therefore, the present disclosure should not be construed as only being limited to the contents recorded in the following implementation modes. The embodiments in the present disclosure and the features in the embodiments may be combined with each other arbitrarily if there is no conflict. In the specification, for convenience, wordings indicating orientations or positional relationships, such as “center”, “upper”, “lower”, “front”, “back”, “vertical”, “horizontal”, “top”, “bottom”, “inside”, and “outside”, are used for describing positional relationships between constituent elements with reference to the accompanying drawings; they are merely for facilitating and simplifying the description of the specification, rather than indicating or implying that the referred apparatuses or elements must have particular orientations or be constructed and operated in particular orientations. Thus, they cannot be construed as limitations on the present disclosure. The positional relationships between the constituent elements change appropriately according to the directions in which the constituent elements are described. Therefore, they are not limited to the wordings described in the specification, and may be replaced appropriately according to the situation. In the specification, unless otherwise specified and defined explicitly, the terms “mounted”, “mutually connected”, and “connection” should be understood in a broad sense. For example, a connection may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical connection or an electrical connection; and it may be a direct connection, an indirect connection through an intermediate, or an internal communication between two elements. Those of ordinary skill in the art may understand the meanings of the above terms in the present disclosure according to the situation. In the present disclosure, “about” means that a boundary is not defined strictly and that numerical values within process and measurement error ranges are allowed. A main structure of a touch panel according to an embodiment of the present disclosure includes: a base substrate, multiple first touch electrodes which are provided on the base substrate at intervals and extend along a first direction, and multiple second touch electrodes which are provided at intervals and extend along the first direction. The first touch electrodes and the second touch electrodes are insulated from each other, and the first touch electrodes and the second touch electrodes cross each other to form a first interdigital structure. First bending parts extending toward outer sides of the first touch electrodes are formed on side parts of the first touch electrodes respectively. Alternatively, second bending parts extending toward outer sides of the second touch electrodes are formed on side parts of the second touch electrodes respectively. In some embodiments, first bending parts are formed on side parts of the first touch electrodes, and second bending parts are formed on side parts of the second touch electrodes.
According to the touch panel of the embodiment of the present disclosure, the first interdigital structure is formed by the first touch electrodes and the second touch electrodes, which results in increased extension lengths of the first touch electrodes and the second touch electrodes, an increased mutual capacitance inductance during touching, and an improved sensitivity of the touch panel. According to the touch panel of the embodiment of the present disclosure, the first bending parts are formed on the side parts of the first touch electrodes, and/or the second bending parts are formed on the side parts of the second touch electrodes, so that the extension lengths of the first touch electrodes and the second touch electrodes, the mutual capacitance inductance during touching, and the sensitivity of the touch panel are further increased. FIG.3 is a top view of a touch panel according to an embodiment of the present disclosure. As shown in FIG.3, a planar structure of the touch panel includes a base substrate 1, multiple first touch electrodes 2 which are provided on the base substrate 1 at intervals and extend along a first direction, and multiple second touch electrodes 3 which are provided on the base substrate 1 at intervals and extend along the first direction, wherein the first touch electrodes 2 and the second touch electrodes 3 are insulated from each other. The first touch electrodes 2 and the second touch electrodes 3 cross each other to form a first interdigital structure, and the first touch electrodes 2 and the second touch electrodes 3 are alternately provided along a second direction. First bending parts 201 extending toward outer sides of the first touch electrodes 2 are formed on side parts of the first touch electrodes 2 respectively, and second bending parts 301 extending toward outer sides of the second touch electrodes 3 are formed on side parts of the second touch electrodes 3 respectively. In the embodiment, the first bending parts 201 and the second bending parts 301 extend along the second direction. The first direction intersects with the second direction; for example, the first direction is perpendicular to the second direction. In an exemplary embodiment, the first touch electrodes 2 in the first direction are all respectively located in gaps formed by the second touch electrodes 3 in the first direction, and similarly, the second touch electrodes 3 in the first direction are all respectively located in gaps formed by the first touch electrodes 2 in the first direction, thus constituting the first interdigital structure. As shown in FIG.3, each first bending part 201 is located at a side part of a first touch electrode 2 close to a second touch electrode 3, that is, the first bending part 201 is located between the first touch electrode 2 and the second touch electrode 3. A second bending part 301 is located at a side part of a second touch electrode 3 close to a first touch electrode 2, that is, the second bending part 301 is located between the first touch electrode 2 and the second touch electrode 3. The first bending parts 201 and the second bending parts 301 cross each other to form a second interdigital structure. Moreover, the first bending parts 201 and the second bending parts 301 are alternately provided along the first direction. According to the embodiment of the present disclosure, the second interdigital structure is formed by the first bending parts 201 and the second bending parts 301, thereby further increasing the mutual capacitance inductance during touching and improving the sensitivity of the touch panel.
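One way to see why interdigitation helps, offered here as a reader's illustration rather than as patent content: the mutual capacitance between two coplanar electrodes grows roughly with the length of their facing edges, and bending fingers into the gap multiplies that length. All numbers in the toy calculation below are assumptions chosen for the example; the geometric description of the interdigital structure continues after it.

    # Toy geometry (all values assumed): interdigitation lengthens the
    # facing edge between a first and a second touch electrode, and the
    # fringing mutual capacitance scales roughly with that facing length.
    span = 4.0      # mm, electrode span along a straight boundary
    fingers = 8     # number of interdigital fingers
    depth = 1.5     # mm, reach of each finger into the gap

    straight_edge = span
    interdigital_edge = span + 2 * fingers * depth   # each finger adds two sides

    print(straight_edge, interdigital_edge)   # 4.0 vs 28.0 mm of facing edge

The same argument applies again to the second interdigital structure formed by the bending parts, which is why the text describes it as a further increase in sensitivity.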
In an exemplary embodiment, the first bending parts201are all respectively located in gaps formed by the second bending parts301, and similarly, the second bending parts301are all respectively located in gaps formed by the first bending parts201, thus constituting the second interdigital structure. As shown inFIG.3, the touch panel of the embodiment of the present disclosure further includes multiple first connection electrodes202which are provided on the base substrate1at intervals and extend along the second direction, and multiple first touch electrodes2arranged at intervals along the second direction are disposed on the first connection electrodes202respectively. The first touch electrodes2on each first connection electrode202are electrically communicated through the first connection electrode202, thereby reducing resistance of the first touch electrodes2. In the embodiment, the base substrate1is provided with two first connection electrodes202extending along the second direction, and multiple first touch electrodes2arranged at intervals along the second direction are formed on two sides of each of the two first connection electrodes202, respectively. As shown inFIG.3, two second connection electrodes203are connected between adjacent first connection electrodes202, and the adjacent first connection electrodes202are electrically connected by the second connection electrodes203. The two second connection electrodes203are located on opposite sides of the first touch electrode2on the first connection electrode202, so that the adjacent first connection electrodes202and the two second connection electrodes203form a loop structure, thereby reducing the resistance of the first touch electrodes2. As shown inFIG.3, third bending parts204are formed on the opposite sides of the second connection electrodes203respectively, wherein the third bending parts204can increase extension lengths of the second connection electrodes203and reduce resistance of the second connection electrodes203. As shown inFIG.3, the touch panel of the embodiment of the present disclosure further includes multiple third connection electrodes302which are provided on the base substrate1at intervals and extend along the second direction, and multiple second touch electrodes3arranged at intervals along the second direction are provided on the third connection electrodes302. The second touch electrodes3on each third connection electrodes302are electrically communicated through the third connection electrode302, thereby reducing resistance of the second touch electrodes3. In an exemplary embodiment, the first connection electrodes202, the second connection electrodes203and the third connection electrodes302may be disposed on a same layer as the first touch electrodes2and the second touch electrodes3, by direct connection. The first connection electrodes202, the second connection electrodes203and the third connection electrodes302, may be disposed on the same or different layer as the first touch electrodes2and the second touch electrodes3by bridging connection. As shown inFIG.3, adjacent second touch electrodes3along the first direction are connected through bridging structures7, thereby reducing the resistance of the second touch electrodes3. In an exemplary embodiment, the touch panel is provided with at least two bridging structures7. For example, as shown inFIG.3, the touch panel in the embodiment of the present disclosure is provided with four bridging structures7. 
As shown in FIG.3, the touch panel of the embodiment of the present disclosure further includes multiple grooves 4 formed on the base substrate 1, and an orthogonal projection of the first touch electrodes 2 and the second touch electrodes 3 on the base substrate 1 does not overlap with an orthogonal projection of the grooves 4 on the base substrate 1; that is, the first touch electrode 2 and the second touch electrode 3 are routed in peripheral areas of the grooves 4. The first touch electrodes 2 and/or the second touch electrodes 3 form first bending parts 201 and/or second bending parts 301 at the grooves 4. For example, the first touch electrodes 2 and the second touch electrodes 3 form the first bending parts 201 and the second bending parts 301 at the grooves 4. Alternatively, the first touch electrodes 2 form the first bending parts 201 at the grooves 4. Alternatively, the second touch electrodes 3 form the second bending parts 301 at the grooves 4. The grooves 4 may be strip grooves extending along the first direction or strip grooves extending along the second direction. In an exemplary embodiment, the grooves 4 are used for providing a deformation amount when the touch panel is deformed. Herein, the grooves 4 may have various shapes, such as rectangle, rhombus or irregular polygon, which is not limited in this embodiment. According to the touch panel of the embodiment of the present disclosure, the grooves 4 are formed so that when the touch panel undergoes a flexible deformation (such as stretching, rolling and folding), the deformation amount is concentrated in the grooves 4, such that coupling capacitance areas of the first touch electrodes 2 and the second touch electrodes 3 are located in the non-groove area with smaller stretching deformation, thereby reducing variations of the coupling capacitance between the first touch electrodes 2 and the second touch electrodes 3, and simultaneously preventing the first touch electrodes 2 and the second touch electrodes 3 from breaking during the flexible deformation. As shown in FIG.3, the touch panel of the embodiment of the present disclosure further includes floating electrodes 6 provided on the base substrate 1, wherein the floating electrodes 6 are disposed on a same layer as the first touch electrodes 2 and the second touch electrodes 3, and the floating electrodes 6 are insulated from the first touch electrodes 2 and the second touch electrodes 3. The floating electrodes 6 are used for reducing loads and improving touch performance in case of weak grounding. The floating electrodes 6 may be located in the first touch electrodes 2 and the second touch electrodes 3, wherein the first touch electrodes 2 and the second touch electrodes 3 are routed in peripheral areas of the floating electrodes 6 respectively, and the first touch electrodes 2 and the second touch electrodes 3 form the first bending parts 201 and the second bending parts 301 at the floating electrodes 6 respectively. The floating electrodes 6 may also be located in peripheral areas of the first touch electrodes 2 and the second touch electrodes 3. In some embodiments, the floating electrodes may also be disposed in a different layer from the first touch electrodes and the second touch electrodes. For example, the floating electrodes are disposed on a same layer as the bridging structures.
When touch control is performed on the touch panel of the embodiment of the present disclosure, capacitance of the coupling capacitance area between the first touch electrodes 2 and the second touch electrodes 3 changes, so that an induced signal changes correspondingly, and then a touch position is determined. Herein, one of a first touch electrode 2 and a second touch electrode 3 is a Tx (driving) electrode and the other is an Rx (sensing) electrode; therefore, the first touch electrode 2 and the second touch electrode 3 cooperate with each other to complete a touch response. In implementation, the first touch electrode 2 may be the Tx electrode and the second touch electrode 3 may be the Rx electrode; alternatively, the second touch electrode 3 may be the Tx electrode and the first touch electrode 2 may be the Rx electrode. FIG.4 is a partially enlarged view of a touch panel according to an embodiment of the present disclosure. As shown in FIG.4, each of the first touch electrodes 2 and the second touch electrodes 3 is in a grid structure. The touch panel of the embodiment of the present disclosure further includes multiple sub-pixels 5 provided on the base substrate 1, and the grid structure is provided on the sub-pixels 5. In the grid structure, each grid corresponds to one or one group of sub-pixels 5, wherein a projection of the one or the group of sub-pixels 5 on a plane where the grid is located falls into a corresponding grid, in an RBGB arrangement or pentile arrangement for example, and a shape of the grid may be changed according to the sub-pixels. FIG.5 is a cross sectional view of a bridging structure according to an embodiment of the present disclosure. In a plane perpendicular to the base substrate, as shown in FIG.5, a first touch electrode 2 and a second touch electrode 3 are located on a same side of an insulating layer 8, and a bridging structure 7 is provided on a side of the insulating layer 8 opposite to the side of the insulating layer 8 where the first touch electrode 2 and the second touch electrode 3 are provided, and adjacent second touch electrodes 3 along the first direction are communicated through the bridging structures 7. In a plane perpendicular to the base substrate, the touch panel of the present disclosure also includes a light-emitting unit provided on the base substrate, and the first touch electrode and the second touch electrode are provided on the light-emitting unit. The light-emitting unit includes a driving structure layer provided on the base substrate and a light-emitting structure layer provided on the driving structure layer, wherein the light-emitting structure layer is used for emitting display light, and the driving structure layer is connected with the light-emitting structure layer and used for controlling and driving the light-emitting structure layer. The driving structure layer mainly includes a pixel driving circuit constituted of multiple Thin Film Transistors (TFTs), and the light-emitting structure layer mainly includes an anode, a light-emitting layer, and a cathode. Simulation Test Table 1 shows a stack structure of a common mobile phone. Table 2 shows a simulation result of a single touch unit in a touch panel according to an embodiment of the present disclosure. Herein, a size of a single touch unit in Table 2 is 4200 mm×4200 mm. Compared with the stack structure of the common mobile phone shown in Table 1, a simulation for the touch panel of the embodiment of the present disclosure is performed.
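Before the tables that follow, a quick reader's consistency check on the reported values (not part of the patent): the ratio dCm/Cm = 9.02% in Table 2 follows from Cm = 0.82 pF only if the touch-induced change dCm is about 0.074 pF, which suggests the unit in the extracted dCm row should be read with care.

    # Reader's sanity check on the Table 2 values below (illustrative only).
    cm = 0.82     # pF, simulated mutual capacitance Cm from Table 2
    dcm = 0.074   # touch-induced change dCm, taken here in pF: only pF
                  # reproduces the 9.02% ratio the table reports, although
                  # the extracted column header reads fF
    print(round(100 * dcm / cm, 2))   # 9.02 (%), above the >5% IC spec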
From the simulation result shown in Table 2, it can be concluded that the touch panel of the embodiment of this disclosure fully meets requirements of a touch chip for a touch structure, taking the laminate structure of the common mobile phone shown in Table 1 as an example.

TABLE 1. Stack structure of common mobile phone

    Layer        Stack thickness   Rs (ohm/sq)   Er
    Material 1   550               NA            7
    Material 2   50                NA            3.6
    Material 3   65 μm             NA            3.4
    M2           0.45 μm           0.08          NA
    Material 4   0.35 μm           NA            6
    M2           0.2 μm            0.1           NA
    Buffer       0.35              NA            6
    Material 5   11 μm             NA            7
    Cathode      0.01              10            NA

TABLE 2. Simulation result of a single touch unit in the touch panel of the embodiment of this disclosure (5 mm Cylinder Sim. Result)

    Touch index    Simulation value   IC spec
    Cs (pF)        9.05               —
    Cm (pF)        0.82               0.5-2 pF
    dCm (fF)       0.074              5 phi copper column
    dCm/Cm (%)     9.02%              >5%
    Rx/Ry (ohm)    27/25              —

An embodiment of the present disclosure further provides a method for preparing a touch panel. The method for preparing the touch panel of the embodiment of the present disclosure includes: forming, on a base substrate, multiple first touch electrodes which are provided at intervals and extend along a first direction and multiple second touch electrodes which are provided at intervals and extend along the first direction; wherein the first touch electrodes and the second touch electrodes are insulated from each other, the first touch electrodes and the second touch electrodes cross each other to form a first interdigital structure, and first bending parts extending toward outer sides of the first touch electrodes are formed on side parts of the first touch electrodes; alternatively, second bending parts extending toward outer sides of the second touch electrodes are formed on side parts of the second touch electrodes. An embodiment of the present disclosure further provides a display apparatus including the touch panel according to the embodiments described above. The display apparatus may be any product or component with a display function, such as a mobile phone, a tablet computer, a television, a display, a laptop computer, a digital photo frame, and a navigator, or a product or component with functions of VR, AR and 3D display. The drawings of the present disclosure only involve the structures involved in the present disclosure, and the other structures may refer to conventional designs. The embodiments in the present disclosure, i.e., the features in the embodiments, may be combined with each other to obtain new embodiments if there is no conflict. Those of ordinary skill in the art should know that modifications or equivalent replacements may be made to the technical solutions of the present disclosure without departing from the spirit and scope of the technical solutions of the present disclosure, and the modifications or equivalent replacements shall all fall within the scope of the claims of the present disclosure. | 20,148 |
11861125 | DETAILED DESCRIPTION In the following description of examples or embodiments of the present disclosure, reference will be made to the accompanying drawings, in which specific examples or embodiments that can be implemented are shown by way of illustration, and in which the same reference numerals and signs can be used to designate the same or like components even when they are shown in different accompanying drawings from one another. Further, in the following description of examples or embodiments of the present disclosure, detailed descriptions of well-known functions and components incorporated herein will be omitted when it is determined that the description may make the subject matter in some embodiments of the present disclosure rather unclear. The terms such as “including”, “having”, “containing”, “constituting”, “made up of”, and “formed of” used herein are generally intended to allow other components to be added unless the terms are used with the term “only”. As used herein, singular forms are intended to include plural forms unless the context clearly indicates otherwise. Terms, such as “first”, “second”, “A”, “B”, “(A)”, or “(B)” may be used herein to describe elements of the present disclosure. Each of these terms is not used to define essence, order, sequence, or number of elements, etc., but is used merely to distinguish the corresponding element from other elements. When it is mentioned that a first element “is connected or coupled to”, “contacts or overlaps”, etc. a second element, it should be interpreted that, not only can the first element “be directly connected or coupled to” or “directly contact or overlap” the second element, but a third element can also be “interposed” between the first and second elements, or the first and second elements can “be connected or coupled to”, “contact or overlap”, etc., each other via a fourth element. Here, the second element may be included in at least one of two or more elements that “are connected or coupled to”, “contact or overlap”, etc., each other. The shapes, sizes, dimensions (e.g., length, width, height, thickness, radius, diameter, area, etc.), ratios, angles, number of elements, and the like illustrated in the accompanying drawings for describing the embodiments of the present disclosure are merely examples, and the present disclosure is not limited thereto. The dimensions, including the size and thickness of each component illustrated in the drawings, are shown for convenience of description, and the present disclosure is not limited to the size and the thickness of the component illustrated, but it is to be noted that the relative dimensions, including the relative size, location and thickness of the components illustrated in various drawings submitted herewith, are part of the present disclosure. When time-relative terms, such as “after,” “subsequent to,” “next,” “before,” and the like, are used to describe processes or operations of elements or configurations, or flows or steps in operating, processing, manufacturing methods, these terms may be used to describe non-consecutive or non-sequential processes or operations unless the term “directly” or “immediately” is used together. In addition, when any dimensions, relative sizes, etc., are mentioned, it should be considered that numerical values for elements or features, or corresponding information (e.g., level, range, etc.)
include a tolerance or error range (e.g., about 5% to 10%) that may be caused by various factors (e.g., process factors, internal or external impact, noise, etc.) even when a relevant description is not specified. Further, the term “may” fully encompasses all the meanings of the term “can”. Hereinafter, embodiments of the present disclosure are described in detail with reference to the accompanying drawings. FIG.1is a view illustrating a system configuration of a transparent touch display device100according to embodiments of the disclosure. Referring toFIG.1, a transparent touch display device100may include a display panel110and display driving circuits for driving the display panel110, as components for displaying images. The display driving circuits may include a data driving circuit120, a gate driving circuit130, and a display controller140. The display panel110may include a display area DA in which images are displayed and a non-display area NDA in which no image is displayed. The non-display area NDA may be an outer area of the display area DA and be referred to as a bezel area. The whole or part of the non-display area NDA may be an area visible from the front surface of the transparent touch display device100or an area that is bent and not visible from the front surface of the transparent touch display device100. The display panel110may include a plurality of subpixels SP. The display panel110may further include various types of signal lines to drive the plurality of subpixels SP. The transparent touch display device100according to embodiments of the disclosure may be a liquid crystal display device or a self-emission display device in which the display panel110emits light by itself. When the transparent touch display device100according to the embodiments of the disclosure is a self-emission display device, each of the plurality of subpixels SP may include a light emitting element. For example, the transparent touch display device100according to embodiments of the disclosure may be an organic light emitting diode display in which the light emitting element is implemented as an organic light emitting diode (OLED). As another example, the transparent touch display device100according to embodiments of the disclosure may be an inorganic light emitting display device in which the light emitting element is implemented as an inorganic material-based light emitting diode. As another example, the transparent touch display device100according to embodiments of the disclosure may be a quantum dot display device in which the light emitting element is implemented as a quantum dot which is self-emission semiconductor crystal. The structure of each of the plurality of subpixels SP may vary according to the type of the transparent touch display device100. For example, when the transparent touch display device100is a self-emission display device in which the subpixels SP emit light by themselves, each subpixel SP may include a light emitting element that emits light by itself, one or more transistors, and one or more capacitors. For example, various types of signal lines may include a plurality of data lines transferring data signals (also referred to as data voltages or image signals) and a plurality of gate lines transferring gate signals (also referred to as scan signals). The plurality of data lines and the plurality of gate lines may cross each other. Each of the plurality of data lines may be disposed while extending in a first direction. 
Each of the plurality of gate lines may be disposed while extending in a second direction. Here, the first direction may be a column direction and the second direction may be a row direction. Alternatively, the first direction may be the row direction, and the second direction may be the column direction. The data driving circuit120is a circuit for driving the plurality of data lines and may supply data signals to the plurality of data lines. The gate driving circuit130is a circuit for driving the plurality of gate lines and may supply gate signals to the plurality of gate lines. The display controller140is a device for controlling the data driving circuit120and the gate driving circuit130and may control driving timings for the plurality of data lines and driving timings for the plurality of gate lines. The display controller140may supply a data driving control signal to the data driving circuit120to control the data driving circuit120and may supply a gate driving control signal to the gate driving circuit130to control the gate driving circuit130. The data driving circuit120may supply data signals to the plurality of data lines according to the driving timing control of the display controller140. The data driving circuit120may receive digital image data from the display controller140and may convert the received image data into analog data signals and output the analog data signals to the plurality of data lines. The gate driving circuit130may supply gate signals to the plurality of gate lines GL according to the timing control of the display controller140. The gate driving circuit130may receive a first gate voltage corresponding to a turn-on level voltage and a second gate voltage corresponding to a turn-off level voltage, along with various gate driving control signals (e.g., start signal and reset signal), generate gate signals, and supply the generated gate signals to the plurality of gate lines. For example, the data driving circuit120may be connected with the display panel110by a tape automated bonding (TAB) method or connected to a bonding pad of the display panel110by a chip on glass (COG) or chip on panel (COP) method or may be implemented by a chip on film (COF) method and connected with the display panel110. The gate driving circuit130may be connected with the display panel110by TAB method or connected to a bonding pad of the display panel110by a COG or COP method or may be connected with the display panel110according to a COF method. Alternatively, the gate driving circuit130may be formed in a gate in panel (GIP) type, in the non-display area NDA of the display panel110. The gate driving circuit130may be disposed on the substrate or may be connected to the substrate. In other words, the gate driving circuit130that is of a GIP type may be disposed in the non-display area NDA of the substrate. The gate driving circuit130that is of a chip-on-glass (COG) type or chip-on-film (COF) type may be connected to the substrate. Meanwhile, at least one of the data driving circuit120and the gate driving circuit130may be disposed in the display area DA of the display panel110. For example, at least one of the data driving circuit120and the gate driving circuit130may be disposed not to overlap the subpixels SP or to overlap all or some of the subpixels SP. The data driving circuit120may be connected to one side (e.g., an upper or lower side) of the display panel110. 
Depending on the driving scheme or the panel design scheme, data driving circuits 120 may be connected with both sides (e.g., both the upper and lower sides) of the display panel 110, or with two or more of the four sides of the display panel 110. The gate driving circuit 130 may be connected to one side (e.g., a left or right side) of the display panel 110. Depending on the driving scheme or the panel design scheme, gate driving circuits 130 may be connected with both sides (e.g., both the left and right sides) of the display panel 110, or with two or more of the four sides of the display panel 110. The display controller 140 may be implemented as a separate component from the data driving circuit 120, or the display controller 140 and the data driving circuit 120 may be integrated into an integrated circuit (IC). The display controller 140 may be a timing controller used in typical display technology, a control device that may perform other control functions as well as the functions of the timing controller, or a control device other than the timing controller, or may be a circuit in such a control device. The display controller 140 may be implemented as various circuits or electronic components, such as an integrated circuit (IC), a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), or a processor. The display controller 140 may be mounted on a printed circuit board or a flexible printed circuit and may be electrically connected with the data driving circuit 120 and the gate driving circuit 130 through the printed circuit board or the flexible printed circuit. The display controller 140 may transmit/receive signals to/from the data driving circuit 120 according to one or more predetermined interfaces. The interfaces may include, e.g., a low voltage differential signaling (LVDS) interface, an EPI interface, and a serial peripheral interface (SPI). To provide a touch sensing function as well as an image display function, the transparent touch display device 100 according to embodiments of the disclosure may include a touch panel and a touch sensing circuit 150 that senses the touch panel to detect whether a touch occurs by a touch object, such as a finger or pen, and the position of the touch. The touch sensing circuit 150 may include a touch driving circuit 160 that drives and senses the touch panel and generates and outputs touch sensing data, and a touch controller 170 that may detect an occurrence of a touch or the position of the touch using the touch sensing data. The touch panel may include a plurality of touch electrodes as touch sensors. The touch panel may further include a plurality of touch routing lines for electrically connecting the plurality of touch electrodes and the touch driving circuit 160. The touch panel or touch electrode is also referred to as a touch sensor. The touch panel may exist outside or inside the display panel 110. When the touch panel exists outside the display panel 110, the touch panel is referred to as an external type. When the touch panel is of the external type, the touch panel and the display panel 110 may be manufactured separately and combined during an assembly process. The external-type touch panel may include a substrate and a plurality of touch electrodes on the substrate. When the touch panel exists inside the display panel 110, the touch panel is referred to as an internal type. When the touch panel is of the internal type, the touch panel may be formed in the display panel 110 during a manufacturing process of the display panel 110.
The touch driving circuit160may supply a touch driving signal to at least one of the plurality of touch electrodes and may sense at least one of the plurality of touch electrodes to generate touch sensing data. The touch sensing circuit150may perform touch sensing in a self-capacitance sensing scheme or a mutual-capacitance sensing scheme. When the touch sensing circuit150performs touch sensing in the self-capacitance sensing scheme, the touch sensing circuit150may perform touch sensing based on capacitance between each touch electrode and the touch object (e.g., finger or pen). According to the self-capacitance sensing scheme, each of the plurality of touch electrodes may serve both as a driving touch electrode and as a sensing touch electrode. The touch driving circuit160may drive all or some of the plurality of touch electrodes and sense all or some of the plurality of touch electrodes. When the touch sensing circuit150performs touch sensing in the mutual-capacitance sensing scheme, the touch sensing circuit150may perform touch sensing based on capacitance between the touch electrodes. According to the mutual-capacitance sensing scheme, the plurality of touch electrodes are divided into driving touch electrodes and sensing touch electrodes. The touch driving circuit160may drive the driving touch electrodes and sense the sensing touch electrodes. The touch driving circuit160and the touch controller170included in the touch sensing circuit150may be implemented as separate devices or as a single device. The touch driving circuit160and the data driving circuit120may be implemented as separate devices or as a single device. FIGS.2A,2B, and2Care views illustrating examples of an arrangement of light emitting areas EA and transmissive areas TA in a transparent touch display device100according to embodiments of the disclosure. Referring toFIG.2A, the subpixels SP of the transparent touch display device100may include subpixels emitting three colors of light. In other words, the subpixels SP of the transparent touch display device100may include a first color subpixel emitting a first color of light, a second color subpixel emitting a second color of light, and a third color subpixel emitting a third color of light. For example, each of the first color, the second color, and the third color may be one of red, green, and blue. Hereinafter, for convenience of description, it may be assumed that the first color is red, the second color is green or blue, and the third color is blue or green. The subpixels SP of the transparent touch display device100may include subpixels emitting four or more colors of light. In other words, the subpixels SP of the transparent touch display device100may include a first color subpixel emitting a first color of light, a second color subpixel emitting a second color of light, a third color subpixel emitting a third color of light, and a fourth color subpixel emitting a fourth color of light. For example, each of the first color, the second color, the third color, and the fourth color may be one of red, green, blue, and white. Referring toFIG.2A, the display panel110of the transparent touch display device100may include a plurality of light emitting areas EA of subpixels SP and a plurality of transmissive areas TA adjacent to the plurality of light emitting areas EA. The light emitting area EA of the subpixel SP may correspond to the area where the pixel electrode (or pixel electrode layer) is disposed. 
The light emitting area EA of the subpixel SP may correspond to an area where the pixel electrode, the light emitting layer, and the common electrode constituting the light emitting element of the subpixel SP overlap. Based on the light emitting characteristics of the light emitting element ED for each color, the area of the light emitting area EA for each color may be designed in various ways. The area of the light emitting area EA of the first color subpixel Red SP, the area of the light emitting area EA of the second color subpixel Green SP, and the area of the light emitting area EA of the third color subpixel Blue SP may all be the same or similar within a predetermined range. Alternatively, as shown in FIG. 2A, the area of the light emitting area EA of the first color subpixel Red SP may be smaller than the area of the light emitting area EA of the second color subpixel Green SP and the area of the light emitting area EA of the third color subpixel Blue SP. The area of the light emitting area EA of the second color subpixel Green SP and the area of the light emitting area EA of the third color subpixel Blue SP may be the same or similar within a predetermined range. Based on the desired transmittance and the area of each light emitting area EA, the area of each transmissive area TA may be determined. As the ratio of the area of the transmissive areas TA to the area of the emission areas EA increases, the transmittance of the display panel 110 may increase. As the ratio of the area of the transmissive areas TA to the area of the emission areas EA decreases, the transmittance of the display panel 110 may decrease. The transmittance and transmissive area TA described herein may also be referred to as transparency and transparent area. The transmissive areas TA included in the display panel 110 may all have the same area. Alternatively, the area of some of the transmissive areas TA included in the display panel 110 may be different from the area of the others. The light emitting areas EA and the transmissive areas TA may be arranged in various patterns. Referring to FIG. 2A, the ith light emitting area row EAR(i), the i+1th light emitting area row EAR(i+1), and the i+2th light emitting area row EAR(i+2) may be arranged adjacent to each other in a first direction (e.g., a column direction). Referring to FIG. 2A, in any ith light emitting area row EAR(i), the light emitting areas EA of the first color subpixels Red SP may be arranged in a second direction (e.g., a row direction). Between the light emitting areas EA of the first color subpixels Red SP, the light emitting area EA of the second color subpixel Green SP and the light emitting area EA of the third color subpixel Blue SP may be alternately arranged. In the i+1th light emitting area row EAR(i+1), the light emitting areas EA of the first color subpixels Red SP may be arranged in the second direction (e.g., the row direction). Between the light emitting areas EA of the first color subpixels Red SP, the light emitting area EA of the third color subpixel Blue SP and the light emitting area EA of the second color subpixel Green SP may be alternately arranged. In the i+2th light emitting area row EAR(i+2), the light emitting areas EA of the first color subpixels Red SP may be arranged in the second direction (e.g., the row direction).
Between the light emitting areas EA of the first color subpixels Red SP, the light emitting area EA of the second color subpixel Green SP and the light emitting area EA of the third color subpixel Blue SP may be alternately arranged. Transmissive areas TA may be arranged between light emitting area rows EAR(i), EAR(i+1), and EAR(i+2) adjacent to each other in the first direction (e.g., the column direction). For example, the transmissive areas TA may be arranged between the ith light emitting area row EAR(i) and the i+1th light emitting area row EAR(i+1). The transmissive areas TA may be arranged between the i+1th light emitting area row EAR(i+1) and the i+2th light emitting area row EAR(i+2). Referring to FIG. 2A, the transmissive area TA disposed between the light emitting areas EA of the first color subpixels Red SP disposed adjacent to each other in the first direction (e.g., the column direction) may extend up to the area between the light emitting area EA of the second color subpixel Green SP and the light emitting area EA of the third color subpixel Blue SP disposed adjacent to each other in the second direction (e.g., the row direction). Referring to FIG. 2A, data lines DL may be disposed to overlap at least a part of the light emitting areas EA of the second color subpixels Green SP and at least a part of the light emitting areas EA of the third color subpixels Blue SP. Each of the data lines DL may be disposed to extend in the first direction (e.g., the column direction). Each of the data lines DL may be disposed while avoiding the transmissive areas TA. Each of the data lines DL may pass between two transmissive areas TA adjacent to each other in the second direction. Each of the gate lines GL may be disposed to extend in the second direction (e.g., the row direction). The gate lines GL may be disposed to overlap at least a part of the ith light emitting area row EAR(i), the i+1th light emitting area row EAR(i+1), and the i+2th light emitting area row EAR(i+2). Each of the gate lines GL may pass between two transmissive areas TA adjacent to each other in the first direction. Referring to the example of FIG. 2B, in each light emitting area row EAR(i) and EAR(i+1), the light emitting area EA of the first color subpixel Red SP, the light emitting area EA of the second color subpixel Green SP, and the light emitting area EA of the third color subpixel Blue SP may be arranged adjacent to each other in the second direction (e.g., the row direction). Referring to the example of FIG. 2B, the transmissive areas TA may be disposed between the light emitting area rows EAR(i) and EAR(i+1) adjacent to each other. Referring to the example of FIG. 2B, each of the data lines DL may be disposed to extend in the first direction (e.g., the column direction). Each of the data lines DL may be disposed while avoiding the transmissive areas TA. Each of the gate lines GL may be disposed to extend in the second direction (e.g., the row direction). The gate lines GL may be disposed to overlap at least a part of each light emitting area row EAR(i) and EAR(i+1). Referring to FIG. 2C, in each light emitting area column EAR(j) and EAR(j+1), the light emitting area EA of the first color subpixel Red SP, the light emitting area EA of the second color subpixel Green SP, the light emitting area EA of the third color subpixel Blue SP, and the light emitting area EA of the fourth color subpixel White SP may be arranged adjacent to each other in the first direction (e.g., the column direction).
Referring to the example ofFIG.2C, the transmissive areas TA may be disposed between the two light emitting area columns EAR(j) and EAR(j+1) adjacent to each other in the second direction (e.g., the row direction). Referring to the example ofFIG.2C, each of the data lines DL may be disposed to extend in the first direction (e.g., the column direction). Each of the data lines DL may be disposed while avoiding the transmissive areas TA. Each of the gate lines GL may be disposed to extend in the second direction (e.g., the row direction). The gate lines GL may be disposed between the light emitting area rows EA Row. The gate lines GL may be disposed between the transmissive areas TA. FIG.3is a view schematically illustrating a structure of a display panel of a transparent touch display device according to embodiments of the disclosure. Referring toFIG.3, the display panel110of the transparent touch display device100may have a built-in touch panel TSP. In other words, in the transparent touch display device100, the touch panel TSP may be of a built-in type embedded in the display panel110. The built-in touch panel TSP is also referred to as an in-cell type or on-cell type touch panel TSP. Each subpixel SP in the display area DA of the display panel110may include a light emitting element ED, a driving transistor DRT for driving the light emitting element ED, a scan transistor SCT for transferring a data voltage VDATA to a first node N1of the driving transistor DRT, and a storage capacitor Cst for maintaining a constant voltage during one frame. The driving transistor DRT may include the first node N1to which the data voltage may be applied, a second node N2electrically connected with the light emitting element ED, and a third node N3to which a driving voltage VDD is applied from a driving voltage line DVL. The first node N1may be a gate node, the second node N2may be a source node or a drain node, and the third node N3may be the drain node or the source node. The light emitting element ED may include a pixel electrode PE (or a pixel electrode layer PE), a light emitting layer EL, and a common electrode CE (or a common electrode layer CE). The pixel electrode PE may be disposed in each subpixel SP and may be electrically connected to the second node N2of the driving transistor DRT of each subpixel SP. The common electrode CE may be jointly disposed in a plurality of subpixels SP, and a base voltage VSS may be applied to the common electrode CE. For example, the light emitting element ED may be an organic light emitting diode (OLED), an inorganic light emitting diode, or a quantum dot light emitting element. In this case, when the light emitting element ED is an organic light emitting diode, the light emitting layer EL of the light emitting element ED may include an organic light emitting layer including an organic material. The scan transistor SCT may be on/off controlled by a scan signal SCAN, which is a gate signal, applied via the gate line GL and be electrically connected between the first node N1of the driving transistor DRT and the data line DL. The storage capacitor Cst may be electrically connected between the first node N1and second node N2of the driving transistor DRT. Each subpixel SP may have a 2T (transistor) 1C (capacitor) structure which includes two transistors DRT and SCT and one capacitor Cst as shown inFIG.3and, in some cases, each subpixel SP may further include one or more transistors or one or more capacitors. 
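As a rough illustration of the 2T1C operation described above, the sketch below models the driving transistor DRT with a textbook square-law saturation equation; the transconductance factor and threshold voltage are invented parameters, not values from the disclosure.

```python
# Minimal sketch, under textbook assumptions, of how a 2T1C subpixel could
# turn a stored data voltage into light: Cst holds the gate-source voltage
# of the driving transistor DRT for one frame, and DRT in saturation sets
# the current through the light emitting element ED. K and VTH are invented.

K_UA_PER_V2 = 2.0   # assumed transconductance factor (uA/V^2)
VTH = 1.0           # assumed threshold voltage (V)

def drt_current_ua(v_gs: float) -> float:
    """Square-law saturation current of the driving transistor DRT."""
    overdrive = max(0.0, v_gs - VTH)
    return 0.5 * K_UA_PER_V2 * overdrive ** 2

if __name__ == "__main__":
    # The scan transistor SCT samples VDATA onto node N1; Cst keeps the
    # voltage steady, so the emission current holds for the frame.
    for vdata in (1.0, 2.0, 3.0, 4.0):
        print(f"VDATA = {vdata:.1f} V -> ED current = {drt_current_ua(vdata):.2f} uA")
```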
The capacitor Cst may be an external capacitor intentionally designed to be outside the driving transistor DRT, as opposed to a parasitic capacitor (e.g., Cgs or Cgd), which is an internal capacitor that may be present between the first node N1 and the second node N2 of the driving transistor DRT. Each of the driving transistor DRT and the scan transistor SCT may be an n-type transistor or a p-type transistor. Since the circuit elements (particularly, the light emitting element ED) in each subpixel SP are vulnerable to external moisture or oxygen, an encapsulation layer ENCAP may be disposed on the display panel 110 to prevent penetration of external moisture or oxygen into the circuit elements (particularly, the light emitting element ED). Meanwhile, in the transparent touch display device 100, the touch panel TSP may be formed on the encapsulation layer ENCAP. In other words, in the transparent touch display device 100, the touch sensor included in the touch panel TSP may be disposed on the encapsulation layer ENCAP. The touch sensor may include a plurality of touch electrodes TE. Upon touch sensing, a touch driving signal or a touch sensing signal may be applied to at least one of the plurality of touch electrodes TE included in the touch sensor. Accordingly, upon touch sensing, a potential difference may be formed between the touch electrodes TE and the common electrode CE disposed with the encapsulation layer ENCAP interposed therebetween, causing unnecessary parasitic capacitance. The parasitic capacitance may degrade touch sensitivity. In one embodiment, the distance between the touch electrode TE and the common electrode CE may be designed to be a predetermined value (e.g., 5 μm) or more considering, e.g., panel thickness, panel manufacturing process, and display performance, so as to reduce the parasitic capacitance. To that end, the thickness of the encapsulation layer ENCAP may be at least 5 μm, as an example. FIG. 4 is a view illustrating an example of a self-capacitance-type touch sensor structure of a transparent touch display device 100 according to embodiments of the disclosure. Referring to FIG. 4, the transparent touch display device 100 according to embodiments of the disclosure may include a touch sensor in the touch sensing area TSA of the display panel 110. The touch sensor may include a plurality of touch electrodes TE, X-TE, and Y-TE. Referring to FIG. 4, the transparent touch display device 100 according to embodiments of the disclosure may include a self-capacitance type touch sensor to sense a touch based on self-capacitance. Referring to FIG. 4, the self-capacitance type touch sensor may include a plurality of touch electrodes TE separated from each other and disposed in the touch sensing area TSA. Referring to FIG. 4, the self-capacitance type touch sensor may further include a plurality of touch routing lines TL for electrically connecting each of the plurality of touch electrodes TE to the touch driving circuit 160. Referring to FIG. 4, in the self-capacitance type touch sensor, the plurality of touch electrodes TE do not electrically cross each other. In the self-capacitance type touch sensor, each of the plurality of touch electrodes TE may be one touch node corresponding to touch coordinates. Referring to FIG. 4, when sensing a touch based on self-capacitance, the touch driving circuit 160 may supply a touch driving signal to at least one of the plurality of touch electrodes TE and may sense the touch electrode TE to which the touch driving signal is supplied.
The sensing value for the touch electrode TE to which the touch driving signal is supplied is a value corresponding to capacitance or a change in capacitance in the touch electrode TE to which the touch driving signal is supplied. The capacitance in the touch electrode TE to which the touch driving signal is supplied may be a capacitance between the touch electrode TE to which the touch driving signal is supplied and a touch object, such as a finger. FIG.5Ais a view illustrating an example of a mutual-capacitance-type touch sensor structure of a transparent touch display device100according to embodiments of the disclosure. Referring toFIG.5A, the mutual-capacitance type touch sensor may include a plurality of first touch electrode lines and a plurality of second touch electrode lines. Referring toFIG.5A, each of the plurality of first touch electrode lines may include a plurality of touch electrodes X-TE disposed in the same row and electrically connected to each other. Each of the plurality of second touch electrode lines may be a touch electrode Y-TE disposed in one column. Alternatively, each of the plurality of first touch electrode lines may be one touch electrode X-TE disposed in one row. Alternatively, each of the plurality of second touch electrode lines may include a plurality of touch electrodes Y-TE disposed in the same column and electrically connected to each other. The touch electrodes X-TE included in each of the plurality of first touch electrode lines and electrically connected may be electrically connected within the display panel110. In a different scheme, the touch electrodes X-TE included in each of the plurality of first touch electrode lines and electrically connected may be electrically separated in the display panel110and may be electrically connected inside the touch driving circuit160. The plurality of first touch electrode lines may be disposed in different rows and may be electrically separated from each other. The plurality of second touch electrode lines may be disposed in different columns and may be electrically separated from each other. The plurality of first touch electrode lines and the plurality of second touch electrode lines may cross and overlap each other. Accordingly, the plurality of first touch electrode lines and the plurality of second touch electrode lines may correspond to each other to form capacitances (mutual-capacitances). Referring toFIG.5A, the mutual-capacitance type touch sensor may include a plurality of touch routing lines TL, X-TL, and Y-TL for electrically connecting each of the plurality of touch electrodes TE, X-TE, and Y-TE to the touch driving circuit160. Referring toFIG.5A, in the mutual-capacitance type touch sensor, points where a plurality of first touch electrode lines and a plurality of second touch electrode lines overlap may be touch nodes corresponding to touch coordinates. Referring toFIG.5A, upon sensing a touch based on mutual-capacitance, the touch driving circuit160may supply a touch driving signal to at least one of the plurality of first touch electrode lines and sense each of the plurality of second touch electrode lines. In this case, the plurality of first touch electrode lines may be driving touch electrode lines (also referred to as transmitting touch electrode lines), and the plurality of second touch electrode lines may be sensing touch electrode lines (also referred to as receiving touch electrode lines). 
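In either scheme, touch detection ultimately reduces to comparing per-node sensing values against an untouched baseline. The hedged sketch below contrasts the two schemes; the threshold, baseline values, and array shapes are invented for illustration and do not come from the disclosure.

```python
# Hedged sketch contrasting self-capacitance and mutual-capacitance
# detection; all numeric values are invented illustrative assumptions.

THRESHOLD = 0.5  # assumed minimum capacitance change (arbitrary units)

def self_cap_touches(baseline, measured):
    """Self-capacitance: each touch electrode is its own touch node; a touch
    object changes the capacitance between that electrode and the object."""
    return [i for i, (b, m) in enumerate(zip(baseline, measured))
            if abs(m - b) > THRESHOLD]

def mutual_cap_touches(baseline, measured):
    """Mutual-capacitance: a touch node is a (driving, sensing) electrode
    line crossing; a touch object reduces the capacitance at the crossing."""
    return [(tx, rx)
            for tx, (brow, mrow) in enumerate(zip(baseline, measured))
            for rx, (b, m) in enumerate(zip(brow, mrow))
            if (b - m) > THRESHOLD]

if __name__ == "__main__":
    print(self_cap_touches([10.0, 10.0, 10.0], [10.1, 11.2, 10.0]))  # [1]
    print(mutual_cap_touches([[3.0, 3.0], [3.0, 3.0]],
                             [[3.0, 2.2], [3.0, 3.0]]))              # [(0, 1)]
```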
The sensing value sensed by the touch driving circuit160through each of the second touch electrode lines is a value corresponding to a capacitance or a change in capacitance between the first and second touch electrode lines. Alternatively, upon sensing a touch based on mutual-capacitance, the touch driving circuit160may supply a touch driving signal to at least one of the plurality of second touch electrode lines and sense each of the plurality of first touch electrode lines. In this case, the plurality of second touch electrode lines may be driving touch electrode lines (also referred to as transmitting touch electrode lines), and the plurality of first touch electrode lines may be sensing touch electrode lines (also referred to as receiving touch electrode lines). The sensing value sensed by the touch driving circuit160through each of the first touch electrode lines is a value corresponding to a capacitance (mutual-capacitance) or a change in capacitance between the first and second touch electrode lines. FIG.5Bis a plan view illustrating a display panel110of a transparent touch display device100according to embodiments of the disclosure. Referring toFIG.5B, when the transparent touch display device100performs touch sensing based on mutual capacitance, the touch sensor structure of the transparent touch display device100may include a plurality of first touch electrode lines X-TEL and a plurality of second touch electrode lines Y-TEL. Here, the plurality of first touch electrode lines X-TEL and the plurality of second touch electrode lines Y-TEL may be positioned on the encapsulation layer ENCAP. The plurality of first touch electrode lines X-TEL and the plurality of second touch electrode lines Y-TEL may cross each other. Each of the plurality of second touch electrode lines Y-TEL may be disposed in the first direction (e.g., the column direction). Each of the plurality of first touch electrode lines X-TEL may be disposed in the second direction (e.g., the row direction) different from the first direction. In the disclosure, the first direction and the second direction may be relatively different from each other. For example, the first direction may be a y-axis direction (column direction), and the second direction may be an x-axis direction (row direction). Conversely, the first direction may be the x-axis direction (row direction) and the second direction may be the y-axis direction (column direction). The first direction and the second direction may be, or may not be, perpendicular to each other. In the disclosure, row and column are relative terms, and from a point of view, the terms “row” and “column” may be interchangeably used. The first direction may be a direction parallel to a direction in which the data line DL is disposed, and the second direction may be a direction parallel to a direction in which the gate line GL is disposed. According to the example of the structure of the touch sensor ofFIG.5B, each of the plurality of first touch electrode lines X-TEL may include a plurality of touch electrodes X-TE electrically connected to each other, and each of the plurality of second touch electrode lines Y-TEL may include a plurality of touch electrodes Y-TE electrically connected to each other. The plurality of first touch electrode lines X-TEL and the plurality of second touch electrode lines Y-TEL, respectively, may play different roles. 
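Building on those per-node sensing values, a weighted centroid is one plausible way for a touch controller such as the touch controller 170 to compute touch coordinates; the node pitch and delta values below are invented for illustration.

```python
# Hedged sketch: turning mutual-capacitance sensing data into a touch
# position by a weighted centroid over the grid of touch nodes (electrode
# line crossings). The node pitch and the delta values are invented.

NODE_PITCH_MM = 4.0  # assumed spacing between adjacent electrode lines

def touch_centroid(deltas):
    """deltas[tx][rx]: capacitance drop where driving line tx crosses
    sensing line rx. Returns (x_mm, y_mm), or None when nothing is touched."""
    total = sum(sum(row) for row in deltas)
    if total <= 0.0:
        return None
    y = sum(tx * sum(row) for tx, row in enumerate(deltas)) / total
    x = sum(rx * d for row in deltas for rx, d in enumerate(row)) / total
    return (x * NODE_PITCH_MM, y * NODE_PITCH_MM)

if __name__ == "__main__":
    grid = [[0.0, 0.0, 0.0],
            [0.0, 0.6, 0.2],   # finger mostly over node (1, 1), leaning right
            [0.0, 0.2, 0.0]]
    print(touch_centroid(grid))  # roughly (4.8, 4.8) in mm
```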
The plurality of first touch electrode lines X-TEL may be driving touch electrode lines driven by allowing a touch driving signal to be applied thereto by the touch driving circuit160, and the plurality of second touch electrode lines Y-TEL may be sensing touch electrode lines sensed by the touch driving circuit160. In this case, the plurality of touch electrodes X-TE constituting each of the plurality of first touch electrode lines X-TEL may be driving touch electrodes, and the plurality of touch electrodes Y-TE constituting each of the plurality of second touch electrode lines Y-TEL may be sensing touch electrodes. Conversely, the plurality of first touch electrode lines X-TEL may be sensing touch electrode lines sensed by the touch driving circuit160, and the plurality of second touch electrode lines Y-TEL may be driving touch electrode lines driven by allowing a touch driving signal to be applied thereto by the touch driving circuit160. In this case, the plurality of touch electrodes X-TE constituting each of the plurality of first touch electrode lines X-TEL may be sensing touch electrodes, and the plurality of touch electrodes Y-TE constituting each of the plurality of second touch electrode lines Y-TEL may be driving touch electrodes. The touch sensor may include a plurality of touch routing lines X-TL and Y-TL in addition to the plurality of first touch electrode lines X-TEL and the plurality of second touch electrode lines Y-TEL. The plurality of touch routing lines X-TL and Y-TL may include one or more first touch routing lines X-TL connected to each of the plurality of first touch electrode lines X-TEL, and one or more second touch routing lines Y-TL connected to each of the plurality of second touch electrode lines Y-TEL. Referring toFIG.5B, each of the plurality of first touch electrode lines X-TEL may include a plurality of touch electrodes X-TE disposed in the same row (or column) and electrically connected and first touch bridge electrodes X-BE electrically connecting touch electrodes X-TE adjacent to each other in the second direction. As shown inFIG.5B, the first touch bridge electrode X-BE connecting the two adjacent touch electrodes X-TE may be a metal integrated with the two adjacent touch electrodes X-TE. Alternatively, the first touch bridge electrode X-BE connecting the two adjacent touch electrodes X-TE may be positioned on a different layer from the two adjacent touch electrodes X-TE and may be electrically connected to the two adjacent touch electrodes X-TE through a contact hole. Referring toFIG.5B, each of the plurality of second touch electrode lines Y-TEL may include a plurality of touch electrodes Y-TE disposed in the same column (or row) and electrically connected and second touch bridge electrodes Y-BE electrically connecting two touch electrodes Y-TE adjacent to each other in the first direction. As shown inFIG.5B, the second touch bridge electrode Y-BE connecting the two adjacent touch electrodes Y-TE may be positioned on a different layer from the two adjacent touch electrodes Y-TE and may be electrically connected to the two adjacent touch electrodes Y-TE through a contact hole. Alternatively, the second touch bridge electrode Y-BE connecting the two adjacent touch electrodes Y-TE may be a metal integrated with the two adjacent touch electrodes Y-TE. 
In an area where the first touch electrode line X-TEL and the second touch electrode line Y-TEL cross each other (touch electrode line crossing area), the first touch bridge electrode X-BE and the second touch bridge electrode Y-BE may cross each other. In the touch electrode line crossing area, when the first touch bridge electrode X-BE and the second touch bridge electrode Y-BE cross each other, the first touch bridge electrode X-BE and the second touch bridge electrode Y-BE may be positioned on different layers. Accordingly, to arrange the plurality of first touch electrode lines X-TEL and the plurality of second touch electrode lines Y-TEL to cross each other, the plurality of touch electrodes X-TE, the plurality of first touch bridge electrodes X-BE, the plurality of touch electrodes Y-TE, and the plurality of second touch bridge electrodes Y-BE may be positioned on two or more layers. Referring to FIG. 5B, each of the plurality of first touch electrode lines X-TEL may be electrically connected to a corresponding first touch pad X-TP in the touch pad unit TP (or touch pad circuit TP) through one or more first touch routing lines X-TL. Each of the plurality of second touch electrode lines Y-TEL may be electrically connected to a corresponding second touch pad Y-TP in the touch pad unit TP through one or more second touch routing lines Y-TL. The touch sensor may include a plurality of touch electrodes X-TE constituting each of the plurality of first touch electrode lines X-TEL and a plurality of touch electrodes Y-TE constituting each of the plurality of second touch electrode lines Y-TEL, may further include a plurality of first touch bridge electrodes X-BE and a plurality of second touch bridge electrodes Y-BE, and may further include a plurality of first touch routing lines X-TL and a plurality of second touch routing lines Y-TL. Some of the components constituting the touch sensor may include a touch sensor metal TSM, and others thereof may include a touch bridge metal. The touch sensor metal TSM and the touch bridge metal may be metals positioned on different layers. For example, the plurality of touch electrodes X-TE constituting each of the plurality of first touch electrode lines X-TEL and the plurality of touch electrodes Y-TE constituting each of the plurality of second touch electrode lines Y-TEL may include the touch sensor metal TSM. For example, either the plurality of first touch bridge electrodes X-BE or the plurality of second touch bridge electrodes Y-BE (e.g., the first touch bridge electrodes X-BE) may include the touch sensor metal TSM, and the others (e.g., the second touch bridge electrodes Y-BE) may include the touch bridge metal positioned on a different layer from the touch sensor metal TSM. For example, both the plurality of first touch routing lines X-TL and the plurality of second touch routing lines Y-TL may include the touch sensor metal TSM. Alternatively, both the plurality of first touch routing lines X-TL and the plurality of second touch routing lines Y-TL may include the touch bridge metal. Alternatively, either the plurality of first touch routing lines X-TL or the plurality of second touch routing lines Y-TL may include the touch sensor metal TSM, and the others may include the touch bridge metal.
As shown inFIG.5B, the plurality of touch electrodes X-TE and the plurality of first touch bridge electrodes X-BE constituting the plurality of first touch electrode lines X-TEL may be disposed on the encapsulation layer ENCAP positioned on the common electrode CE. The plurality of touch electrodes Y-TE and the plurality of second touch bridge electrodes Y-BE constituting the plurality of second touch electrode lines Y-TEL may be disposed on the encapsulation layer ENCAP. As shown inFIG.5B, each of the plurality of first touch routing lines X-TL electrically connected to the plurality of first touch electrode lines X-TEL may be disposed on the encapsulation layer ENCAP while extending up to where the encapsulation layer ENCAP is not present and may be electrically connected to the plurality of first touch pads X-TP. Each of the plurality of second touch routing lines Y-TL electrically connected to the plurality of second touch electrode lines Y-TEL may be disposed on the encapsulation layer ENCAP while extending up to where the encapsulation layer ENCAP is not present and be electrically connected to the plurality of second touch pads Y-TP. The encapsulation layer ENCAP may be positioned in the display area DA and, in some cases, may extend up to the non-display area NDA. As described above, to prevent any layer (e.g., the encapsulation layer in the OLED panel) in the display area DA from collapsing, a dam portion DAM may be present in the border area between the display area DA and the non-display area NDA or in the non-display area NDA which is positioned around the display area DA. In other words, the dam portion DAM may be positioned near the outermost end of the encapsulation layer ENCAP. The dam portion DAM may include one or more dams DAM1and DAM2. For example, as shown inFIG.5B, the dam portion DAM may include a primary dam DAM1and a secondary dam DAM2. The secondary dam DAM2may be a dam positioned further outside the primary dam DAM1. Unlike in the example shown inFIG.5B, the dam portion DAM may include only the primary dam DAM1or, in some cases, the dam portion DAM may include one or more additional dams as well as the primary dam DAM1and the secondary dam DAM2. Referring toFIG.5B, the encapsulation layer ENCAP may be positioned on the inner side of the dam portion DAM. Alternatively, the encapsulation layer ENCAP may be positioned on the inner side of the dam portion DAM and may be positioned to extend to the upper portion and/or lower portion of the dam portion DAM. The encapsulation layer ENCAP may be further extended to be positioned on the outer side of the dam portion DAM. In the display panel110of the transparent touch display device100, each of the touch electrodes TE, X-TE, and Y-TE may be a plate-shaped touch sensor metal TSM without an opening. In this case, each touch electrode TE may be a transparent electrode. In other words, each touch electrode TE may be formed of a transparent electrode material to allow the light emitted from the plurality of subpixel SPs disposed thereunder to be transmitted upwards. Alternatively, as shown inFIG.5B, each touch electrode TE disposed in the display panel110may be of a mesh type as in case1. To that end, each touch electrode TE may be formed of a touch sensor metal TSM patterned in a mesh type and having a plurality of openings OA. 
The touch sensor metal TSM of each touch electrode TE is a portion substantially corresponding to the touch electrode TE and may be a portion to which a touch driving signal is applied or a portion where a touch sensing signal is detected. The touch sensor metal TSM corresponding to each touch electrode TE may be positioned on a bank which is disposed in an area other than the light emitting areas EA of the subpixels SP. As shown in FIG. 5B, when each touch electrode TE is a touch sensor metal TSM patterned in a mesh type as in case2, the area where the touch electrode TE is formed may have a plurality of openings OA. Each of the plurality of openings OA present in each touch electrode TE may correspond to the light emitting areas EA of one or more subpixels SP or to one or more transmissive areas TA. In other words, the plurality of openings OA may serve as paths along which the light emitted from the plurality of subpixels SP disposed thereunder passes upward, thereby creating the light emitting areas EA, or may serve as light transmissive areas, thereby creating the transmissive areas TA. For example, the contour of the touch electrode TE may be shaped as a diamond or rhombus or may come in other various shapes, such as a triangle, pentagon, or hexagon. Each of the plurality of openings OA may have various shapes depending on the shape of the touch electrode TE or the mesh shape of the touch sensor metal TSM. Referring to FIG. 5B, as in case2, in the area of each touch electrode TE, one or more dummy metals DM disconnected from the mesh-type touch sensor metal TSM may be present. The dummy metal DM may be positioned to be surrounded by the touch sensor metal TSM, within the area of the touch electrode TE. Unlike the touch sensor metal TSM, the dummy metal DM is a portion to which no touch driving signal is applied and at which no touch sensing signal is detected, and may be a floating metal. Although the touch sensor metal TSM is electrically connected with the touch driving circuit 160, the dummy metal DM is not electrically connected with the touch driving circuit 160. One or more dummy metals DM disconnected from the touch sensor metal TSM may be present in the area of every touch electrode TE. Alternatively, one or more dummy metals DM disconnected from the touch sensor metal TSM may be present only in the areas of some of the touch electrodes TE, and no dummy metal DM may be present in the areas of the other touch electrodes TE. Regarding the role of the dummy metal DM, if dummy metals DM were absent and only the touch sensor metal TSM were present in a mesh type in the area of the touch electrode TE, a visibility issue may arise in which the contour of the touch sensor metal TSM is shown on the screen. In contrast, where one or more dummy metals DM are present in the area of the touch electrode TE as shown in FIG. 5B, the visibility issue in which the contour of the touch sensor metal TSM is shown on the screen may be prevented. The magnitude of capacitance may be adjusted per touch electrode TE by adjusting the presence or absence of the dummy metal DM or the number (dummy metal ratio) of dummy metals DM per touch electrode TE, thereby enhancing touch sensitivity. The touch sensor metal TSM formed in the area of one touch electrode TE may be cut (or etched out) at some spots and may thus be formed into the dummy metal DM.
In other words, the touch sensor metal TSM and the dummy metal DM may be formed of the same material on the same layer. Referring toFIG.5B, in case2, if only the touch sensor metal TSM is shown with the plurality of dummy metals DM omitted from the area of one touch electrode TE, a plurality of dummy areas DMA may be present in the area where the touch sensor metal TSM is disposed. The plurality of dummy areas DMA are areas corresponding to the plurality of dummy metals DM. FIG.6is a cross-sectional view illustrating a display panel110of a transparent touch display device100according to embodiments of the disclosure.FIG.6is a cross-sectional view taken along line X-X′ ofFIG.5B. For convenience of description,FIG.6illustrates four areas included in the X′-X area. The four areas may include a light emitting area EA, a bridge area BA of a touch sensor arrangement area, a transmissive area TA, and a non-bridge area NBA of a touch sensor arrangement area. The bridge area BA of the touch sensor arrangement area may mean an area where the first touch bridge electrode X-BE and the second touch bridge electrode Y-BE cross each other. The non-bridge area NBA of the touch sensor arrangement area may mean an area where the first touch bridge electrode X-BE and the second touch bridge electrode Y-BE do not exist. The area where the touch electrode TE is formed may include a plurality of light emitting areas EA and a plurality of transmissive areas TA. For example, a pixel electrode PE and a light emitting layer EL constituting the light emitting element ED may be disposed in each light emitting area EA. A common electrode CE may be disposed on the light emitting layer EL. A touch sensor may be disposed in the area where the touch electrode TE is formed. For example, the area where the touch electrode TE is formed may include the area NBA where the touch electrodes Y-TE are disposed and the area BA where the touch electrodes Y-TE and the touch bridge electrode Y-BE connecting the touch electrodes Y-TE are formed. The touch electrodes Y-TE may include a touch sensor metal TSM, and the touch bridge electrode Y-BE may include a touch bridge metal positioned on a different layer from the touch sensor metal TSM. In some cases, the touch bridge metal may be positioned on the same layer as the touch sensor metal TSM. The area where the touch electrode TE is formed may include a transmissive area TA. To enhance transmittance, a metal constituting the touch sensor, such as the touch sensor metal TSM and the touch bridge metal, may not be disposed in the transmissive area TA. As is described below, a transparent electrode (or a transparent electrode layer) electrically connected to the touch sensor metal TSM may be additionally disposed in the transmissive area TA to enhance touch sensitivity. Further, to enhance transmittance, an opening of the touch buffer film T-BUF and/or the touch inter-layer insulation film T-ILD may be present in the transmissive area TA. To enhance transmittance, an opening of the common electrode CE may be present in the transmissive area TA. In some cases, a floating metal electrically floated from the common electrode CE may be present in the opening of the common electrode CE. Hereinafter, the cross-sectional structure ofFIG.6is described in more detail. 
In each subpixel SP in the display area DA, the driving transistor DRT is disposed on the substrate SUB and includes a first node N1 corresponding to the gate electrode, a second node N2 corresponding to the source electrode or drain electrode, a third node N3 corresponding to the drain electrode or source electrode, and a semiconductor layer SEMI. The first node N1 and the semiconductor layer SEMI may overlap each other, with a gate insulation film GI disposed therebetween. The second node N2 of the driving transistor DRT may be formed on an insulating layer INS and contact one side of the semiconductor layer SEMI. The third node N3 of the driving transistor DRT may be formed on the insulating layer INS and contact the other side of the semiconductor layer SEMI of the driving transistor DRT. The light emitting element ED may include a pixel electrode PE corresponding to the anode electrode, a light emitting layer EL formed on the pixel electrode PE, and a common electrode CE formed on the light emitting layer EL and corresponding to the cathode electrode. The pixel electrode PE may be electrically connected to the second node N2 of the driving transistor DRT exposed through a contact hole passing through the planarization film PLN. The light emitting layer EL may be formed on the pixel electrode PE of the light emitting area EA provided by the bank BK. The common electrode CE may be formed to face the pixel electrode PE with the light emitting layer EL interposed therebetween. The encapsulation layer ENCAP may block penetration of external moisture or oxygen into the light emitting element ED, which is vulnerable to external moisture or oxygen. The encapsulation layer ENCAP may be a single layer or may include a plurality of layers PAS1, PCL, and PAS2 as shown in FIG. 6. For example, where the encapsulation layer ENCAP is formed of multiple layers PAS1, PCL, and PAS2, the encapsulation layer ENCAP may include one or more inorganic encapsulation layers PAS1 and PAS2 and one or more organic encapsulation layers PCL. As a specific example, the encapsulation layer ENCAP may be configured by sequentially stacking the first inorganic encapsulation layer PAS1, the organic encapsulation layer PCL, and the second inorganic encapsulation layer PAS2. The first inorganic encapsulation layer PAS1 may be formed on the substrate SUB, where the common electrode CE is formed, so as to be closest to the light emitting element ED. For example, the first inorganic encapsulation layer PAS1 may be formed of an inorganic insulation material capable of low-temperature deposition, such as, e.g., silicon nitride (SiNx), silicon oxide (SiOx), silicon oxynitride (SiON), or aluminum oxide (Al2O3). Because the first inorganic encapsulation layer PAS1 can be deposited at a low temperature, it may prevent damage, during deposition, to the light emitting layer EL, which includes an organic material vulnerable to a high-temperature atmosphere. The organic encapsulation layer PCL may be formed in a smaller area than the first inorganic encapsulation layer PAS1. The organic encapsulation layer PCL may be formed to expose two opposite ends of the first inorganic encapsulation layer PAS1. The organic encapsulation layer PCL serves to mitigate stress between the layers due to a warping of the transparent touch display device 100, which may be an OLED device, while reinforcing the planarization performance. The organic encapsulation layer PCL may be formed of, e.g., an acrylic resin, epoxy resin, polyimide, polyethylene, silicon oxycarbonate (SiOC), or other organic insulation materials.
Referring toFIG.6, the dam portion DAM may include a primary dam DAM1closer to the display area DA and a secondary dam DAM2closer to the touch pad unit TP. For example, the dam portion DAM may be positioned between the display area DA and the touch pad unit TP where a plurality of first touch pads X-TP and a plurality of second touch pads Y-TP are formed in the non-display area NDA. The one or more dams DAM1and DAM2disposed in the dam portion DAM may prevent the liquid-state organic encapsulation layer PCL from collapsing to the non-display area NDA and resultantly penetrating into the pad area, where the touch pad unit TP is formed, when the liquid-phase organic encapsulation layer PCL is dropped to the display area DA. This effect may be further increased when the dam portion DAM includes a plurality of dams DAM1and DAM2as shown inFIG.6. Each of the primary dam DAM1and the secondary dam DAM2included in the dam portion DAM may be formed in a single-layer structure or multi-layer structure. Each of the primary dam DAM1and the secondary dam DAM2may be basically formed in a dam formation pattern DFP. The dam formation pattern DFP may be formed of the same material as the bank BK for separating the subpixels SP from each other, or may be formed of the same material as a spacer for maintaining an inter-layer spacing. The dam formation pattern DFP may be formed simultaneously with the bank BK or the spacer, so that the dam structure may be formed without a process for adding a mask and a cost increase. The primary dam DAM1and/or the secondary dam DAM2may be structured so that the first inorganic encapsulation layer PAS1and/or the second inorganic encapsulation layer PAS2are stacked on the dam formation pattern DFP as shown inFIG.6. The organic encapsulation layer PCL including an organic material may be positioned only on an inside surface of the primary dam DAM1as shown inFIG.6. Unlike this, the organic encapsulation layer PCL containing an organic material may also be positioned over at least the primary dam DAM1of the primary dam DAM1and the secondary dam DAM2. The first and second inorganic encapsulation layers PAS1and PAS2may also be positioned over at least the first dam DAM1of the first and second dams DAM1and DAM2. The first and second inorganic encapsulation layers PAS1and PAS2may also be positioned on an outer side surface of at least the first dam DAM1of the first and second dams DAM1and DAM2. The second inorganic encapsulation layer PAS2may be formed over the substrate SUB, where the organic encapsulation layer PCL is formed, to cover or overlay the upper surface and side surfaces of each of the organic encapsulation layer PCL and the first inorganic encapsulation layer PAS1. The second inorganic encapsulation layer PAS2reduces or blocks penetration of external moisture or oxygen into the first inorganic encapsulation layer PAS1and the organic encapsulation layer PCL. A touch buffer film T-BUF may be disposed on the encapsulation layer ENCAP. The touch buffer film T-BUF may be positioned between the touch sensor metal TSM and the common electrode CE of the light emitting element ED. In one embodiment, the touch buffer film T-BUF may be designed to maintain a predetermined minimum spacing (e.g., 5 μm) between the touch sensor metal TSM and the common electrode CE of the light emitting element ED. 
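For a feel of the numbers, a simple parallel-plate estimate shows how the spacing maintained by the touch buffer film T-BUF suppresses the TSM-to-CE parasitic capacitance. Only the roughly 5 μm spacing figure comes from the description above; the electrode area and relative permittivity below are invented illustrative values.

```python
# Rough parallel-plate estimate, for intuition only; area and permittivity
# are assumed, not values from the disclosure.

EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def parasitic_cap_pf(area_mm2: float, gap_um: float, eps_r: float) -> float:
    """C = eps0 * eps_r * A / d for a touch sensor metal TSM region facing
    the common electrode CE across the buffer/encapsulation stack."""
    return EPS0 * eps_r * (area_mm2 * 1e-6) / (gap_um * 1e-6) * 1e12  # in pF

if __name__ == "__main__":
    # Growing the spacing from 1 um to 5 um cuts the parasitic load ~5x.
    for gap in (1.0, 5.0, 10.0):
        c = parasitic_cap_pf(area_mm2=16.0, gap_um=gap, eps_r=3.5)
        print(f"spacing {gap:>4.1f} um -> about {c:6.1f} pF")
```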
Thus, it is possible to reduce or prevent the parasitic capacitance formed between the touch sensor metal TSM and the common electrode CE of the light emitting element ED and hence prevent deterioration of touch sensitivity due to parasitic capacitance. The touch buffer film T-BUF may block off penetration, into the organic material-containing light emitting layer EL, of external moisture or the chemical (e.g., developer or etchant) used upon manufacturing the touch sensor metal TSM disposed on the touch buffer film T-BUF. The touch buffer film T-BUF is formed of an organic insulation material with a low permittivity of 1 to 3 and formed at a low temperature which is not more than a predetermined temperature (e.g., 100° C.) to prevent damage to the light emitting layer EL containing the organic material vulnerable to high temperature. For example, the touch buffer film T-BUF may be formed of an acrylic-based, epoxy-based, or siloxane-based material. The touch buffer film T-BUF with planarizability, formed of an organic insulation material, may prevent fracture of the touch sensor metal TSM formed on the touch buffer film T-BUF and damage to each encapsulation layer PAS1, PCL, and PAS2in the encapsulation layer ENCAP due to a warping of the OLED device. According to a mutual-capacitance-based touch sensor structure, the first touch electrode line X-TEL and the second touch electrode line Y-TEL may be formed on the touch buffer film T-BUF, and the first touch electrode line X-TEL and the second touch electrode line Y-TEL may be disposed to cross each other. Each of the plurality of second touch electrode lines Y-TEL may include a plurality of second touch bridge electrodes Y-BE electrically connecting the plurality of touch electrodes Y-TE and the plurality of touch electrodes Y-TE. As shown inFIG.6, the plurality of touch electrodes Y-TE and the plurality of second touch bridge electrodes Y-BE may be positioned on different layers, with a touch inter-layer insulation film ILD disposed therebetween. The plurality of touch electrodes Y-TE may be disposed adjacent to each other in the first direction (y-axis direction) and may be spaced apart from each other at a predetermined interval. Each of the plurality of touch electrodes Y-TE may be electrically connected to another touch electrode Y-TE adjacent in the first direction (y-axis direction) through the second touch bridge electrode Y-BE. The second touch bridge electrode Y-BE may be formed on the touch buffer film T-BUF and be exposed via the touch contact hole passing through the touch inter-layer insulation film ILD and be electrically connected with two touch electrodes Y-TE adjacent in the first direction (y axis direction). All or at least a part of the second touch bridge electrode Y-BE may be disposed to overlap the bank BK. Thus, it is possible to prevent a reduction in the aperture ratio due to the second touch bridge electrode Y-BE. Each of the plurality of first touch electrode lines X-TEL may include a plurality of touch electrodes X-TE and a plurality of first touch bridge electrodes X-BE electrically connecting the plurality of touch electrodes X-TE. The first touch bridge electrode X-BE may be disposed on the same plane as the touch electrode X-TE and be electrically connected with two touch electrodes X-TE adjacent thereto in the second direction (x axis direction) without a separate contact hole or be integrated with the two touch electrodes X-TE adjacent thereto each other in the second direction (x axis direction). 
All or at least a part of the first touch bridge electrode X-BE may be disposed to overlap the bank BK. Thus, it is possible to prevent a reduction in the aperture ratio due to the first touch bridge electrode X-BE. The second touch electrode line Y-TEL may be electrically connected with the touch driving circuit 160 via the second touch routing line Y-TL and the second touch pad Y-TP. Likewise, the first touch electrode line X-TEL may be electrically connected with the touch driving circuit 160 via the first touch routing line X-TL and the first touch pad X-TP. Referring to the example of FIG. 6, the encapsulation layer ENCAP may include an outer inclined surface SLP. The transparent touch display device 100 may further include the dam portion DAM positioned in the area where the outer inclined surface SLP of the encapsulation layer ENCAP ends, the touch pad unit TP positioned in the non-display area NDA and positioned further outside the dam portion DAM, and the second touch routing line Y-TL electrically connecting a touch electrode Y-TE among the plurality of touch electrodes TE and the second touch pad Y-TP of the touch pad unit TP. The second touch routing line Y-TL may descend along the outer inclined surface SLP of the encapsulation layer ENCAP, pass over the upper portion of the dam DAM, and be electrically connected with the second touch pad Y-TP of the touch pad unit TP. The second touch routing line Y-TL may include one or more of a touch sensor metal TSM and a touch bridge metal. For example, the second touch routing line Y-TL may be disposed on a single layer including the touch sensor metal TSM, may be disposed on a single layer including the touch bridge metal, or may be disposed on two layers including both the touch sensor metal TSM and the touch bridge metal. The second touch routing line Y-TL may be electrically connected with the touch electrode Y-TE via a contact hole or may be integrated with the touch electrode Y-TE. The second touch routing line Y-TL may extend up to the non-display area NDA and be electrically connected with the second touch pad Y-TP via the top and side of the encapsulation layer ENCAP and the top and side of the dam DAM. Thus, the second touch routing line Y-TL may be electrically connected with the touch driving circuit 160 via the second touch pad Y-TP. The second touch routing line Y-TL may transfer the touch sensing signal from the touch electrode Y-TE to the touch driving circuit 160 or may receive the touch driving signal from the touch driving circuit 160 and transfer the touch driving signal to the touch electrode Y-TE. The first touch routing line X-TL may be electrically connected with the touch electrode X-TE via a contact hole or may be integrated with the touch electrode X-TE. The first touch routing line X-TL may extend up to the non-display area NDA and be electrically connected with the first touch pad X-TP via the top and side of the encapsulation layer ENCAP and the top and side of the dam DAM. Thus, the first touch routing line X-TL may be electrically connected with the touch driving circuit 160 via the first touch pad X-TP. The first touch routing line X-TL may receive the touch driving signal from the touch driving circuit 160 and transfer the touch driving signal to the touch electrode X-TE, and may transfer the touch sensing signal from the touch electrode X-TE to the touch driving circuit 160. Referring to FIG. 6, a protection film PAC may be disposed on the touch electrode X-TE and the touch electrode Y-TE.
The touch protection film PAC may extend up to before or after the dam DAM and may thus be disposed even on the first touch routing line X-TL and the second touch routing line Y-TL. The cross-sectional view of FIG. 6 illustrates a conceptual structure. Depending on the direction or position in which it is viewed, the position, thickness, or width of each pattern (e.g., various layers or electrodes) may vary, the connection structure of various patterns may vary, an additional layer other than the layers shown may be present, or some of the layers may be omitted or combined. For example, the width of the bank BK may be narrower than that shown in the drawings, and the height of the dam portion DAM may be higher or lower than shown. FIGS. 7A to 7D are views illustrating examples of a planar structure of an area in which a first touch electrode TE #1 and a second touch electrode TE #2 are arranged in a transparent touch display device 100 according to embodiments of the disclosure. FIG. 7E is a view illustrating a first touch bridge metal TBM1 and a second touch bridge metal TBM2 in the examples of the planar structures of FIGS. 7A to 7D. FIGS. 8A to 8F are views illustrating examples of the cross-sectional structure taken along line A-A′ of FIG. 7A. FIGS. 9A to 9F are views illustrating examples of the cross-sectional structure taken along line B-B′ of FIG. 7A. FIGS. 10A to 10F are views illustrating examples of the cross-sectional structure taken along line C-C′ of FIG. 7A. FIGS. 11A to 11F are views illustrating examples of the cross-sectional structure taken along line D-D′ of FIG. 7B. FIG. 12 is a view illustrating an example of a cross-sectional structure of a PTS1 area of FIG. 7C. FIG. 13 is a view illustrating an example of a cross-sectional structure of a PTS2 area of FIG. 7C. FIG. 14 is a view illustrating an example of a cross-sectional structure taken along line E-E′ of FIG. 7B. FIG. 15 is a view illustrating a cross-sectional structure taken along line F-F′ of FIG. 7B. Referring to FIGS. 7A to 7D, a display panel 110 of a transparent touch display device 100 according to embodiments of the disclosure may include a substrate SUB defined with a display area DA where an image is displayed and a non-display area NDA where an image is not displayed. Referring to FIGS. 7A to 7D, a plurality of subpixels SP may be disposed in the display area DA. The display area DA may include a plurality of light emitting areas EA and a plurality of transmissive areas TA. According to the example shown in FIGS. 7A to 7D, the plurality of subpixels SP disposed in the area where the first touch electrode TE #1 and the second touch electrode TE #2 are formed in the display area DA may include a first subpixel SP1, a second subpixel SP2, a third subpixel SP3, a fourth subpixel SP4, a fifth subpixel SP5, a sixth subpixel SP6, a seventh subpixel SP7, and an eighth subpixel SP8. The first subpixel SP1, the third subpixel SP3, the fifth subpixel SP5, the seventh subpixel SP7, and the eighth subpixel SP8 may be disposed in the area where the first touch electrode TE #1 is formed. In other words, the first subpixel SP1, the third subpixel SP3, the fifth subpixel SP5, the seventh subpixel SP7, and the eighth subpixel SP8 may overlap the area of the first touch electrode TE #1. The second subpixel SP2, the fourth subpixel SP4, and the sixth subpixel SP6 may be disposed in the area where the second touch electrode TE #2 is formed.
In other words, the second subpixel SP2, the fourth subpixel SP4, and the sixth subpixel SP6may overlap the area of the second touch electrode TE #2. A pixel electrode PE and a light emitting layer EL may be disposed in the area of each of the first to eighth subpixels SP1to SP8, and a common electrode CE may be disposed in the entire area of the first to eighth subpixels SP1to SP8. Referring toFIGS.7A to7D, the plurality of light emitting areas EA may include a first light emitting area EA1of the first subpixel SP1, a second light emitting area EA2of the second subpixel SP2, a third light emitting area EA3of the third subpixel SP3, a fourth light emitting area EA4of the fourth subpixel SP4, a fifth light emitting area EA5of the fifth subpixel SP5, a sixth light emitting area EA6of the sixth subpixel SP6, a seventh light emitting area EA7of the seventh subpixel SP7, and an eighth light emitting area EA8of the eighth subpixel SP8. The first light emitting area EA1of the first subpixel SP1, the third light emitting area EA3of the third subpixel SP3, the fifth light emitting area EA5of the fifth subpixel SP5, the seventh light emitting area EA7of the seventh subpixel SP7, and the eighth light emitting area EA8of the eighth subpixel SP8may be disposed in the area where the first touch electrode TE #1is formed. The second light emitting area EA2of the second subpixel SP2, the fourth light emitting area EA4of the fourth subpixel SP4, and the sixth light emitting area EA6of the sixth subpixel SP6may be disposed in the area where the second touch electrode TE #2is formed. Referring toFIGS.7A to7D, the touch sensor included in the transparent touch display device100according to embodiments of the disclosure may include touch sensor metals TSM1and TSM2disposed while avoiding the plurality of light emitting areas EA and the plurality of transmissive areas TA and transparent electrodes TAE1and TAE2disposed in the transmissive areas TA and electrically connected with the touch sensor metals TSM1and TSM2. Referring toFIGS.7A to7D, a plurality of subpixels SP and a plurality of light emitting areas EA may be disposed in each of an ith light emitting area row EAR(i), an i+1th light emitting area row EAR(i+1), an i+2th light emitting area row EAR(i+2), and an i+3th light emitting area row EAR(i+3). Referring toFIGS.7A to7D, the transmissive area TA may be disposed between the ith light emitting area row EAR(i) and the i+1th light emitting area row EAR(i+1), the transmissive area TA may be disposed between the i+1th light emitting area row EAR(i+1) and the i+2th light emitting area row EAR(i+2), and the transmissive area TA may be disposed between the i+2th light emitting area row EAR(i+2) and the i+3th light emitting area row EAR(i+3). The arrangement structure of the touch sensor metals TSM1and TSM2and the touch bridge metals TBM1and TBM2inFIG.7Ais equal to the arrangement structure of the touch sensor metals TSM1and TSM2and the touch bridge metals TBM1and TBM2inFIG.7B. The connection positions and connection structures of the touch sensor metals TSM1and TSM2and the touch bridge metals TBM1and TBM2inFIG.7Aare equal to the connection positions and connection structures of the touch sensor metals TSM1and TSM2and the touch bridge metals TBM1and TBM2inFIG.7B.FIGS.7A and7Bdiffer only in the transparent electrode arrangement structure in the border area between the first touch electrode TE #1and the second touch electrode TE #2. 
The arrangement structure of the touch sensor metals TSM1and TSM2and the touch bridge metals TBM1and TBM2inFIG.7Cis equal to the arrangement structure of the touch sensor metals TSM1and TSM2and the touch bridge metals TBM1and TBM2inFIG.7D. The connection positions and connection structures of the touch sensor metals TSM1and TSM2and the touch bridge metals TBM1and TBM2inFIG.7Care equal to the connection positions and connection structures of the touch sensor metals TSM1and TSM2and the touch bridge metals TBM1and TBM2inFIG.7D.FIGS.7C and7Ddiffer only in the transparent electrode arrangement structure in the border area between the first touch electrode TE #1and the second touch electrode TE #2. The arrangement structures of the touch sensor metals TSM1and TSM2and the touch bridge metals TBM1and TBM2inFIGS.7A and7Bare different from the arrangement structures of the touch sensor metals TSM1and TSM2and the touch bridge metals TBM1and TBM2inFIGS.7C and7D. The connection positions and connection structures of the touch sensor metals TSM1and TSM2and the touch bridge metals TBM1and TBM2inFIGS.7A and7Bare different from the connection positions and connection structures of the touch sensor metals TSM1and TSM2and the touch bridge metals TBM1and TBM2inFIGS.7C and7D. Referring toFIGS.7A to7D, transparent electrodes TAE1and TAE2may be disposed in each of the transmissive areas TA. For example, the first transparent electrode TAE1may be disposed in the transmissive areas TA included in the area where the first touch electrode TE #1is formed, and the second transparent electrode TAE2may be disposed in the transmissive areas TA included in the area where the second touch electrode TE #2is formed. Referring toFIGS.7A to7D, at least one of the transparent electrodes TAE1and TAE2disposed in the transmissive areas TA may extend below the adjacent touch sensor metals TSM1and TSM2. Accordingly, the touch sensor metals TSM1and TSM2overlap and connect to the transparent electrodes TAE1and TAE2, so that the touch electrodes TE #1and TE #2may have a double metal structure. Referring toFIGS.8A to8F,9A to9F,10A to10F, and11A to11F, the common electrode CE may be disposed not only in the light emitting areas EA but also in the transmissive areas TA. Referring toFIGS.8A,8D,9A,9D,10A,10D,11A, and11D, the common electrode CE may be disposed not only in the light emitting areas EA but also in the transmissive areas TA. Referring toFIGS.8B,8E,9B,9E,10B,10E,11B, and11E, the common electrode CE may have openings OPEN_CE in the transmissive areas TA. In other words, the positions of the openings OPEN_CE of the common electrode CE may correspond to the positions of the transmissive areas TA. Referring toFIGS.8C,8F,9C,9F,10C,10F,11C, and11F, a floating metal FM may be disposed in each of the openings OPEN_CE of the common electrode CE. Referring toFIGS.8A to8C,9A to9C,10A to10C, and11A to11C, a touch buffer film T-BUF and a touch inter-layer insulation film T-ILD may be disposed not only in the light emitting areas EA but also in the transmissive areas TA. Referring toFIGS.8D to8F,9D to9F,10D to10F, and11D to11F, in the transmissive areas TA, the touch buffer film T-BUF and the touch inter-layer insulation film T-ILD may have openings. In other words, the positions of the openings of the touch buffer film T-BUF and the touch inter-layer insulation film T-ILD may correspond to the positions of the transmissive areas TA. 
Referring toFIGS.8E,8F,9E,9F,10E,10F,11E, and11F, the positions of the openings of the touch buffer film T-BUF and the touch inter-layer insulation film T-ILD may correspond to the positions of the openings OPEN_CE of the common electrode CE. Referring toFIGS.7A to7E, in the area where the first touch electrode TE #1and the second touch electrode TE #2are disposed, the touch sensor may include touch bridge electrodes as well as the first touch electrode TE #1and the second touch electrode TE #2and, in some cases, may further include touch routing lines TL #1and TL #2. The touch bridge electrodes and/or the touch routing lines TL #1and TL #2may include touch bridge metals TBM1and TBM2. Referring toFIGS.7A to7E, the first touch bridge metal TBM1may be electrically connected to the first touch sensor metal TSM1constituting the first touch electrode TE #1. The touch bridge electrode constituting the first touch electrode TE #1or the first touch routing line TL #1electrically connected to the first touch electrode TE #1may include the first touch bridge metal TBM1. Referring toFIGS.7A to7E, the second touch bridge metal TBM2may be electrically connected to the second touch sensor metal TSM2constituting the second touch electrode TE #2. The touch bridge electrode constituting the second touch electrode TE #2or the second touch routing line TL #2electrically connected to the second touch electrode TE #2may include the second touch bridge metal TBM2. Referring toFIG.7E, the first touch bridge metal TBM1and the second touch bridge metal TBM2may be disposed while avoiding the first light emitting area EA1, the first transmissive area TA1, and the second light emitting area EA2. The touch bridge metals TBM1and TBM2may be disposed to overlap a portion of the touch sensor metals TSM1and TSM2. The plurality of subpixels SP may include subpixels SP emitting three or four colors of light. For example, referring toFIGS.7A to7E, the plurality of subpixels SP may include subpixels SP emitting a first color of light, subpixels SP emitting a second color of light, and subpixels SP emitting a third color of light. For example, it may be assumed that the first color is red, the second color is green or blue, and the third color is blue or green. Referring toFIGS.7A to7E, the size of the light emitting areas EA of the subpixels SP emitting the first color of light among the three colors is smaller than the size of the light emitting areas EA of the subpixels SP emitting the second color of light and is smaller than the size of the light emitting areas EA of the subpixels SP emitting the third color of light. Referring toFIGS.7A to7E, the size of the light emitting areas EA of the subpixels SP emitting the second color of light may be equal to the size of the light emitting areas EA of the subpixels SP emitting the third color of light. Referring toFIGS.7A to7E, the light emitting areas EA of the subpixels SP emitting the second color of light and the light emitting areas EA of the subpixels SP emitting the third color of light may be alternately arranged. 
Referring to FIGS. 7A to 7E, each of the touch bridge metals TBM1 and TBM2 may be disposed between the light emitting area EA of the subpixel SP emitting the first color of light and the light emitting area EA of the subpixel SP emitting the second color of light, between the light emitting area EA of the subpixel SP emitting the first color of light and the light emitting area EA of the subpixel SP emitting the third color of light, and between adjacent transmissive areas TA. Referring to FIG. 7E, as an example, the first touch bridge metal TBM1 may be disposed between the first light emitting area EA1 and the fifth light emitting area EA5, between the first transmissive area TA1 and the second transmissive area TA2, and between the second light emitting area EA2 and the sixth light emitting area EA6. Referring to FIGS. 7A to 7D and 7E, the first touch bridge metal TBM1 may overlap and electrically connect to the first touch sensor metal TSM1, and the first touch bridge metal TBM1 may overlap and be electrically insulated from the second touch sensor metal TSM2. Referring to FIGS. 8A to 8F, 9A to 9F, 10A to 10F, and 11A to 11F, a bank BK may be disposed adjacent to the light emitting layer EL of each of the subpixels SP, and an encapsulation layer ENCAP may be disposed on the light emitting layer EL of each of the subpixels SP and on the bank BK. Referring to FIGS. 8A to 8F, 9A to 9F, 10A to 10F, and 11A to 11F, the first touch sensor metal TSM1, the second touch sensor metal TSM2, and the transparent electrodes TAE1 and TAE2 may be positioned on the encapsulation layer ENCAP. Referring to FIGS. 8A to 8F, 9A to 9F, 10A to 10F, and 11A to 11F, all or at least a part of each of the first touch sensor metal TSM1 and the second touch sensor metal TSM2 may overlap a bank area BKA where the bank BK is disposed. In some embodiments, the bank area BKA may be between a light emitting area EA and a transmissive area TA. Referring to FIGS. 8A to 8F, 9A to 9F, 10A to 10F, and 11A to 11F, a touch buffer film T-BUF may be disposed on the encapsulation layer ENCAP, and a touch inter-layer insulation film T-ILD may be disposed on the touch buffer film T-BUF. The first and second touch bridge metals TBM1 and TBM2 may be positioned between the touch buffer film T-BUF and the touch inter-layer insulation film T-ILD. All or at least a part of each of the first touch sensor metal TSM1 and the second touch sensor metal TSM2 may overlap the bank BK. All or at least a part of the first and second touch bridge metals TBM1 and TBM2 may overlap the bank BK. The first and second touch sensor metals TSM1 and TSM2 may be positioned on the touch inter-layer insulation film T-ILD. FIGS. 8A to 8F show a cross-sectional structure from point A of the seventh light emitting area EA7 of the seventh subpixel SP7 in FIG. 7A to point A′ of the fifth light emitting area EA5 of the fifth subpixel SP5. The color (e.g., green or blue) of the light emitted from the seventh light emitting area EA7 of the seventh subpixel SP7 may differ from the color (e.g., blue or green) of the light emitted from the fifth light emitting area EA5 of the fifth subpixel SP5. Referring to FIGS. 8A to 8F, the touch buffer film T-BUF may be disposed on the encapsulation layer ENCAP, and the second touch bridge metal TBM2 may be disposed on the touch buffer film T-BUF. All or at least a part of the second touch bridge metal TBM2 may be disposed to overlap the bank BK. The second touch bridge metal TBM2 may not be disposed in the transmissive area TA.
Referring to FIGS. 8A to 8F, the touch inter-layer insulation film T-ILD may be disposed on the second touch bridge metal TBM2, and the first transparent electrode TAE1 may be disposed on the touch inter-layer insulation film T-ILD. The first transparent electrode TAE1 may also be disposed in the transmissive area TA. Referring to FIGS. 8A to 8F, the first touch sensor metal TSM1 may be disposed on the first transparent electrode TAE1. The first touch sensor metal TSM1 may directly contact the first transparent electrode TAE1 and may be electrically connected to the first transparent electrode TAE1. Accordingly, the first touch sensor metal TSM1 and the first transparent electrode TAE1 may constitute the first touch electrode TE #1. Referring to FIGS. 8A to 8F, all or at least a part of the first touch sensor metal TSM1 may be disposed to overlap the bank BK. However, the first touch sensor metal TSM1 may not be disposed in the transmissive area TA. Referring to FIGS. 8A to 8F, all or at least a part of the second touch bridge metal TBM2 may overlap the first touch sensor metal TSM1. However, the second touch bridge metal TBM2 is not electrically connected to the first touch sensor metal TSM1. In other words, the second touch bridge metal TBM2 and the first touch sensor metal TSM1 may be insulated from each other. The second touch bridge metal TBM2 may be electrically connected to the second touch sensor metal TSM2 included in the second touch electrode TE #2. As shown in FIGS. 8D, 8E, and 8F, a top surface TS of the second inorganic encapsulation layer PAS2 is exposed in an area between a first buffer film T-BUF1 and a second buffer film T-BUF2 (collectively T-BUF). The first transparent electrode TAE1 directly contacts the top surface of the second inorganic encapsulation layer PAS2 exposed in that area. That area overlaps with the transmissive area TA. In some embodiments, the area where the top surface TS of the second inorganic encapsulation layer PAS2 is exposed partially overlaps with the bank area BKA. FIGS. 9A to 9F show a cross-sectional structure from point B of the eighth light emitting area EA8 of the eighth subpixel SP8 in FIG. 7A to point B′ of the first light emitting area EA1 of the first subpixel SP1. The first color (e.g., red) of the light emitted from the eighth light emitting area EA8 of the eighth subpixel SP8 may be the same as the first color (e.g., red) of the light emitted from the first light emitting area EA1 of the first subpixel SP1. As shown in FIGS. 9D, 9E, and 9F, a top surface TS of the second inorganic encapsulation layer PAS2 is exposed in an area between the first buffer film T-BUF1 and the second buffer film T-BUF2. The first transparent electrode TAE1 directly contacts the top surface of the second inorganic encapsulation layer PAS2 exposed in that area. That area overlaps with the transmissive area TA. In some embodiments, the area where the top surface TS of the second inorganic encapsulation layer PAS2 is exposed partially overlaps with the bank area BKA. Referring to FIGS. 9A to 9F, the second touch bridge metal TBM2 is not seen in the cross-sectional structure from point B to point B′. FIGS. 10A to 10F show a cross-sectional structure from point C of the first light emitting area EA1 of the first subpixel SP1 in FIG. 7A to point C′ of the second light emitting area EA2 of the second subpixel SP2.
The first color (e.g., red) of the light emitted from the first light emitting area EA1of the first subpixel SP1may be equal to the first color (e.g., red) of the light emitted from the second light emitting area EA2of the second subpixel SP2. The area from point C of the first light emitting area EA1of the first subpixel SP1to point C′ of the second light emitting area EA2of the second subpixel SP2includes the border area between the first touch electrode TE #1and the second touch electrode TE #2. All or at least a part of the first subpixel SP1may overlap the area of the first touch electrode TE #1, and all or at least a part of the second subpixel SP2may overlap the area of the second touch electrode TE #2. Referring toFIGS.10A to10F, the second touch bridge metal TBM2is not seen in the cross-sectional structure from point C to point C′. Referring toFIGS.10A to10F, the touch buffer layer T-BUF may be disposed on the encapsulation layer ENCAP, the touch inter-layer insulation film T-ILD may be disposed on the touch buffer film T-BUF, and the first transparent electrode TAE1and the second transparent electrode TAE2may be disposed on the touch inter-layer insulation film T-ILD. The second transparent electrode TAE2may also be disposed in the transmissive area TA. Referring toFIGS.10A to10F, the first touch sensor metal TSM1may be disposed on the first transparent electrode TAE1, and the second touch sensor metal TSM2may be disposed on the second transparent electrode TAE2. In other words, the first and second touch sensor metals TSM1and TSM2may be positioned on the touch buffer film T-BUF. Referring toFIGS.10A to10F, the first touch sensor metal TSM1may be in direct contact with the first transparent electrode TAE1and may be electrically connected to the first transparent electrode TAE1. Accordingly, the first touch sensor metal TSM1and the first transparent electrode TAE1may constitute the first touch electrode TE #1. Referring toFIGS.10A to10F, the second touch sensor metal TSM2may be in direct contact with the second transparent electrode TAE2and may be electrically connected to the second transparent electrode TAE2. Accordingly, the second touch sensor metal TSM2and the second transparent electrode TAE2may constitute the second touch electrode TE #2. Referring toFIG.10A to10F, all or at least a part of each of the first touch sensor metal TSM1and the second touch sensor metal TSM2may be disposed to overlap the bank BK, and may not be disposed in the transmissive area TA. Referring toFIGS.7A and10A to10F, in the first light emitting area EA1of the first subpixel SP1among the plurality of light emitting areas EA, the first pixel electrode PE1and the first light emitting layer EL1on the first pixel electrode PE1may be disposed. In the second light emitting area EA2of the second subpixel SP2positioned in the first direction with respect to the first light emitting area EA1, the second pixel electrode PE2and the second light emitting layer EL2on the second pixel electrode PE2may be disposed. Referring toFIGS.10A to10F, the first touch sensor metal TSM1included in the first touch electrode TE #1may be disposed while avoiding the first light emitting area EA1of the first subpixel SP1. The first touch sensor metal TSM1may not be disposed in the first light emitting area EA1of the first subpixel SP1. In other words, the first touch sensor metal TSM1may not overlap the first light emitting area EA1of the first subpixel SP1. 
The second touch sensor metal TSM2 included in the second touch electrode TE #2 may be disposed while avoiding the second light emitting area EA2 of the second subpixel SP2. The second touch sensor metal TSM2 may not be disposed in the second light emitting area EA2 of the second subpixel SP2. In other words, the second touch sensor metal TSM2 may not overlap the second light emitting area EA2 of the second subpixel SP2. Referring to FIGS. 7A to 7D, the transparent electrodes TAE1 and TAE2 may be disposed in the first transmissive area TA1 positioned between the first light emitting area EA1 and the second light emitting area EA2. The first light emitting area EA1 may be positioned in the area of the first touch electrode TE #1, and the second light emitting area EA2 may be positioned in the area of the second touch electrode TE #2. The transparent electrodes TAE1 and TAE2 disposed in the first transmissive area TA1 positioned between the first light emitting area EA1 and the second light emitting area EA2 may be electrically connected to at least one of the first touch sensor metal TSM1 and the second touch sensor metal TSM2. Referring to FIGS. 10A to 10F and 7A, one transparent electrode may be disposed in the first transmissive area TA1. The one transparent electrode may be electrically connected to one of the first touch sensor metal TSM1 and the second touch sensor metal TSM2. According to the example of FIGS. 7A and 7C, one transparent electrode TAE2 disposed in the first transmissive area TA1 positioned between the first light emitting area EA1 and the second light emitting area EA2 may be electrically connected to the second touch sensor metal TSM2. Referring to FIGS. 10A to 10F, the first bank BK may be positioned around the first light emitting layer EL1, and the second bank BK may be positioned around the second light emitting layer EL2. The encapsulation layer ENCAP may be disposed on the first light emitting layer EL1, the first bank BK, the second light emitting layer EL2, and the second bank BK. Referring to FIGS. 10A to 10F, the first touch sensor metal TSM1, the second touch sensor metal TSM2, and the transparent electrodes TAE1 and TAE2 may be positioned on the encapsulation layer ENCAP. All or at least a part of the first touch sensor metal TSM1 may overlap the first bank BK positioned around the first light emitting layer EL1. All or at least a part of the second touch sensor metal TSM2 may overlap the second bank BK positioned around the second light emitting layer EL2. Referring to FIGS. 10D to 10F, in the first transmissive area TA1 positioned between the first light emitting area EA1 and the second light emitting area EA2, the touch buffer film T-BUF and the touch inter-layer insulation film T-ILD may have an opening. Referring to FIGS. 10A to 10F, the common electrode CE may be disposed between the first light emitting layer EL1 and the encapsulation layer ENCAP in the first light emitting area EA1 and between the second light emitting layer EL2 and the encapsulation layer ENCAP in the second light emitting area EA2. Referring to FIGS. 10A and 10D, the common electrode CE may also extend to the first transmissive area TA1 positioned between the first light emitting area EA1 and the second light emitting area EA2. In other words, the common electrode CE may be disposed in all of the first light emitting area EA1, the first transmissive area TA1, and the second light emitting area EA2.
Referring to FIGS. 10B, 10C, 10E, and 10F, the common electrode CE may have an opening OPEN_CE in the first transmissive area TA1 positioned between the first light emitting area EA1 and the second light emitting area EA2. Referring to FIGS. 10C and 10F, the floating metal FM may be disposed in the opening OPEN_CE of the common electrode CE. The floating metal FM may be electrically disconnected from the common electrode CE. The floating metal FM may include the same material as the common electrode CE. The common voltage EVSS applied to the common electrode CE may not be applied to the floating metal FM. Referring to FIGS. 10A to 10F, a first space OP (or a first opening OP) is present between the first transparent electrode TAE1 and the second transparent electrode TAE2. The first transparent electrode TAE1 and the second transparent electrode TAE2 are spaced apart from each other by a distance (e.g., distance OP). In some embodiments, the first space OP overlaps with a bank BK or bank layer BK. That is, the first space OP may overlap with a first bank layer BK that overlaps with the first touch sensor metal TSM1. Although not shown in the drawings, in another embodiment, the first space OP may overlap with a second bank layer BK that overlaps with the second touch sensor metal TSM2. FIGS. 11A to 11F show a cross-sectional structure from point D of the first light emitting area EA1 of the first subpixel SP1 in FIG. 7B to point D′ of the second light emitting area EA2 of the second subpixel SP2. The cross-sectional structure taken along D-D′ of FIG. 7B, illustrated in FIGS. 11A to 11F, is substantially the same as the cross-sectional structure taken along C-C′ of FIG. 7A, illustrated in FIGS. 10A to 10F. The only difference lies in the arrangement structure of the transparent electrodes disposed in the first transmissive area TA1 positioned between the first light emitting area EA1 and the second light emitting area EA2. Referring to FIGS. 11A to 11F and 7B, two transparent electrodes TAE1 and TAE2 may be disposed in the first transmissive area TA1 positioned between the first light emitting area EA1 and the second light emitting area EA2. In the case of the touch sensor structure of FIGS. 7A and 7B, in the area where the first touch electrode TE #1 is formed, the first touch sensor metal TSM1 and the first touch bridge metal TBM1 may be electrically connected. However, in the area where the first touch electrode TE #1 is formed, the second touch sensor metal TSM2 and the second touch bridge metal TBM2 are not electrically connected. In the case of the touch sensor structure of FIGS. 7A and 7B, in the area where the second touch electrode TE #2 is formed, the second touch sensor metal TSM2 and the second touch bridge metal TBM2 may be electrically connected. However, in the area where the second touch electrode TE #2 is formed, the first touch sensor metal TSM1 and the first touch bridge metal TBM1 are not electrically connected. In the case of the touch sensor structure of FIGS. 7A and 7B, only the first touch sensor metal TSM1 may be disposed in the area where the first touch electrode TE #1 is formed, and only the second touch sensor metal TSM2 may be disposed in the area where the second touch electrode TE #2 is formed. In contrast, in the case of the touch sensor structure of FIGS. 7C and 7D, the first touch sensor metal TSM1 may be disposed in the area where the first touch electrode TE #1 is formed, and so may a portion of the second touch sensor metal TSM2.
The portion of the second touch sensor metal TSM2 may be a piece cut off from the first touch sensor metal TSM1 in the area of the first touch electrode TE #1. In the case of the touch sensor structure of FIGS. 7C and 7D, the second touch sensor metal TSM2 may be disposed in the area where the second touch electrode TE #2 is formed, and so may a portion of the first touch sensor metal TSM1. The portion of the first touch sensor metal TSM1 may be a piece cut off from the second touch sensor metal TSM2 in the area of the second touch electrode TE #2. In the case of the touch sensor structure of FIGS. 7C and 7D, in the area where the first touch electrode TE #1 is formed, the first touch sensor metal TSM1 and the first touch bridge metal TBM1 may be electrically connected. Further, in the area where the first touch electrode TE #1 is formed, the second touch sensor metal TSM2 and the second touch bridge metal TBM2 may also be connected. In the case of the touch sensor structure of FIGS. 7C and 7D, in the area where the second touch electrode TE #2 is formed, the second touch sensor metal TSM2 and the second touch bridge metal TBM2 may be electrically connected. Further, in the area where the second touch electrode TE #2 is formed, the first touch sensor metal TSM1 and the first touch bridge metal TBM1 may also be connected. Referring to FIGS. 12, 7C, and 7E, in the PTS1 area within the area where the first touch electrode TE #1 is formed, the touch inter-layer insulation film T-ILD may be disposed between the first touch sensor metal TSM1 and the first touch bridge metal TBM1. The first touch bridge metal TBM1 may be electrically connected to the first touch sensor metal TSM1 through one or more first contact holes CNT1 of the touch inter-layer insulation film T-ILD. Referring to FIGS. 13, 7C, and 7E, in the PTS2 area within the area where the first touch electrode TE #1 is formed, the touch inter-layer insulation film T-ILD may be disposed between the first and second touch sensor metals TSM1 and TSM2 and the second touch bridge metal TBM2. Referring to FIGS. 13, 7C, and 7E, in the PTS2 area within the area where the first touch electrode TE #1 is formed, the second touch bridge metal TBM2 may be electrically connected to the second touch sensor metal TSM2 through one or more second contact holes CNT2 of the touch inter-layer insulation film T-ILD. There may be at least one second contact hole CNT2 where the second touch sensor metal TSM2 and the second touch bridge metal TBM2 are electrically connected to each other in the area of the first touch electrode TE #1. Referring to FIGS. 13, 7C, and 7E, in the area where the first touch electrode TE #1 is formed, as the portion of the second touch sensor metal TSM2 included in the second touch electrode TE #2 is electrically connected to the second touch bridge metal TBM2, the resistance of the second touch electrode TE #2 may be reduced. Accordingly, resistance-capacitance (RC) delay in the second touch electrode TE #2 may be reduced, and thus touch sensitivity may be enhanced. According to the example of FIGS. 7B and 7D, the two transparent electrodes TAE1 and TAE2 may include a first transparent electrode TAE1 electrically connected to the first touch sensor metal TSM1 and a second transparent electrode TAE2 electrically connected to the second touch sensor metal TSM2. The first transparent electrode TAE1 and the second transparent electrode TAE2 may be electrically separated from each other.
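The enhancement of touch sensitivity attributed above to the reduced resistance of the second touch electrode TE #2 can be illustrated with a simple lumped-element sketch. The model, including the symbols R_TSM, R_TBM, and C_load, is an assumption introduced here for illustration only; the embodiments state only that connecting the cut-off portion of the second touch sensor metal TSM2 to the second touch bridge metal TBM2 reduces resistance and hence RC delay:

\[
R_{\text{eff}} \;=\; \frac{R_{\text{TSM}}\,R_{\text{TBM}}}{R_{\text{TSM}} + R_{\text{TBM}}} \;<\; \min\!\left(R_{\text{TSM}},\, R_{\text{TBM}}\right), \qquad \tau \;=\; R_{\text{eff}}\,C_{\text{load}}
\]

Because the settling time constant \(\tau\) of the touch driving signal scales with the effective line resistance, adding the bridge-metal path in parallel lowers \(R_{\text{eff}}\), shortens \(\tau\), and allows the touch sensing signal to settle sooner within each driving period.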
Referring toFIGS.7A to7D, the second light emitting area EA2may emit light of the same first color as the light emitted from the first light emitting area EA1. Referring toFIGS.7A to7D, the plurality of light emitting areas EA may further include a third light emitting area EA3positioned in a second direction different from the first direction, with respect to the first light emitting area EA1and emitting the first color of light and a fourth light emitting area EA4positioned in the second direction with respect to the second light emitting area EA2and emitting the first color of light. The fourth light emitting area EA4may be positioned in the first direction with respect to the third light emitting area EA3. Referring toFIGS.7A and7D, no transmissive area TA may be disposed between the first light emitting area EA1and the third light emitting area EA3and between the second light emitting area EA2and the fourth light emitting area EA4. Referring toFIGS.7A to7D, a second transmissive area TA2may be disposed between the third light emitting area EA3and the fourth light emitting area EA4. Referring toFIGS.7A to7D, the plurality of light emitting areas EA may further include a fifth light emitting area EA5positioned between the first light emitting area EA1and the third light emitting area EA3and a sixth light emitting area EA6positioned between the second light emitting area EA2and the fourth light emitting area EA4. The sixth light emitting area EA6may be positioned in the first direction with respect to the fifth light emitting area EA5. Referring toFIGS.7A to7D, the fifth light emitting area EA5may emit a second color of light different from the first color, and the sixth light emitting area EA6may emit the third color of light different from the first color and the second color. Referring toFIGS.7A to7D, each of the fifth light emitting area EA5and the sixth light emitting area EA6may be larger in size than each of the first light emitting area EA1, the second light emitting area EA2, the third light emitting area EA3, and the fourth light emitting area EA4. For example, the first light emitting area EA1, the second light emitting area EA2, the third light emitting area EA3, and the fourth light emitting area EA4may emit the same first color of light. The first color may be a color (e.g., red) having the longest wavelength among the first to third colors. The wavelength of the light emitted from the fifth light emitting area EA5and the sixth light emitting area EA6may be shorter than the wavelength of the light emitted from the first light emitting area EA1, the second light emitting area EA2, the third light emitting area EA3, and the fourth light emitting area EA4. Referring toFIGS.7A to7D, the first touch sensor metal TSM1may be disposed while avoiding the first light emitting area EA1, the fifth light emitting area EA5, and the third light emitting area EA3. The second touch sensor metal TSM2may be disposed while avoiding the second light emitting area EA2, the sixth light emitting area EA6, and the fourth light emitting area EA4. The first touch routing line TL #1may be electrically connected to the first touch sensor metal TSM1, and the second touch routing line TL #2may be electrically connected to the second touch sensor metal TSM2. The first touch routing line TL #1may electrically connect the first touch sensor metal TSM1to the touch pad unit TP. The second touch routing line TL #2may electrically connect the second touch sensor metal TSM2to the touch pad unit TP. 
The first touch routing line TL #1 may be disposed to extend along the inclined surface SLP of the encapsulation layer ENCAP and may be connected to the touch pad unit TP through an upper portion of the at least one dam DAM1 and DAM2. The at least one dam DAM1 and DAM2 may be positioned near the outermost point of the inclined surface SLP of the encapsulation layer ENCAP. The touch driving circuit 160 may be electrically connected to the touch pad unit TP. Referring to FIGS. 11A to 11C, a space OP (or an opening OP) is present between the first transparent electrode TAE1 and the second transparent electrode TAE2. The first transparent electrode TAE1 and the second transparent electrode TAE2 are spaced apart from each other by a distance (e.g., distance OP). In some embodiments, the first space OP overlaps with the transmissive area TA. A top surface US of the touch inter-layer insulation film T-ILD is exposed by the space OP or the opening OP. Referring to FIGS. 11D to 11F, a space OP (or an opening OP) is present between the first transparent electrode TAE1 and the second transparent electrode TAE2. The first transparent electrode TAE1 and the second transparent electrode TAE2 are spaced apart from each other by a distance (e.g., distance OP). In some embodiments, the first space OP overlaps with the transmissive area TA. A top surface TS of the second inorganic encapsulation layer PAS2 is exposed by the space OP or the opening OP. As shown in FIG. 11F, the space OP overlaps with the floating metal FM. Referring to FIGS. 14 and 15, transmissive areas TA may be positioned on two opposite sides of the first touch sensor metal TSM1. In this case, the first touch bridge metal TBM1 or the second touch bridge metal TBM2 may be disposed under the first touch sensor metal TSM1. Referring to FIG. 14, when the transmissive areas TA positioned on two opposite sides of the first touch sensor metal TSM1 are included in the same area of the first touch electrode TE #1, the first transparent electrode TAE1 positioned under the first touch sensor metal TSM1 may be disposed in both of the transmissive areas TA positioned on the two opposite sides of the first touch sensor metal TSM1. Referring to FIG. 15, when the transmissive areas TA positioned on two opposite sides of the first touch sensor metal TSM1 are included in the areas of different touch electrodes TE #1 and TE #2, respectively, the first transparent electrode TAE1 positioned under the first touch sensor metal TSM1 may be disposed in only one of the transmissive areas TA positioned on the two opposite sides of the first touch sensor metal TSM1. Referring to FIG. 15, the first transparent electrode TAE1 and the second transparent electrode TAE2 respectively disposed in the transmissive areas TA positioned on two opposite sides of the first touch sensor metal TSM1 may be spaced apart from each other and thus be electrically separated from each other. As shown in FIG. 15, a first opening OP1 and a second opening OP2 overlap with a bank layer BK in the bank area BKA. The first opening OP1 and the second opening OP2 expose a top surface US of the touch inter-layer insulation film T-ILD. Here, the first transparent electrode TAE1 and a third transparent electrode TAE3 are spaced apart from each other by the first opening OP1 having a first distance. Similarly, the second transparent electrode TAE2 and the third transparent electrode TAE3 are spaced apart from each other by the second opening OP2 having a second distance.
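The combined electrical effect of the structures described above, enlarging the touch electrodes with the transparent electrodes while removing the common electrode beneath them in the transmissive areas, can be summarized with a first-order mutual-capacitance sensing model. The proportionality below is a common textbook description, not a formula given in the embodiments; \(\Delta C_{m}\) denotes the change in mutual capacitance between the first and second touch electrodes caused by a finger, and \(C_{p}\) the parasitic capacitance loading the sensing channel:

\[
\text{signal-to-parasitic ratio} \;\propto\; \frac{\Delta C_{m}}{C_{p}}
\]

Under this view, placing the transparent electrodes TAE1 and TAE2 in the transmissive areas enlarges the effective electrode area and thus \(\Delta C_{m}\), while the openings OPEN_CE in the common electrode remove the overlap component of \(C_{p}\) in the transmissive areas; both effects push the ratio, and hence the touch sensitivity, upward.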
According to embodiments of the disclosure described above, it is possible to provide a transparent touch display device having high transmittance and high touch sensitivity. According to embodiments of the disclosure, it is possible to provide a transparent touch display device having a structure capable of increasing the area of the touch sensor without reducing the area of the transmissive areas. According to embodiments of the disclosure, it is possible to enhance touch sensitivity by placing the transparent electrode in the transmissive area to thereby increase the capacitance for touch sensing (e.g., finger capacitance or mutual-capacitance). According to embodiments of the disclosure, it is possible to increase transmittance by removing the common electrode and insulation film (touch buffer film or touch inter-layer insulation film) from the transmissive area, e.g., by forming an opening in the common electrode and insulation film (touch buffer film or touch inter-layer insulation film) in the transmissive area. According to embodiments of the disclosure, as the common electrode is removed from the transmissive area, an opening is formed in the common electrode positioned under the transparent electrode in the transmissive area, so that the overlap capacitance between the transparent electrode and the common electrode does not occur in the transmissive area, thereby rendering it possible to reduce parasitic capacitance. Therefore, touch sensitivity may be enhanced. The above description has been presented to enable any person skilled in the art to make and use the technical idea of the present disclosure, and has been provided in the context of a particular application and its requirements. Various modifications, additions and substitutions to the described embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. The above description and the accompanying drawings provide an example of the technical idea of the present disclosure for illustrative purposes only. That is, the disclosed embodiments are intended to illustrate the scope of the technical idea of the present disclosure. Thus, the scope of the present disclosure is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims. The scope of protection of the present invention should be construed based on the following claims, and all technical ideas within the scope of equivalents thereof should be construed as being included within the scope of the claims. The various embodiments described above can be combined to provide further embodiments. All of the U.S. patents, U.S. patent application publications, U.S. patent applications, foreign patents, foreign patent applications and non-patent publications referred to in this specification and/or listed in the Application Data Sheet are incorporated herein by reference, in their entirety. Aspects of the embodiments can be modified, if necessary to employ concepts of the various patents, applications and publications to provide yet further embodiments. These and other changes can be made to the embodiments in light of the above-detailed description. 
In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.
DETAILED DESCRIPTION OF THE EMBODIMENTS In this specification, when a component (or region, layer, portion, etc.) is referred to as “on”, “connected”, or “coupled” to another component, it may mean that the component is placed/connected/coupled directly on the other component or that a third component can be disposed between them. The same reference numerals or symbols may refer to the same elements in this specification. In addition, in the drawings, thicknesses, ratios, and dimensions of components may be exaggerated for effective description of the technical content. “And/or” may include all combinations of one or more of the associated elements. Terms such as first and second may be used to describe various components, but the components should not be limited by the terms. These terms are only used for the purpose of distinguishing one component from other components. For example, a first component may be referred to as a second component, and similarly, a second component may be referred to as a first component. Singular expressions may include plural expressions unless the context clearly indicates otherwise. In addition, terms such as “below”, “lower”, “above”, and “upper” are used to describe the relationship between components shown in the drawings. The terms are relative concepts and are described based on the directions indicated in the drawings. Terms such as “include” or “have” are intended to designate the presence of a feature, number, step, action, component, part, or combination thereof as described in the specification, and it should be understood that this does not preclude the possibility of the presence or addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof. Unless otherwise defined, all terms (including technical and scientific terms) used in this specification have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. In addition, terms such as those defined in commonly used dictionaries should be interpreted as having a meaning consistent with their meaning in the context of the related technology, and should not be interpreted in an ideal or overly formal sense unless explicitly defined here. Hereinafter, embodiments of the inventive concept will be described with reference to the drawings. FIG. 1 is a perspective view of an electronic device according to an embodiment of the inventive concept, and FIG. 2 is a cross-sectional view of an electronic device according to an embodiment of the inventive concept. Referring to FIGS. 1 and 2, an electronic device 1000 may have a configuration that generates an image. The electronic device 1000 may be a light-emitting electronic device or a light-receiving electronic device. For example, the electronic device 1000 may be any one among an organic light-emitting electronic device, a quantum dot light-emitting electronic device, a micro light emitting diode (LED) electronic device, a nano LED electronic device, a liquid crystal electronic device, an electrophoretic electronic device, an electrowetting electronic device, and a microelectromechanical systems (MEMS) electronic device, and is not specifically limited thereto. The electronic device 1000 may include a display layer 100 and a sensor layer 200 disposed on the display layer 100. The electronic device 1000 may display an image through a display surface IS. The display surface IS may be parallel to a plane formed by a first direction DR1 and a second direction DR2.
The display surface IS may include an active region AA and a peripheral region NA. Pixels PX may be disposed in the active region AA, and may not be disposed in the peripheral region NA. The peripheral region NA may be formed along the edge of the display surface IS. The peripheral region NA may surround the active region AA. In an embodiment of the inventive concept, the peripheral region NA may be omitted, or may be disposed only on one side or fewer than all sides of the active region AA. The normal direction of the display surface IS, in other words, the thickness direction of the electronic device1000may be indicated by a third direction DR3. The front surface (or upper surface) and the rear surface (or lower surface) of each layer or unit to be described below may be determined based on the third direction DR3. In an embodiment of the inventive concept, the electronic device1000provided with a flat display surface IS is illustrated, but is not limited thereto. The electronic device1000may include a curved display surface or a three-dimensional display surface. The three-dimensional display surface may include a plurality of display regions that indicate directions different from each other. The electronic device1000may include the display layer100and the sensor layer200. The display layer100according to an embodiment of the inventive concept may be a light-emitting display layer, but is not specifically limited thereto. For example, the display layer100may include an organic light-emitting display layer, a quantum dot display layer, a micro LED display layer, a nano LED display layer, or the like. A light-emitting layer of the organic light-emitting display layer may include an organic light-emitting material. A light-emitting layer of the quantum dot display layer may include quantum dots, quantum rods, etc. A light-emitting layer of the micro LED display layer may include micro LEDs. A light-emitting layer of the nano LED display layer may include nano LEDs. The display layer100may include a base layer110, a circuit layer120, a light-emitting element layer130, and an encapsulation layer140. The base layer110, the circuit layer120, the light-emitting element layer130, and the encapsulation layer140may be stacked in this order. The base layer110may be a member that provides a surface on which the circuit layer120is disposed. The base layer110may be a glass substrate, a metal substrate, or a polymer substrate. However, an embodiment of the inventive concept is not limited thereto, and the base layer110may be an inorganic layer, an organic layer, or a composite material layer. The base layer110may have a multi-layered structure. For example, the base layer110may include a first synthetic resin layer, a silicon oxide (SiOx) layer disposed on the first synthetic resin layer, an amorphous silicon (a-Si) layer disposed on the silicon oxide layer, and a second synthetic resin layer disposed on the amorphous silicon layer. The silicon oxide layer and the amorphous silicon layer may be referred to as a base barrier layer. The first and second synthetic resin layers may each include a polyimide-based resin. In addition, the first and second synthetic resin layers may each include at least one of an acryl-based resin, a methacryl-based resin, a polyisoprene-based resin, a vinyl-based resin, an epoxy-based resin, a urethane-based resin, a cellulose-based resin, a siloxane-based resin, a polyamide-based, or a perylene-based resin. 
In the present disclosure, “˜˜”-based resin may mean a resin including a functional group of “˜˜”. The circuit layer120may be disposed on the base layer110. The circuit layer120may include an insulating layer, a semiconductor pattern, a conductive pattern, a signal line, etc. An insulating layer, a semiconductor layer, and a conductive layer may be formed on the base layer110through coating, deposition, or the like, and then, may be selectively patterned by performing a photolithography process multiple times. Thereafter, a semiconductor pattern, a conductive pattern, and a signal line included in the circuit layer120may be formed. The light-emitting element layer130may be disposed on the circuit layer120. The light-emitting element layer130may include a light-emitting element. For example, the light-emitting element layer130may include an organic light-emitting material, a quantum dot, a quantum rod, a micro LED, or a nano LED. The encapsulation layer140may be disposed on the light-emitting element layer130. The encapsulation layer140may protect the light-emitting element layer130from moisture, oxygen, and foreign matters such as dust particles. The sensor layer200may be formed on the display layer100through a continuous process. In this case, it may be expressed that the sensor layer200is directly disposed on the display layer100. Being directly disposed may mean that a third component is not disposed between the display layer100and the sensor layer200. In other words, a separate adhesive member may not be disposed between the display layer100and the sensor layer200. Alternately, the display layer100and the sensor layer200may be connected to each other through an adhesive member. The adhesive member may include a typical bonding agent or adhesive. FIG.3is a plan view of a display layer according to an embodiment of the inventive concept. Referring toFIG.3, a display layer100may include an active region100A and a peripheral region100N adjacent to the active region100A. The active region100A and the peripheral region100N may be distinct from each other depending on whether a plurality of pixels PX11to PXnm are disposed or not. The plurality of pixels PX11to PXnm may be disposed in the active region100A, and may not be disposed in the peripheral region100N. When seen on a plane, the active region100A may overlap the active region AA (seeFIG.1) of the electronic device1000(seeFIG.1). The peripheral region100N may overlap the peripheral region NA (seeFIG.1) of the electronic device1000(seeFIG.1). The plurality of pixels PX11to PXnm may be respectively connected to corresponding gate lines of a plurality of gate lines GL1to GLn, and corresponding data lines of a plurality of data lines DL1to DLm. The plurality of pixels PX11to PXnm may each include a pixel-driving circuit and a display element. The display layer100may be provided with more types of signal lines depending on the configuration of the pixel-driving circuit in each of the plurality of pixels PX11to PXnm. A scanning driving circuit GDC and a plurality of pads PD may be disposed in the peripheral region100N. The scanning driving circuit GDC and circuits in the electronic device1000may be formed through the same process. A data driving circuit may be a partial circuit included in a driving chip, and the driving chip may be electrically connected to the plurality of pixels PX11to PXnm through pads PD disposed in the peripheral region100N. The display layer100may further include a plurality of sensing pads TPD. 
The plurality of sensing pads TPD may be disposed in the peripheral region100N. The plurality of sensing pads TPD may be electrically connected to a plurality of sensing electrodes of the sensor layer200(seeFIG.2), respectively. FIG.4is a plan view of a sensor layer according to an embodiment of the inventive concept. Referring toFIG.4, a sensor layer200may include an active region200A and a peripheral region200N adjacent to the active region200A. The active region200A may be activated in response to an electric signal. The active region200A may be a region in which an input is sensed. When seen on a plane, the active region200A may overlap the active region100A (seeFIG.3) of the display layer100(seeFIG.3). The peripheral region200N may overlap the peripheral region100N (seeFIG.3) of the display layer100(seeFIG.3). The sensor layer200may include a base layer201, a plurality of sensing electrodes SP, a plurality of sensing lines TL1and TL2, and a dummy electrode DE. The plurality of sensing electrodes SP, and the dummy electrode DE may be disposed in the active region200A. The plurality of sensing lines TL1and TL2may be disposed in the peripheral region200N. The plurality of sensing lines TL1and TL2may extend from the peripheral region200N to the active region200A. The base layer201may be an inorganic layer including any one among silicon nitride, silicon oxynitride, and silicon oxide. Alternately, the base layer201may be an organic layer including an epoxy resin, an acryl resin, or an imide-based resin. The base layer201may be directly formed on the display layer100(seeFIG.2). The plurality of sensing electrodes SP may include a plurality of first sensing electrodes TE1and a plurality of second sensing electrodes TE2. The sensor layer200may obtain information about an external input through a change in capacitance between the plurality of first sensing electrodes TE1and the plurality of second sensing electrodes TE2. The plurality of first sensing electrodes TE1may each extend along the first direction DR1, and may be arranged along the second direction DR2. The plurality of first sensing electrodes TE1may include a plurality of first sensing portions SP1and a plurality of second sensing portions BSP1. The plurality of second sensing portions BSP1may each electrically connect two first sensing portions SP1adjacent to each other. For example, one of the second sensing portions BSP1may be disposed between adjacent first sensing portions SP1. The plurality of first sensing portions SP1and the plurality of second sensing portions BSP1may have a mesh structure. The plurality of first sensing portions SP1may be referred to as the plurality of first sensing parts SP1. The plurality of second sensing portions BSP1may be referred to as the plurality of first connection parts BSP1. The plurality of second sensing electrodes TE2may each extend along the second direction DR2, and may be arranged along the first direction DR1. The plurality of second sensing electrodes TE2may include a plurality of sensing patterns SP2and a plurality of connection patterns BSP2. The plurality of connection patterns BSP2may each electrically connect two sensing patterns SP2adjacent to each other. For example, one of the connection patterns BSP2may be disposed between adjacent sensing patterns SP2. The plurality of sensing patterns SP2may have a mesh structure. The plurality of sensing patterns SP2may be referred to as the plurality of sensing parts SP2. 
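The sensing principle described above, in which an external input is located through a change in capacitance between the first sensing electrodes TE1 and the second sensing electrodes TE2, can be illustrated with a short sketch. The code below is purely illustrative: the baseline matrix of mutual capacitances (rows standing in for TE1, columns for TE2), the measured values, and the detection threshold are all hypothetical and do not come from the disclosure.

```python
# Illustrative sketch of locating a touch on a mutual-capacitance grid.
# Rows model the first sensing electrodes TE1, columns the second
# sensing electrodes TE2; values are mutual capacitances in picofarads.
# All values and the threshold are hypothetical.

def find_touches(baseline, measured, threshold_pf=0.3):
    """Return (row, col) pairs where the mutual capacitance dropped
    by more than threshold_pf, indicating a touch at that crossing."""
    touches = []
    for r, (base_row, meas_row) in enumerate(zip(baseline, measured)):
        for c, (base, meas) in enumerate(zip(base_row, meas_row)):
            # A finger diverts field lines, so mutual capacitance decreases.
            if base - meas > threshold_pf:
                touches.append((r, c))
    return touches

baseline = [
    [2.0, 2.0, 2.0],
    [2.0, 2.0, 2.0],
    [2.0, 2.0, 2.0],
]
measured = [
    [2.0, 2.0, 2.0],
    [2.0, 1.4, 2.0],  # drop at row 1, column 1 models a finger
    [2.0, 2.0, 2.0],
]

print(find_touches(baseline, measured))  # [(1, 1)]
```

In an actual touch driving circuit one set of electrodes would be driven and the other sensed in sequence to fill the measured matrix, but the thresholding step is conceptually the same.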
The plurality of connection patterns BSP2 may be referred to as the plurality of second connection parts BSP2. The plurality of second sensing portions BSP1 and the plurality of connection patterns BSP2 may be disposed on different layers. The plurality of connection patterns BSP2 may be insulated from and cross the plurality of first sensing electrodes TE1. For example, the plurality of second sensing portions BSP1 may be insulated from and cross the plurality of connection patterns BSP2, respectively. The dummy electrode DE may be disposed adjacent to the plurality of sensing electrodes SP. The dummy electrode DE may have a mesh structure. The plurality of sensing lines TL1 and TL2 may include a plurality of first sensing lines TL1 and a plurality of second sensing lines TL2. The plurality of first sensing lines TL1 may be electrically connected to the plurality of first sensing electrodes TE1, respectively. The plurality of second sensing lines TL2 may be electrically connected to the plurality of second sensing electrodes TE2, respectively. The plurality of first sensing lines TL1 and the plurality of second sensing lines TL2 may be electrically connected to the plurality of sensing pads TPD (see FIG. 3) through contact holes, respectively.

FIG. 5 is a cross-sectional view taken along I-I′ of FIG. 4 according to an embodiment of the inventive concept. Referring to FIG. 5, a sensor layer 200 may include a base layer 201, a plurality of first sensing portions SP1, a plurality of second sensing portions BSP1, a plurality of sensing patterns SP2, a plurality of connection patterns BSP2, a sensing insulating layer 203, and a cover insulating layer 205. The plurality of connection patterns BSP2 may be disposed on the base layer 201. For example, the plurality of connection patterns BSP2 may directly contact an upper surface of the base layer 201. The sensing insulating layer 203 may be disposed on the plurality of connection patterns BSP2. For example, the plurality of connection patterns BSP2 may be disposed between the base layer 201 and the sensing insulating layer 203. The sensing insulating layer 203 may have a single- or multi-layered structure. The sensing insulating layer 203 may include an inorganic material, an organic material, or a composite material. The plurality of first sensing portions SP1, the plurality of second sensing portions BSP1, and the plurality of sensing patterns SP2 may be disposed on the sensing insulating layer 203. The plurality of first sensing portions SP1, the plurality of second sensing portions BSP1, and the plurality of sensing patterns SP2 may have a mesh structure. A plurality of contact holes CNT may be formed by penetrating the sensing insulating layer 203 in the third direction DR3. Two adjacent sensing patterns SP2 of the plurality of sensing patterns SP2 may be electrically connected to the connection patterns BSP2 through the plurality of contact holes CNT. In addition, a first sensing portion SP1 may be electrically connected to a connection pattern BSP2 through one of the plurality of contact holes CNT. The cover insulating layer 205 may be disposed on the plurality of first sensing portions SP1, the plurality of second sensing portions BSP1, and the plurality of sensing patterns SP2. The cover insulating layer 205 may have a single- or multi-layered structure. The cover insulating layer 205 may include an inorganic material, an organic material, or a composite material.
FIG. 5 illustrates a bottom bridge structure in which the plurality of connection patterns BSP2 are disposed under the plurality of first sensing portions SP1, the plurality of second sensing portions BSP1, and the plurality of sensing patterns SP2, but an embodiment of the inventive concept is not limited thereto. For example, the sensor layer 200 may also have a top bridge structure in which the plurality of connection patterns BSP2 are disposed on the plurality of first sensing portions SP1, the plurality of second sensing portions BSP1, and the plurality of sensing patterns SP2.

FIG. 6 is a plan view illustrating region AA′ of FIG. 1 according to an embodiment of the inventive concept. Referring to FIGS. 1 and 6, a plurality of first pixel regions PXA1, a plurality of second pixel regions PXA2, a plurality of third pixel regions PXA3, and a light-blocking region NPXA may be provided in a display layer 100 of an electronic device 1000. The display layer 100 may provide first color light through the plurality of first pixel regions PXA1, second color light through the plurality of second pixel regions PXA2, and third color light through the plurality of third pixel regions PXA3. The first color light, the second color light, and the third color light may be light of colors different from each other. For example, the first color light may be green color light, the second color light may be blue color light, and the third color light may be red color light. A plurality of pixels PX (see FIG. 1) may include a plurality of first pixels, a plurality of second pixels, and a plurality of third pixels. The plurality of first pixel regions PXA1 may be regions respectively corresponding to the plurality of first pixels. The plurality of first pixel regions PXA1 may each have a first width WD1-1 extending in the first direction DR1. The first width WD1-1 may be about 31 μm to about 32 μm. For example, the first width WD1-1 may be about 31.54 μm. The plurality of first pixel regions PXA1 may each have a second width WD1-2 extending in the second direction DR2. The second width WD1-2 may be about 39 μm to about 40 μm. For example, the second width WD1-2 may be about 39.56 μm. The plurality of second pixel regions PXA2 may be regions respectively corresponding to the plurality of second pixels. The plurality of second pixel regions PXA2 may each have a first width WD2-1 extending in the first direction DR1. The first width WD2-1 may be about 31 μm to about 32 μm. For example, the first width WD2-1 may be about 31.54 μm. The first width WD2-1 may be the same as the first width WD1-1. The plurality of second pixel regions PXA2 may each have a second width WD2-2 extending in the second direction DR2. The second width WD2-2 may be about 19 μm to about 20 μm. For example, the second width WD2-2 may be about 19.44 μm. The second width WD2-2 may be smaller than the second width WD1-2. The area of each of the plurality of second pixel regions PXA2 may accordingly be smaller than the area of each of the plurality of first pixel regions PXA1. The plurality of third pixel regions PXA3 may be regions respectively corresponding to the plurality of third pixels. The plurality of third pixel regions PXA3 may each have a first width WD3-1 extending in the first direction DR1. The first width WD3-1 may be about 27 μm to about 28 μm. For example, the first width WD3-1 may be about 27.46 μm. The plurality of third pixel regions PXA3 may each have a second width WD3-2 extending in the second direction DR2. The second width WD3-2 may be about 65 μm to about 66 μm.
For example, the second width WD3-2 may be about 65.5 μm. The area of each of the plurality of third pixel regions PXA3 may be greater than the area of each of the plurality of first pixel regions PXA1. The plurality of first pixel regions PXA1 and the plurality of third pixel regions PXA3 may be alternately arranged along a first column extending in the second direction DR2. The plurality of second pixel regions PXA2 may be arranged along a second column extending in the second direction DR2. The second column may be adjacent to the first column. The plurality of first pixel regions PXA1 and the plurality of second pixel regions PXA2 may be alternately arranged along the first direction DR1. The plurality of third pixel regions PXA3 and the plurality of second pixel regions PXA2 may be alternately arranged along the first direction DR1. A first spacing DT1 may be a spacing extending in the second direction DR2 between one of the first pixel regions PXA1 and one of the third pixel regions PXA3 adjacent to each other. The first spacing DT1 may be about 18 μm to about 19 μm. For example, the first spacing DT1 may be about 18.5 μm. The first spacing DT1 may be equal to a second spacing DT2 extending in the first direction DR1 between one first pixel region PXA1 and one second pixel region PXA2 adjacent to each other. A first distance DT3 extending in the second direction DR2 between adjacent second pixel regions PXA2a and PXA2b may be smaller than the first spacing DT1 and the second spacing DT2. The first distance DT3 may be about 12 μm to about 13 μm. For example, the first distance DT3 may be about 12.5 μm. Second pixels of two adjacent second pixel regions PXA2a and PXA2b may be referred to as a pixel group BPA. A second distance DT4 extending in the second direction DR2 between adjacent pixel groups BPA may be greater than the first spacing DT1, the second spacing DT2, and the first distance DT3. The second distance DT4 may be about 48 μm to about 49 μm. For example, the second distance DT4 may be about 48.5 μm. A second light-emitting layer EL2 (see FIG. 7) in the second pixels of the two second pixel regions PXA2a and PXA2b according to an embodiment of the inventive concept may be formed as a single pattern. Accordingly, the light-emitting layer in the second pixel regions PXA2a and PXA2b may be deposited using a shadow mask having an opening with a size corresponding to an area of the second pixel regions PXA2a and PXA2b. According to an embodiment of the inventive concept, the light-emitting layer in adjacent second pixel regions PXA2a and PXA2b may be a single pattern, and may thus be continuous without a spacing therebetween. The distance between adjacent second pixels may be determined by the first distance DT3 between the adjacent second pixel regions PXA2a and PXA2b. In this case, since the first distance DT3 does not depend on the shadow mask used for depositing the light-emitting layer, a shadow phenomenon in which the area of a light-emitting region is reduced due to the limitation of the shadow mask may be prevented. Accordingly, the resolution of an image generated by the display layer 100 may be improved, and the electronic device 1000 with enhanced display performance may be provided. The light-blocking region NPXA may be disposed adjacent to the first pixel regions PXA1, the second pixel regions PXA2, and the third pixel regions PXA3. The light-blocking region NPXA may set the boundaries between the first pixel regions PXA1, the second pixel regions PXA2, and the third pixel regions PXA3.
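The comparisons above can be checked directly from the example dimensions, treating each pixel region as a simple rectangle of the stated widths (an idealization; the drawn regions need not be rectangular):

    # Example widths in micrometers, as quoted above.
    pxa1 = 31.54 * 39.56   # first pixel region, WD1-1 x WD1-2  -> ~1248
    pxa2 = 31.54 * 19.44   # second pixel region, WD2-1 x WD2-2 -> ~613
    pxa3 = 27.46 * 65.5    # third pixel region, WD3-1 x WD3-2  -> ~1799
    print(round(pxa1), round(pxa2), round(pxa3))
    # PXA2 is the smallest and PXA3 the largest region, and the spacings
    # order as DT3 (12.5) < DT1 = DT2 (18.5) < DT4 (48.5).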
The light-blocking region NPXA may prevent color mixing between the first pixel regions PXA1, the second pixel regions PXA2, and the third pixel regions PXA3. The plurality of sensing electrodes SP may each include a first portion P1 and a second portion P2. The plurality of sensing electrodes SP may not overlap the plurality of first pixel regions PXA1, the plurality of second pixel regions PXA2, and the plurality of third pixel regions PXA3. The plurality of sensing electrodes SP may include a plurality of first sensing electrodes TE1 (see FIG. 4) and a plurality of second sensing electrodes TE2 (see FIG. 4). The plurality of first sensing electrodes TE1 (see FIG. 4) may include a plurality of first sensing portions SP1 (see FIG. 4) and a plurality of second sensing portions BSP1 (see FIG. 4). The plurality of second sensing electrodes TE2 (see FIG. 4) may include a plurality of sensing patterns SP2 (see FIG. 4) and a plurality of connection patterns BSP2 (see FIG. 4). The plurality of first sensing portions SP1 (see FIG. 4), the plurality of second sensing portions BSP1 (see FIG. 4), and the plurality of sensing patterns SP2 (see FIG. 4) may each be composed of the first portion P1 and the second portion P2. The first portion P1 may be disposed adjacent to the plurality of first pixel regions PXA1 and the plurality of third pixel regions PXA3. For example, the first portion P1 may be located on opposite sides of a first pixel region PXA1 and a third pixel region PXA3 that are adjacent to each other. The first portion P1 may surround the plurality of first pixel regions PXA1 and the plurality of third pixel regions PXA3. The first portion P1 may have a first width WDa. The first width WDa may be about 3.5 μm to about 4.5 μm. For example, the first width WDa may be about 4 μm. When seen on a plane, the second portion P2 may extend between the pixel group BPA and another adjacent pixel group BPA in the first direction DR1. The second portion P2 may be provided integrally with the first portion P1. In other words, the second portion P2 and the first portion P1 may be integrally formed. The second portion P2 may have a second width WDb. The second width WDb may be equal to the first width WDa. The second portion P2 may not be disposed between the second pixel regions PXA2a and PXA2b due to the first distance DT3 between the second pixel regions PXA2a and PXA2b. When seen on a plane, the pixel group BPA may not overlap the plurality of sensing electrodes SP. In other words, the second portion P2 may not be disposed between the two second pixel regions PXA2a and PXA2b of a single pixel group BPA. For example, the second portion P2 may be disposed on opposite sides of the two second pixel regions PXA2a and PXA2b of the single pixel group BPA. Mutual capacitance between the plurality of first sensing electrodes TE1 (see FIG. 4) and the plurality of second sensing electrodes TE2 (see FIG. 4) may be about 430 fF (femtofarads) to about 440 fF. For example, the mutual capacitance may be about 434 fF. The amount of change in the mutual capacitance may be about 36 fF to about 38 fF. For example, the amount of change may be about 37 fF. According to an embodiment of the inventive concept, the second portion P2 formed between adjacent pixel groups BPA may increase the mutual capacitance between the plurality of first sensing electrodes TE1 (see FIG. 4) and the plurality of second sensing electrodes TE2 (see FIG. 4), which are each composed of the first portion P1 and the second portion P2. Accordingly, the sensing sensitivity of the sensor layer 200 (see FIG. 1) may be improved.
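As a rough figure of merit (not one the description itself defines), the readout circuit must resolve the relative change in mutual capacitance; with the example values above:

\[
\frac{\Delta C_m}{C_m} \approx \frac{37\,\mathrm{fF}}{434\,\mathrm{fF}} \approx 8.5\,\%
\]

Because the second portion P2 raises the baseline mutual capacitance, and the amount of change grows as the mutual capacitance becomes larger, the absolute signal available to the readout circuit increases as well.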
According to an embodiment of the inventive concept, the electronic device 1000 may include: a display layer 100; and a sensor layer 200 disposed on the display layer 100 and including a sensing electrode SP, wherein the display layer 100 includes: a plurality of first pixels (in PXA1), a plurality of second pixels (in PXA2a/b) spaced apart from the first pixels (in PXA1) in a first direction DR1, and a plurality of third pixels (in PXA3) alternately arranged with the first pixels (in PXA1) along a second direction DR2, the second pixels (in PXA2a/b) are arranged along the second direction DR2, and two adjacent second pixels (in PXA2a/b) form a first pixel group BPA, the two adjacent second pixels (in PXA2a/b) are spaced apart by a first distance DT3 in the second direction DR2, and the first pixel group BPA is spaced apart by a second distance DT4 in the second direction DR2 from a second pixel group BPA adjacent thereto, the second distance DT4 being greater than the first distance DT3, the sensing electrode SP includes a first sensing electrode TE1 and a second sensing electrode TE2 each including a plurality of sensing patterns, and the sensing patterns each include a first portion P1 adjacent to the first pixels (in PXA1) and the third pixels (in PXA3), and a second portion P2 extending, when viewed on a plane, in the first direction DR1 between the first pixel group BPA and the second pixel group BPA.

FIG. 7 is a cross-sectional view of an electronic device taken along II-II′ of FIG. 6 according to an embodiment of the inventive concept. Referring to FIGS. 6 and 7, an electronic device 1000 (see FIG. 1) may include a display layer 100, a sensor layer 200, an anti-reflective panel 300, and a window 400. The display layer 100 may include a base layer 110, a circuit layer 120, a light-emitting element layer 130, and an encapsulation layer 140. The base layer 110 may be a member providing a base surface on which the circuit layer 120 is disposed. The base layer 110 may be a glass substrate, a metal substrate, a polymer substrate, or the like. However, an embodiment is not limited thereto, and the base layer 110 may be an inorganic layer, an organic layer, or a composite material layer. The base layer 110 may have a multi-layered structure. For example, the base layer 110 may include a first synthetic resin layer, a silicon oxide (SiOx) layer disposed on the first synthetic resin layer, an amorphous silicon (a-Si) layer disposed on the silicon oxide layer, and a second synthetic resin layer disposed on the amorphous silicon layer. The silicon oxide layer and the amorphous silicon layer may be referred to as a base barrier layer. The first and second synthetic resin layers may each include a polyimide-based resin. In addition, the first and second synthetic resin layers may each include at least one of an acryl-based resin, a methacryl-based resin, a polyisoprene-based resin, a vinyl-based resin, an epoxy-based resin, a urethane-based resin, a cellulose-based resin, a siloxane-based resin, a polyamide-based resin, or a perylene-based resin. In the present disclosure, “˜˜”-based resin may mean a resin including a functional group of “˜˜”. At least one inorganic layer is formed on an upper surface of the base layer 110. The inorganic layer may include at least one of aluminum oxide, titanium oxide, silicon oxide, silicon nitride, silicon oxynitride, zirconium oxide, or hafnium oxide. The inorganic layer may have a multi-layered structure. The multi-layered inorganic layers may constitute a barrier layer and/or a buffer layer.
In the present embodiment, it is illustrated that the display layer100includes a buffer layer BFL. The buffer layer BFL may enhance a bonding force between the base layer110and a semiconductor pattern. The buffer layer BFL may include at least one of silicon oxide, silicon nitride, or silicon oxynitride. For example, the buffer layer BFL may include a structure in which a silicon oxide layer and a silicon nitride layer are alternately stacked. The semiconductor pattern may be disposed on the buffer layer BFL. The semiconductor pattern may include polysilicon. However, an embodiment of the inventive concept is not limited thereto, and the semiconductor pattern may include amorphous silicon, low-temperature polycrystalline silicon, or an oxide semiconductor. FIG.7only illustrates some of the semiconductor patterns, and the semiconductor patterns may be further disposed in another region. Semiconductor patterns may be arranged across pixels according to a specific rule. The semiconductor patterns may have different electrical properties depending on whether the semiconductor patterns are doped or not. The semiconductor patterns may include a first region having a higher conductivity and a second region having a lower conductivity. The first region may be doped with an N-type dopant or a P-type dopant. A P-type transistor may include a doped region doped with a P-type dopant, and an N-type transistor may include a doped region doped with an N-type dopant. The second region may be an undoped region, or a region doped at a lower concentration than that of the first region. The first region may have a higher conductivity than the second region, and may serve as an electrode or a signal line. The second region may correspond to an active region of a transistor. In other words, a part of the semiconductor pattern may be an active region of a transistor, another part of the semiconductor pattern may be a source region or a drain region of a transistor, and still another part of the semiconductor pattern may be a connection electrode or a connection signal line. A plurality of pixels may include a plurality of first pixels, a plurality of second pixels, and a plurality of third pixels. The plurality of pixels may each have an equivalent circuit including seven transistors, one capacitor, and a light-emitting element, and the equivalent circuit diagram of a pixel may be changed to various forms.FIG.7illustrates one transistor and one light-emitting element included in each pixel. The plurality of first pixels may each include a first transistor100PC1and a first light-emitting element100PE1. The plurality of second pixels may each include a second transistor100PC2and a second light-emitting element100PE2. The plurality of third pixels may each have a third transistor100PC3and a third light-emitting element100PE3. A source region SC1, an active region A1, and a drain region D1of each of the first transistor100PC1, the second transistor100PC2, and the third transistor100PC3may be formed from a semiconductor pattern. The source region SC1and the drain region D1may extend from the active region A1in mutually opposite directions on a cross-section.FIG.7illustrates a part of a connection signal line SCL formed from a semiconductor pattern. Although not illustrated separately, the connection signal line SCL may be connected to the drain region D1of the first transistor100PC1on a plane. A first insulating layer10may be disposed on the buffer layer BFL. 
The first insulating layer10may overlap a plurality of pixels in common, and may cover a semiconductor pattern. The first insulating layer10may be an inorganic layer and/or an organic layer, and may have a single- or multi-layered structure. The first insulating layer10may include at least one of aluminum oxide, titanium oxide, silicon oxide, silicon nitride, silicon oxynitride, zirconium oxide, or hafnium oxide. In the present embodiment, the first insulating layer10may be a single-layered silicon oxide layer. Not only the first insulating layer10, but also an insulating layer of the circuit layer120to be described later may be an inorganic layer and/or an organic layer, and may have a single- or multi-layered structure. The inorganic layer may include at least one of materials described above, but an embodiment of the inventive concept is not limited thereto. A gate G1of each of the first transistor100PC1, the second transistor100PC2, and the third transistor100PC3is disposed on the first insulating layer10. The gate G1may be a part of a metal pattern. The gate G1overlaps the active region A1. In the process of doping a semiconductor pattern, the gate G1may function as a mask. A second insulating layer20may be disposed on the first insulating layer10, and may cover the gate G1. The second insulating layer20may overlap pixels in common. The second insulating layer20may be an inorganic layer and/or an organic layer, and may have a single- or multi-layered structure. The second insulating layer20may include at least one of silicon oxide, silicon nitride, or silicon oxynitride. In the present embodiment, the second insulating layer20may have a multi-layered structure including a silicon oxide layer and a silicon nitride layer. A third insulating layer30may be disposed on the second insulating layer20. The third insulating layer30may have a single- or multi-layered structure. For example, the third insulating layer30may have a multi-layered structure including a silicon oxide layer and a silicon nitride layer. A first connection electrode CNE1may be disposed on the third insulating layer30. The first connection electrode CNE1may be connected to the connection signal line SCL through a contact hole CNT-1penetrating the first, second, and third insulating layers10,20, and30. A fourth insulating layer40may be disposed on the third insulating layer30. The fourth insulating layer40may be a single-layered silicon oxide layer. The fourth insulating layer40may cover a portion of the first connection electrode CNE1. A fifth insulating layer50may be disposed on the fourth insulating layer40. The fifth insulating layer50may be an organic layer. A second connection electrode CNE2may be disposed on the fifth insulating layer50. The second connection electrode CNE2may be connected to the first connection electrode CNE1through a contact hole CNT-2penetrating the fourth insulating layer40and the fifth insulating layer50. A sixth insulating layer60may be disposed on the fifth insulating layer50, and may cover the second connection electrode CNE2. The sixth insulating layer60may be an organic layer. The light-emitting element layer130may be disposed on the circuit layer120. The light-emitting element layer130may include a plurality of light-emitting elements, e.g., first, second and third light-emitting elements100PE1,100PE2, and100PE3. For example, the light-emitting element layer130may include an organic light-emitting material, a quantum dot, a quantum rod, a micro LED, or a nano LED. 
Hereinafter, it will be described as an example that the plurality of light-emitting elements100PE1,100PE2, and100PE3are organic light-emitting elements, but an embodiment of the inventive concept is not specially limited thereto. The first light-emitting element100PE1may include a first pixel electrode AE1, a first light-emitting layer EL1, and a common electrode CE. The second light-emitting element100PE2may include a second pixel electrode AE2, a second light-emitting layer EL2, and the common electrode CE. The third light-emitting element100PE3may include a third pixel electrode AE3, a third light-emitting layer EL3, and the common electrode CE. The first pixel electrode AE1, the second pixel electrode AE2, and the third pixel electrode AE3may be disposed on the sixth insulating layer60. The first pixel electrode AE1, the second pixel electrode AE2, and the third pixel electrode AE3may be each connected to the second connection electrode CNE2through a contact hole CNT-3penetrating the sixth insulating layer60. A pixel-defining film70may be disposed on the sixth insulating layer60, and may partially cover the first pixel electrode AE1. An opening70-OP is provided in the pixel-defining film70. The opening70-OP of the pixel-defining film70may expose at least a part of each of the first pixel electrode AE1, the second pixel electrode AE2, and the third pixel electrode AE3. An active region100A (seeFIG.3) may include a first pixel region PXA1, a second pixel region PXA2, a third pixel region PXA3, and a light-blocking region NPXA adjacent to the first pixel region PXA1, the second pixel region PXA2, and the third pixel region PXA3. The light-blocking region NPXA may surround the first pixel region PXA1, the second pixel region PXA2, and the third pixel region PXA3. In the present embodiment, the first pixel region PXA1, the second pixel region PXA2, and the third pixel region PXA3respectively correspond to partial regions of the first pixel electrode AE1, the second pixel electrode AE2, and the third pixel electrode AE3exposed by the openings70-OP. The first light-emitting layer EL1may be disposed on the first pixel electrode AE1. The second light-emitting layer EL2may be disposed on the second pixel electrode AE2. The third light-emitting layer EL3may be disposed on the third pixel electrode AE3. The first light-emitting layer EL1, the second light-emitting layer EL2, and the third light-emitting layer EL3may be respectively disposed in regions corresponding to the openings70-OP. The common electrode CE may be disposed on the first, second and third light-emitting layers EL1, EL2and EL3. The common electrode CE may have an integrated shape, and may be disposed in a plurality of pixels in common. A hole control layer may be disposed between the first pixel electrode AE1and the first light-emitting layer EL1. The hole control layer may be disposed, in common, in the first pixel region PXA1and the light-blocking region NPXA. The hole control layer may include a hole transport layer, and may further include a hole injection layer. An electron control layer may be disposed between the first light-emitting layer EL1and the common electrode CE. The electron control layer may include an electron transport layer, and may further include an electron injection layer. The hole control layer and the electron control layer may be formed, in common, in a plurality of pixels using an open mask. The encapsulation layer140may be disposed on the light-emitting element layer130. 
The encapsulation layer140may include an inorganic layer, but layers composing the encapsulation layer140are not limited thereto. An inorganic layer of the encapsulation layer140may protect the light-emitting element layer130from moisture and oxygen. The inorganic layer may include a silicon nitride layer, a silicon oxynitride layer, a silicon oxide layer, a titanium oxide layer, an aluminum oxide layer, or the like. The sensor layer200may include a base layer201, a first conductive layer202, a sensing insulating layer203, a second conductive layer204, and a cover insulating layer205. The base layer201may be an inorganic layer including at least any one of silicon nitride, silicon oxynitride, or silicon oxide. Alternatively, the base layer201may be an organic layer including an epoxy resin, an acryl resin, or an imide-based resin. The base layer201may have a single-layered structure, or a multi-layered structure in which layers are stacked along the third direction DR3. The first conductive layer202and the second conductive layer204may each have a single-layered structure, or a multi-layered structure in which layers are stacked along the third direction DR3. The first conductive layer202may include a plurality of first sensing portions SP1, a plurality of second sensing portions BSP1, and a plurality of sensing patterns SP2. The second conductive layer204may include a plurality of connection patterns BSP2. However, this is an example, and components included in each of the first conductive layer202and the second conductive layer204according to an embodiment of the inventive concept are not limited thereto. A conductive layer having a single-layered structure may include a metal layer or a transparent conductive layer. The metal layer may include molybdenum, silver, titanium, copper, aluminum, or an alloy thereof. The transparent conductive layer may include a transparent conductive oxide such as indium tin oxide (ITO), indium zinc oxide (IZO), zinc oxide (ZnO), or indium zinc tin oxide (IZTO). In addition, the transparent conductive layer may include a conductive polymer such as PEDOT, metal nano wires, graphene, or the like. A conductive layer having a multi-layered structure may include metal layers. For example, the metal layers may have a three-layered structure of titanium/aluminum/titanium. The conductive layer having a multi-layered structure may include at least one metal layer and at least one transparent conductive layer. At least any one of the sensing insulating layer203or the cover insulating layer205may include an inorganic film. The inorganic film may include at least one of aluminum oxide, titanium oxide, silicon oxide, silicon nitride, silicon oxynitride, zirconium oxide, or hafnium oxide. At least any one of the sensing insulating layer203or the cover insulating layer205may include an organic film. The organic film may include at least any one among an acryl-based resin, a methacryl-based resin, a polyisoprene-based resin, a vinyl-based resin, an epoxy-based resin, a urethane-based resin, a cellulose-based resin, a siloxane-based resin, a polyimide-based resin, a polyamide-based resin, and a perylene-based resin. The anti-reflective panel300may be disposed on the sensor layer200. The anti-reflective panel300reduces the reflectance of external light incident from above the window400. In an embodiment of the inventive concept, the anti-reflective panel300may be omitted. The window400may be disposed on the anti-reflective panel300. 
The window400may include an optically transparent insulating material. For example, the window400may include glass or plastic. The window400may have a single- or multi-layered structure. For example, the window400may include a plurality of plastic films bonded with an adhesive, or a glass substrate and a plastic film which are bonded with an adhesive. FIG.8Ais a plan view of an electronic device illustrating a region corresponding to region AA′ ofFIG.1according to an embodiment of the inventive concept. In describingFIG.8A, the same reference numerals or symbols are used for the components described with reference toFIG.6, and thus, a description thereof may be omitted. Referring toFIG.8A, when seen on a plane, a plurality of second portions P2a-1, P2a-2, and P2a-3may be disposed between a pixel group BPA and another pixel group BPA adjacent thereto. For example, the plurality of second portions P2a-1, P2a-2, and P2a-3may be disposed between a pixel group BPA and another pixel group BPA that are adjacent to each other and arranged in the second direction DR2. The plurality of second portions P2a-1, P2a-2, and P2a-3may be provided integrally with a first portion P1. The plurality of second portions P2a-1, P2a-2, and P2a-3may include a first sub portion P2a-1, a second sub portion P2a-2, and a third sub portion P2a-3. The first sub portion P2a-1, the second sub portion P2a-2, and the third sub portion P2a-3may each extend in the first direction DR1. The first sub portion P2a-1, the second sub portion P2a-2, and the third sub portion P2a-3may be arranged in the second direction DR2.FIG.8Aillustrates that the number of the second portions is three, but the number of the plurality of second portions according to an embodiment of the inventive concept is not limited thereto. The first sub portion P2a-1, the second sub portion P2a-2, and the third sub portion P2a-3may each have a second width WDb. The second width WDb may be about 3.5 μm to about 4.5 μm. For example, the second width WDb may be about 4 μm. Unlike the inventive concept, if the distances between pixel-defining films70(seeFIG.7) and metal layers are not uniform, a user, who should see a white image at a specific point, may recognize the white image as a reddish white image. A phenomenon in which the amount of change in a color coordinate is large only at a specific point is referred to as a white wavelength shift or white angular dependency (WAD). Herein, a white image, wavelength-shifted to a long wavelength, is described as an example of the white wavelength shift, but the white wavelength shift is not limited thereto. Depending on the direction of change in a color coordinate, the white image may be recognized as a reddish white image, a bluish white image, or a greenish white image. However, according to an embodiment of the inventive concept, the second portion, which is a metal layer, may be provided in plural. The uniformity of distances between each of the first sub portion P2a-1, the second sub portion P2a-2, and the third sub portion P2a-3and the pixel-defining films70(seeFIG.7) may be improved, compared to cases where the plurality of second portions P2a-1, P2a-2, and P2a-3are not present. As a consequence, the WAD may be mitigated. Accordingly, the electronic device1000(seeFIG.1) with enhanced display performance may be provided. Mutual capacitance between the plurality of first sensing electrodes TE1(seeFIG.4) and the plurality of second sensing electrodes TE2(seeFIG.4) may be about 441 fF to about 450 fF. 
For example, the mutual capacitance may be about 448 fF. The amount of change in the mutual capacitance may be about 38 fF to about 40 fF. For example, the amount of change may be about 38 fF. According to an embodiment of the inventive concept, mutual capacitance between the plurality of first sensing electrodes TE1 (see FIG. 4) and the plurality of second sensing electrodes TE2 (see FIG. 4), which are each formed of the first portion P1 and the plurality of second portions P2a-1, P2a-2, and P2a-3, may increase. The amount of change in the mutual capacitance may be greater as the mutual capacitance becomes larger. Accordingly, the sensing sensitivity of the sensor layer 200 (see FIG. 1) may be improved.

FIG. 8B is a plan view of an electronic device illustrating a region corresponding to region AA′ of FIG. 1 according to an embodiment of the inventive concept. In describing FIG. 8B, the same reference numerals or symbols are used for the components described with reference to FIG. 6, and thus, a description thereof may be omitted. Referring to FIG. 8B, when seen on a plane, a second portion P2b may be disposed between a pixel group BPA and another pixel group BPA adjacent thereto. The second portion P2b may be provided integrally with the first portion P1. The second portion P2b may extend in the second direction DR2. The second portion P2b may have a second width WDb-1. The second width WDb-1 may be about 32 μm to about 34 μm. For example, the second width WDb-1 may be about 33 μm. Mutual capacitance between the plurality of first sensing electrodes TE1 (see FIG. 4) and the plurality of second sensing electrodes TE2 (see FIG. 4) may be about 470 fF to about 480 fF. For example, the mutual capacitance may be about 476 fF. The amount of change in the mutual capacitance may be about 40 fF to about 45 fF. For example, the amount of change may be about 43 fF. According to an embodiment of the inventive concept, mutual capacitance between the plurality of first sensing electrodes TE1 (see FIG. 4) and the plurality of second sensing electrodes TE2 (see FIG. 4), which are each formed of the first portion P1 and the second portion P2b, may increase. The amount of change in the mutual capacitance may be greater as the mutual capacitance becomes larger. Accordingly, the sensing sensitivity of the sensor layer 200 (see FIG. 1) may be improved.

FIG. 9A is a plan view illustrating region BB′ of FIG. 8A according to an embodiment of the inventive concept. In describing FIG. 9A, the same reference numerals or symbols are used for the components described with reference to FIG. 8A, and thus, a description thereof may be omitted. Referring to FIG. 9A, a first pattern PT1 may be provided in the second sub portion P2a-2. The first pattern PT1 may be located in the center of the second sub portion P2a-2. A second pattern PT2 may be provided in the first sub portion P2a-1. The second pattern PT2 may be located on a first side of the first sub portion P2a-1. For example, in FIG. 9A, the second pattern PT2 may be located on the left side of the first sub portion P2a-1. A third pattern PT3 may be provided in the third sub portion P2a-3. The third pattern PT3 may be located on a second side of the third sub portion P2a-3. For example, in FIG. 9A, the third pattern PT3 may be located on the right side of the third sub portion P2a-3. When seen in the second direction DR2, the first pattern PT1, the second pattern PT2, and the third pattern PT3 may not overlap one another.
FIG. 9B is a plan view illustrating a region corresponding to region BB′ of FIG. 8A according to an embodiment of the inventive concept. In describing FIG. 9B, the same reference numerals or symbols are used for the components described with reference to FIG. 8A, and thus, a description thereof may be omitted. Referring to FIG. 9B, a first pattern PT1-1 may be provided in the second sub portion P2a-2. The first pattern PT1-1 may be located in the center of the second sub portion P2a-2. A second pattern PT2-1 may be provided in the first sub portion P2a-1. The second pattern PT2-1 may be located on a first side of the first sub portion P2a-1. A third pattern PT3-1 may be provided in the third sub portion P2a-3. The third pattern PT3-1 may be located on a second side of the third sub portion P2a-3. When seen in the second direction DR2, the first pattern PT1-1, the second pattern PT2-1, and the third pattern PT3-1 may not all overlap one another. However, a part of the first pattern PT1-1 may overlap a part of the second pattern PT2-1 in the second direction DR2, and a part of the second pattern PT2-1 may overlap a part of the third pattern PT3-1 in the second direction DR2. One surface SF1 of the second sub portion P2a-2 in which the first pattern PT1-1 is provided may have a predetermined angle AG. The first sub portion P2a-1 in which the second pattern PT2-1 is provided and the third sub portion P2a-3 in which the third pattern PT3-1 is provided may also have a pattern surface having the same angle as the predetermined angle AG.

FIG. 9C is a plan view illustrating a region corresponding to region BB′ of FIG. 8A according to an embodiment of the inventive concept. In describing FIG. 9C, the same reference numerals or symbols are used for the components described with reference to FIG. 8A, and thus, a description thereof may be omitted. Referring to FIG. 9C, a first pattern PT1-2 may be provided in the second sub portion P2a-2. The first pattern PT1-2 may be located in the center of the second sub portion P2a-2. A second pattern PT2-2 may be provided in the first sub portion P2a-1. The second pattern PT2-2 may be located in the center of the first sub portion P2a-1. A third pattern PT3-2 may be provided in the third sub portion P2a-3. The third pattern PT3-2 may be located in the center of the third sub portion P2a-3. When seen in the second direction DR2, the first pattern PT1-2, the second pattern PT2-2, and the third pattern PT3-2 may overlap one another. For example, the first pattern PT1-2, the second pattern PT2-2, and the third pattern PT3-2 may form a straight line in the second direction DR2.

FIG. 9D is a plan view illustrating a region corresponding to region BB′ of FIG. 8A according to an embodiment of the inventive concept. In describing FIG. 9D, the same reference numerals or symbols are used for the components described with reference to FIG. 8A, and thus, a description thereof may be omitted. Referring to FIG. 9D, a first pattern PT1-3 may be provided in the second sub portion P2a-2. The first pattern PT1-3 may be located in the center of the second sub portion P2a-2.

FIG. 9E is a plan view illustrating a region corresponding to region BB′ of FIG. 8A according to an embodiment of the inventive concept. In describing FIG. 9E, the same reference numerals or symbols are used for the components described with reference to FIG. 8A, and thus, a description thereof may be omitted. Referring to FIG. 9E, a first pattern PT1-4 may be provided in the second sub portion P2a-2. The first pattern PT1-4 may be located on one side of the second sub portion P2a-2.
For example, the first pattern PT1-4 may be located closer to the side of the second sub portion P2a-2 having a smaller protruding part.

FIG. 9F is a plan view illustrating a region corresponding to region BB′ of FIG. 8A according to an embodiment of the inventive concept. In describing FIG. 9F, the same reference numerals or symbols are used for the components described with reference to FIG. 8A, and thus, a description thereof may be omitted. Referring to FIG. 9F, a first pattern PT1-5 may be provided in the second sub portion P2a-2. The first pattern PT1-5 may be located on the other side of the second sub portion P2a-2. In other words, the first pattern PT1-5 may be provided in a place spaced apart from the first pattern PT1-4 of FIG. 9E in the first direction DR1.

FIG. 9G is a plan view illustrating a region corresponding to region BB′ of FIG. 8A according to an embodiment of the inventive concept. In describing FIG. 9G, the same reference numerals or symbols are used for the components described with reference to FIG. 8A, and thus, a description thereof will be omitted. Referring to FIG. 9G, a first pattern PT1-6 may be provided in the second sub portion P2a-2. The first pattern PT1-6 may be located in the center of the second sub portion P2a-2. One surface SF2 of the second sub portion P2a-2 in which the first pattern PT1-6 is defined may have a predetermined angle. According to an embodiment of the inventive concept, at least one pattern provided in the second portions P2a-1, P2a-2, and P2a-3 may be variously formed. The shape of the sensing electrode SP may be variously provided through the pattern. For example, the sensing electrode SP having a diamond shape in FIG. 4 may be provided through the pattern. In addition, the shape of the pattern may be formed differently for each of the plurality of first sensing electrodes TE1 (see FIG. 4), the plurality of second sensing electrodes TE2 (see FIG. 4), and the dummy electrode DE (see FIG. 4), and observing the pattern shape makes it possible to easily distinguish each of the plurality of first sensing electrodes TE1 (see FIG. 4), the plurality of second sensing electrodes TE2 (see FIG. 4), and the dummy electrode DE (see FIG. 4).

FIG. 10 is a plan view illustrating a region corresponding to region AA′ of FIG. 1 according to an embodiment of the inventive concept. In describing FIG. 10, the same reference numerals or symbols are used for the components described with reference to FIG. 8B, and thus, a description thereof may be omitted. Referring to FIG. 10, at least one opening HAa may be provided in a first portion P1-1. The first portion P1-1 may be electrically connected to the second portion P2b. The first portion P1-1 may be provided integrally with the second portion P2b. FIG. 10 illustrates the opening HAa in which a cutting surface does not have a predetermined angle, but the opening HAa according to an embodiment of the inventive concept is not limited thereto. For example, the opening HAa may have a cutting surface with a predetermined angle like the first pattern PT1-6 of the embodiment of FIG. 9G. According to an embodiment of the inventive concept, the opening HAa may be provided in plural. The plurality of openings HAa may be formed in different places of each of the plurality of sensing electrodes SP (see FIG. 4). The plurality of openings HAa may reduce a phenomenon in which the plurality of sensing electrodes SP (see FIG. 4) are viewed from the outside. Accordingly, the electronic device 1000 (see FIG. 1) with enhanced display performance may be provided.
FIG. 11 is a plan view illustrating a region corresponding to region AA′ of FIG. 1 according to an embodiment of the inventive concept. In describing FIG. 11, the same reference numerals or symbols are used for the components described with reference to FIG. 8B, and thus, a description thereof may be omitted. Referring to FIG. 11, the sensing electrode SP (see FIG. 4) may include a plurality of protrusions PTT. When seen on a plane, the plurality of protrusions PTT may be disposed in two adjacent second pixel regions PXA2a and PXA2b, and may protrude from a first portion P1-2 in the second direction DR2. When seen on a plane, the plurality of protrusions PTT may overlap the pixel group BPA. According to an embodiment of the inventive concept, the plurality of protrusions PTT may sense a touch between two adjacent second pixel regions PXA2a and PXA2b. Accordingly, the electronic device 1000 (see FIG. 1) with enhanced sensing sensitivity may be provided.

FIG. 12 is a block diagram of an electronic device according to an embodiment of the inventive concept. Referring to FIG. 12, the sensing electrode SP may be spaced apart by a predetermined distance from the common electrode CE of the display layer 100 (see FIG. 1) in the third direction DR3. A first parasitic capacitor Cb may be formed between the sensing electrode SP and the common electrode CE. A second parasitic capacitor Cc may be formed between the dummy electrode DE and the common electrode CE. The dummy electrode DE may be electrically connected to a ground electrode GE. A third parasitic capacitor Ca may be formed between the dummy electrode DE and the sensing electrode SP. When an external input TC is in contact with or close to the sensor layer 200 (see FIG. 1), a sensing capacitor Ct may be formed between the external input TC and the sensing electrode SP. The electronic device 1000 (see FIG. 1) may determine whether a touch by the external input TC has occurred, and the touch location, on the basis of the amount of change in the capacitance of the sensing capacitor Ct. According to an embodiment of the inventive concept, a first noise signal generated in the common electrode CE may be transferred to the dummy electrode DE through the second parasitic capacitor Cc. The first noise signal transferred to the dummy electrode DE may be removed through the ground electrode GE. In addition, a second noise signal which is generated in the dummy electrode DE may be removed through the ground electrode GE. Accordingly, the external input TC may be sensed based on the amount of change in the capacitance of the sensing capacitor Ct, from which the noise has been reduced or removed. Accordingly, the electronic device 1000 (see FIG. 1) with enhanced sensing sensitivity may be provided.

FIG. 13 is a cross-sectional view illustrating a sensor layer according to an embodiment of the inventive concept. FIG. 14A is a plan view illustrating a first conductive layer according to an embodiment of the inventive concept, FIG. 14B is a plan view illustrating a second conductive layer according to an embodiment of the inventive concept, and FIG. 15 is a plan view illustrating a region corresponding to region AA′ of FIG. 1 according to an embodiment of the inventive concept. In describing FIG. 13, the same reference numerals or symbols are used for the components described with reference to FIG. 7, and thus, a description thereof may be omitted.
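A simple way to picture the detection scheme of FIG. 12 is a threshold test on the change of the sensing capacitance Ct after the parasitic noise paths (through Cc and the dummy electrode DE) have been drained to the ground electrode GE. The sketch below only illustrates that decision; the margin, the helper name, and the sample values are assumptions, not taken from the patent.

    def touch_detected(ct_baseline_ff, ct_measured_ff, noise_floor_ff=5.0):
        # With the common-electrode noise shunted to ground via the dummy
        # electrode, the decision reduces to: did Ct change by clearly more
        # than the residual noise floor?
        delta = abs(ct_measured_ff - ct_baseline_ff)
        return delta > 3.0 * noise_floor_ff

    print(touch_detected(434.0, 397.0))  # True: a 37 fF change clears the margin
    print(touch_detected(434.0, 432.0))  # False: within the noise floor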
Referring to FIGS. 13 to 15, a sensor layer 200-1 may include a base layer 201, a first conductive layer 202-1, a sensing insulating layer 203, a second conductive layer 204-1, and a cover insulating layer 205. The first conductive layer 202-1 may be disposed on the base layer 201. The first conductive layer 202-1 may include a first sensing electrode TE1-1. The sensing insulating layer 203 may be disposed on the first conductive layer 202-1. The first sensing electrode TE1-1 may include a first portion P1-3 and a second portion P2-3. The first portion P1-3 may be disposed adjacent to the plurality of first pixel regions PXA1 and the plurality of third pixel regions PXA3. The first portion P1-3 may surround the plurality of first pixel regions PXA1 and the plurality of third pixel regions PXA3. When seen on a plane, the second portion P2-3 may extend in the first direction DR1 between a pixel group BPA and another pixel group BPA adjacent thereto. The second portion P2-3 may be provided integrally with the first portion P1-3. The second conductive layer 204-1 may be disposed on the sensing insulating layer 203. The second conductive layer 204-1 may include a second sensing electrode TE2-1. The second sensing electrode TE2-1 may be disposed adjacent to the plurality of first pixel regions PXA1 and the plurality of third pixel regions PXA3. When seen on a plane, an opening 202-H overlapping the second sensing electrode TE2-1 may be provided in the first portion P1-3. The opening 202-H may be provided in plural. The plurality of openings 202-H may reduce the area in which the first sensing electrode TE1-1 and the second sensing electrode TE2-1 overlap. The plurality of openings 202-H may prevent the occurrence of excessive capacitance between the first sensing electrode TE1-1 and the second sensing electrode TE2-1. Mutual capacitance between the first sensing electrode TE1-1 and the second sensing electrode TE2-1 may be controlled by the plurality of openings 202-H, and the sensor layer 200-1 may sense an external input on the basis of the amount of change in the mutual capacitance. Accordingly, the electronic device 1000 (see FIG. 1) with enhanced sensing sensitivity may be provided.

According to embodiments of the inventive concept described above, due to a second portion formed between pixel groups adjacent to each other, a mutual capacitance between a plurality of first sensing electrodes and a plurality of second sensing electrodes which are each composed of a first portion and a second portion may increase. The amount of change in the mutual capacitance may be greater as the mutual capacitance becomes larger. Accordingly, sensing sensitivity of a sensor layer may be enhanced. In addition, according to embodiments of the inventive concept described above, due to second portions formed between pixel groups adjacent to each other, uniformity in distances between pixel-defining films and the second portions, which are metal layers, may be improved. The white angular dependency (WAD) may thus be mitigated. Accordingly, an electronic device with improved display performance may be provided. In the above, a description has been made with reference to embodiments of the inventive concept, but those skilled in the art or those of ordinary skill in the relevant technical field may understand that various modifications and changes may be made to the inventive concept within the scope of the inventive concept as described in the claims.
Therefore, the scope of the inventive concept is not limited to the contents described in the detailed description of the specification. | 66,867 |
11861127 | DETAILED DESCRIPTION In this specification, when an element (or a region, a layer, a portion, or the like) is referred to as “being on”, “being connected to”, or “being coupled to” another element, it may be directly located/connected/coupled to another element, or an intervening third element may also be located therebetween. Like numbers or symbols refer to like elements throughout. Also, in the drawings, the thicknesses, ratios, and dimensions of the elements are exaggerated for effective description of the technical contents. “And/or” includes one or more combinations which may be defined by the associated elements. Although the terms first, second, etc. may be used to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first element may be referred to as a second element, and similarly, a second element may also be referred to as a first element without departing from the scope of the present disclosure. The singular forms include the plural forms as well, unless the context clearly indicates otherwise. Also, terms such as “below”, “on lower side”, “above”, “on upper side”, or the like may be used to describe the relationships of the elements illustrated in the drawings. These terms are relative concepts and are described based on the directions indicated in the drawings. It will be understood that the term “includes” or “comprises”, when used in this specification, specifies the presence of stated features, integers, steps, operations, elements, components, or a combination thereof, but does not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or combinations thereof. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the present disclosure belongs. Also, terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and should not be interpreted in an idealized or overly formal sense unless expressly so defined herein. Hereinafter, aspects of some embodiments of the present disclosure will be described in more detail with reference to the drawings.

FIG. 1 is a perspective view of an electronic device according to some embodiments of the present disclosure. Referring to FIG. 1, an electronic device 1000 may be a device activated in response to an electrical signal. For example, the electronic device 1000 may be a mobile phone, a tablet PC, a vehicle navigation unit, a game console, or a wearable apparatus, but embodiments of the present disclosure are not limited thereto. In FIG. 1, the electronic device 1000 is illustrated as a mobile phone, but embodiments according to the present disclosure are not limited thereto. The electronic device 1000 may display images through an active region 1000A. The active region 1000A may include a plane defined by a first direction DR1 and a second direction DR2. A thickness direction of the electronic device 1000 may be parallel to a third direction DR3 that crosses the first direction DR1 and the second direction DR2. Thus, a front surface (or upper surface) and a rear surface (or lower surface) of each member constituting the electronic device 1000 may be defined based on the third direction DR3.
The electronic device 1000 may sense inputs applied from the outside of the electronic device 1000. The external input may be an input of a user. The input of the user may include various types of external inputs such as a portion of the user's body, light, heat, or pressure. The electronic device 1000 illustrated in FIG. 1 may sense an input by a user's touch and an input by an input device 2000. The input device 2000 may refer to a device other than the user's body. For example, the input device 2000 may be an active pen, a stylus pen, a touch pen, or an electronic pen. Hereinafter, an example in which the input device 2000 is an active pen will be described. The electronic device 1000 and the input device 2000 may be capable of bidirectional communication. The electronic device 1000 may provide an uplink signal to the input device 2000. For example, the uplink signal may include a synchronization signal or information of the electronic device 1000, but is not particularly limited thereto. The input device 2000 may provide a downlink signal to the electronic device 1000. The downlink signal may include a synchronization signal or status information of the input device 2000. For example, the downlink signal may include coordinate information of the input device 2000, battery information of the input device 2000, inclination information of the input device 2000, and/or various information stored in the input device 2000, and the like, but is not particularly limited thereto.

FIG. 2 is a block diagram of an electronic device and an input device according to some embodiments of the present disclosure. Referring to FIG. 2, the electronic device 1000 may include a display panel 100 and an input sensor 200. The display panel 100 may be configured to substantially generate an image. The display panel 100 may be a light-emitting display layer. For example, the display panel 100 may be an organic light-emitting display layer, a quantum dot display layer, a micro LED display layer, or a nano LED display layer. The input sensor 200 may be located on the display panel 100. The input sensor 200 may sense an external input applied from the outside. The input sensor 200 may sense both an input by a user's body 3000 and an input by the input device 2000. An input region of the user's body 3000 may have a first width WE1. The input sensor 200 may be operated in a time-division driving manner. For example, the input sensor 200 may be repeatedly driven alternately in a first mode and a second mode. The first mode may be a mode for sensing an input by the user's body 3000, and the second mode may be a mode for sensing an input by the input device 2000. When the second mode starts, the input sensor 200 may provide an uplink signal ULS to the input device 2000. When the input device 2000 receives the uplink signal ULS and is synchronized with the electronic device 1000, the input device 2000 may provide a downlink signal DLS toward the input sensor 200. The input device 2000 may include a power source 2100, a memory 2200, a controller 2300, a transmitter 2400, a receiver 2500, and a pen electrode 2600. However, components constituting the input device 2000 are not limited to the components listed above. For example, the input device 2000 may further include an electrode switch for switching the pen electrode 2600 to a signal transmission mode or a signal reception mode, a pressure sensor for sensing pressure, or a rotation sensor for sensing rotation, etc.
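The time-division driving described above can be summarized as a loop that alternates the two modes within each sensing frame, transmitting the uplink signal ULS at the start of the second mode and then listening for the downlink signal DLS. The sketch below is a schematic illustration with print stubs standing in for the actual driving circuitry; none of the function names come from the patent.

    def run_input_sensor(frames=2):
        for frame in range(frames):
            # First mode: scan the electrode grid for an input by the user's body.
            print(f"frame {frame}: first mode - scan for body touch")
            # Second mode: synchronize with the active pen, then receive from it.
            print(f"frame {frame}: second mode - transmit ULS (sync + panel info)")
            print(f"frame {frame}: second mode - listen for DLS (coordinates, battery, tilt)")

    run_input_sensor()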
The input sensor200may acquire coordinates of the input device2000through the pen electrode2600, and the input sensor200may acquire an inclination of the input device2000through the pen electrode2600. An input region of the pen electrode2600may have a second width WE2. The second width WE2of the input region of the pen electrode2600may be smaller than the first width WE1of the input region of the user's body3000. The power source2100may include a high-capacity capacitor or a battery for supplying power to the input device2000. The memory2200may store function information of the input device2000. The controller2300may control the operation of the input device2000. Each of the transmitter2400and the receiver2500may communicate with the electronic device1000through the pen electrode2600. The transmitter2400may be referred to as a signal generator or a transmitting circuit, and the receiver2500may be referred to as a signal receiver or a receiving circuit. FIG.3is a cross-sectional view of an electronic device according to some embodiments of the present disclosure. Referring toFIG.3, the display panel100may include a base layer110, a circuit layer120, a light-emitting element layer130, and an encapsulation layer140. The base layer110may be a member that provides a base surface on which the circuit layer120is located. The base layer110may be a glass substrate, a metal substrate, or a polymer substrate. However, embodiments of the present disclosure are not limited thereto, and the base layer110may be an inorganic layer, an organic layer, or a composite material layer. The base layer110may have a multi-layer structure. For example, the base layer110may include a first synthetic resin layer, a silicon oxide (SiOx) layer located on the first synthetic resin layer, an amorphous silicon (a-Si) layer located on the silicon oxide layer, and a second synthetic resin layer located on the amorphous silicon layer. The silicon oxide layer and the amorphous silicon layer may be referred to as a base barrier layer. Each of the first and second synthetic resin layers may include a polyimide-based resin. In addition, each of the first and second synthetic resin layers may include at least one of an acrylic resin, a methacrylic resin, a polyisoprene-based resin, a vinyl-based resin, an epoxy-based resin, a urethane-based resin, a cellulose-based resin, a siloxane-based resin, a polyamide-based resin, or a perylene-based resin. Meanwhile, in this specification, a "˜-based" resin may be considered as including a functional group of "˜". The circuit layer120may be located on the base layer110. The circuit layer120may include an insulating layer, a semiconductor pattern, a conductive pattern, a signal line, and the like. An insulating layer, a semiconductor layer, and a conductive layer are formed on the base layer110through coating, deposition, or the like, and subsequently, the insulating layer, the semiconductor layer, and the conductive layer may be selectively patterned by performing a photolithography process multiple times. Then, the semiconductor pattern, the conductive pattern, and the signal line included in the circuit layer120may be formed. At least one inorganic layer is formed on the upper surface of the base layer110. The inorganic layer may include at least one of aluminum oxide, titanium oxide, silicon oxide, silicon oxynitride, zirconium oxide, or hafnium oxide. The inorganic layer may be formed as multiple layers.
The multi-layered inorganic layers may constitute a barrier layer and/or a buffer layer. According to some embodiments, the display panel100is illustrated as including a buffer layer BFL. The buffer layer BFL may enhance a bonding force between the base layer110and the semiconductor pattern. The buffer layer BFL may include a silicon oxide layer and a silicon nitride layer, and the silicon oxide layer and the silicon nitride layer may be alternately stacked. The semiconductor pattern may be located on the buffer layer BFL. The semiconductor pattern may include polysilicon. However, embodiments of the present disclosure are not limited thereto, and the semiconductor pattern may include amorphous silicon or a metal oxide. FIG.3illustrates only a portion of a semiconductor pattern, and a semiconductor pattern may be further located in another region. The semiconductor pattern may be arranged according to a specific rule over the pixels. The semiconductor pattern may have different electrical properties depending on whether the semiconductor pattern is doped or not. The semiconductor pattern may include a first region having high conductivity and a second region having low conductivity. The first region may be doped with an N-type dopant or a P-type dopant. A P-type transistor may include a doped region which is doped with the P-type dopant, and an N-type transistor may include a doped region which is doped with the N-type dopant. The second region may be an undoped region, or a region doped at a lower concentration than that of the first region. The first region may have higher conductivity than the second region, and may substantially serve as an electrode or a signal line. The second region may substantially correspond to an active region (or channel) of the transistor. That is, a portion of the semiconductor pattern may be an active of the transistor, another portion may be a source or a drain of the transistor, and the other portion may be a connection electrode or a connection signal line. Each of the pixels may have an equivalent circuit including seven transistors, one capacitor, and a light-emitting element, and an equivalent circuit diagram of the pixel may be modified in various forms.FIG.3illustrates one transistor100PC and a light-emitting element100PE included in the pixel, but embodiments according to the present disclosure are not limited thereto. A source SC1, an active A1, and a drain D1of the transistor100PC may be formed from a semiconductor pattern. The source SC1and the drain D1may extend in opposite directions from the active A1on a cross-section. A portion of a connection signal line SCL formed from the semiconductor pattern is illustrated inFIG.3. According to some embodiments, the connection signal line SCL may be electrically connected to the drain D1of the transistor100PC on a plane. A first insulating layer10may be located on the buffer layer BFL. The first insulating layer10may overlap the plurality of pixels in common and cover the semiconductor pattern. The first insulating layer10may be an inorganic layer and/or an organic layer, and may have a single- or multi-layered structure. The first insulating layer10may include at least one of aluminum oxide, titanium oxide, silicon oxide, silicon nitride, silicon oxynitride, zirconium oxide, or hafnium oxide. According to some embodiments, the first insulating layer10may be a single-layered silicon oxide layer.
An insulating layer of the circuit layer120to be described later as well as the first insulating layer10may be an inorganic layer and/or an organic layer, and may have a single-layer or multilayer structure. The inorganic layer may include at least one of the above-described materials, but embodiments according to the present disclosure are not limited thereto. The gate G1of the transistor100PC is located on the first insulating layer10. The gate G1may be a portion of a metal pattern. The gate G1overlaps the active A1. In the process of doping the semiconductor pattern, the gate G1may function as a mask. A second insulating layer20may be located on the first insulating layer10and may cover the gate G1. The second insulating layer20may overlap the pixels in common. The second insulating layer20may be an inorganic layer and/or an organic layer, and may have a single- or multi-layered structure. According to some embodiments, the second insulating layer20may be a single-layered silicon oxide layer. A third insulating layer30may be located on the second insulating layer20, and according to some embodiments, the third insulating layer30may be a single silicon oxide layer. A first connection electrode CNE1may be located on the third insulating layer30. The first connection electrode CNE1may be connected to the connection signal line SCL through a contact hole CNT-1that passes through the first, second, and third insulating layers10,20, and30. A fourth insulating layer40may be located on the third insulating layer30. The fourth insulating layer40may be a single-layered silicon oxide layer. A fifth insulating layer50may be located on the fourth insulating layer40. The fifth insulating layer50may be an organic layer. A second connection electrode CNE2may be located on the fifth insulating layer50. The second connection electrode CNE2may be connected to the first connection electrode CNE1through a contact hole CNT-2that passes through the fourth insulating layer40and the fifth insulating layer50. A sixth insulating layer60may be located on the fifth insulating layer50and may cover the second connection electrode CNE2. The sixth insulating layer60may be an organic layer. The light-emitting element layer130may be located on the circuit layer120. The light-emitting element layer130may include a light-emitting element. For example, the light-emitting element layer130may include an organic light-emitting material, a quantum dot, a quantum rod, a micro LED, or a nano LED. The light-emitting element100PE may include a first electrode AE, a light-emitting layer EL, and a second electrode CE. The first electrode AE may be located on the sixth insulating layer60. The first electrode AE may be connected to the second connection electrode CNE2via a contact hole CNT-3that passes through the sixth insulating layer60. A pixel defining film70may be located on the sixth insulating layer60and may cover a portion of the first electrode AE. An opening70-OP may be defined in the pixel defining film70. The opening70-OP of the pixel defining film70may expose at least a portion of the first electrode AE. According to some embodiments, the light-emitting region PXA is defined corresponding to a partial region of the first electrode AE exposed by the opening70-OP. A non-light-emitting region NPXA may surround the light-emitting region PXA. The light-emitting layer EL may be located on the first electrode AE. The light-emitting layer EL may be located in the opening70-OP.
That is, the light-emitting layer EL may be separately formed in each of the pixels. When the light-emitting layer EL is separately formed in each of the pixels, each of the light-emitting layers EL may emit light of at least one color among blue, red, and green. However, embodiments of the present disclosure are not limited thereto, and the light-emitting layer EL may be connected to the pixels and provided in common. In this case, the light-emitting layer EL may provide blue light or white light. The second electrode CE may be located on the light-emitting layer EL. The second electrode CE may have an integral shape and may be located, in common, in the plurality of pixels. A common voltage may be applied to the second electrode CE, and the second electrode CE may be referred to as a common electrode. According to some embodiments, a hole control layer may be located between the first electrode AE and the light-emitting layer EL. The hole control layer may be formed as a common layer in the light-emitting region PXA and the non-light-emitting region NPXA. The hole control layer may include a hole transport layer and may further include a hole injection layer. An electron control layer may be located between the light-emitting layer EL and the second electrode CE. The electron control layer may include an electron transport layer and may further include an electron injection layer. The hole control layer and the electron control layer ECL may be formed, in common, in the plurality of pixels by using an open mask. The encapsulation layer140may be located on the light-emitting element layer130. The encapsulation layer140may include an inorganic layer, an organic layer, and an inorganic layer which are stacked in this order, but layers constituting the encapsulation layer140are not limited thereto. The inorganic layers may protect the light-emitting element layer130from moisture and oxygen, and the organic layer may protect the light-emitting element layer130from foreign substances such as dust particles. The inorganic layers may each include a silicon nitride layer, a silicon oxynitride layer, a silicon oxide layer, a titanium oxide layer, an aluminum oxide layer, or the like. The organic layer may include an acrylic organic layer, but embodiments of the present disclosure are not limited thereto. The input sensor200may be formed on the display panel100through a continuous process. In this case, the input sensor200may be expressed as being located directly on the display panel100. Being located directly may mean that a third component is not located between the input sensor200and the display panel100. That is, a separate adhesive member may not be located between the input sensor200and the display panel100. In this case, the thickness of the electronic device1000may be thinner. The input sensor200may include a base insulating layer201, a first conductive layer202, a sensing insulating layer203, a second conductive layer204, and a cover insulating layer205. The base insulating layer201may be an inorganic layer including at least one of silicon nitride, silicon oxynitride, or silicon oxide. Alternatively, the base insulating layer201may be an organic layer including an epoxy resin, an acryl resin, or an imide-based resin. The base insulating layer201may have a single-layer structure or a multilayer structure in which layers are stacked along the third direction DR3. 
Each of the first conductive layer202and the second conductive layer204may have a single-layer structure or a multilayer structure in which layers are stacked along the third direction DR3. The conductive layer having a single-layer structure may include a metal layer or a transparent conductive layer. The metal layer may include molybdenum, silver, titanium, copper, aluminum, or an alloy thereof. The transparent conductive layer may include a transparent conductive oxide such as indium tin oxide (ITO), indium zinc oxide (IZO), zinc oxide (ZnO) or indium zinc tin oxide (IZTO). In addition, the transparent conductive layer may include a conductive polymer such as PEDOT, metal nanowires, graphene, or the like. The conductive layer having a multi-layered structure may include metal layers. The metal layers may have, for example, a three-layer structure of titanium/aluminum/titanium. The conductive layer having a multi-layered structure may include at least one metal layer and at least one transparent conductive layer. At least one of the sensing insulating layer203or the cover insulating layer205may include an inorganic film. The inorganic film may include at least one of aluminum oxide, titanium oxide, silicon oxide, silicon oxynitride, zirconium oxide, or hafnium oxide. A parasitic capacitance Cb may be generated between the input sensor200and the second electrode CE. As the distance between the input sensor200and the second electrode CE decreases, the parasitic capacitance Cb may increase. As the parasitic capacitance Cb increases, the ratio of the amount of change in capacitance to a reference value may become smaller. The amount of change in capacitance may mean a change in capacitance that occurs before and after input by an input means such as an input device2000(seeFIG.2) or the user's body3000(seeFIG.2). A driving chip that processes the signal sensed from the input sensor200may perform a leveling operation in which a value corresponding to the parasitic capacitance Cb is removed from the sensed signal. By the leveling operation, the ratio of the amount of change in capacitance to the reference value may be increased, so that sensing sensitivity may be improved. Depending on the specifications of the driving chip, however, there may be a difference in the ability to remove a value corresponding to the parasitic capacitance Cb. For example, if the maximum parasitic capacitance Cb is about 500 pF and the capacitance value that may be removed by the driving chip from the signal sensed from the input sensor200is about 200 pF, the reference value may not be sufficiently lowered by the driving chip. In this case, since the ratio of the amount of change in capacitance is insignificant compared to the reference value, the driving chip may recognize the amount of change in capacitance as noise or fail to recognize the amount of change in capacitance, resulting in a malfunction of failing to detect touch coordinates. According to some embodiments of the present disclosure, it may be possible to keep the maximum value of the parasitic capacitance Cb below a predetermined value by modifying the electrode structure of the input sensor200. In this case, even when the performance of the driving chip is relatively low, the coordinate recognition accuracy may be improved. The predetermined value may be about 200 pF, but is not particularly limited thereto. FIG.4is a plan view illustrating a sensor layer according to some embodiments of the present disclosure.
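As a numeric aside, the leveling arithmetic above can be illustrated directly. The sketch below assumes only what the text states: the driving chip subtracts a removable capacitance from the sensed baseline, and whatever remains becomes the reference value against which the touch-induced change is compared. The function name and the 1 pF change are hypothetical; the 500 pF / 200 pF values come from the example in the text.

```python
def leveled_ratio(parasitic_pf, delta_pf, removable_pf):
    """Return the change-to-reference ratio after baseline removal."""
    reference = max(parasitic_pf - removable_pf, 0.0)  # part the chip cannot remove
    if reference == 0.0:
        return float("inf")  # baseline fully removed; the change dominates
    return delta_pf / reference

# A 500 pF parasitic capacitance with a chip that removes only 200 pF leaves a
# 300 pF reference, so a 1 pF change yields a tiny ratio that risks being
# treated as noise:
print(leveled_ratio(500.0, 1.0, 200.0))   # ~0.0033
# Keeping the maximum parasitic capacitance below the removable 200 pF lets the
# baseline be fully removed, so the same 1 pF change is easy to recognize:
print(leveled_ratio(180.0, 1.0, 200.0))   # inf (reference driven to zero)
```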
Referring toFIG.4, the input sensor200may include an active region200A and a peripheral region200N. The active region200A may be a region which is activated in response to an electrical signal. For example, the active region200A may be a region which senses an input. The peripheral region200N may surround the active region200A. The input sensor200may include a base insulating layer201, a plurality of unit sensors210, and a plurality of wires220. The plurality of unit sensors210may be located in the active region200A. The plurality of wires220may be located in the peripheral region200N. The plurality of unit sensors210may be arranged along the first direction DR1and the second direction DR2. The plurality of wires220may be electrically connected to the plurality of unit sensors210. The plurality of unit sensors210may have a first pitch PC1. The first pitch PC1of the plurality of unit sensors210may be smaller than the first width WE1(seeFIG.2) of the input region of the user's body3000(seeFIG.2). The first pitch PC1may be about 3.5 mm (millimeter) to about 4.5 mm. For example, the first pitch PC1may be about 4 mm. According to some embodiments, an area of each of the plurality of unit sensors210may be smaller than an area of the input region of the user's body3000(seeFIG.2). Accordingly, the input sensor200may accurately sense the coordinates input by the user's body3000(seeFIG.2). The plurality of unit sensors210may include a first sensing electrode and a second sensing electrode crossing the first sensing electrode. The input sensor200may operate in a first mode in which information about an external input is obtained through a change in mutual capacitance between sensing electrodes included in the plurality of unit sensors210or in a second mode in which an input by the input device2000(seeFIG.2) is sensed through a change in capacitance of each of the sensing electrodes included in the plurality of unit sensors210. According to some embodiments, when the input device2000approaches the input sensor200, the input sensor200may enter a second mode for sensing the input device2000. The input device2000may transmit/receive data to/from a sensor controller through the input sensor200. In the second mode, each of the plurality of first sensing electrodes211and the plurality of second sensing electrodes212may be used as a transmitting electrode for supplying uplink signals ULS provided from the sensor controller to the input device2000. In the second mode, each of the plurality of first sensing electrodes211and the plurality of second sensing electrodes212may be used as a receiving electrode for supplying downlink signals DLS provided from the input device2000to the sensor controller of the input sensor200. That is, in the second mode, the plurality of first sensing electrodes211and the plurality of second sensing electrodes212may be all used as transmitting electrodes or may be used as receiving electrodes. FIG.5is a view illustrating a unit sensor according to some embodiments of the present disclosure. Referring toFIG.5, the unit sensor210may include a first sensing electrode211, a second sensing electrode212, a first dummy electrode213, and a second dummy electrode214. The first sensing electrode211may extend in the first direction DR1in a bar shape and may be provided in plurality. The second sensing electrode212may extend in the second direction DR2in a bar shape and may be provided in plurality. The first sensing electrode211and the second sensing electrode212may cross each other. 
A plurality of the first sensing electrodes211may be arranged in the second direction DR2. A plurality of second sensing electrodes212may be arranged in the first direction DR1. InFIG.5, three first sensing electrodes211and three second sensing electrodes212may be located in one unit sensor210. However, embodiments according to the present disclosure are not limited thereto. The first sensing electrodes211may sense movement of the input device2000in the second direction DR2. The second sensing electrodes212may sense movement of the input device2000in the first direction DR1. The first sensing electrode211may include a plurality of first sensing units SP1and a plurality of first connection units CP1. Each of the plurality of first connection units CP1may connect two adjacent first sensing units SP1among the plurality of first sensing units SP1. The second sensing electrode212may include a plurality of second sensing units SP2and a plurality of second connection units CP2. Each of the plurality of second connection units CP2may connect two adjacent second sensing units SP2among the plurality of second sensing units SP2. A plurality of first openings OP1may be defined between the first sensing units SP1and the second sensing units SP2. Second openings OP2may be respectively defined in the first sensing units SP1and the second sensing units SP2. The first sensing units SP1, the second sensing units SP2, and the first connection units CP1may be located on the same layer. The second connection units CP2may be located on a layer different from layers on which the first sensing units SP1, the second sensing units SP2, and the first connection units CP1are located. According to some embodiments, the first sensing units SP1, the second sensing units SP2, and the first connection units CP1may be located on the second conductive layer204(seeFIG.3). The second connection units CP2may be located on the first conductive layer202. Each of the second connection units CP2may include a first bridge pattern BRP1. The first bridge pattern BRP1may be provided in plurality. InFIG.5, two first bridge patterns BRP1may be provided. However, this is only an example, and one first bridge pattern BRP1or two or more first bridge patterns BRP1may be provided. The first dummy electrode213may be located in the first openings OP1. The second dummy electrode214may be located in the second openings OP2. The first dummy electrode213and the second dummy electrode214may float to be electrically insulated from the first sensing electrode211and the second sensing electrode212. The first dummy electrode213and the second dummy electrode214may be patterned on the same layer as the first sensing electrode211and the second sensing electrode212. InFIG.5, when the input device2000moves in the second direction DR2, the movement may be sensed by the first sensing electrodes211. In this regard, details will be described later. FIGS.6A and6Bare views illustrating a partial region of an input sensor according to some embodiments of the present disclosure.FIGS.7A and7Bare enlarged views of the partial region ofFIG.6according to some embodiments of the present disclosure.FIG.8Ais an enlarged view of the partial region ofFIG.7Baccording to some embodiments of the present disclosure.FIG.8Bis a cross-sectional view taken along the line AA′ ofFIG.8A.
FIG.6Ais an enlarged plan view illustrating an X1region ofFIG.4.FIG.6Bis a view illustrating regions ofFIG.6A.FIG.7Ais an enlarged plan view illustrating AA′ ofFIG.6.FIG.7Bis an enlarged plan view illustrating BB′ ofFIG.6. Referring toFIGS.6A and6B, the input sensor200may include a plurality of unit sensors210located adjacent to each other. The plurality of unit sensors210may be located in a plurality of unit sensor regions UA.FIG.6Aillustrates that nine unit sensors210are located in each of unit sensor regions. According to some embodiments, the plurality of unit sensors210may include a first unit sensor210a, a second unit sensor210b, and a third unit sensor210c. The first unit sensor210amay be located in a first unit sensor region UA1, the second unit sensor210bmay be located in a second unit sensor region UA2, and the third unit sensor210cmay be located in a third unit sensor region UA3. The first unit sensor210aand the second unit sensor210bare adjacent to each other in the first direction DR1. The first unit sensor210aand the third unit sensor210care adjacent to each other in the second direction DR2. Referring toFIGS.6B,7A, and7B, the plurality of unit sensor regions UA may include a first region OA1and a second region OA2. The first region OA1may correspond to a region in which the first unit sensor region UA1and the second unit sensor region UA2overlap. The second region OA2may correspond to a region in which the first unit sensor region UA1and the second unit sensor region UA3overlap. That is, a portion of the first unit sensor210aand a portion of the second unit sensor210bmay be located in the first region OA1. A portion of the first unit sensor210aand a portion of the third unit sensor210cmay be located in the second region OA2. Each of the plurality of unit sensors210may further include a first additional sensing electrode ASE1and a second additional sensing electrode ASE2. That is, the first unit sensor210a, the second unit sensor210b, and the third unit sensor210cmay each include the first additional sensing electrode ASE1and the second additional sensing electrode ASE2. The first additional sensing electrode ASE1may be connected to the first sensing electrode211. The second additional sensing electrode ASE2may be connected to the second sensing electrode212. The first additional sensing electrode ASE1may be located in the second region OA2. The second additional sensing electrode ASE2may be located in the first region OA1. InFIG.7A, a second additional sensing electrode ASE2aof the first unit sensor210aand a second additional sensing electrode ASE2bof the second unit sensor210bmay be located in the first region OA1. The first unit sensor210aand the second unit sensor210bmay be symmetrical to each other with respect to a sensor boundary BDR. InFIG.7B, a first additional sensing electrode ASE1aof the first unit sensor210aand a first additional sensing electrode ASE1cof the third unit sensor210cmay be located in the second region OA2. The first unit sensor210aand the third unit sensor210cmay be symmetrical to each other with respect to the sensor boundary BDR. InFIGS.7A and7B, first sensing electrodes211aof the first unit sensor210amay be electrically connected to first dummy electrodes213cof the third unit sensor210c. Second sensing electrodes212aof the first unit sensor210amay be electrically connected to first dummy electrodes213bof the second unit sensor210b. 
The first sensing electrodes211aof the first unit sensor210amay be electrically connected to first adjacent dummy electrodes adjacent to the first unit sensor210aamong the first dummy electrodes213cof the third unit sensor210c. The second sensing electrodes212aof the first unit sensor210aare connected to first adjacent dummy electrodes adjacent to the first unit sensor210aamong the first dummy electrodes213bof the second unit sensor210b. Here, the first adjacent dummy electrodes may be the first additional sensing electrodes ASE1or the second additional sensing electrodes ASE2. That is, the first adjacent dummy electrodes of the third unit sensor210celectrically connected to the first sensing electrodes211aof the first unit sensor210amay be first additional sensing electrodes ASE1aof the first unit sensor210a. The first adjacent dummy electrodes of the second unit sensor210belectrically connected to the second sensing electrodes212aof the first unit sensor210amay be second additional sensing electrodes ASE2aof the first unit sensor210a. According to some embodiments, the first sensing electrode211and the second sensing electrode212of the unit sensor210may be electrically connected to dummy electrodes of another adjacent unit sensor to form the first additional sensing electrode ASE1and the second additional sensing electrode ASE2. That is, the first additional sensing electrode ASE1and the second additional sensing electrode ASE2may be located in the first openings OP1. FIG.8Ais an enlarged view of a region XX′ ofFIG.7B.FIG.8Bis a cross-sectional view taken along line AA′ ofFIG.8A. Referring toFIGS.8A and8B, the unit sensor210may include a second bridge pattern BRP2. The second bridge pattern BRP2may connect the first sensing electrode211and the first additional sensing electrode ASE1. The second bridge pattern BRP2may connect the second sensing electrode212and the second additional sensing electrode ASE2. The second bridge pattern BRP2may be patterned on a layer on which the first bridge pattern BRP1is located. According to some embodiments, the first sensing electrode211may include a first connection pattern CPT1, and the second sensing electrode212may include a second connection pattern CPT2. The first connection pattern CPT1may extend from the first sensing electrode211most adjacent to the first additional sensing electrode ASE1among the plurality of first sensing electrodes211. The second connection pattern CPT2may extend from the second sensing electrode212most adjacent to the second additional sensing electrode ASE2among the plurality of second sensing electrodes212. The second bridge pattern BRP2connects the first connection pattern CPT1located on a sensor boundary BDR to the first additional sensing electrode ASE1. The second bridge pattern BRP2connects the second connection pattern CPT2and the second additional sensing electrode ASE2. The first connection pattern CPT1may be directly connected to the first additional sensing electrode ASE1and thus the first sensing electrode211and the first additional sensing electrode ASE1may be electrically connected. The second connection pattern CPT2may be directly connected to the second additional sensing electrode ASE2and thus the second sensing electrode212and the second additional sensing electrode ASE2may be electrically connected. 
The first connection pattern CPT1and the second connection pattern CPT2may be located on the same layer as the first sensing electrode211and the second sensing electrode212, and may be located on a layer different from the layer on which the second bridge pattern BRP2is located. For example, the second bridge pattern BRP2may connect the first sensing electrode211or the second sensing electrode212with the first connection pattern CPT1or the second connection pattern CPT2. The second bridge pattern BRP2may connect the first or second connection patterns CPT1or CPT2with the first additional sensing electrode ASE1or the second additional sensing electrode ASE2. As a result, the second bridge pattern BRP2may connect the first or second sensing electrodes211or212with the first or second additional sensing electrodes ASE1or ASE2. InFIG.8A, the second bridge pattern BRP2connects the first sensing electrodes211aand the first connection pattern CPT1and connects the first connection pattern CPT1and the first additional sensing electrode ASE1a. InFIG.8B, the second bridge pattern BRP2may be located on the base insulating layer201. The first sensing electrode211, the first connection pattern CPT1, and the first additional sensing electrode ASE1may be located on the sensing insulating layer203. The second bridge pattern BRP2may be electrically connected to the first sensing electrode211, the first connection pattern CPT1, and the first additional sensing electrode ASE1through contact holes. A duplicate description made with reference toFIG.3in relation to other configurations will be omitted. As illustrated inFIG.5, the unit sensor210may include a plurality of second dummy electrodes214respectively arranged in the second opening OP2. The plurality of second dummy electrodes214may include a second adjacent dummy electrode adjacent to another adjacent unit sensor. According to some embodiments, the first additional sensing electrode ASE1and the second additional sensing electrode ASE2may be located in the second opening OP2. That is, the first additional sensing electrode ASE1and the second additional sensing electrode ASE2located in the first opening OP1may extend to the second opening OP2. Referring toFIGS.5and7A, the second additional sensing electrodes ASE2aand ASE2bmay be connected to the second dummy electrode214located in the second opening OP2. The second additional sensing electrodes ASE2aand ASE2bmay be electrically connected to adjacent second adjacent dummy electrodes214among the plurality of second dummy electrodes214. The second additional sensing electrodes ASE2aand ASE2bmay be extended by being connected to the second adjacent dummy electrode through bridge patterns. Detailed description will be made with reference toFIG.10. FIG.9is a view illustrating a partial region of an input sensor according to some embodiments of the present disclosure.FIG.10is a view illustrating a partial region of an input sensor according to some embodiments of the present disclosure. FIG.9illustrates a state in which the unit sensor is extended due to the first additional sensing electrode ASE1and the second additional sensing electrode ASE2in the X1region. Unlike the X1region ofFIG.9,FIG.10illustrates that unit sensors each including an extended first additional sensing electrode ASE1-1and an extended second additional sensing electrode ASE2-1are located.
InFIG.9, the first additional sensing electrode ASE1and the second additional sensing electrode ASE2may be located only in the first opening OP1(seeFIG.5). Referring toFIG.10, the first additional sensing electrode ASE1-1and the second additional sensing electrode ASE2-1may also be located in the second opening OP2(seeFIG.5). That is, the first additional sensing electrode ASE1-1and the second additional sensing electrode ASE2-1may extend from the first opening OP1to the second opening OP2. According to some embodiments, the first additional sensing electrode ASE1-1and the second additional sensing electrode ASE2-1may be respectively arranged, among the second openings OP2, in the second opening OP2adjacent to the first sensing electrode211and the second opening OP2adjacent to the second sensing electrode212. The first additional sensing electrode ASE1and the second additional sensing electrode ASE2which are located in the first opening OP1and the first additional sensing electrode ASE1-1and the second additional sensing electrode ASE2-1which are located in the second opening OP2may be connected through third bridge patterns BRP3. The first additional sensing electrodes ASE1and ASE1-1and the second additional sensing electrodes ASE2and ASE2-1may be located on the same layer, and the third bridge pattern BRP3may be located on a layer different from the layer on which the first additional sensing electrodes ASE1and ASE1-1and the second additional sensing electrodes ASE2and ASE2-1are located. The third bridge pattern BRP3may be located on the same layer as the second bridge pattern BRP2(seeFIG.8B). Among the plurality of second openings OP2(seeFIG.5), the first or second additional sensing electrodes ASE1-1and ASE2-1may be located in the second openings OP2adjacent to the adjacent unit sensor, and the second dummy electrode214(seeFIG.5) may be located in the remaining second openings OP2. FIGS.11A to11Care graphs showing effects according to some embodiments of the present disclosure. FIG.11Aillustrates that the input device2000according to some embodiments moves on a plurality of unit sensors adjacent to each other. InFIG.11A, the input device2000may correspond to a pen such as an active pen or an electronic pen. The pen may move over a range of about −6 mm to about 6 mm. Here, a plurality of unit sensors210-1,210, and210-2are located within the range of about −6 mm to about 6 mm. The pen may sequentially pass through the plurality of unit sensors210-1,210, and210-2. The pen may pass by a first unit sensor210-1in a first section of about −6 mm to about −2 mm, may pass by a second unit sensor210in a second section of about −2 mm to about 2 mm, and may pass by a third unit sensor210-2in a third section of about 2 mm to about 6 mm. FIGS.11B and11Cshow the change in sensitivity of the pen at this time. That is,FIGS.11B and11Cshow whether the sensitivity of the pen, which is the input device2000, may be continuously maintained while the pen passes by the unit sensors210-1,210, and210-2adjacent to each other. First Comparative Example ST1 and Second Comparative Example ST2 show pen sensitivities according to a typical unit sensor structure. Example ST3 shows the pen sensitivity according to the unit sensor210(seeFIG.6A) of the input sensor200(seeFIG.3) according to some embodiments of the present disclosure.
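As an illustrative aside, the section boundaries described above map a pen position directly to the unit sensor expected to respond. The Python sketch below encodes only the stated −6 mm to 6 mm ranges; the function name is hypothetical.

```python
def section_of(position_mm):
    """Map a pen position (mm) to the section/unit sensor stated in the text."""
    if -6 <= position_mm < -2:
        return "first section (unit sensor 210-1)"
    if -2 <= position_mm < 2:
        return "second section (unit sensor 210)"
    if 2 <= position_mm <= 6:
        return "third section (unit sensor 210-2)"
    return "outside the measured range"

print(section_of(-3))  # first section
print(section_of(0))   # second section
print(section_of(4))   # third section
```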
InFIG.11B, in all of First Comparative Example ST1, Second Comparative Example ST2, and Example ST3, the pen sensitivity of the first unit sensor210-1in the first section becomes maximum at −3 mm. In all of First Comparative Example ST1, Second Comparative Example ST2, and Example ST3, the pen sensitivity in the second section becomes maximum at 0 mm, and the pen sensitivity in the third section becomes maximum at 4 mm. However, in the case of First Comparative Example ST1 and Second Comparative Example ST2, the pen sensitivity is not continuous between the first section and the second section and between the second section and the third section. However, in the case of Example ST3 of the inventive concept, when compared with the pen sensitivity of First Comparative Example ST1 and Second Comparative Example ST2, the pen sensitivity of Example ST3 continuously appears between the first section and the second section and also between the second section and the third section. When the pen sensitivity continuously appears between the first section and the second section and between the second section and the third section, the linearity of the input may be high or good. That is, some embodiments of the present disclosure include first and second additional electrodes between adjacent unit sensors to have high pen sensitivity even in a section between input sensors when the pen moves, and to increase input linearity. A description will be given below with reference toFIGS.11A,11B and11C. InFIGS.11B and11C, a pen sensitivity reference line may correspond to a minimum pen sensitivity for unit sensors to sense a pen. The pen sensitivity reference line corresponds to an arbitrary set value, and corresponds to 5 inFIGS.11B and11C, but is not necessarily limited thereto. In the case of First Comparative Example ST1, in the first section, the first unit sensor210-1senses the pen at about −3 mm, and the sensitivity decreases to about −1 mm. Thereafter, in the second section, the second unit sensor210does not sense the pen from about −2 mm to about −1 mm, but senses the pen at about 0 mm. In the third section, it may be seen that the third unit sensor210-2does not sense the pen from about 0 mm to about 2.4 mm, but senses the pen at about 2.5 to about 4 mm. That is, in First Comparative Example ST1, the input sensor includes a first dead section ST1-DZ between the plurality of unit sensors210-1,210, and210-2. For example, the first dead section ST1-DZ may correspond to a section from about 0 mm to about 2.4 mm between the second section and the third section. In this section, in First Comparative Example ST1, the sensitivity of the second unit sensor is decreased and the sensitivity of the third unit sensor is low, and thus the pen may not be sensed. In the case of Second Comparative Example ST2, the first unit sensor210-1senses the pen at about −3 mm in the first section, and in the second section, the pen sensitivity decreases to about −1 mm, where the second unit sensor210starts to sense the pen, and thus the first unit sensor210-1does not sense the pen. The second unit sensor210senses the pen at about 0 mm, and the pen sensitivity decreases to about 2 mm, where the third unit sensor210-2starts to sense the pen, and thus second unit sensor210does not sense the pen. That is, a second dead section ST2-DZ may correspond to a section from about 0 mm to about 2 mm. 
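The dead sections discussed here can be computed mechanically from sampled sensitivity curves: positions where no unit sensor reaches the reference line form a dead section. The sketch below uses the reference value 5 from the text; the position samples and sensitivity values are invented purely for illustration and do not reproduce the measured curves of FIGS. 11B and 11C.

```python
def dead_sections(positions_mm, curves, reference=5.0):
    """Return (start, end) spans where every curve is below the reference line."""
    spans, start = [], None
    for i, x in enumerate(positions_mm):
        dead = all(curve[i] < reference for curve in curves)
        if dead and start is None:
            start = x                      # a dead section begins
        elif not dead and start is not None:
            spans.append((start, x))       # a dead section ends
            start = None
    if start is not None:
        spans.append((start, positions_mm[-1]))
    return spans

positions = [0.0, 0.5, 1.0, 1.5, 2.0]      # pen position in mm (hypothetical samples)
second_unit = [9.0, 6.0, 3.0, 2.0, 1.0]    # decaying sensitivity of unit sensor 210
third_unit  = [1.0, 2.0, 3.0, 6.0, 9.0]    # rising sensitivity of unit sensor 210-2
print(dead_sections(positions, [second_unit, third_unit]))  # [(1.0, 1.5)]
```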
In the case of Example ST3 according to the present disclosure, the first unit sensor210-1senses the pen up to about −3 mm, and in the second section, the second unit sensor210senses the pen from about −2 mm to about 0 mm. In the second section, the pen sensitivity of the second unit sensor210decreases from about 0 mm. In the third section, the third unit sensor210-2starts to sense the pen from about 1 mm. That is, the third dead section ST3-DZ may correspond to the section from about 0 mm to about 1 mm. That is, the dead section of Example ST3 according to some embodiments of the present disclosure may be shorter than that of First and Second Comparative Examples ST1 and ST2. Accordingly, in Example ST3, the linearity of the input by the input device2000is high and the sensing reliability is improved. When the pen sensitivity reference line corresponds to 5, the first to third dead sections ST1-DZ, ST2-DZ, and ST3-DZ are as described above. According to some embodiments, when there is no pen sensitivity reference line, in Example ST3 according to some embodiments of the present disclosure, the first unit sensor210-1of the first section senses the pen at about −3 mm, and then the second unit sensor210starts immediately sensing the pen from about −3 mm in the second section. In the second section, the second unit sensor210senses the pen at about 0 mm, and then in the third section, the third unit sensor210-2starts immediately sensing the pen from about 0 mm. Accordingly, when there is no pen sensitivity reference line, a dead section does not exist in Example ST3 according to some embodiments of the present disclosure, and an input by the input device2000may appear continuously and linearly. According to some embodiments of the present disclosure, an electronic device may increase the linearity of a pen input and improve sensing reliability by extending a sensing electrode of a unit sensor of an input sensor to an adjacent unit sensor. The embodiments have been described in the drawings and the specification. While specific terms were used, they were not used to limit the meaning or the scope of embodiments according to the present disclosure described in the claims but merely used to explain aspects of some embodiments of the present disclosure. Accordingly, those skilled in the art will understand that various modifications and other equivalent embodiments are also possible. Hence, the scope of embodiments according to the present disclosure shall be determined by the technical scope of the accompanying claims and their equivalents. | 50,918
11861128 | DETAILED DESCRIPTION OF THE INVENTION FIG.1shows a touch device1of a first embodiment of the invention, including a first substrate10and a circuit substrate20. The first substrate10includes a touch sensing structure11(shown only schematically) and a plurality of first electrodes12. The touch sensing structure11is disposed on the first substrate10. The first electrodes12are electrically connected to the touch sensing structure11. A circuit board may include the circuit substrate20and a plurality of second electrodes22. That is to say, the plurality of second electrodes22are formed on the circuit substrate20. The circuit board can be a flexible printed circuit board. The touch sensing structure11of the invention can be capacitive, pressure sensitive, electromagnetic, or another touch sensing structure. The capacitive touch sensing structure can utilize mutual capacitive technology, self-capacitive technology, or both. For a mutual capacitive touch sensing structure, for example, the touch sensing structure11includes a driving electrode and a receiving electrode (not shown). The driving electrode and the receiving electrode are insulated from each other, but there is coupling capacitance therebetween. The touch sensing structure11is electrically connected to the first electrode12with a plurality of traces, wherein the traces comprise traces112connected to the driving electrodes and the traces114connected to the receiving electrodes. A ground line116can be disposed between the traces112and the traces114to decrease noise therebetween. A ground line118surrounds the traces114to provide static electricity protection. The first electrode12disposed outermost may be provided with a first electrode extending portion122, which can have two elongated shapes as shown inFIG.1, or other shapes serving as register marks when the circuit substrate20is attached to the first substrate10. FIG.2Ais a sectional view in the2A-2A′ direction ofFIG.1.FIG.2Bis a top view of the main portions of the touch device ofFIG.1. With reference toFIGS.2A and2B, a first gap G1is formed between two adjacent first electrodes12. A minimum distance between the two adjacent first electrodes12is a gap distance d1of the first gap G1. The gap distance d1is defined as the minimum straight-line distance between the opposite side edges of the two adjacent first electrodes12. The circuit substrate20partially overlaps the first substrate10in a vertical projection direction (third direction Z) of the first substrate10. In other words, if we look at the circuit substrate20and the first substrate10from the vertical projection direction that is perpendicular to the first substrate10, the circuit substrate20would partially overlap the first substrate10. A plurality of second gaps G2are formed between the two adjacent second electrodes22(a second gap G2is formed between each two adjacent second electrodes22). At least one of the first electrodes12has an offset distance d2from at least one of the corresponding second electrodes22in a first direction X. The offset distance d2is greater than zero and smaller than half of the corresponding gap distance d1. In one embodiment, the offset distance d2is greater than zero and smaller than one third of the corresponding gap distance d1. In other words, as shown inFIG.2B, the two adjacent first electrodes respectively have a first electrode side edge131and a first electrode side edge132facing each other.
The minimum gap distance d1is formed between the first electrode side edges131and132. The two adjacent second electrodes22are electrically connected to the two corresponding first electrodes12by a conductive glue layer30. A touch sensing signal travels from the first electrodes12to the second electrodes22. One of the two second electrodes22has a second electrode side edge231located between the first electrode side edges131and132. The second electrode side edge231is spaced, in the X direction, by the shortest distance d2from the first electrode side edge131of the first electrode12electrically connected thereto, and the shortest distance d2is smaller than half (or one third) of the gap distance d1. A gap distance d3is formed between a second electrode side edge232of another second electrode22and a corresponding first electrode side edge133in the X direction as shown inFIG.2B. The value of the gap distance d3can be the same as or close to the value of the gap distance d2, and the difference therebetween can be within 10% of the gap distance d2. In one embodiment, the roughness of the first electrode side edge131,132or133can be different from the roughness of the second electrode side edge231or232. Increased roughness increases attachment between the conductive glue layer30and the first electrode12or the second electrode22. In another embodiment, one of the first electrodes comprises a first electrode extending portion122as a register mark when the circuit substrate20is attached to the first substrate10. The circuit board can comprise a corresponding register mark (not shown) formed on the circuit substrate20. The second electrode22partially covers the first electrode extending portion122, which also improves the attachment between the conductive glue layer30and the first electrode12or the second electrode22, and increases the electrical connection area between the first electrode12and the second electrode22. With reference toFIG.3, in another embodiment, when the shape of the first electrode12and the shape of the second electrode22are symmetrical shapes (for example, rectangular, oval, etc.), each first electrode12comprises a first central axis14, and each second electrode22comprises a second central axis24. The central axis refers to the axis of symmetry of the electrode. An offset distance d2is formed between the first central axis14and the corresponding second central axis24. With reference toFIG.2A, in one embodiment, a first right angle15is formed on an edge of the first electrode12, and a second right angle25is formed on an edge of the second electrode22. When the first electrode12is connected to the second electrode22, the first right angle15corresponds to a flat portion of the second electrode22rather than the second right angle25. The second right angle25corresponds to a flat portion of the first electrode12rather than the first right angle15. Therefore, while the first electrode12is attached to the second electrode22, the first right angle15is prevented from colliding with the second right angle25, and the stress concentration problem and fragmentation problem are avoided. With reference toFIGS.2A and2B, when the first electrode12is connected to the second electrode22, there are increased exhausting gaps formed between the first electrode12and the second electrode22in the first direction X and the second direction Y. The bubbles between the first electrode12and the second electrode22can be exhausted smoothly through the exhausting gaps.
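The geometric constraints above (0 < d2 < d1/2, or d1/3 in the stricter embodiment, with d3 within 10% of d2) reduce to a simple check. The helper below is a hypothetical sketch of that rule for illustration, not part of the disclosed device; the function name and the micrometer values are invented.

```python
def offsets_valid(d1, d2, d3, fraction=0.5, tolerance=0.10):
    """fraction=0.5 for the d1/2 rule, 1/3 for the stricter embodiment."""
    offset_ok = 0.0 < d2 < fraction * d1          # 0 < d2 < d1/2 (or d1/3)
    symmetry_ok = abs(d3 - d2) <= tolerance * d2  # d3 within 10% of d2
    return offset_ok and symmetry_ok

# A 100 um gap with a 30 um offset satisfies both the d1/2 and d1/3 rules:
print(offsets_valid(100.0, 30.0, 32.0))                 # True  (30 < 50)
print(offsets_valid(100.0, 30.0, 32.0, fraction=1/3))   # True  (30 < 33.3)
# A 45 um offset passes the d1/2 rule but fails the stricter one-third rule:
print(offsets_valid(100.0, 45.0, 46.0))                 # True  (45 < 50)
print(offsets_valid(100.0, 45.0, 46.0, fraction=1/3))   # False (45 >= 33.3)
```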
The touch display panel of the embodiment of the invention has improved adhesion, and the electrode corrosion problem is prevented. With reference toFIG.2A, in one embodiment, the conductive glue layer30contacts at least portions of the first electrode12and the second electrode22to electrically connect the first electrode12and the second electrode22. The first electrode12can be partially or fully electrically connected to the second electrode22via the conductive glue layer30. With reference toFIG.2B, in one embodiment, each first electrode12has a tapered portion16. The tapered portion16does not correspond to the second electrode22in the vertical projection direction. The tapered portion16improves impedance matching and design flexibility of the traces. With reference toFIG.3, in one embodiment, the width of each first electrode12is the same as or different from the width of each second electrode22. The ratio between the width of each first electrode12and the width of each second electrode22is 0.8 to 1.3. The width of each first electrode12can be the same as the width of each second electrode22. FIG.4shows a touch device2of a second embodiment of the invention, including a first substrate10and a circuit substrate20. The first substrate10comprises a touch sensing structure11and a plurality of first conductive pads12. The touch sensing structure11is disposed on the first substrate10. The first conductive pads12are arranged along a first direction X, wherein a space area G is formed between the two adjacent first conductive pads12, and a minimum distance between the two adjacent first conductive pads12is a gap distance d1. A plurality of second conductive pads22are formed on the circuit substrate20. At least one of the second conductive pads22partially overlaps a space area G in a vertical projection direction of the first conductive pads12. In other words, if we look at the second conductive pads22and the first substrate10from the vertical projection direction that is perpendicular to the second conductive pads22, the circuit substrate20would partially overlap the space area G. An offset distance d2is formed between an outline of at least one of the second conductive pads22and an outline of one of the two adjacent first conductive pads12in the first direction X, and the offset distance d2is smaller than half of the gap distance d1. In other words, as viewed in the vertical projection direction, a portion of the outline of one of the two adjacent second conductive pads22is located in the space area G of the corresponding (electrically connected) two adjacent first conductive pads12. The minimum distance d2between the outline of the second conductive pads22and the outline of the corresponding (electrically connected) first conductive pads12in the X direction is smaller than half (or one third) of the gap distance d1. In the second embodiment, when the first electrode12is connected to the second electrode22, there are increased exhausting gaps (offsets) formed between the first electrode12and the second electrode22in the first direction X and the second direction Y. The bubbles between the first electrode12and the second electrode22can be exhausted smoothly through the exhausting gaps. The embodiments above can be utilized in various touch devices. In the following examples, several touch devices are described, wherein the elements with the same functions are given the same labels, and the functional descriptions thereof are omitted.
FIGS.5A and5Bshow a capacitive touch device3of an embodiment of the invention. A touch sensing structure32is disposed on two sides of a first substrate300, and comprises patterned driving electrodes322and patterned receiving electrodes324. A plurality of first electrodes12are divided into two groups respectively provided on both sides of the first substrate300, and the two groups are respectively electrically connected to the patterned driving electrodes322and the patterned receiving electrodes324. The second electrodes22are formed on both sides of the circuit substrate20, wherein the circuit board may be a flexible printed circuit board (FPCB). A display34can be disposed under the capacitive touch device3, and a cover lens36can be attached to the top surface of the capacitive touch device3. The display34can be a liquid-crystal display (LCD), an organic light-emitting diode display (OLED), an electro-phoretic display (EPD), an electro-wetting display (EWD) or a quantum dot display (QD). FIGS.6A and6Bshow a capacitive touch device4of an embodiment of the invention, whereinFIG.6Bis a sectional view along lines A-A and B-B ofFIG.6A. The touch sensing electrode structure44comprises patterned driving electrodes442and patterned receiving electrodes444. The driving electrodes442are disposed on the first substrate41, and the patterned receiving electrodes444are disposed on the second substrate42. A plurality of first electrodes12are divided into two groups respectively disposed on the first substrate41and second substrate42, and the two groups are respectively electrically connected to the patterned driving electrodes442and the patterned receiving electrodes444. The second electrodes22are formed on one side of the circuit substrate20, and are electrically connected to the first electrodes12. The thickness of the first substrate41may be greater than or equal to the thickness of the second substrate42. In this embodiment, the thickness of the first substrate41is between 50 μm and 150 μm, and the thickness of the second substrate42is between 10 μm and 110 μm. For example, the first substrate41and the second substrate42are plastic substrates or polymer film substrates (for example, made of polyimide). The thickness of the first substrate41(70 μm to 120 μm) is greater than the thickness of the second substrate42(10 μm to 60 μm). The thickness of the second substrate42is smaller, and therefore the same piece of circuit substrate20(such as a flexible printed circuit substrate) can be bonded to both the first substrate41and the second substrate42. A conductive glue layer (for example, an anisotropic conductive film (ACF), not shown) can be disposed between the substrates41,42, and the substrate (FPC substrate)20to electrically connect the first electrodes12to the second electrodes22. The substrates41and42and the circuit substrate20are flexible. Therefore, the step difference of 10 μm to 60 μm does not degrade the bonding. Thus, the second electrodes22can be simultaneously coupled to the first electrodes12of the substrates41and42through only one circuit substrate20and one bonding process. A display46can be disposed under the capacitive touch device4, and a cover lens48can be attached to the top surface of the capacitive touch device4to form a touch display device. The display46can be a liquid-crystal display, an organic light-emitting diode display, an electro-phoretic display, an electro-wetting display or a quantum dot display. FIG.7shows a capacitive touch device5of an embodiment of the invention. A display50comprises a first substrate51and a second substrate52.
An active element layer 53 (e.g., a patterned stacked layer with TFT elements) is disposed on the first substrate 51. The display 50 can be a liquid-crystal display (LCD), an organic light-emitting diode display (OLED), an electro-phoretic display (EPD), an electro-wetting display (EWD) or a quantum dot display (QD). The touch sensing electrode structure 54 comprises patterned driving electrodes 542 and patterned receiving electrodes 544. The driving electrodes 542 are disposed on the substrate 51. The receiving electrodes 544 are disposed on the substrate 52. The substrate 52 can be a color filter substrate of an LCD or an upper package cover of an OLED. A plurality of first electrodes 12 are divided into two groups respectively disposed on the first substrate 51 and the second substrate 52, and the two groups are respectively electrically connected to the patterned driving electrodes 542 and the patterned receiving electrodes 544. The second electrodes 22 are formed on one side of the circuit substrate 20, and are electrically connected to the first electrodes 12 by, for example, an anisotropic conductive film (ACF). The driving electrodes 542 can be formed by any conductive layer of the active element layer 53 (for example, a metal layer with scanning lines or data lines), or formed by a separate patterned conductive layer. The driving electrodes 542 can also be formed by patterned anodes or patterned cathodes of an OLED element (not shown). A decorative layer 561 is provided on a cover lens 56 to shield the circuit substrate 20, the first electrodes 12, the second electrodes 22 and the other metal traces that should be hidden (not shown).

A further embodiment can be derived from the embodiment of FIG. 7. The driving electrodes 542 can be omitted from the touch sensing structure 54. The receiving electrodes 544 are then no longer used only for receiving sensing signals. According to the design of the electrode pattern, mutual-capacitive sensing technology or self-capacitive sensing technology can be utilized (see the illustrative sketch following this passage). For example, the patterns of a part of the receiving electrodes 544 are modified to provide a driving function. In another example, each receiving electrode 544 is modified to provide both driving and receiving functions.

FIG. 8 shows a capacitive touch device 6 of an embodiment of the invention. A display 60 comprises a first substrate 61 and a second substrate 62. An active element layer 63 (e.g., a patterned stacked layer with TFT elements) is disposed on the first substrate 61. The display 60 can be a liquid-crystal display (LCD), an organic light-emitting diode display (OLED), an electro-phoretic display (EPD), an electro-wetting display (EWD) or a quantum dot display (QD). The touch sensing structure 64 can be a single-layer or a multi-layer patterned electrode layer with the functions of receiving electrodes and driving electrodes. The substrate 62 can be a color filter substrate of an LCD or an upper package cover of an OLED. The cover lens 66 is attached to the second substrate 62 by an optical glue layer 68, and a light-shielding layer 661 on the cover lens 66 shields the circuit substrate 20 and the electrodes 12 and 22 therebelow.

The substrates (including the first and second substrates) mentioned above can be general glass substrates, alkali-free glass substrates (e.g., LCD substrates), chemically or physically strengthened glass substrates (e.g., cover lenses), or plastic substrates such as polyethylene terephthalate (PET) substrates, polycarbonate (PC) substrates, polymethyl methacrylate (PMMA) substrates or cycloolefin polymer (COP) substrates.
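The distinction between the two sensing schemes mentioned above can be made concrete with a short sketch. The following is an illustrative model, not the patent's firmware: mutual-capacitance sensing reads one capacitance change per driving/receiving electrode crossing, while self-capacitance sensing reads one change per electrode. The function names, the threshold, and the sample data are all assumptions.

    # Minimal sketch of mutual- vs. self-capacitance touch detection (hypothetical).

    from typing import List, Tuple

    def mutual_capacitance_touches(
        deltas: List[List[float]], threshold: float
    ) -> List[Tuple[int, int]]:
        """Return (drive, receive) crossings whose capacitance change exceeds threshold."""
        return [
            (i, j)
            for i, row in enumerate(deltas)
            for j, d in enumerate(row)
            if abs(d) > threshold
        ]

    def self_capacitance_touches(deltas: List[float], threshold: float) -> List[int]:
        """Return indices of electrodes whose self-capacitance change exceeds threshold."""
        return [i for i, d in enumerate(deltas) if abs(d) > threshold]

    if __name__ == "__main__":
        grid = [[0.0, 0.1, 0.0], [0.0, 0.9, 0.1], [0.0, 0.0, 0.0]]
        print(mutual_capacitance_touches(grid, threshold=0.5))        # [(1, 1)]
        print(self_capacitance_touches([0.1, 0.8, 0.0], threshold=0.5))  # [1]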
The material of the electrodes of the touch sensing structures 11, 32, 44, 54 and 64 can be a transparent conductive material such as indium tin oxide (ITO) or indium zinc oxide (IZO), a metal material such as Ag, Cu, Al, Mo, Nd, Au, Cr or an alloy thereof, or another material such as graphene, silicene, or nano-silver. The first electrodes 12 and the second electrodes 22 can utilize the materials mentioned above. The touch sensing structures 11, 32, 44, 54 and 64 (including the first electrode traces 122 and the second electrode traces 222) can be single-layer patterned electrodes or multi-layered patterned electrodes stacked with insulating layers. When the electrode material is a metal material, patterned metal electrodes can be constituted by thin metal traces. The width of each thin metal trace is between 0.05 μm and 6 μm, preferably between 0.08 μm and 4 μm. The aperture ratio of the patterned metal electrode in a unit area is between 85% and 99%. The first electrode 12, the second electrode 22, the first electrode traces 122 and the second electrode traces 222 can be formed by patterned metal electrodes. In the embodiments, the first electrodes 12 are the first conductive pads 12, and the second electrodes 22 are the second conductive pads 22, which are the same elements with the same functions and different names.

The cover lenses 36, 48, 56 and 66 in the embodiments described above can be physically or chemically treated tempered glass substrates, e.g., chemically ion-exchange-treated glass substrates or physically heat-treated glass substrates. A polarizing sheet (linear or circular polarizing sheet) can also be used to replace the cover lenses 36, 48, 56 and 66. A multi-functional film (single-layer or multi-layer) is coated or plated on the surface of the polarizing sheet to provide a function such as anti-reflection, anti-glare, anti-smudge, or improved light transmittance.

Use of ordinal terms such as "first", "second", "third", etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed; such terms are used merely as labels to distinguish one claim element having a certain name from another element having the same name (but for the use of the ordinal term).

While the invention has been described by way of example and in terms of the preferred embodiments, it is to be understood that the invention is not limited to the disclosed embodiments. On the contrary, it is intended to cover various modifications and similar arrangements (as would be apparent to those skilled in the art). Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.
11861129 | DETAILED DESCRIPTION

In the specification, the expression that a first component (or region, layer, part, portion, etc.) is "on", "connected with", or "coupled with" a second component means that the first component may be directly on, connected with, or coupled with the second component, or that a third component may be interposed therebetween. The same reference numeral may refer to the same component. In addition, in the drawings, thicknesses, proportions, and dimensions of components may be exaggerated to describe the technical features effectively. The expression "and/or" includes any and all combinations of one or more of the associated components.

The terms "first", "second", etc. are used to describe various components, but the components are not limited by the terms. The terms are used only to differentiate one component from another component. For example, without departing from the scope and spirit of the present disclosure, a first component may be referred to as a second component, and similarly, the second component may be referred to as the first component. The articles "a," "an," and "the" are singular in that they have a single referent, but the use of the singular form in the specification should not preclude the presence of more than one referent.

In addition, the terms "under", "below", "on", "above", etc. are used to describe the correlation of components illustrated in the drawings. The terms, which are relative in concept, are described based on a direction shown in the drawings.

It will be understood that the terms "include", "comprise", "have", etc. specify the presence of features, numbers, steps, operations, elements, or components described in the specification, or a combination thereof, and do not preclude the presence or additional possibility of one or more other features, numbers, steps, operations, elements, or components, or a combination thereof.

Unless otherwise defined, all terms (including technical terms and scientific terms) used in the specification have the same meaning as commonly understood by one skilled in the art to which the present disclosure belongs. Furthermore, terms such as those defined in commonly used dictionaries should be interpreted as having a meaning consistent with their meaning in the context of the related technology, and should not be interpreted in ideal or overly formal meanings unless explicitly defined herein. Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings.

FIG. 1A is a perspective view of an electronic device unfolded, according to an embodiment. FIG. 1B is a perspective view of the electronic device of FIG. 1A, which is being in-folded. FIG. 1C is a perspective view of the electronic device of FIG. 1A, which is being out-folded.

According to an embodiment, an electronic device ED is activated in response to an electrical signal. For example, the electronic device ED is one of a cellular phone, a tablet, a vehicle navigation system, a game console, or a wearable device, but embodiments of the present disclosure are not necessarily limited thereto. For example, FIG. 1A shows that the electronic device ED is a cellular phone.

Referring to FIGS. 1A to 1C, the electronic device ED according to an embodiment includes a first display surface FS that is parallel to a plane defined by a first directional axis DR1 and a second directional axis DR2 that crosses the first directional axis DR1.
The electronic device ED provides an image IM to the user through the first display surface FS. The electronic device ED displays the image IM on the first display surface FS such that the image IM is projected along a third directional axis DR3 that is normal to the plane defined by the first and second directional axes DR1 and DR2. In the present specification, a front surface (or a top surface) and a rear surface (or a bottom surface) of each component are defined based on a direction in which the image IM is displayed. The front surface and the rear surface may be opposite to each other along the third directional axis DR3, and the normal direction of each of the front surface and the rear surface is parallel to the third directional axis DR3.

The electronic device ED according to an embodiment may include the first display surface FS and a second display surface RS. The first display surface FS may include a first active region F-AA and a first peripheral region F-NAA. The first active region F-AA may include an electronic module region EMA. The second display surface RS may be located on a surface opposite to at least a portion of the first display surface FS. For example, the second display surface RS may be located on a portion of a rear surface of the electronic device ED.

The electronic device ED according to an embodiment can detect an externally applied input. The external input includes various types of inputs received from the outside of the electronic device ED. For example, as well as a contact by a body part, such as a user's hand, the external input includes an input, such as a hovering input, that is sensed when a user's hand approaches the electronic device ED or is adjacent to the electronic device ED within a given distance. In addition, the external input includes force, pressure, temperature, or light.

FIG. 1A and the drawings thereafter illustrate the first to third directional axes DR1 to DR3; however, the directions indicated by the first to third directional axes DR1 to DR3 described in this specification are relative concepts and can be transformed into other directions.

The first active region F-AA of the electronic device ED is activated in response to an electrical signal. The electronic device ED may display the image IM through the first active region F-AA. In addition, various types of external inputs can be detected in the first active region F-AA. The first peripheral region F-NAA is adjacent to the first active region F-AA. The first peripheral region F-NAA may have a specific color. The first peripheral region F-NAA surrounds the first active region F-AA. Accordingly, the shape of the first active region F-AA is substantially defined by the first peripheral region F-NAA. However, embodiments are not necessarily limited thereto, and in some embodiments, the first peripheral region F-NAA is disposed to be adjacent to only one side of the first active region F-AA, or is omitted. The active region of the electronic device ED according to an embodiment of the present disclosure may have one of various other shapes, and the present disclosure is not necessarily limited to any one embodiment.

The electronic device ED may include a folding region FA1 and non-folding regions NFA1 and NFA2. The electronic device ED may include the plurality of non-folding regions NFA1 and NFA2. According to an embodiment, the non-folding regions NFA1 and NFA2 are adjacent to the folding region FA1, with the folding region FA1 interposed between the non-folding regions NFA1 and NFA2.
The electronic device ED may include a first non-folding region NFA1 and a second non-folding region NFA2 that are spaced apart from each other along the first directional axis DR1, with the folding region FA1 interposed between the first non-folding region NFA1 and the second non-folding region NFA2. For example, the first non-folding region NFA1 may be disposed at one side of the folding region FA1 in the first direction DR1, and the second non-folding region NFA2 may be disposed at an opposite side of the folding region FA1 in the first direction DR1. Note that although FIGS. 1A to 1C show that the electronic device ED includes one folding region FA1 according to an embodiment, embodiments are not necessarily limited thereto. For example, in an embodiment, the electronic device ED includes a plurality of folding regions.

Referring to FIG. 1B, the electronic device ED according to an embodiment may be folded about a first folding axis FX1. The first folding axis FX1 is a virtual axis that extends parallel to the second directional axis DR2 on the first display surface FS, and the first folding axis FX1 may be parallel to a longer side of the electronic device ED. The electronic device ED can be folded about the first folding axis FX1 and may change into an in-folded state in which one region of the first display surface FS that overlaps the first non-folding region NFA1 faces another region that overlaps the second non-folding region NFA2.

When the electronic device ED according to an embodiment is in-folded, the second display surface RS may be viewable by the user. The second display surface RS may further include an electronic module region for an electronic module that includes various components, and embodiments of the present disclosure are not necessarily limited to any one embodiment.

Referring to FIG. 1C, the electronic device ED according to an embodiment can be folded about the first folding axis FX1 and may change into an out-folded state in which one region of the second display surface RS that overlaps the first non-folding region NFA1 faces another region of the second display surface RS that overlaps the second non-folding region NFA2. However, embodiments are not necessarily limited thereto. For example, in an embodiment, the electronic device ED is folded about a plurality of folding axes, such that portions of the first display surface FS face each other or portions of the second display surface RS face each other, and the number of the folding axes and the number of the non-folding regions are not necessarily limited thereto. These folding states, and the display surface viewable in each, are summarized in the sketch following this passage.

Various electronic modules can be disposed in the electronic module region EMA. For example, the electronic module may be at least one of a camera, a speaker, a light sensing sensor, or a heat sensing sensor. The electronic module region EMA may sense an external subject through the first or second display surfaces FS or RS, or may output a sound signal, such as a voice, through the first or second display surfaces FS or RS. The electronic module may include a plurality of components, and embodiments of the present disclosure are not necessarily limited to any one embodiment. The electronic module region EMA may be surrounded by the first active region F-AA and the first peripheral region F-NAA. However, embodiments of the present disclosure are not necessarily limited thereto. For example, in an embodiment, the electronic module region EMA is disposed inside the first active region F-AA.
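As a reading aid only (this is not part of the patent), the folding states described above can be modeled as a small state machine mapping each state to the display surface a user sees. The enum and function names are illustrative assumptions.

    # Illustrative model of the folding states and the viewable display surface.

    from enum import Enum, auto

    class FoldState(Enum):
        UNFOLDED = auto()
        IN_FOLDED = auto()    # first display surface FS folded inward, hidden
        OUT_FOLDED = auto()   # FS folded outward, externally exposed

    def viewable_surface(state: FoldState) -> str:
        """Return the display surface a user sees, per the description above."""
        if state is FoldState.UNFOLDED:
            return "first display surface FS"
        if state is FoldState.IN_FOLDED:
            return "second display surface RS"
        return "first display surface FS (externally exposed)"  # out-folded

    if __name__ == "__main__":
        for s in FoldState:
            print(s.name, "->", viewable_surface(s))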
FIG. 1D is a perspective view of an unfolded electronic device, according to an embodiment. FIG. 1E is a perspective view of the electronic device of FIG. 1D that is being in-folded.

According to an embodiment, an electronic device ED-a may be folded about a second folding axis FX2 that extends in a direction parallel to the second direction DR2. FIGS. 1D and 1E show that the extension direction of the second folding axis FX2 is parallel to a shorter side of the electronic device ED-a. However, embodiments are not necessarily limited thereto. According to an embodiment, the electronic device ED-a may include at least one folding region FA2 and non-folding regions NFA3 and NFA4 adjacent to the folding region FA2. The non-folding regions NFA3 and NFA4 may be spaced apart from each other, with the folding region FA2 interposed between the non-folding regions NFA3 and NFA4. The folding region FA2 has a specific curvature and a specific radius of curvature. According to an embodiment, the first non-folding region NFA3 and the second non-folding region NFA4 may face each other, and the electronic device ED-a can be in-folded such that the display surface FS is not externally exposed. In addition, according to an embodiment, the electronic device ED-a can be out-folded such that the display surface FS is externally exposed.

According to an embodiment, when the electronic device ED-a is unfolded, the first display surface FS can be viewed by a user. When the electronic device ED-a is in-folded, the second display surface RS can be viewed by the user. The second display surface RS may include the electronic module region EMA for the electronic module.

According to an embodiment, the electronic device ED-a includes the second display surface RS, and the second display surface RS is located opposite to at least a portion of the first display surface FS. In the in-folded state, the second display surface RS can be viewed by a user. The second display surface RS may include the electronic module region EMA for the electronic module. According to an embodiment, an image can be provided through the second display surface RS.

According to an embodiment, the electronic devices ED and ED-a can be subjected to repeated sequences of being unfolded and in-folded, or repeated sequences of being unfolded and out-folded, but embodiments are not necessarily limited thereto. According to an embodiment, the electronic devices ED and ED-a can perform any one of an unfolding operation, an in-folding operation, or an out-folding operation.

FIG. 2 is an exploded perspective view of an electronic device, according to an embodiment.

Referring to FIG. 2, in an embodiment, the electronic device ED includes a window WM, a display module DM, and an external case EDC. The window WM is divided into a transmission region TA and a bezel region BZA. The transmission region TA is where the image IM is displayed. A user can see the image IM through the transmission region TA. In an embodiment, the transmission region TA has a rectangular shape with rounded vertexes. However, embodiments are not necessarily limited thereto, and in other embodiments, the transmission region TA may have different shapes. The bezel region BZA is adjacent to the transmission region TA. The bezel region BZA may have a specific color. The bezel region BZA may surround the transmission region TA. Accordingly, the shape of the transmission region TA may be substantially defined by the bezel region BZA.
However, embodiments are not necessarily limited thereto, and in some embodiments, the bezel region BZA may be adjacent to only one side of the transmission region TA or may be omitted.

The display module DM includes a display panel DP and an input sensor ISP. According to an embodiment of the present disclosure, the display panel DP may be an emissive display panel. For example, the display panel DP may be one of an organic light emitting display panel, an inorganic light emitting display panel, or a quantum dot light emitting display panel. A light emitting layer of an organic light emitting display panel includes an organic light emitting material. A light emitting layer of an inorganic light emitting display panel includes an inorganic light emitting material. A light emitting layer of a quantum dot light emitting display panel includes a quantum dot and a quantum rod. The following describes an embodiment in which the display panel DP includes an organic light emitting display panel.

The display panel DP may output the image IM, and the output image may be displayed on a display surface IS of the display panel. The input sensor ISP may be disposed on the display panel DP and senses an external input.

The window WM includes a transparent material through which the image IM is output. For example, the window WM includes at least one of glass, sapphire, or plastic. Although the window WM is illustrated as having a single layer, embodiments of the present disclosure are not necessarily limited thereto. For example, in an embodiment, the window WM includes a plurality of layers. In addition, the bezel region BZA of the electronic device ED described above may correspond to a region of the window WM in which a material that includes a specific color is printed. For example, according to an embodiment of the present disclosure, the window WM may include a light blocking pattern in the bezel region BZA. The light blocking pattern, which is a colored organic film, may be formed by, for example, coating.

The window WM is coupled to the display module DM through an adhesive film. According to an embodiment of the present disclosure, the adhesive film includes an optically clear adhesive (OCA) film. However, the adhesive film is not necessarily limited thereto, and in other embodiments includes a typical adhesive or adhesion agent. For example, in an embodiment, the adhesive film includes an optically clear resin (OCR) or a pressure sensitive adhesive (PSA) film.

An anti-reflective layer may be further interposed between the window WM and the display module DM. The anti-reflective layer reduces the reflectance of external light incident from an upper portion of the window WM. According to an embodiment of the present disclosure, the anti-reflective layer may include a phase retarder and a polarizer. The retarder may be of a film type or a liquid crystal coating type, and may include a λ/2 retarder and/or a λ/4 retarder. The polarizer may also be of a film type or a liquid crystal coating type. The film type polarizer may include a stretched synthetic resin film, and the liquid crystal coating type polarizer may include liquid crystals aligned in a predetermined array. The retarder and the polarizer can be implemented as one polarization film.

The display module DM displays an image in response to an electrical signal, and can transmit or receive information on an external input. The display module DM includes an active region AA and a peripheral region NAA.
The active region AA is a region through which an image is output from the display module DM. In addition, the active region AA is where the input sensor ISP can sense an externally applied input. The peripheral region NAA is adjacent to the active region AA. For example, the peripheral region NAA surrounds the active region AA. However, embodiments are not necessarily limited thereto. For example, in other embodiments, the peripheral region NAA has various other forms. According to an embodiment, the active region AA of the display module DM corresponds to at least a portion of the transmission region TA.

The display module DM may further include a main circuit board MCB, a flexible circuit film FCB, and a driver chip DIC. The main circuit board MCB may be connected with the flexible circuit film FCB and electrically connected with the display panel DP. The flexible circuit film FCB is connected with the display panel DP and electrically connects the display panel DP with the main circuit board MCB. The main circuit board MCB may include a plurality of driving devices. The plurality of driving devices may include a circuit part that drives the display panel DP. The driver chip DIC is mounted on the flexible circuit film FCB.

According to an embodiment of the present disclosure, although one flexible circuit film FCB is illustrated, embodiments of the present disclosure are not necessarily limited thereto. For example, in an embodiment, a plurality of flexible circuit films FCB are provided and connected with the display panel DP. Although FIG. 2 shows that the driver chip DIC is mounted on the flexible circuit film FCB, embodiments of the present disclosure are not necessarily limited thereto. For example, in an embodiment, the driver chip DIC is directly mounted on the display panel DP. In this case, a part of the display panel DP on which the driver chip DIC is mounted may be bent and disposed on a rear surface of the display module DM. In addition, in an embodiment, the driver chip DIC is directly mounted on the main circuit board MCB.

The input sensor ISP is electrically connected with the main circuit board MCB through the flexible circuit film FCB. However, embodiments of the present disclosure are not necessarily limited thereto. In other embodiments, the display module DM additionally includes a separate flexible circuit film that electrically connects the input sensor ISP to the main circuit board MCB.

The electronic device ED further includes the external case EDC that receives the display module DM. The external case EDC is coupled to the window WM and defines an outer appearance of the electronic device ED. The external case EDC absorbs externally applied impacts, prevents foreign substances and moisture from permeating into the display module DM, and protects components received in the external case EDC. According to an embodiment of the present disclosure, the external case EDC may include a plurality of receiving members that are coupled to each other.

According to an embodiment, the electronic device ED further includes an electronic module including various functional modules that operate the display module DM, a power supply module that supplies power for operating the electronic device ED, and/or a bracket that is coupled to the display module DM and/or the external case EDC and divides an internal space of the electronic device ED.

FIG. 3 is a cross-sectional view of the display module illustrated in FIG. 2.
Referring to FIG. 3, in an embodiment, the display module DM includes the display panel DP and the input sensor ISP. The display panel DP includes a base layer BL, a circuit device layer DP-CL disposed on the base layer BL, a display device layer DP-OLED disposed on the circuit device layer DP-CL, and an encapsulating layer TFE disposed on the display device layer DP-OLED and the circuit device layer DP-CL. In addition, the display panel DP may further include functional layers, such as an anti-reflective layer or a refractive index adjusting layer.

The base layer BL includes at least one plastic film. The base layer BL is one of a plastic substrate, a glass substrate, a metal substrate, or an organic/inorganic composite substrate. According to an embodiment of the present disclosure, the base layer BL may include a flexible substrate. The active region AA and the peripheral region NAA described with reference to FIG. 2 are defined in the base layer BL.

The circuit device layer DP-CL includes at least one intermediate insulating layer and a circuit device. The intermediate insulating layer includes at least one intermediate inorganic layer and at least one intermediate organic layer. The circuit device includes signal lines and a driving circuit for the pixels.

The display device layer DP-OLED includes a light emitting device. The light emitting device includes at least one organic light emitting diode. The display device layer DP-OLED further includes an organic film, such as a pixel defining film. The display device layer DP-OLED may be disposed on the circuit device layer DP-CL in the active region AA, but not in the peripheral region NAA.

The encapsulating layer TFE encapsulates the display device layer DP-OLED. The encapsulating layer TFE may be disposed on the circuit device layer DP-CL in the peripheral region NAA. The encapsulating layer TFE includes at least one inorganic layer. The encapsulating layer TFE further includes at least one organic layer. The inorganic layer protects the display device layer DP-OLED from moisture and oxygen, and the organic layer protects the display device layer DP-OLED from foreign substances, such as dust particles. The inorganic layer includes at least one of a silicon nitride layer, a silicon oxynitride layer, a silicon oxide layer, a titanium oxide layer, or an aluminum oxide layer. The organic layer includes, but is not necessarily limited to, an acrylic-based organic layer.

The input sensor ISP is formed on the display panel DP through a subsequent process. Alternatively, the input sensor ISP and the display panel DP are coupled to each other through an adhesive film. The input sensor ISP has a multi-layer structure. The input sensor ISP may include a single-layer insulating layer or a multi-layer insulating layer. According to an embodiment of the present disclosure, when the input sensor ISP is directly disposed on the display panel DP, the input sensor ISP is directly disposed on the encapsulating layer TFE, and no adhesive film is interposed between the input sensor ISP and the display panel DP. However, according to an embodiment, the adhesive film is interposed between the input sensor ISP and the display panel DP. In this case, the input sensor ISP is not fabricated together with the display panel DP. For example, after fabricating the input sensor ISP through a process separate from that of the display panel DP, the input sensor ISP is fixed on a top surface of the display panel DP through the adhesive film.
However, according to an embodiment of the present disclosure, the display panel DP may include the encapsulating layer TFE in the form of an encapsulating substrate. In this case, the encapsulating layer TFE is disposed on the display device layer DP-OLED and faces the base layer BL. The encapsulating layer TFE includes at least one of a plastic substrate, a glass substrate, a metal substrate, or an organic/inorganic composite substrate. A sealant is interposed between the encapsulating layer TFE and the base layer BL, and the encapsulating layer TFE and the base layer BL may be coupled to each other by the sealant. The sealant includes an organic adhesive or a frit, which is a ceramic adhesive material. The display device layer DP-OLED may be sealed by the sealant and the encapsulating layer TFE.

When the input sensor ISP is directly disposed on the display panel DP, the input sensor ISP is directly disposed on the encapsulating layer TFE. However, according to an embodiment, when the adhesive film is interposed between the input sensor ISP and the display panel DP, the input sensor ISP is fixed to the top surface of the encapsulating layer TFE through the adhesive film.

FIG. 4 is a cross-sectional view of an input sensor, according to an embodiment of the present disclosure.

Referring to FIG. 4, the input sensor ISP according to an embodiment of the present disclosure may include a first sensing insulating layer IIL1, a first conductive layer ICL1, a second sensing insulating layer IIL2, a second conductive layer ICL2, and a third sensing insulating layer IIL3. The first sensing insulating layer IIL1 may be directly disposed on the encapsulating layer TFE. However, according to an embodiment of the present disclosure, the first sensing insulating layer IIL1 may be omitted.

Each of the first conductive layer ICL1 and the second conductive layer ICL2 includes a plurality of conductive patterns. The conductive patterns may include a plurality of sensing electrodes SE1_1 to SE1_5 and SE2_1 to SE2_4 (see FIG. 5) and a plurality of signal lines SL1_1 to SL1_5 and SL2_1 to SL2_4 (see FIG. 5) that are respectively connected to the plurality of sensing electrodes SE1_1 to SE1_5 and SE2_1 to SE2_4.

Each of the first to third sensing insulating layers IIL1 to IIL3 includes at least one of an inorganic material or an organic material. According to an embodiment, the first sensing insulating layer IIL1 and the second sensing insulating layer IIL2 may be inorganic layers. Each inorganic layer includes at least one of aluminum oxide, titanium oxide, silicon oxide, silicon oxynitride, zirconium oxide, or hafnium oxide. The thickness of the inorganic layer may be between 1000 angstroms and 4000 angstroms. The third sensing insulating layer IIL3 may be an organic layer. The organic layer includes at least one of an acrylate-based resin, a methacrylate-based resin, polyisoprene, a vinyl-based resin, an epoxy-based resin, a urethane-based resin, a cellulose-based resin, a siloxane-based resin, a polyimide-based resin, a polyamide-based resin, or a perylene-based resin. The third sensing insulating layer IIL3 prevents moisture from permeating into the first conductive layer ICL1 and the second conductive layer ICL2 from the outside.

FIG. 5 is a cross-sectional view of an electronic device, according to an embodiment of the present disclosure.

Referring to FIG. 5, in an embodiment, the electronic device ED includes the display panel DP and the input sensor ISP directly disposed on the display panel DP.
The display panel DP includes the base layer BL, the circuit device layer DP-CL, the display device layer DP-OLED, and the encapsulating layer TFE. Each of the display panel DP and the input sensor ISP includes the active region AA and the peripheral region NAA. FIG. 5 is an enlarged view of a portion of the active region AA.

The base layer BL provides a base surface on which the circuit device layer DP-CL is disposed. The circuit device layer DP-CL may be disposed on the base layer BL. The circuit device layer DP-CL includes an insulating layer, a semiconductor pattern, a conductive pattern, and a signal line. An insulating layer, a semiconductor layer, and a conductive layer may be formed on the base layer BL through a coating or deposition process, and may be selectively patterned through a plurality of photolithography processes. In this manner, the semiconductor pattern, the conductive pattern, and the signal line of the circuit device layer DP-CL are formed.

At least one inorganic layer is disposed on a top surface of the base layer BL. According to an embodiment, the display panel DP is illustrated as including a buffer layer BFL. The buffer layer BFL increases a bonding force between the base layer BL and the semiconductor pattern. The buffer layer BFL may include a silicon oxide layer and a silicon nitride layer, and the silicon oxide layer and the silicon nitride layer may be alternately stacked.

The semiconductor pattern may be disposed on the buffer layer BFL. The semiconductor pattern includes polysilicon. However, embodiments are not necessarily limited thereto, and in other embodiments, the semiconductor pattern includes amorphous silicon or a metal oxide. FIG. 5 illustrates some semiconductor patterns, and other semiconductor patterns are further disposed in other regions. The semiconductor patterns are arranged across the pixels according to a specific rule. The semiconductor patterns have different electrical properties depending on whether they are doped. The semiconductor patterns include a first region that has higher conductivity and a second region that has lower conductivity. The first region is doped with an N-type dopant or a P-type dopant. A P-type transistor includes a doping region doped with the P-type dopant. The second region is a non-doped region, or may be doped with a lower concentration of dopants as compared to the first region. The first region is more conductive than the second region, and serves as an electrode or a signal line. The second region corresponds to an active region or a channel region of a pixel transistor TR-P. For example, a portion of the semiconductor pattern is the active region of the transistor, and another portion of the semiconductor pattern is a source region or a drain region of the transistor.

Each of the pixels may have an equivalent circuit that includes seven transistors, one capacitor, and a light emitting device, and the equivalent circuit of the pixel can be modified in various forms. FIG. 5 illustrates, by way of example, that the pixel includes one pixel transistor TR-P and one light emitting device LE.

A source region SR, a channel region CHR, and a drain region DR of the pixel transistor TR-P are formed from the semiconductor pattern. The source region SR and the drain region DR extend in opposite directions from the channel region CHR. FIG. 5 illustrates a portion of a signal transfer region SCL that is formed using the first region of the semiconductor pattern.
In addition, the signal transfer region SCL is electrically connected with the pixel transistor TR-P, when viewed in a plan view.

A first insulating layer IL1 may be disposed on the buffer layer BFL. The first insulating layer IL1 may commonly overlap a plurality of pixels, and the first insulating layer IL1 may cover the semiconductor pattern. The first insulating layer IL1 may be an inorganic layer and/or an organic layer, and may have a single-layer structure or a multi-layer structure. The first insulating layer IL1 includes at least one of aluminum oxide, titanium oxide, silicon oxide, silicon nitride, silicon oxynitride, zirconium oxide, or hafnium oxide. According to an embodiment, the first insulating layer IL1 is a silicon oxide layer that has a single-layer structure. The first insulating layer IL1, as well as the insulating layers of the circuit device layer DP-CL described below, may be an inorganic layer and/or an organic layer, and may have a single-layer structure or a multi-layer structure. The inorganic layer includes, but is not necessarily limited to, at least one of the above-described materials.

A gate GT of the pixel transistor TR-P is disposed on the first insulating layer IL1. The gate GT may be a portion of a metal pattern. The gate GT overlaps the channel region CHR. The gate GT may function as a mask in a process of doping the semiconductor pattern.

A second insulating layer IL2 may be disposed on the first insulating layer IL1 and may cover the gate GT. The second insulating layer IL2 may be commonly provided on the pixels. The second insulating layer IL2 may be an inorganic layer and/or an organic layer, and may have a single-layer structure or a multi-layer structure. According to an embodiment, the second insulating layer IL2 is a silicon oxide layer that has a single-layer structure.

A third insulating layer IL3 may be disposed on the second insulating layer IL2. According to an embodiment, the third insulating layer IL3 is a silicon oxide layer that has a single-layer structure. A first connection electrode CNE1 may be disposed on the third insulating layer IL3. The first connection electrode CNE1 may be connected to the signal transfer region SCL through a contact hole CNT1 formed through the first insulating layer IL1, the second insulating layer IL2, and the third insulating layer IL3.

A fourth insulating layer IL4 may be disposed on the third insulating layer IL3 and may cover the first connection electrode CNE1. The fourth insulating layer IL4 may be a silicon oxide layer that has a single-layer structure. A fifth insulating layer IL5 may be disposed on the fourth insulating layer IL4. The fifth insulating layer IL5 may be an organic layer. A second connection electrode CNE2 may be disposed on the fifth insulating layer IL5. The second connection electrode CNE2 may be connected to the first connection electrode CNE1 through a contact hole CNT2 formed through the fourth insulating layer IL4 and the fifth insulating layer IL5. A sixth insulating layer IL6 may be disposed on the fifth insulating layer IL5 and may cover the second connection electrode CNE2. The sixth insulating layer IL6 may be an organic layer.

The display device layer DP-OLED may be disposed on the circuit device layer DP-CL. The display device layer DP-OLED may be disposed on the sixth insulating layer IL6. The display device layer DP-OLED includes the light emitting device LE. The light emitting device LE includes a first electrode AE, a light emitting layer EL, and a second electrode CE.
For example, the light emitting layer EL includes one of an organic light emitting material, a quantum dot, a quantum rod, a micro-LED, or a nano-LED. The first electrode AE may be disposed on the sixth insulating layer IL6. The first electrode AE may be connected to the second connection electrode CNE2 through a contact hole CNT3 formed through the sixth insulating layer IL6.

A pixel defining layer IL7 may be disposed on the sixth insulating layer IL6 and may cover a portion of the first electrode AE. An opening OP7 is formed in the pixel defining layer IL7. The opening OP7 of the pixel defining layer IL7 exposes at least a portion of the first electrode AE. According to an embodiment, a light emitting region PXA is defined by the opening OP7 and corresponds to the exposed portion of the first electrode AE. A non-light emitting region NPXA surrounds the light emitting region PXA.

The light emitting layer EL may be disposed on the first electrode AE. The light emitting layer EL may be disposed in the opening OP7. For example, the light emitting layer EL is separately formed for each pixel. When the light emitting layer EL is separately formed for each pixel, each of the light emitting layers EL emits at least one of blue light, red light, or green light. However, embodiments of the present disclosure are not necessarily limited thereto, and in an embodiment, the light emitting layer EL is connected across the pixels and commonly provided for the pixels. In this case, the light emitting layer EL emits blue light or white light.

The second electrode CE may be disposed on the light emitting layer EL. The second electrode CE, which has the form of one integrated body, is commonly disposed for the plurality of pixels. The second electrode CE receives a common voltage and may be referred to as a common electrode.

In addition, a hole control layer is interposed between the first electrode AE and the light emitting layer EL. The hole control layer is commonly disposed in the light emitting region PXA and the non-light emitting region NPXA. The hole control layer includes a hole transport layer and a hole injection layer. An electron control layer is interposed between the light emitting layer EL and the second electrode CE. The electron control layer includes an electron transport layer and an electron injection layer. The hole control layer and the electron control layer are commonly formed in the pixels by using an open mask.

The input sensor ISP is formed on the top surface of the encapsulating layer TFE through sequential processes. The input sensor ISP includes the first sensing insulating layer IIL1, the first conductive layer ICL1, the second sensing insulating layer IIL2, the second conductive layer ICL2, and the third sensing insulating layer IIL3. In an embodiment, the first sensing insulating layer IIL1 is omitted.

Each of the first conductive layer ICL1 and the second conductive layer ICL2 may have a single-layer structure or a multi-layer structure stacked in the third direction DR3. A conductive layer in a single-layer structure includes a metal layer or a transparent conductive layer. The metal layer includes at least one of molybdenum, silver, titanium, copper, aluminum, or an alloy thereof. The transparent conductive layer includes a transparent conductive oxide, such as indium tin oxide (ITO), indium zinc oxide (IZO), zinc oxide (ZnO), or indium zinc tin oxide (IZTO).
In addition, the transparent conductive layer includes a conductive polymer such as PEDOT, a metal nano-wire, or graphene. A conductive layer in a multi-layer structure includes metal layers. The metal layers have, for example, a three-layer structure of titanium/aluminum/titanium. A conductive layer in a multi-layer structure may also include at least one metal layer and at least one transparent conductive layer.

The second conductive layer ICL2 is connected to the first conductive layer ICL1 by a contact hole CH-1 formed through the second sensing insulating layer IIL2. The second sensing insulating layer IIL2 covers the first conductive layer ICL1, and the third sensing insulating layer IIL3 covers the second conductive layer ICL2. Although the first sensing insulating layer IIL1 to the third sensing insulating layer IIL3 are each illustrated as a single layer, embodiments of the present disclosure are not necessarily limited thereto.

At least one of the first sensing insulating layer IIL1 or the second sensing insulating layer IIL2 includes an inorganic film. The inorganic film includes at least one of aluminum oxide, titanium oxide, silicon oxide, silicon nitride, silicon oxynitride, zirconium oxide, or hafnium oxide. The third sensing insulating layer IIL3 may include an organic film. The organic film includes at least one of an acrylate-based resin, a methacrylate-based resin, polyisoprene, a vinyl-based resin, an epoxy-based resin, a urethane-based resin, a cellulose-based resin, a siloxane-based resin, a polyimide-based resin, a polyamide-based resin, or a perylene-based resin.

A color filter layer CFL and a light blocking pattern BM may be disposed on the input sensor ISP. The light blocking pattern BM prevents external light from being reflected. The light blocking pattern BM may be disposed in the same layer as the color filter layer CFL. The color filter layer CFL may include a plurality of color filters CF. The color filters CF are disposed in the light emitting region PXA. The light blocking pattern BM is disposed in the non-light emitting region NPXA.

FIG. 6 is a plan view of an input sensor, according to an embodiment of the present disclosure.

Referring to FIG. 6, the input sensor ISP according to an embodiment of the present disclosure includes the active region AA and the peripheral region NAA adjacent to the active region AA. The plurality of sensing electrodes SE1_1 to SE1_5 and SE2_1 to SE2_4 may be disposed in the active region AA, and the plurality of signal lines SL1_1 to SL1_5 and SL2_1 to SL2_4 may be disposed in the peripheral region NAA.

According to an embodiment, the sensing electrodes SE1_1 to SE1_5 and SE2_1 to SE2_4 include a first sensing electrode SE1 and a second sensing electrode SE2. One of the first sensing electrode SE1 and the second sensing electrode SE2 is a receive electrode, and the other is a transmit electrode. For example, the first sensing electrode SE1 is the receive electrode and the second sensing electrode SE2 is the transmit electrode. A plurality of receive electrodes SE1 and a plurality of transmit electrodes SE2 are provided, respectively. The sensing electrodes SE1_1 to SE1_5 and SE2_1 to SE2_4 may include receive electrodes SE1_1 to SE1_5 and transmit electrodes SE2_1 to SE2_4. Hereinafter, in an embodiment, the first sensing electrode SE1 will be described as a receive electrode SE1 and the second sensing electrode SE2 will be described as a transmit electrode SE2.
The signal lines SL1_1 to SL1_5 and SL2_1 to SL2_4 include receive signal lines SL1_1 to SL1_5 connected to the receive electrodes SE1_1 to SE1_5 and transmit signal lines SL2_1 to SL2_4 connected to the transmit electrodes SE2_1 to SE2_4. The receive signal lines SL1_1 to SL1_5 may be referred to as first signal lines. The transmit signal lines SL2_1 to SL2_4 may be referred to as second signal lines.

The receive electrodes SE1_1 to SE1_5 and the transmit electrodes SE2_1 to SE2_4 cross each other. The receive electrodes SE1_1 to SE1_5 are arranged in the second direction DR2 and extend in the first direction DR1. The transmit electrodes SE2_1 to SE2_4 are arranged in the first direction DR1 and extend in the second direction DR2.

The input sensor ISP may acquire coordinate information using mutual capacitance. Capacitances are formed between the receive electrodes SE1_1 to SE1_5 and the transmit electrodes SE2_1 to SE2_4. The capacitances between the receive electrodes SE1_1 to SE1_5 and the transmit electrodes SE2_1 to SE2_4 are changed by an external input of a user's body. According to an embodiment of the present disclosure, the capacitances between the receive electrodes SE1_1 to SE1_5 and the transmit electrodes SE2_1 to SE2_4 are also changed by an external input caused by an input device other than a user's body. For example, the sensing sensitivity of the input sensor ISP is determined by the variation in capacitance. However, embodiments of the present disclosure are not necessarily limited thereto, and in an embodiment, the input sensor ISP acquires coordinate information through self-capacitance. In an embodiment, the receive electrodes SE1_1 to SE1_5 and the transmit electrodes SE2_1 to SE2_4 are integrated into one sensing electrode to sense an external input. According to an embodiment, the input sensor ISP is not limited to a mutual-capacitance sensor or a self-capacitance sensor when obtaining coordinate information. The input sensor ISP can acquire coordinate information by using both mutual capacitance and self-capacitance.

Each of the receive electrodes SE1_1 to SE1_5 includes first sensor parts SSP1 and first connection parts CP1 disposed in the active region AA. Each of the transmit electrodes SE2_1 to SE2_4 includes second sensor parts SSP2 and second connection parts CP2 disposed in the active region AA. Each of the two first sensor parts SSP1 disposed at opposite ends of one receive electrode is smaller, e.g., half the size, than a first sensor part disposed at the center. Each of the two second sensor parts SSP2 disposed at opposite ends of one transmit electrode is smaller, e.g., half the size, than a second sensor part disposed at the center.

Although FIG. 6 illustrates the receive electrodes SE1_1 to SE1_5 and the transmit electrodes SE2_1 to SE2_4 according to an embodiment as having the shape of a rhombus, the shapes of the receive electrodes SE1_1 to SE1_5 and the transmit electrodes SE2_1 to SE2_4 are not necessarily limited thereto. According to an embodiment of the present disclosure, each of the receive electrodes SE1_1 to SE1_5 or each of the transmit electrodes SE2_1 to SE2_4 has a form, such as a bar shape, in which a sensor part and a connection part are not distinguished from each other. Although each of the first sensor parts SSP1 or each of the second sensor parts SSP2 has the shape of a rhombus, embodiments of the present disclosure are not necessarily limited thereto.
For example, in an embodiment, the first sensor parts SSP1 and the second sensor parts SSP2 have mutually different polygonal shapes. The first sensor parts SSP1 are arranged in the second direction DR2 in one receive electrode, and the second sensor parts SSP2 are arranged in the first direction DR1 in one transmit electrode. Each of the first connection parts CP1 connects adjacent first sensor parts SSP1 to each other, and each of the second connection parts CP2 connects adjacent second sensor parts SSP2 to each other.

Each of the receive electrodes SE1_1 to SE1_5 and the transmit electrodes SE2_1 to SE2_4 has a mesh shape. Because each of the receive electrodes SE1_1 to SE1_5 and the transmit electrodes SE2_1 to SE2_4 has a mesh shape, a parasitic capacitance with the electrodes included in the display panel DP (see FIG. 2) is reduced. The mesh-shaped receive electrodes SE1_1 to SE1_5 and transmit electrodes SE2_1 to SE2_4 include, but are not necessarily limited to, silver, aluminum, copper, chromium, nickel, or titanium.

The receive signal lines SL1_1 to SL1_5 and the transmit signal lines SL2_1 to SL2_4 may be disposed in the peripheral region NAA. The input sensor ISP includes input pads I_PD that extend from ends of the receive signal lines SL1_1 to SL1_5 and the transmit signal lines SL2_1 to SL2_4 and are disposed in the peripheral region NAA. The input pads I_PD are electrically connected to the receive signal lines SL1_1 to SL1_5 and the transmit signal lines SL2_1 to SL2_4, respectively. According to an embodiment of the present disclosure, the input pads I_PD include receive input pads I_PD1, to which the receive signal lines SL1_1 to SL1_5 are electrically connected, and transmit input pads I_PD2, to which the transmit signal lines SL2_1 to SL2_4 are electrically connected.

According to an embodiment of the present disclosure, a pad region PLD, in which the input pads I_PD are disposed, is included in the peripheral region NAA. The pad region PLD may further include pixel pads D_PD that connect the flexible circuit film FCB (see FIG. 2) to the display panel DP (see FIG. 2).

The electronic device ED may further include a sensing driver ICP that controls driving of the input sensor ISP. According to an embodiment of the present disclosure, the sensing driver ICP is electrically connected to the input sensor ISP. The sensing driver ICP is electrically connected to the receive signal lines SL1_1 to SL1_5 and the transmit signal lines SL2_1 to SL2_4, respectively. The sensing driver ICP transmits a driving control signal DCS to the transmit electrodes SE2_1 to SE2_4, and receives, from the receive electrodes SE1_1 to SE1_5, sensing signals RS that reflect a variation in the capacitance between the transmit electrodes SE2_1 to SE2_4 and the receive electrodes SE1_1 to SE1_5. For example, according to an embodiment of the present disclosure, the driving control signal DCS includes sensing scan signals that are sequentially transmitted to the transmit electrodes SE2_1 to SE2_4. The sensing driver ICP may drive the input sensor ISP in a first manner and/or a second manner. The first manner may use mutual capacitance. The second manner may use self-capacitance.

According to an embodiment, FIG. 7 illustrates at least one heat generating module HM disposed on the input sensor ISP. Each of a plurality of heat generating modules HM includes a heat generating electrode HEL and a heat generating device HE. The heat generating module HM generates heat through the heat generating device HE when the internal temperature of the electronic device ED is low.
In this way, the heat generating module HM compensates for the temperature of the electronic device ED. Through the temperature-compensating operation of the heat generating module HM, the electronic device ED lowers the modulus of its layers and reduces the occurrence rate of cracks when folded. Hereinafter, the heat generating module HM will be described in detail with reference to FIGS. 7 to 10.

FIG. 7 is an enlarged view of a sensor unit, according to an embodiment of the present disclosure. FIG. 7 is an enlarged view of a region XX′ of FIG. 6. FIG. 8 is an enlarged view of a region YY′ of FIG. 7.

In an embodiment, the input sensor ISP (see FIG. 6) includes a plurality of sensor units SND. The first sensing electrode SE1, the second sensing electrode SE2, and the heat generating module HM are disposed in each of the plurality of sensor units SND. A plurality of heat generating devices HE are respectively disposed in the corresponding plurality of sensor units SND.

Referring to FIG. 7, in an embodiment, the electronic device ED includes the heat generating module HM. The heat generating module HM includes the heat generating electrode HEL and the heat generating device HE. The heat generating module HM generates heat as the heat generating device HE emits heat through a current that flows to the heat generating electrode HEL. The heat generating module HM is connected to the sensing driver ICP (see FIG. 6), which biases the current to the heat generating electrode HEL to generate the heat, depending on the resistance of the heat generating device HE.

The heat generating electrode HEL is disposed in the same layer as the first sensing electrode SE1 and the second sensing electrode SE2. According to an embodiment, the heat generating electrode HEL is disposed adjacent to the second sensing electrode SE2 of the input sensor ISP. For example, the heat generating electrode HEL crosses the second sensing electrode SE2. The second sensing electrode SE2 includes a first part PP1 and a second part PP2. The first part PP1 and the second part PP2 are connected to each other through a corresponding second signal line of the second signal lines SL2_1 to SL2_4. The heat generating electrode HEL is interposed between the first part PP1 and the second part PP2. The heat generating electrode HEL is connected to the sensing driver ICP through a separate signal line. The heat generating electrode HEL extends in the second direction DR2, similar to the second sensing electrode SE2. The heat generating electrode HEL has a mesh shape.

The heat generating electrode HEL is not directly connected to the first sensing electrode SE1 and the second sensing electrode SE2. The heat generating electrode HEL is electrically connected to the second sensing electrode SE2, which is a transmit electrode, through the heat generating device HE. The heat generating device HE connects the second sensing electrode SE2 to the heat generating electrode HEL. The plurality of heat generating devices HE are provided in the active region AA. The plurality of heat generating devices HE are provided within one sensor unit SND. Although FIG. 7 illustrates four heat generating devices HE, embodiments of the present disclosure are not necessarily limited thereto. The heat generating device HE connects the first part PP1 to the heat generating electrode HEL. The heat generating device HE also connects the second part PP2 to the heat generating electrode HEL.
The heat generating device HE connects the transmit electrode SE2 to the heat generating electrode HEL to drive the heat generating module HM in response to the driving control signal DCS of the transmit electrode SE2. For example, the transmit electrode SE2 drives both the input sensor ISP and the heat generating module HM. The heat generating device HE may include a conductor that emits heat in proportion to its resistance. The heat generating device HE may include a conductor material that has a relatively high resistance and a relatively high thermal conductivity. For example, the heat generating device HE includes at least one of a metal oxide, nickel chromium, or a carbon-based material. The details thereof will be described with reference to FIG. 11.

In FIG. 7, the first sensing electrode SE1 includes a first sensor part SSP1 and a first connection part CP1. The second sensing electrode SE2 includes a second sensor part SSP2 and a second connection part CP2. The first connection part CP1 crosses the second connection part CP2. The second connection part CP2 extends in the same layer as the second sensor part SSP2. The first connection part CP1 is disposed in a different layer from the first sensor part SSP1 and is connected to the first sensor part SSP1 through a contact hole. Although FIG. 7 illustrates that the first connection part CP1 is disposed in the different layer, embodiments of the present disclosure are not necessarily limited thereto. For example, in an embodiment, the second connection part CP2 is disposed in a different layer, and the first connection part CP1 extends from the first sensor part SSP1 in the same layer.

FIG. 8 illustrates the second sensing electrode SE2 and the heat generating electrode HEL. The second sensing electrode SE2 and the heat generating electrode HEL are not directly connected to each other and are spaced apart from each other. The heat generating device HE connects the heat generating electrode HEL and the second sensing electrode SE2. In FIG. 8, in an embodiment, the second sensing electrode SE2 and the heat generating electrode HEL have a mesh shape that prevents the light emitting region PXA from being covered. A portion of the heat generating device HE is disposed in the non-light emitting region NPXA (see FIG. 9) to prevent the light emitting region PXA from being covered where the mesh-shaped heat generating electrode HEL and the second sensing electrode SE2 are disposed. For example, the heat generating device HE overlaps the mesh patterns of the heat generating electrode HEL and the second sensing electrode SE2.

FIG. 9 is a cross-sectional view taken along line I-I′ of FIG. 8. FIG. 10 is a cross-sectional view taken along line II-II′ of FIG. 7. For convenience of illustration, the first insulating layer IIL1 is not shown in FIGS. 9 and 10.

Referring to FIGS. 9 and 10, in an embodiment, the heat generating device HE covers the second sensing electrode SE2 and the heat generating electrode HEL. The light blocking pattern BM is disposed on the heat generating device HE. The light blocking pattern BM is disposed throughout the whole area of the non-light emitting region NPXA. The heat generating device HE is disposed in a portion of the non-light emitting region NPXA. The light blocking pattern BM is disposed on the sensor electrodes SE1 and SE2. In FIG. 9, the light blocking pattern BM is disposed on the second sensing electrode SE2. However, when the heat generating device HE is disposed on the second sensing electrode SE2, the light blocking pattern BM is disposed on the heat generating device HE.
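As an illustrative aside, the resistive heating of the heat generating device HE follows Joule's law, P = I²R = V²/R. The following minimal sketch computes the dissipated power; the 5 V and 1 kΩ figures are made-up assumptions, not values from the disclosure.

```python
# Minimal sketch of the Joule heating delivered by a heat generating
# device HE of resistance R driven by a voltage V across it.
# The numeric values are illustrative assumptions only.

def joule_power(voltage_v: float, resistance_ohm: float) -> float:
    """Dissipated power P = V^2 / R, in watts."""
    return voltage_v ** 2 / resistance_ohm

# Example: a hypothetical 5 V drive across a 1 kilo-ohm nichrome trace.
if __name__ == "__main__":
    p = joule_power(5.0, 1_000.0)
    print(f"dissipated power: {p * 1e3:.1f} mW")  # prints 25.0 mW
```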
The second sensing electrode SE2 and the heat generating electrode HEL are disposed on the second insulating layer IIL2. For example, the second sensing electrode SE2 and the heat generating electrode HEL are disposed in the second conductive layer ICL2. At least one of the heat generating electrode HEL and the second sensing electrode SE2 is disposed on the encapsulating layer TFE and the second insulating layer IIL2 such that the heat generating electrode HEL and the second sensing electrode SE2 are connected to each other through a contact hole. Although FIG. 9 illustrates that the heat generating electrode HEL is disposed on the encapsulating layer TFE and the second insulating layer IIL2, embodiments are not necessarily limited thereto. In an embodiment, the heat generating electrode HEL is disposed on the second insulating layer IIL2, and the second sensing electrode SE2 is disposed on the encapsulating layer TFE and the second insulating layer IIL2.

The heat generating device HE may be covered by the color filter layer CFL. The color filter layer CFL may include a first color filter CF_R, a second color filter CF_G, and a third color filter CF_B. A planarization layer OC is disposed on the color filter layer CFL. The planarization layer OC planarizes the color filter layer CFL, such that the window WM can be disposed on the planarization layer OC.

FIG. 10 shows the heat generating electrode HEL and the second sensing electrode SE2 as including mesh lines. The light emitting region PXA may be defined between adjacent mesh lines. Although two mesh lines are illustrated in the heat generating electrode HEL, embodiments of the present disclosure are not necessarily limited thereto.

FIG. 11 illustrates a method of driving a heat generating module according to an embodiment of the present disclosure. FIG. 11 schematically illustrates mesh shapes of the first sensing electrode SE1, the second sensing electrode SE2, and the heat generating electrode HEL.

In an embodiment, the transmit electrode SE2 is driven by a driving voltage Vtx. A current I flows from the transmit electrode SE2 to the heat generating electrode HEL through the heat generating device HE. The heat generating electrode HEL is connected to a switch SWT. The heat generating module HM can be turned on or off by operation of the switch SWT. For example, when the switch SWT is turned off, no current flows through the heat generating electrode HEL. Accordingly, the heat generating device HE does not generate heat.

The heat generating module HM is connected to the sensing driver ICP. The sensing driver ICP controls the switch SWT connected to the heat generating electrode HEL. The sensing driver ICP turns the switch SWT connected to the heat generating electrode HEL on or off, based on temperature information of the input sensor ISP. The electronic device ED includes a plurality of heat generating modules HM disposed on the input sensor ISP. The sensing driver ICP selectively drives some heat generating modules HM of the plurality of heat generating modules HM. For example, the sensing driver ICP turns on the switches SWT connected to some heat generating modules HM and turns off the switches SWT connected to other heat generating modules HM. The sensing driver ICP senses a temperature for each position inside the input sensor ISP to turn on the switch SWT of a heat generating module HM that has a lower temperature, and turn off the switch SWT of a heat generating module HM that has a higher temperature.
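As an illustrative aside, the position-wise switch control described above amounts to a simple thermostat loop per sensor unit. The sketch below assumes hypothetical helpers read_temperature and set_switch and made-up threshold values; none of these details come from the disclosure.

```python
# Illustrative control loop for the sensing driver ICP: turn on the
# switch SWT of cold heat generating modules HM and turn off warm ones.
# read_temperature() and set_switch() are hypothetical helpers standing
# in for the temperature sensors TS and the switches SWT.

COLD_THRESHOLD_C = 0.0   # assumed turn-on temperature
WARM_THRESHOLD_C = 10.0  # assumed turn-off temperature (hysteresis band)

def update_heaters(sensor_units, read_temperature, set_switch):
    for unit in sensor_units:
        temp = read_temperature(unit)
        if temp <= COLD_THRESHOLD_C:
            set_switch(unit, on=True)    # heat this sensor unit
        elif temp >= WARM_THRESHOLD_C:
            set_switch(unit, on=False)   # stop heating this sensor unit
        # between the thresholds: leave the switch state unchanged
```

The hysteresis band is a common design choice in such loops; it prevents a switch from chattering on and off when the sensed temperature hovers near a single threshold.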
The position of the switch SWT connected to the heat generating electrode HEL is not necessarily limited. According to an embodiment, the switch SWT is disposed in the sensing driver ICP. In an embodiment, the switch SWT is disposed in the active region AA. In some embodiments, the switch SWT is disposed on the main circuit board MCB (see FIG. 2) or the flexible circuit film FCB (see FIG. 2).

FIG. 12 is a cross-sectional view of an input sensor, according to an embodiment of the present disclosure. In the following, descriptions of components described with reference to FIG. 9 will be omitted. For convenience of illustration, the first insulating layer IIL1 is not shown in FIG. 12.

In FIG. 12, according to an embodiment, the light blocking pattern BM is not separately disposed on the heat generating device HE. Instead, the heat generating device HE functions as the light blocking pattern BM. For example, the heat generating device HE includes a low reflectance material that is similar to that of the light blocking pattern BM. The heat generating device HE includes a conducting material that has a low reflectance, a high resistance, and a high thermal conductivity. For example, the reflectance of the heat generating device HE is reduced as the high-resistance, highly thermally conductive material is carbonized to blacken the surface of the heat generating device HE. The heat generating device HE thus suppresses reflection of external light while electrically connecting the second sensing electrode SE2 and the heat generating electrode HEL. The heat generating device HE may be disposed on the third sensing insulating layer IIL3. The third sensing insulating layer IIL3 may be disposed on the second sensing insulating layer IIL2. The third sensing insulating layer IIL3 may include an opening part OP. The heat generating device HE may electrically connect the second sensing electrode SE2 to the heat generating electrode HEL through the opening part OP.

FIG. 13 illustrates a method of driving a heat generating module, according to an embodiment of the present disclosure. In the following, descriptions of components described with reference to FIG. 11 will be omitted. In FIG. 13, a voltage −Vtx, having a phase reverse to a phase of a driving voltage Vtx applied to the second sensing electrode SE2, is applied to the heat generating electrode HEL. For example, the reverse-phase voltage −Vtx is applied to the heat generating electrode HEL to increase the potential difference with the driving voltage Vtx. When the potential difference is increased, the quantity of current flowing through the heat generating device HE increases, such that the heat generating device HE emits more heat. The voltage −Vtx in FIG. 13 is provided by way of example, and in an embodiment, another voltage, such as −2Vtx, is applied.
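As an illustrative aside, the benefit of the reverse-phase drive can be quantified under the assumption that the heat generating device HE behaves as a plain resistance R between the node at Vtx and the heat generating electrode:

```latex
% Illustrative calculation (assumes the heat generating device HE is a
% simple resistance R); not a derivation taken from the disclosure.
\begin{align*}
I  &= \frac{V_{tx} - 0}{R},
& P  &= I^{2} R = \frac{V_{tx}^{2}}{R}
&& \text{(heater electrode held at ground)}\\[4pt]
I' &= \frac{V_{tx} - (-V_{tx})}{R} = \frac{2V_{tx}}{R},
& P' &= \frac{(2V_{tx})^{2}}{R} = 4\,\frac{V_{tx}^{2}}{R}
&& \text{(reverse-phase drive at } -V_{tx}\text{)}
\end{align*}
```

Doubling the potential difference doubles the current and quadruples the dissipated heat; applying −2Vtx instead of −Vtx would give a 3Vtx difference and roughly nine times the grounded-electrode power.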
FIGS. 14A to 14C illustrate a method of driving a heat generating module, according to an embodiment of the present disclosure. FIGS. 14A to 14C illustrate embodiments that differ from the embodiment of FIG. 11. In addition, the reverse-phase voltage −Vtx is applied to the heat generating electrodes HEL in FIGS. 14A to 14C.

In FIG. 14A, in an embodiment, first and second heat generating devices HE1 and HE2 are interposed between the heat generating electrode HEL and the second sensing electrode SE2. The first heat generating device HE1 and the second heat generating device HE2 are spaced apart from each other. Although two heat generating devices HE1 and HE2 are illustrated in FIG. 14A, embodiments are not necessarily limited thereto. In an embodiment, two or more heat generating devices are provided.

In FIG. 14B, in an embodiment, a plurality of heat generating electrodes HEL may be provided. The plurality of heat generating electrodes HEL are connected to one adjacent second sensing electrode SE2. One second sensing electrode SE2 can be connected to one adjacent heat generating electrode HEL through the first heat generating device HE1, and can be connected to another adjacent heat generating electrode HEL through the second heat generating device HE2. Although one first heat generating device HE1 and one second heat generating device HE2 are illustrated, embodiments are not necessarily limited thereto, and in some embodiments, a plurality of first heat generating devices HE1 and a plurality of second heat generating devices HE2 are provided. A switch SWT is connected to each of the plurality of heat generating electrodes HEL.

In FIG. 14C, in an embodiment, the heat generating electrode HEL is interposed between a plurality of second sensing electrodes SE2. One heat generating electrode HEL is connected to a plurality of adjacent second sensing electrodes SE2. The heat generating electrode HEL is connected to two adjacent second sensing electrodes SE2 through a first heat generating device HE1 and a second heat generating device HE2. Although one first heat generating device HE1 is illustrated, embodiments are not necessarily limited thereto, and in some embodiments, a plurality of first heat generating devices HE1 are provided. Similarly, in some embodiments, a plurality of second heat generating devices HE2 are provided.

In FIGS. 14A to 14C, a plurality of heat generating electrodes HEL are connected to one sensing driver ICP (see FIG. 6). The sensing driver ICP individually controls the plurality of switches SWT. The sensing driver ICP controls the switches SWT based on the internal temperature of the input sensor ISP. The internal temperature of the input sensor ISP is sensed by a temperature sensor. Hereinafter, a temperature sensor will be described.

FIGS. 15A and 15B are enlarged views of a sensor unit SND according to an embodiment of the present disclosure. A temperature sensor TS is disposed in each of the sensor units SND. In FIGS. 15A and 15B, in some embodiments, the temperature sensor TS disposed in each of the sensor units SND senses the temperature of the relevant sensor unit SND. For example, the sensing driver ICP receives information on the temperatures of the sensor units SND from the temperature sensors disposed in the respective sensor units SND, and selectively drives only those heat generating modules HM of the sensor units SND that need to be heated.

Referring to FIG. 15A, in an embodiment, the heat generating module HM is disposed in the second sensing electrode SE2. The temperature sensor TS is disposed in the first sensing electrode SE1, regardless of the heat generating module HM. The temperature sensor TS is disposed in the same layer as the heat generating electrode HEL. The temperature sensor TS may extend in the first direction DR1 and may be disposed on the first sensing electrode SE1. The second sensing electrode SE2 includes the first part PP1 and the second part PP2 connected to each other through the heat generating electrode HEL and the heat generating device HE. The second sensing electrode SE2 may also include a third part PP3 disposed adjacent to the temperature sensor TS. The third part PP3 may be connected to the first part PP1 and the second part PP2 through the second connection part CP2.
In an embodiment, a plurality of second connection parts CP2 are provided. Third parts PP3 may be disposed on opposite sides of the temperature sensor TS and surround the temperature sensor TS. The temperature sensor TS may measure the temperature of each of the sensor units SND through the third part PP3. A plurality of the heat generating electrodes HEL are provided that are spaced apart from each other, and the temperature sensor TS is interposed between the heat generating electrodes HEL. The heat generating electrode HEL includes a connection part HCP that connects a plurality of portions. The heat generating electrode HEL may be disposed in the same layer as the temperature sensor TS. The connection part HCP of the heat generating electrode HEL may be disposed in a different layer from the other portions of the heat generating electrode HEL.

Referring to FIG. 15B, in an embodiment, the temperature sensor is integrated into the heat generating module HM disposed on the second sensing electrode SE2. The temperature sensor is integrated into a heat generating electrode HEL-1. For example, the heat generating electrode HEL-1 of FIG. 15B functions as the temperature sensor. According to an embodiment, a voltage is applied to the heat generating electrode HEL-1 such that the heat generating electrode HEL-1 functions as the temperature sensor. To operate the heat generating electrode HEL-1 as the temperature sensor at the same time that the heat generating module HM is operated, a voltage −Vtx (see FIG. 13), which has a reverse phase to a phase of the driving voltage Vtx (see FIG. 13) applied to the second sensing electrode, is applied to the heat generating electrode HEL-1. The first sensing electrode SE1 includes a first part and a second part PP-1. The first part is the sensor part SSP1. The second part PP-1 is adjacent to the temperature sensor HEL-1. The second part PP-1 and the sensor part SSP1 are connected through the first connection part CP1.

FIG. 16 is a graph of resistance as a function of frequency, according to an embodiment of the present disclosure. In general, as the frequency is increased, the resistance increases. The sensing driver ICP (see FIG. 6) adjusts a driving frequency of the second sensing electrode SE2 (see FIG. 6). The heat generating module HM (see FIG. 7) is driven based on a driving voltage applied to the second sensing electrode SE2. For example, the heat generating module HM is controlled based on a driving frequency applied to the second sensing electrode SE2. The sensing driver ICP increases the driving frequency when the heat generating module HM needs to generate heat, such as when a low temperature is sensed through the temperature sensor. For example, the resistance of the heat generating device HE (see FIG. 7) is increased by high-frequency driving. When an external input is sensed at times when heat generation is not necessary, the resistance of the heat generating device HE is reduced by low-frequency driving. When the resistance of the heat generating device HE is reduced, heat is not emitted.

FIG. 16 illustrates a frequency range for controlling the heat generating module HM according to an embodiment of the present disclosure. In FIG. 16, when the heat generating module HM does not generate heat, and the input sensor ISP (see FIG. 6) senses an external input, the driving frequency is 100 kHz. When the heat generating module HM generates heat, the driving frequency is 300 kHz.
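As an illustrative aside, the frequency-based control described with reference to FIG. 16 amounts to selecting one of two driving frequencies depending on whether heating is required. The sketch below illustrates that selection logic only; the 100 kHz and 300 kHz values come from the description of FIG. 16, while the temperature threshold and function name are assumptions.

```python
# Illustrative selection of the driving frequency for the second sensing
# electrode SE2: 100 kHz for normal touch sensing, 300 kHz when the heat
# generating module HM should emit heat (per the FIG. 16 description).
# The temperature threshold is an assumed value.

SENSING_FREQ_HZ = 100_000   # heat generation off, touch sensing active
HEATING_FREQ_HZ = 300_000   # heat generation on

HEATING_THRESHOLD_C = 0.0   # assumed low-temperature threshold

def select_driving_frequency(sensed_temperature_c: float) -> int:
    """Pick the Tx driving frequency based on the sensed temperature."""
    if sensed_temperature_c <= HEATING_THRESHOLD_C:
        return HEATING_FREQ_HZ   # higher frequency -> higher HE resistance -> heat
    return SENSING_FREQ_HZ       # lower frequency -> HE stays cool
```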
For example, the heat generating module HM is turned on/off depending on the magnitude of a driving frequency applied to the input sensor ISP.

According to an embodiment of the present disclosure, an electronic device includes a heat generating module disposed in an input sensor that increases the folding reliability at a lower temperature.

As described above, embodiments are disclosed in the drawings and specification. Therefore, it will be understood that various modifications and other equivalent embodiments are possible by those skilled in the art. While embodiments of the present disclosure have been described with reference to the drawings, it will be apparent to those of ordinary skill in the art that various changes and modifications may be made thereto without departing from the spirit and scope of embodiments of the present disclosure as set forth in the following claims.
11861130

DETAILED DESCRIPTION

In the following, the technical solutions in embodiments of the disclosure will be described clearly and completely in connection with the drawings in the embodiments of the disclosure. Obviously, the described embodiments are only some of the embodiments of the disclosure, and not all of the embodiments. Based on the embodiments in the disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort pertain to the protection scope of the disclosure.

The present disclosure provides a touch module, a manufacturing method thereof, and a touch display device. In each touch sensing unit, a bridging region includes a first cutting pattern, a boundary region between the first touch electrode and each of the second touch electrodes includes a second cutting pattern, and the first cutting pattern and the second cutting pattern are substantially the same. As a result, the Mura phenomenon (or moiré phenomenon) caused by the bridging region and the boundary region is alleviated.

The inventor found that, during the process of manufacturing a touch display device, when the touch module is superimposed on a display module such as an OLED backplate, metal meshes with different patterns will cause an optical Mura phenomenon (for example, dot Mura, line Mura, block Mura, etc. in the dark state, and differences in brightness at different azimuths in the bright state). A touch unit is generally composed of two adjacent transmitting electrode patterns and two adjacent sensing electrode patterns, wherein the transmitting electrode patterns and the sensing electrode patterns each occupy approximately half of the area.

As shown in FIGS. 1 and 2, in a touch unit 100 of a touch module, the bridging position of a transmitting electrode (Tx) or a sensing electrode (Rx) is defined as a bridging region 10, the boundary between the transmitting electrode and the sensing electrode is defined as a boundary region 20, and the remaining region is a main body region 30 of the transmitting electrode or the sensing electrode. If there are deficiencies in the bridging distance of the bridging region 10 and the pattern design of the metal mesh, they will cause serious dot or block Mura.

In the touch unit 100 shown in FIGS. 1 and 2, the bridging region is the portion denoted by a dashed frame 10, which is composed of a Tx connection portion and an Rx connection portion. The Tx connection portion connects adjacent transmitting electrode patterns (Tx) inside the touch unit 100, and the Rx connection portion connects adjacent sensing electrode patterns (Rx) inside the touch unit 100. The Tx connection portion and the Rx connection portion constitute a laminated structure.

In a touch unit 202 shown in FIGS. 4-6, the bridging region is the portion denoted by a dashed frame 205, which is composed of a connection portion of a first touch electrode 203 and a connection portion of a second touch electrode 204. The connection portion of the second touch electrode 204 may be a bridge 209 as shown in FIG. 9. The connection portion of the first touch electrode 203 and the bridge 209 constitute a laminated structure. In some embodiments, the connection portion of the first touch electrode 203 in the bridging region may include one or more conductive patterns, and the connection portion (for example, the bridge 209) of the second touch electrode 204 in the bridging region may also include one or more conductive patterns.
According to an aspect of the present disclosure, there is provided a touch module. FIG. 3 illustrates a schematic structural view of a touch module according to an embodiment of the present disclosure. As shown in FIG. 3, a touch module 200 comprises: a base substrate 201; and an array of touch units 202 arranged on the base substrate 201. As shown in FIGS. 4, 5 and 6, the touch unit 202 comprises a first touch electrode 203 extending along a first direction X and two second touch electrodes 204 arranged on two sides of the first touch electrode along a second direction Y, the first direction X and the second direction Y intersecting each other. The touch unit 202 further comprises: a bridging region 205 (denoted by a dashed frame shown in FIGS. 4, 5 and 6) between the two second touch electrodes 204, and a boundary region 206 between the first touch electrode 203 and each of the second touch electrodes 204. The bridging region 205 includes a first cutting pattern 207, the boundary region 206 includes a second cutting pattern 208, and the first cutting pattern 207 and the second cutting pattern 208 are substantially the same.

"Substantially the same" indicates that the first cutting pattern 207 and the second cutting pattern 208 have substantially the same contour; for example, it may indicate a change rule of the contour patterns, and at least one of the structure, length, shape, etc. of each pattern constituent part is substantially identical. Said "substantially" means being completely identical or allowing some parts to be incompletely identical, while guaranteeing a similarity of 50%, 60%, 70%, 80%, 90% or more.

As shown in FIGS. 4 and 5, there are two boundary lines between the first touch electrode 203 and the second touch electrodes 204 in one touch unit 202, each boundary line including the first cutting pattern 207 and the second cutting pattern 208. These two boundary lines are substantially identical after being rotated by 180 degrees. "Substantially identical" means being completely identical or allowing some parts to be incompletely identical, while guaranteeing a similarity of 50%, 60%, 70%, 80%, 90% or more.

As shown in FIG. 6, there are two boundary lines between the first touch electrode 203 and the second touch electrodes 204 in one touch unit 202, each boundary line including the first cutting pattern 207 and the second cutting pattern 208. These two boundary lines are substantially identical with respect to a virtual axisymmetric centerline. "Substantially identical" means being completely identical or allowing some parts to be incompletely identical, while guaranteeing a similarity of 50%, 60%, 70%, 80%, 90% or more.

According to an embodiment of the present disclosure, the first cutting pattern 207 of the bridging region 205 may be designed based on the second cutting pattern 208 of the boundary region 206. At the boundary between the bridging region 205 and the boundary region 206, the first cutting pattern 207 and the second cutting pattern 208 are connected. On this premise, the morphology of the first cutting pattern 207 is kept as consistent with the morphology of the second cutting pattern 208 as possible. In the embodiments shown in FIGS. 4 and 5, the boundary region and the bridging region each may be composed of a 12*12 metal mesh (corresponding to 12*12 sub-pixels).
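As an illustrative aside, one way to make the "substantially the same" criterion concrete is to model each cutting pattern as a binary mask over the 12*12 mesh and compute the fraction of matching cells. The mask representation and the 0.8 threshold below are assumptions chosen from the 50%-90% range mentioned above, not definitions from the disclosure.

```python
# Illustrative similarity check between two cutting patterns, each
# modeled as a 12x12 binary mask (1 = cutting opening on the metal mesh,
# 0 = intact mesh line). The 0.8 threshold is an assumed value within
# the 50%-90% range mentioned in the text.

def similarity(pattern_a, pattern_b) -> float:
    """Fraction of mesh cells on which the two patterns agree."""
    cells = [(a == b)
             for row_a, row_b in zip(pattern_a, pattern_b)
             for a, b in zip(row_a, row_b)]
    return sum(cells) / len(cells)

def substantially_the_same(pattern_a, pattern_b, threshold=0.8) -> bool:
    return similarity(pattern_a, pattern_b) >= threshold
```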
For the layout of the first cutting pattern 207 and the second cutting pattern 208, continuously connected horizontal cutting openings (i.e., openings generated by the intersections of the first cutting pattern 207 and the second cutting pattern 208 with the metal mesh) should not extend beyond 3 sub-pixels, and must not extend beyond 5 sub-pixels in any case. Likewise, continuously connected vertical cutting openings should not extend beyond 3 sub-pixels, and must not extend beyond 5 sub-pixels in any case. The cutting openings can be adjusted locally, so that the transverse, longitudinal and total cutting opening densities in the bridging region are consistent with the cutting opening density in the non-bridging region, and the resulting cutting openings in the bridging region and the boundary region have similar pattern morphologies.

The embodiment in FIG. 6 includes cutting openings in three directions, which are cutting openings at 0° (the horizontal direction), 45°, and 135°, respectively. Continuously connected openings in each direction should not extend beyond 3 sub-pixels, and must not extend beyond 5 sub-pixels in any case. In the embodiments shown in FIGS. 4-6, all the cutting openings should be on the metal mesh. For example, a cutting opening in the horizontal direction cannot overlap the metal strip of the metal mesh in the horizontal direction.

It is to be noted that, in FIG. 3, multiple rectangles represent an array of touch units 202. In FIGS. 4 and 5, multiple rectangles represent regions of multiple sub-pixels to which the touch unit 202 corresponds. In FIG. 6, multiple hexagons represent regions of multiple sub-pixels to which the touch unit 202 corresponds.

According to an embodiment of the present disclosure, in the same touch unit, the first cutting pattern of the bridging region and the second cutting pattern of the boundary region are substantially the same. Therefore, when the touch module is used in a touch display device, the Mura phenomenon (or moiré phenomenon) caused by the bridging region and the boundary region is alleviated.

Therefore, the present disclosure further provides a design method for an FMLOC (Flexible Multi-Layer On Cell) structure. The bridging region and the boundary region of Tx and Rx in a complete FMLOC cycle are designed so that the first cutting pattern of the bridging region and the second cutting pattern of the boundary region are substantially the same. Combined with optical Mura simulation, an optimized design of the bridging region can be obtained, in which Mura resulting from superimposition of the FMLOC and the OLED is significantly alleviated. In addition, the bridging distance between Rx and Rx (or between Tx and Tx) may also be taken into account when setting the bridging region. The present disclosure may also be applied to other types of multilayer On Cell structures and devices, and is particularly suitable for a metal mesh On Cell touch structure.

In the embodiments of the present disclosure, the first touch electrode 203 may be a transmitting electrode (Tx), or the first touch electrode 203 may be a sensing electrode (Rx). The second touch electrodes 204 may be sensing electrodes (Rx), or the second touch electrodes 204 may be transmitting electrodes (Tx). For example, in an embodiment, the first touch electrode 203 is a transmitting electrode (Tx), and the second touch electrodes 204 are sensing electrodes (Rx). In another embodiment, the first touch electrode 203 is a sensing electrode (Rx), and the second touch electrodes 204 are transmitting electrodes (Tx).
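As an illustrative aside, the layout rules above reduce to a run-length constraint: in every cutting direction, a run of consecutive cutting openings should stay within 3 sub-pixels and may never exceed 5. A small validation sketch follows; the row-of-flags representation of openings per direction is an assumption introduced for illustration.

```python
# Illustrative validator for the run-length rules on cutting openings.
# Each direction's openings are modeled as rows of 0/1 flags per
# sub-pixel (1 = cutting opening present), an assumed representation.

PREFERRED_MAX_RUN = 3  # runs longer than this are discouraged
HARD_MAX_RUN = 5       # runs longer than this are not allowed

def longest_run(flags):
    """Length of the longest run of consecutive 1s."""
    best = run = 0
    for f in flags:
        run = run + 1 if f else 0
        best = max(best, run)
    return best

def check_direction(rows):
    """Check every row of one cutting direction (horizontal, 45 deg, ...)."""
    worst = max((longest_run(r) for r in rows), default=0)
    if worst > HARD_MAX_RUN:
        return "reject"          # violates the 5-sub-pixel hard limit
    if worst > PREFERRED_MAX_RUN:
        return "warn"            # allowed, but exceeds the 3-sub-pixel target
    return "ok"

# Example: a row with a 4-long run passes the hard limit but warns.
print(check_direction([[1, 1, 1, 1, 0, 1]]))  # prints "warn"
```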
Optionally, in some embodiments, as shown in FIGS. 4 and 6, the first cutting pattern 207 and the second cutting pattern 208 have a broken line shape. Optionally, in some embodiments, as shown in FIG. 5, the first cutting pattern 207 and the second cutting pattern 208 have a stepped shape. In the context of the present disclosure, the "broken line shape" refers to a shape composed of a plurality of line segments, and the "stepped shape" refers to a shape composed of a plurality of line segments alternately arranged in two mutually perpendicular directions. Those skilled in the art can understand that the first cutting pattern and the second cutting pattern may have a broken line shape, a stepped shape, a linear shape, a curved shape, an irregular shape, or the like.

Optionally, in some embodiments, as shown in FIGS. 4, 5 and 6, in the same touch unit 202, the bridging region 205 and the boundary region 206 have substantially the same area. "Substantially the same area" refers to areas having a similarity of at least 80%, 90% or more, and includes completely the same area. As a result, a compact and uniform touch unit array can be obtained.

FIG. 7 illustrates a schematic structural view of a bridging region of a touch unit according to an embodiment of the present disclosure. FIG. 8 illustrates a schematic structural view of a bridging region of a touch unit according to another embodiment of the present disclosure. Optionally, in some embodiments, as shown in FIGS. 7 and 8, in the same touch unit 202, the two second touch electrodes 204 are spaced apart by 1 to 5 sub-pixels. In the context of the present disclosure, the "distance between two second touch electrodes" may be measured by the number of complete sub-pixels exposed at the shortest distance between the two second touch electrodes. Therefore, the "distance between two second touch electrodes" is independent of actual measured values such as the width or length of a sub-pixel. As shown in FIG. 7, in the same touch unit 202, the two second touch electrodes 204 are spaced apart by one sub-pixel. As shown in FIG. 8, in the same touch unit 202, the two second touch electrodes 204 are spaced apart by three sub-pixels.

Optionally, in some embodiments, as shown in FIG. 9, in the same touch unit, a bridge 209 that bridges the two second touch electrodes 204 spans 1 to 5 sub-pixels. FIG. 9 illustrates a schematic structural view of a bridge of a touch unit according to an embodiment of the present disclosure. In some embodiments, as shown in FIG. 9, the two second touch electrodes 204 are connected to each other through a via hole 210 and a bridge 209, and the via hole 210 penetrates an insulating layer (or a passivation layer) 211. Those skilled in the art can understand that the bridge 209 spans the bridging region, and the material of the bridge 209 may be a metal or a conductive metal oxide.

Optionally, in some embodiments, as shown in FIGS. 7 and 8, at least one of the first touch electrode 203 and the second touch electrodes 204 further includes a plurality of third cutting patterns 212 (denoted by short lines shown in the figures). With the plurality of third cutting patterns 212, the Mura phenomenon can be further suppressed. For example, it is possible to design a plurality of third cutting patterns 212 in the boundary region 206 of the touch unit 202, and replicate the same patterns in the bridging region 205 and the main body region, so that the third cutting pattern 212 is arranged in the entire touch unit 202.
In addition, as shown in FIGS. 10 and 11, the third cutting pattern 212 may be arranged according to a specific distribution of sub-pixels. The third cutting pattern 212 may serve as, for example, a dummy pattern in the main body region of the touch electrode to meet electrical requirements. Specifically, the arrangement of the third cutting pattern 212 may follow the following rules. Firstly, it is necessary to ensure that the density of the cutting openings in the boundary region is equal to the density of the cutting openings in the non-boundary region (including the bridging region and the main body region). Secondly, it is necessary to ensure that the third cutting pattern 212 in the boundary region is consistent with the third cutting pattern 212 in the non-boundary region. Of course, it is also required to ensure that the first cutting pattern 207 and the second cutting pattern 208 separate Tx from Rx. In the pixel structure shown in FIG. 10, the numbers of cutting openings corresponding to the horizontal direction and the vertical direction of the touch unit should be approximately equal. In addition, it should be further ensured that the cutting openings of the touch unit resulting from the third cutting patterns 212 in a certain direction are evenly arranged.

Optionally, in some embodiments, the first touch electrode 203 and the second touch electrodes 204 each include a metal mesh (as shown by the meshes in FIGS. 4-8, 10 and 11). In the touch unit 202, the cutting openings (i.e., openings generated by the intersections of the third cutting patterns with the metal mesh) on the metal mesh generated by the plurality of third cutting patterns 212 have a uniform distribution density. Generally, the shape of the metal mesh and the positions of mesh openings may correspond to the shape and positions of sub-pixels of a display panel to which the touch module is adapted. In the context of the present disclosure, the "distribution density" of the cutting openings in the metal mesh refers to the ratio of the number of cutting openings in a repeating unit to the number of mesh patterns in the repeating unit in a certain direction. For example, if, among 100 metal wires extending in a certain direction, 20 metal wires have fractures, the "distribution density" of the cutting openings in this direction is 20%. In some embodiments, the "distribution densities" of the cutting openings in all directions are equal to each other.

FIG. 14 illustrates a schematic view of the arrangement of touch units according to an embodiment of the present disclosure. The array of touch units may include a plurality of touch units 202 as shown in FIG. 4. The touch unit 202 comprises a first touch electrode 203 extending along a first direction X and two second touch electrodes 204 arranged on two sides of the first touch electrode along a second direction Y. The first direction X and the second direction Y intersect each other. When the touch unit shown in FIG. 5 or FIG. 6 is used, the same or similar arrangement of touch units as shown in FIG. 14 can also be obtained.
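As an illustrative aside, the "distribution density" definition above is directly computable: count the fractured wires among the mesh wires running in a given direction. The following sketch mirrors the 100-wires/20-fractures example from the text; the per-wire boolean representation is an assumption introduced for illustration.

```python
# Illustrative computation of the cutting-opening "distribution density":
# the ratio of fractured wires to all mesh wires in one direction.
# Wires are modeled as booleans (True = wire carries a cutting opening).

def distribution_density(wires_with_openings) -> float:
    wires = list(wires_with_openings)
    return sum(wires) / len(wires)

# The example from the text: 20 fractured wires out of 100 -> 20%.
wires = [True] * 20 + [False] * 80
assert abs(distribution_density(wires) - 0.20) < 1e-9

def densities_match(direction_a, direction_b, tolerance=0.01) -> bool:
    """Check that two directions have (nearly) equal densities."""
    return abs(distribution_density(direction_a)
               - distribution_density(direction_b)) <= tolerance
```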
According to another aspect of the present disclosure, there is provided a touch module. FIG. 3 illustrates a schematic structural view of a touch module according to an embodiment of the present disclosure. As shown in FIG. 3, the touch module 200 comprises: a base substrate 201; and an array of touch units 202 arranged on the base substrate 201. As shown in FIGS. 4, 5 and 6, the touch unit 202 comprises a first touch electrode 203 extending along a first direction X, and two second touch electrodes 204 arranged on two sides of the first touch electrode along a second direction Y. The first direction X and the second direction Y intersect each other. The touch unit 202 further comprises: a bridging region 205 (denoted by a dashed frame shown in FIGS. 4, 5 and 6) between the two second touch electrodes 204, and a boundary region 206 between the first touch electrode 203 and each of the second touch electrodes 204. The bridging region 205 includes a first cutting pattern 207, the boundary region 206 includes a second cutting pattern 208, and the first cutting pattern 207 and the second cutting pattern 208 are composed of the same unit pattern 21 having a broken line shape.

For example, in the embodiment shown in FIG. 4, the first cutting pattern 207 and the second cutting pattern 208 are composed of the same unit pattern 21 (a pattern in a broken line shape). In the embodiment shown in FIG. 5, the first cutting pattern 207 and the second cutting pattern 208 are composed of the same unit pattern 21 (a pattern in a stepped shape). In the embodiment shown in FIG. 6, the first cutting pattern 207 and the second cutting pattern 208 are composed of the same unit pattern 21 (a pattern in the shape of a broken line with an included angle of about 135°).

According to an embodiment of the present disclosure, in the same touch unit, the first cutting pattern of the bridging region and the second cutting pattern of the boundary region are substantially the same. Therefore, when the touch module is used in a touch display device, the Mura phenomenon (or moiré phenomenon) caused by the bridging region and the boundary region is alleviated.

According to a further aspect of the present disclosure, there is provided a touch display device. FIG. 12 illustrates a schematic structural view of a touch display device according to an embodiment of the present disclosure. As shown in FIG. 12, the touch display device 300 comprises a display panel 301 and the touch module 200 described in any of the foregoing embodiments, and the touch module 200 is arranged on a light exit surface of the display panel 301. The touch display device provided by the embodiment of the present disclosure has the same advantages as the abovementioned touch module, which will not be repeated here.

Optionally, in some embodiments, as shown in FIGS. 4 and 6, the first cutting pattern 207 and the second cutting pattern 208 have a broken line shape. Optionally, in some embodiments, as shown in FIG. 5, the first cutting pattern 207 and the second cutting pattern 208 have a stepped shape. Those skilled in the art can understand that the first cutting pattern and the second cutting pattern may have a broken line shape, a stepped shape, a linear shape, a curved shape, an irregular shape, or the like.

Optionally, in some embodiments, as shown in FIGS. 4, 5 and 6, in the same touch unit 202, the bridging region 205 and the boundary region 206 have substantially the same area. As a result, a compact and uniform touch unit array can be obtained.

FIG. 7 illustrates a schematic structural view of a bridging region of a touch unit according to an embodiment of the present disclosure. FIG. 8 illustrates a schematic structural view of a bridging region of a touch unit according to another embodiment of the present disclosure. Optionally, in some embodiments, as shown in FIGS. 7 and 8, in the same touch unit 202, the two second touch electrodes 204 are spaced apart by 1 to 5 sub-pixels.
As shown in FIG. 7, in the same touch unit 202, the two second touch electrodes 204 are spaced apart by one sub-pixel. As shown in FIG. 8, in the same touch unit 202, the two second touch electrodes 204 are spaced apart by three sub-pixels. Optionally, in some embodiments, as shown in FIG. 9, in the same touch unit, a bridge 209 that bridges the two second touch electrodes 204 spans 1 to 5 sub-pixels.

FIG. 9 illustrates a schematic structural view of a bridge of a touch unit according to an embodiment of the present disclosure. In some embodiments, as shown in FIG. 9, the two second touch electrodes 204 are connected to each other through a via hole 210 and a bridge 209, and the via hole 210 penetrates an insulating layer (or a passivation layer) 211. Those skilled in the art can understand that the bridge 209 spans the bridging region, and the material of the bridge 209 may be a metal or a conductive metal oxide.

Optionally, in some embodiments, as shown in FIGS. 7 and 8, at least one of the first touch electrode 203 and the second touch electrodes 204 further includes a plurality of third cutting patterns 212 (denoted by short lines shown in the figures). With the plurality of third cutting patterns 212, the Mura phenomenon can be further suppressed. In addition, as shown in FIGS. 10 and 11, the third cutting pattern 212 may be arranged according to a specific distribution of sub-pixels. The third cutting pattern 212 may serve as, for example, a dummy pattern in the main body region of the touch electrode to meet electrical requirements.

Optionally, in some embodiments, the first touch electrode 203 and the second touch electrodes 204 include a metal mesh (denoted by a mesh shown in FIGS. 4-8, 10 and 11). In the touch unit 202, the cutting openings (i.e., openings generated by the intersections of the third cutting patterns 212 with the metal mesh) on the metal mesh generated by the plurality of third cutting patterns 212 have a uniform distribution density. Generally, the shape of the metal mesh and the positions of mesh openings may correspond to the shape and positions of sub-pixels of a display panel to which the touch module is adapted.

According to yet another aspect of the present disclosure, there is provided a manufacturing method of a touch module. FIG. 13 illustrates a flow chart of a manufacturing method of a touch module according to an embodiment of the present disclosure. The method comprises: S11, providing a base substrate; and S12, arranging an array of touch units on the base substrate, each touch unit comprising a first touch electrode extending along a first direction and two second touch electrodes arranged on two sides of the first touch electrode along a second direction, the first direction and the second direction intersecting each other. The touch unit further comprises: a bridging region between the two second touch electrodes, and a boundary region between the first touch electrode and the second touch electrodes. The bridging region includes a first cutting pattern, the boundary region includes a second cutting pattern, and the first cutting pattern and the second cutting pattern are substantially the same.

"Substantially the same" indicates that the first cutting pattern and the second cutting pattern have substantially the same contour; for example, it may indicate a change rule of the contour patterns, and at least one of the structure, length, shape, etc. of each pattern constituent part is substantially identical.
Said "substantially" means being completely identical or allowing some parts to be incompletely identical, while guaranteeing a similarity of 50%, 60%, 70%, 80%, 90% or more.

As shown in FIGS. 4 and 5, there are two boundary lines between the first touch electrode 203 and the second touch electrodes 204 in one touch unit 202, each boundary line including the first cutting pattern 207 and the second cutting pattern 208. These two boundary lines are substantially identical after being rotated by 180 degrees. As shown in FIG. 6, there are two boundary lines between the first touch electrode 203 and the second touch electrodes 204 in one touch unit 202, each boundary line including the first cutting pattern 207 and the second cutting pattern 208. These two boundary lines are substantially identical with respect to a virtual axisymmetric centerline. In both cases, "substantially identical" means being completely identical or allowing some parts to be incompletely identical, while guaranteeing a similarity of 50%, 60%, 70%, 80%, 90% or more.

According to an embodiment of the present disclosure, in the same touch unit, the first cutting pattern of the bridging region and the second cutting pattern of the boundary region are substantially the same. Therefore, when the touch module is used in a touch display device, the Mura phenomenon (or moiré phenomenon) caused by the bridging region and the boundary region is alleviated.

Those skilled in the art can understand that the first cutting pattern, the second cutting pattern, and the third cutting pattern in the present disclosure are essentially slits, and the first cutting pattern, the second cutting pattern and/or the third cutting pattern can be formed on the first touch electrode and the second touch electrodes using processes such as photolithography and sawing. Since the first cutting pattern of the bridging region and the second cutting pattern of the boundary region are substantially the same, when the first cutting pattern, the second cutting pattern and/or the third cutting pattern are formed on the first touch electrode and the second touch electrodes using a photolithography process, the same mask plate may be used to perform multiple partial exposures on the base substrate of the touch module, thereby obtaining a large-sized touch module.

In the description of the present disclosure, the orientations or positional relationships indicated by the terms "upper", "lower", etc. are based on the orientations or positional relationships illustrated in the drawings, which are only for the convenience of describing the present disclosure and do not require the present disclosure to be constructed and operated in a specific orientation, and therefore cannot be understood as a limitation of the present disclosure.

In the description of this specification, the description with reference to the terms "an embodiment", "another embodiment", etc. means that specific features, structures, materials or characteristics described in conjunction with the embodiment are included in at least one embodiment of the present disclosure. In this specification, the schematic representations of the above terms do not necessarily refer to the same embodiment or example.
Moreover, the described specific features, structures, materials or characteristics may be combined in any one or more embodiments or examples in a suitable manner. In addition, those skilled in the art may combine different embodiments or examples, and the features of the different embodiments or examples described in this specification, provided that no conflict arises. Furthermore, it is to be noted that in this specification, the terms "first" and "second" are only used for descriptive purposes, and cannot be understood as indicating or implying relative importance or implicitly indicating the number of indicated technical features.

What has been stated above covers only specific embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any variations or substitutions that can be easily conceived, by those skilled in this technical field, within the technical scope revealed by the present disclosure should be encompassed within the protection scope of the present disclosure. Thus, the protection scope of the present disclosure should be based on the protection scope of the claims.
11861131

DETAILED DESCRIPTION

In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of various embodiments or implementations of the invention. As used herein, "embodiments" and "implementations" are interchangeable words that are non-limiting examples of devices or methods employing one or more of the inventive concepts disclosed herein. It is apparent, however, that various embodiments may be practiced without these specific details or with one or more equivalent arrangements. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring various embodiments. Further, various embodiments may be different, but do not have to be mutually exclusive. For example, specific shapes, configurations, and characteristics of an embodiment may be used or implemented in another embodiment without departing from the inventive concepts.

Unless otherwise specified, the illustrated embodiments are to be understood as providing exemplary features of varying detail of some ways in which the inventive concepts may be implemented in practice. Therefore, unless otherwise specified, the features, components, modules, layers, films, panels, regions, and/or aspects, etc. (hereinafter individually or collectively referred to as "elements"), of the various embodiments may be otherwise combined, separated, interchanged, and/or rearranged without departing from the inventive concepts.

The use of cross-hatching and/or shading in the accompanying drawings is generally provided to clarify boundaries between adjacent elements. As such, neither the presence nor the absence of cross-hatching or shading conveys or indicates any preference or requirement for particular materials, material properties, dimensions, proportions, commonalities between illustrated elements, and/or any other characteristic, attribute, property, etc., of the elements, unless specified. Further, in the accompanying drawings, the size and relative sizes of elements may be exaggerated for clarity and/or descriptive purposes. When an embodiment may be implemented differently, a specific process order may be performed differently from the described order. For example, two consecutively described processes may be performed substantially at the same time or performed in an order opposite to the described order. Also, like reference numerals denote like elements.

When an element, such as a layer, is referred to as being "on," "connected to," or "coupled to" another element or layer, it may be directly on, connected to, or coupled to the other element or layer, or intervening elements or layers may be present. When, however, an element or layer is referred to as being "directly on," "directly connected to," or "directly coupled to" another element or layer, there are no intervening elements or layers present. To this end, the term "connected" may refer to physical, electrical, and/or fluid connection, with or without intervening elements. Further, the X-axis, the Y-axis, and the Z-axis are not limited to three axes of a rectangular coordinate system, such as the x, y, and z axes, and may be interpreted in a broader sense. For example, the X-axis, the Y-axis, and the Z-axis may be perpendicular to one another, or may represent different directions that are not perpendicular to one another.
For the purposes of this disclosure, "at least one of X, Y, and Z" and "at least one selected from the group consisting of X, Y, and Z" may be construed as X only, Y only, Z only, or any combination of two or more of X, Y, and Z, such as, for instance, XYZ, XYY, YZ, and ZZ. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.

Although the terms "first," "second," etc. may be used herein to describe various types of elements, these elements should not be limited by these terms. These terms are used to distinguish one element from another element. Thus, a first element discussed below could be termed a second element without departing from the teachings of the disclosure.

Spatially relative terms, such as "beneath," "below," "under," "lower," "above," "upper," "over," "higher," "side" (e.g., as in "sidewall"), and the like, may be used herein for descriptive purposes, and, thereby, to describe one element's relationship to other element(s) as illustrated in the drawings. Spatially relative terms are intended to encompass different orientations of an apparatus in use, operation, and/or manufacture in addition to the orientation depicted in the drawings. For example, if the apparatus in the drawings is turned over, elements described as "below" or "beneath" other elements or features would then be oriented "above" the other elements or features. Thus, the exemplary term "below" may encompass both an orientation of above and below. Furthermore, the apparatus may be otherwise oriented (e.g., rotated 90 degrees or at other orientations), and, as such, the spatially relative descriptors used herein should be interpreted accordingly.

The terminology used herein is for the purpose of describing particular embodiments and is not intended to be limiting. As used herein, the singular forms, "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Moreover, the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, components, and/or groups thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It is also noted that, as used herein, the terms "substantially," "about," and other similar terms, are used as terms of approximation and not as terms of degree, and, as such, are utilized to account for inherent deviations in measured, calculated, and/or provided values that would be recognized by one of ordinary skill in the art.

Various embodiments are described herein with reference to sectional and/or exploded illustrations that are schematic illustrations of idealized embodiments and/or intermediate structures. As such, variations from the shapes of the illustrations as a result, for example, of manufacturing techniques and/or tolerances, are to be expected. Thus, embodiments disclosed herein should not necessarily be construed as limited to the particular illustrated shapes of regions, but are to include deviations in shapes that result from, for instance, manufacturing. In this manner, regions illustrated in the drawings may be schematic in nature, and the shapes of these regions may not reflect the actual shapes of regions of a device and, as such, are not necessarily intended to be limiting.
As customary in the field, some embodiments are described and illustrated in the accompanying drawings in terms of functional blocks, units, and/or modules. Those skilled in the art will appreciate that these blocks, units, and/or modules are physically implemented by electronic (or optical) circuits, such as logic circuits, discrete components, microprocessors, hard-wired circuits, memory elements, wiring connections, and the like, which may be formed using semiconductor-based fabrication techniques or other manufacturing technologies. In the case of the blocks, units, and/or modules being implemented by microprocessors or other similar hardware, they may be programmed and controlled using software (e.g., microcode) to perform various functions discussed herein and may optionally be driven by firmware and/or software. It is also contemplated that each block, unit, and/or module may be implemented by dedicated hardware, or as a combination of dedicated hardware to perform some functions and a processor (e.g., one or more programmed microprocessors and associated circuitry) to perform other functions. Also, each block, unit, and/or module of some embodiments may be physically separated into two or more interacting and discrete blocks, units, and/or modules without departing from the scope of the inventive concepts. Further, the blocks, units, and/or modules of some embodiments may be physically combined into more complex blocks, units, and/or modules without departing from the scope of the inventive concepts.

Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. Terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and should not be interpreted in an idealized or overly formal sense, unless expressly so defined herein.

FIG. 1 is a perspective view showing a display device according to some embodiments of the present disclosure, and FIG. 2 is a plan view showing a display device according to some embodiments of the present disclosure.

As used herein, the terms "above," "top" and "upper surface" refer to the upper side of the display device, i.e., the side indicated by the arrow in the z-axis direction, whereas the terms "below," "bottom" and "lower surface" refer to the lower side of the display device, i.e., the opposite side in the z-axis direction. As used herein, the terms "left," "right," "upper" and "lower" sides indicate relative positions when the display device is viewed from the top. For example, the "left side" refers to the opposite side indicated by the arrow of the x-axis direction, the "right side" refers to the side indicated by the arrow of the x-axis direction, the "upper side" refers to the direction indicated by the arrow of the y-axis direction, and the "lower side" refers to the opposite side indicated by the arrow of the y-axis direction. Further, in this specification, the phrase "on a plane," or "plan view," means viewing a target portion from the top, and the phrase "on a cross-section" means viewing a cross-section formed by vertically cutting a target portion from the side.

Referring to FIGS. 1 to 2, a display device 10 may be used for displaying moving images or still images.
The display device 10 may be used as the display screen of portable electronic devices, such as a mobile phone, a smart phone, a tablet PC, a smart watch, a watch phone, a mobile communications terminal, an electronic notebook, an electronic book, a portable multimedia player (PMP), a navigation device, and an ultra-mobile PC (UMPC), as well as the display screen of various products such as a television, a notebook computer, a monitor, a billboard, and products related to the Internet of Things (IoT).

The display device 10 may have a rectangular shape when viewed from the top (e.g., in a plan view). For example, the display device 10 may have a rectangular shape having shorter sides in the first direction (x-axis direction) and longer sides in the second direction (y-axis direction) when viewed from the top. The corners where the shorter sides in the first direction (x-axis direction) meet the longer sides in the second direction (y-axis direction) may form a substantially right angle, or may be rounded to have a curvature (e.g., a predetermined curvature). The shape of the display device 10 when viewed from the top is not limited to a rectangular shape, but may be formed in another polygonal shape, a circular shape, or an elliptical shape. For example, the display device 10 may be formed flat, but the present disclosure is not limited thereto. For another example, the display device 10 may be formed to bend (e.g., to bend with a predetermined curvature).

The display device 10 may include a display unit 100, a display driver 200, a display circuit board 300, a touch driver 400, a touch circuit board 410, and a touch sensing unit 500.

The display unit 100 may include a display area having pixels for displaying images, and a non-display area around the display area. The display area of the display unit 100 may emit light from a plurality of emission areas (or a plurality of open areas), respectively. For example, the display unit 100 may include a pixel circuit, such as a switching element, a pixel-defining layer defining the emission areas of the display area, and a self-light-emitting element. For example, the self-light-emitting element may include at least one of an organic light-emitting diode, a quantum-dot light-emitting diode, an inorganic-based micro light-emitting diode (e.g., micro LED), and an inorganic-based nano light-emitting diode (e.g., nano LED). In the following description, an organic light-emitting diode is described as an example of the self-light-emitting element.

The non-display area of the display unit 100 may include display electrode pads located on one edge of the substrate. The display electrode pads may be electrically connected to the display circuit board 300. The display unit 100 will be described in detail with reference to FIGS. 3A and 4.

The display driver 200 may output signals and voltages for driving the display unit 100. The display driver 200 may supply data voltages to the data lines. The display driver 200 may provide a supply voltage to a power line, and may supply scan control signals to the scan driver. The display driver 200 may be implemented as an integrated circuit (IC), and may be attached to the display unit 100 by a chip-on-glass (COG) technique, a chip-on-plastic (COP) technique, or ultrasonic bonding. For example, the display driver 200 may be attached on the exposed part of the display unit 100 that is not covered by the touch sensing unit 500. For another example, the display driver 200 may be attached to the display circuit board 300.
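Purely as an illustrative aid, and not as part of the disclosed embodiments, the row-by-row addressing performed by a display driver such as the display driver 200 may be sketched in C as follows. The helper functions assert_scan_line() and write_data_voltage(), the panel dimensions, and the frame-buffer layout are hypothetical stand-ins introduced only for this sketch; an actual display driver generates analog data voltages and scan timing in hardware rather than through software calls.

    #include <stdint.h>

    #define NUM_SCAN_LINES 320   /* rows driven through the scan lines SL (assumed value) */
    #define NUM_DATA_LINES 240   /* columns driven through the data lines DL (assumed value) */

    /* Hypothetical stand-ins for hardware access; introduced only for this sketch. */
    static void assert_scan_line(int row, int enable) { (void)row; (void)enable; }
    static void write_data_voltage(int column, uint16_t gray_code) { (void)column; (void)gray_code; }

    /* Refresh one frame: each scan line is asserted in turn so that the pixels P
     * of that row accept data voltages, and one data voltage is then driven onto
     * each data line, mirroring the scan/data addressing described herein. */
    static void refresh_frame(const uint16_t frame[NUM_SCAN_LINES][NUM_DATA_LINES])
    {
        for (int row = 0; row < NUM_SCAN_LINES; ++row) {
            assert_scan_line(row, 1);                      /* scan signal selects the row */
            for (int col = 0; col < NUM_DATA_LINES; ++col) {
                write_data_voltage(col, frame[row][col]);  /* data voltage per data line DL */
            }
            assert_scan_line(row, 0);                      /* deselect before the next row */
        }
    }

The sketch mirrors only the ordering of operations described herein: a scan signal selects a row of pixels P, after which the data voltages for that row are applied through the data lines.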
The display circuit board 300 may be attached on the display electrode pads of the display unit 100 using an anisotropic conductive film (ACF). Accordingly, lead lines of the display circuit board 300 may be electrically connected to the display electrode pads of the display unit 100. The display circuit board 300 may be a flexible printed circuit board (FPCB), a printed circuit board (PCB), or a flexible film such as a chip-on-film (COF).

The touch driver 400 may be connected to touch electrodes of the touch sensing unit 500. The touch driver 400 may apply touch driving signals to the touch electrodes of the touch sensing unit 500, and may measure capacitances of the touch electrodes. For example, the touch driving signals may have driving pulses. The touch driver 400 may not only determine whether a touch is input based on the capacitances of the touch electrodes, but also may calculate touch coordinates of the position where the touch is input. The touch driver 400 may be implemented as an integrated circuit (IC), and may be mounted on the touch circuit board 410.

The touch circuit board 410 may be attached onto the touch electrode pads of the touch sensing unit 500 using an anisotropic conductive film. Accordingly, the lead lines of the touch circuit board 410 may be electrically connected to the touch electrode pads of the touch sensing unit 500. The touch circuit board 410 may be a flexible printed circuit board, a printed circuit board, or a flexible film such as a chip-on-film.

The touch sensing unit 500 may be located on the display unit 100. The touch sensing unit 500 may have a rectangular shape having the shorter sides in the first direction (x-axis direction) and the longer sides in the second direction (y-axis direction) when viewed from the top. The corners where the shorter sides in the first direction (x-axis direction) meet the longer sides in the second direction (y-axis direction) may form a right angle, or may be rounded (e.g., rounded with a predetermined curvature). For example, the shape of the touch sensing unit 500 when viewed from the top is not limited to a rectangular shape, but may be formed in other polygonal shapes, a circular shape, or an elliptical shape. The shape of the touch sensing unit 500 may be similar to the shape of the display unit 100 when viewed from the top. The touch sensing unit 500 may be, but is not limited to being, flat. For example, the touch sensing unit 500 may include curved portions formed at left and right ends thereof. The curved portions may have a constant curvature or varying curvatures. In addition, the touch sensing unit 500 may be formed to be flexible so that it may be curved, bent, folded, or rolled, like the display unit 100.

The touch sensing unit 500 may include touch electrodes that are located in a touch sensor area, and that may detect a user's touch, and may also include touch electrode pads that are located in a touch peripheral area around the touch sensor area. The touch electrode pads may be formed on the touch sensing unit 500 at one edge of the touch sensing unit 500 to be electrically connected to the touch circuit board 410. The touch sensing unit 500 will be described in detail with reference to FIGS. 3A and 5. Although the touch sensing unit 500 is a touch panel separated from the display unit 100 in the example shown in FIGS. 1 and 2, the present disclosure is not limited thereto.

FIG. 3A is an example of a cross-sectional view taken along the line I-I′ of FIG. 2.
Referring to FIG. 3A, the display device 10 may include a display unit 100, a touch sensing unit 500, and a sealing member SEAL that attaches the display unit 100 to the touch sensing unit 500.

The display unit 100 may include a first substrate SUB1, a thin-film transistor layer TFTL, and an emission material layer EML. The first substrate SUB1 may be a base substrate or a base member, and may be made of an insulating material such as a polymer resin. For example, the first substrate SUB1 may be a rigid substrate. For another example, the first substrate SUB1 may be a flexible substrate that may be bent, folded, or rolled. When the first substrate SUB1 is a flexible substrate, it may be made of, but is not limited to, polyimide (PI).

The thin-film transistor layer TFTL may be located on the first substrate SUB1. The thin-film transistor layer TFTL may include scan lines, data lines, power lines, scan control lines, data connection lines for connecting the display driver 200 with the data lines, pad connection lines for connecting the display driver 200 with the display electrode pads, etc., as well as thin-film transistors forming the pixel circuits of the pixels. Each of the thin-film transistors may include a gate electrode, a semiconductor layer, a source electrode, and a drain electrode. When the scan driver 110 is formed in the non-display area NDA of the display unit 100, as shown in FIG. 4, the scan driver 110 may include thin-film transistors. The thin-film transistor layer TFTL may be located in the display area and the non-display area. For example, the thin-film transistors in the pixels, the scan lines, the data lines, and the power supply lines of the thin-film transistor layer TFTL may be located in the display area DA. The scan control lines, the data connection lines, and the pad connection lines of the thin-film transistor layer TFTL may be located in the non-display area.

The emission material layer EML may be located on the thin-film transistor layer TFTL. The emission material layer EML may include pixels, in each of which a first electrode, an emissive layer, and a second electrode are sequentially stacked on one another to emit light, and a pixel-defining layer for defining the pixels. The pixels of the emission material layer EML may be located in the display area DA. For example, the emissive layer may be an organic emissive layer containing an organic material. The emissive layer may include a hole transporting layer, an organic light-emitting layer, and an electron transporting layer. When a voltage is applied to the first electrode through the thin-film transistors of the thin-film transistor layer TFTL while a cathode voltage is applied to the second electrode, holes and electrons may move to the organic light-emitting layer through the hole transporting layer and the electron transporting layer, respectively, and may combine in the organic light-emitting layer to emit light. For example, the first electrode may be an anode electrode, while the second electrode may be a cathode electrode.

For example, an air gap VC may be formed between the display unit 100 and the touch sensing unit 500. During a process of attaching the display unit 100 to the touch sensing unit 500 through the sealing member SEAL, the air gap VC may be formed between the display unit 100 and the touch sensing unit 500. For another example, a filling layer may be located between the display unit 100 and the touch sensing unit 500.
During the process of attaching the display unit 100 to the touch sensing unit 500 through the sealing member SEAL, the filling layer may be injected between the display unit 100 and the touch sensing unit 500. The filling layer may be, but is not limited to, an epoxy filling film or a silicon filling film.

The touch sensing unit 500 may include a second substrate SUB2 and a touch sensor layer TSL. The second substrate SUB2 may be a base substrate or a base member, and may be made of an insulating material such as a polymer resin. For example, the second substrate SUB2 may be a rigid substrate. When the second substrate SUB2 is a rigid substrate, the second substrate SUB2 may include, but is not limited to, a glass material or a transparent metal material.

The touch sensor layer TSL may be located on the second substrate SUB2. The touch sensor layer TSL may include touch electrodes for sensing a user's touch by capacitive sensing, touch electrode pads, and touch signal lines for connecting the touch electrode pads with the touch electrodes. For example, the touch sensor layer TSL may sense a user's touch by self-capacitance sensing or mutual capacitance sensing. The touch electrodes of the touch sensor layer TSL may be located in the touch sensor area overlapping the display area of the display unit 100. The touch signal lines and the touch electrode pads of the touch sensor layer TSL may be located in a touch peripheral area overlapping the non-display area of the display unit 100. For example, a polarizing film and a cover window may be additionally located on the touch sensor layer TSL. The polarizing film may be located on the touch sensor layer TSL, and the cover window may be located on the polarizing film by an adhesive member.

The sealing member SEAL may be interposed between the edge of the first substrate SUB1 and the edge of the second substrate SUB2 in the non-display area. The sealing member SEAL may be located along the edges of the first substrate SUB1 and the second substrate SUB2 in the non-display area to seal the air gap VC. The first substrate SUB1 and the second substrate SUB2 may be coupled with each other by the sealing member SEAL. For example, the sealing member SEAL may be, but is not limited to, a frit adhesive layer, an ultraviolet curable resin, or a thermosetting resin.

FIG. 3B is another example of a cross-sectional view taken along the line I-I′ of FIG. 2. The display device of FIG. 3B may further include an encapsulation layer TFEL that encapsulates the display unit 100, while the second substrate SUB2 of the display device of FIG. 3A is omitted. In the following description, the same elements as those described above will be briefly described or will not be described.

Referring to FIG. 3B, the display device 10 may include a display unit 100 and a touch sensing unit 500. The display unit 100 may include a first substrate SUB1, a thin-film transistor layer TFTL, an emission material layer EML, and an encapsulation layer TFEL. The first substrate SUB1 may be a base substrate or a base member, and may be made of an insulating material, such as a polymer resin. The thin-film transistor layer TFTL may be located on the first substrate SUB1. The emission material layer EML may be located on the thin-film transistor layer TFTL. The encapsulation layer TFEL may be located on the emission material layer EML to cover a plurality of light-emitting elements. The encapsulation layer TFEL may prevent oxygen or moisture from permeating into the light-emitting elements.
The touch sensing unit 500 may be located on the encapsulation layer TFEL, and may include a touch sensor layer TSL. The touch sensor layer TSL may be located on the encapsulation layer TFEL. The touch sensor layer TSL may include touch electrodes for sensing a user's touch by capacitive sensing, touch electrode pads, and touch signal lines for connecting the touch electrode pads with the touch electrodes. For example, the touch sensor layer TSL may sense a user's touch by self-capacitance sensing or mutual capacitance sensing.

FIG. 4 is a plan view showing the display unit shown in FIG. 3A.

Referring to FIG. 4, the display unit 100 may include a display area DA where pixels are located to display images, and a non-display area NDA that is the peripheral area of the display area DA. The non-display area NDA may be defined as the area from the outer side of the display area DA to the edge of the display unit 100.

The scan lines SL, the data lines DL, the power line PL, and the pixels P may be located in the display area DA. The scan lines SL may be arranged to extend in the first direction (x-axis direction), while the data lines DL may be arranged to extend in the second direction (y-axis direction) intersecting the first direction. The power line PL may include at least one vertical line in parallel with the data lines DL in the second direction, and a plurality of horizontal lines branching off from the at least one vertical line in the first direction.

Each of the pixels P may be connected to at least one scan line SL, data line DL, and power line PL. Each of the pixels P may include thin-film transistors including a driving transistor and at least one switching transistor, a light-emitting element, and a capacitor. When a scan signal is applied from a scan line SL, corresponding ones of the pixels P receive a data voltage of a data line DL, and supply a driving current to the light-emitting element according to the data voltage applied to the gate electrode, so that light is emitted.

The display unit 100 may include a scan driver 110 located in the non-display area NDA, a scan control line SCL, data connection lines DLL, and pad connection lines. In addition, the display driver 200 may be located in the non-display area NDA of the display unit 100.

The scan driver 110 may be connected to the display driver 200 through at least one scan control line SCL. The scan driver 110 may receive a scan control signal from the display driver 200. The scan driver 110 may generate scan signals according to the scan control signal, and may supply the scan signals to the scan lines SL. For example, the scan driver 110 may be formed in the non-display area NDA on one side (e.g., outer side) of the display area DA. It is, however, to be understood that the present disclosure is not limited thereto. For another example, a plurality of scan drivers 110 may be formed in the non-display area NDA, and may be located on both sides, or at each outer side, of the display area DA.

The display driver 200 may be connected to display electrode pads DP of a display pad area DPA through display connection lines to receive digital video data and timing signals. The display driver 200 may convert the digital video data into analog positive/negative data voltages, and may supply them to the data lines DL through the data connection lines DLL. In addition, the display driver 200 may generate and supply a scan control signal for controlling the scan driver 110 through the scan control line SCL.
The scan signals of the scan driver 110 may select pixels P to be supplied with data voltages, and the selected pixels P may receive the respective data voltages. The display driver 200 may be implemented as an integrated circuit (IC), and may be attached to the first substrate SUB1 by a chip-on-glass (COG) technique, a chip-on-plastic (COP) technique, or ultrasonic bonding.

FIG. 5 is a plan view showing an example of the touch sensing unit shown in FIG. 3A.

Referring to FIG. 5, the touch sensing unit 500 may include a touch sensor area TSA for sensing a user's touch, and a touch peripheral area TPA located around the touch sensor area TSA. The touch sensor area TSA may overlap with the display area DA of the display unit 100, and the touch peripheral area TPA may overlap with the non-display area NDA of the display unit 100.

The first touch electrodes TE and the second touch electrodes RE may be located in the touch sensor area TSA. The first touch electrodes TE and the second touch electrodes RE may be arranged such that they are spaced apart from one another (e.g., by a spacing or distance). For example, the first touch electrodes TE may be arranged in the first direction (x-axis direction), and may extend in the second direction (y-axis direction). The second touch electrodes RE may be located between the first touch electrodes TE, and may extend in the first direction (x-axis direction) while being spaced apart from one another in the second direction (y-axis direction). The first touch electrodes TE adjacent to one another in the second direction (y-axis direction) may be electrically connected with one another by touch island electrodes.

The first touch electrodes TE and the second touch electrodes RE may be formed to have a diamond shape or a triangular shape when viewed from the top. For example, the first touch electrodes TE and the second touch electrodes RE located on the edges of the touch sensor area TSA may be formed in a triangular shape when viewed from the top, and the other first touch electrodes TE and second touch electrodes RE may be formed in a diamond shape when viewed from the top. In addition, to prevent moiré patterns caused by the first touch electrodes TE and the second touch electrodes RE when a user watches images on the display device 10, the first touch electrodes TE and the second touch electrodes RE may have curved sides when viewed from the top. For another example, the shape of the first touch electrodes TE and the second touch electrodes RE located in the touch sensor area TSA when viewed from the top is not limited to that shown in FIG. 5.

The first touch electrodes TE adjacent to one another in the second direction (y-axis direction) may be electrically connected to the touch island electrodes through connection electrodes. For example, a first touch electrode TE may be connected to a touch island electrode through a connection electrode, and the touch island electrode may be connected to another first touch electrode TE through another connection electrode. The connection electrodes are located on a different layer from the first touch electrodes TE and the second touch electrodes RE, and thus it is possible to reduce or prevent the likelihood of a short circuit being formed between the first touch electrodes TE and the second touch electrodes RE at their intersections.
As a result, the first touch electrodes TE electrically connected in the second direction (y-axis direction) may be insulated from the second touch electrodes RE electrically connected in the first direction (x-axis direction).

First to third touch signal lines TL1, TL2, and RL and the touch electrode pads TP may be located in the touch peripheral area TPA.

One end of each of the first touch signal lines TL1 may be connected to a respective one of the first touch electrodes TE at a first side of the touch sensor area TSA. The first side of the touch sensor area TSA may refer to one of the four sides of the touch sensor area TSA that is closest to the touch pad area TDA where the touch electrode pads TP are located. The other end of each of the first touch signal lines TL1 may be connected to some of the touch electrode pads TP of the touch pad area TDA. Accordingly, the first touch signal lines TL1 may respectively connect the first touch electrodes TE located on the first side of the touch sensor area TSA with some touch electrode pads TP of the touch pad area TDA.

One end of each of the second touch signal lines TL2 may be connected to a respective one of the first touch electrodes TE located on a second side of the touch sensor area TSA. The second side of the touch sensor area TSA may refer to the side that is opposite to the first side of the touch sensor area TSA, and that is farthest from the touch pad area TDA. The other end of each of the second touch signal lines TL2 may be connected to others of the touch electrode pads TP of the touch pad area TDA. For example, the second touch signal lines TL2 may be connected to the first touch electrodes TE located on the second side of the touch sensor area TSA, while passing around the first side and a fourth side (e.g., left side, as shown in FIG. 5) of the touch sensor area TSA. Accordingly, the second touch signal lines TL2 may respectively connect the first touch electrodes TE located on the second side of the touch sensor area TSA with some other touch electrode pads TP of the touch pad area TDA.

One end of each of the third touch signal lines RL may be connected to a respective one of the second touch electrodes RE located on a third side (e.g., right side, as shown in FIG. 5) of the touch sensor area TSA. The third side of the touch sensor area TSA may refer to the side that is opposite to the fourth side of the touch sensor area TSA. The other end of each of the third touch signal lines RL may be connected to others of the touch electrode pads TP of the touch pad area TDA. Accordingly, the third touch signal lines RL may respectively connect the second touch electrodes RE located on the third side of the touch sensor area TSA with others of the touch electrode pads TP of the touch pad area TDA.

The touch electrode pads TP may be located on one side of the second substrate SUB2. The touch circuit board 410 may be attached on the touch electrode pads TP using an anisotropic conductive film. Accordingly, the touch electrode pads TP may be electrically connected to the touch circuit board 410.

The first touch electrodes TE and the second touch electrodes RE may be driven by mutual capacitive sensing or self-capacitive sensing.
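Purely as an illustrative aid, and not as part of the disclosed embodiments, the mutual capacitive sensing elaborated in the following paragraphs may be sketched in C as follows. The electrode counts, the threshold, and the helper measure_mutual() are hypothetical assumptions introduced only for this sketch; they show one way a touch driver such as the touch driver 400 might derive a touch input and touch coordinates from changes in the charge amount of the mutual capacitances.

    #include <stdint.h>

    #define NUM_TX 16            /* first touch electrodes TE used for driving (assumed) */
    #define NUM_RX 32            /* second touch electrodes RE used for sensing (assumed) */
    #define TOUCH_THRESHOLD 40   /* minimum capacitance drop treated as a touch (assumed) */

    /* Baseline (untouched) mutual-capacitance readings for each TE/RE intersection. */
    static uint16_t baseline[NUM_TX][NUM_RX];

    /* Hypothetical acquisition helper: drives electrode tx with touch driving
     * pulses and returns a digitized measure of the mutual capacitance at the
     * intersection (tx, rx); a stub stands in for the analog front end here. */
    static uint16_t measure_mutual(int tx, int rx) { return baseline[tx][rx]; }

    /* Scan every intersection; report whether a touch is input and, if so,
     * centroid-style coordinates in electrode units, weighted by the drop in
     * mutual capacitance at each intersection. */
    static int scan_touch(double *x, double *y)
    {
        long total = 0, weighted_x = 0, weighted_y = 0;
        for (int tx = 0; tx < NUM_TX; ++tx) {
            for (int rx = 0; rx < NUM_RX; ++rx) {
                int drop = (int)baseline[tx][rx] - (int)measure_mutual(tx, rx);
                if (drop > TOUCH_THRESHOLD) {
                    total      += drop;
                    weighted_x += (long)drop * rx;
                    weighted_y += (long)drop * tx;
                }
            }
        }
        if (total == 0) {
            return 0;                              /* no touch input detected */
        }
        *x = (double)weighted_x / (double)total;   /* coordinate along the RE axis */
        *y = (double)weighted_y / (double)total;   /* coordinate along the TE axis */
        return 1;
    }

Under a mutual capacitive scheme, a touching body changes the mutual capacitance at the intersections it covers; the weighted sum above is one common way to turn those per-intersection changes into the touch coordinates that the touch driver 400 is described as calculating.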
For example, when the first touch electrodes TE and the second touch electrodes RE are driven by mutual capacitive sensing, the touch driving signals may be respectively supplied to the first touch electrodes TE by the first touch signal lines TL1 and the second touch signal lines TL2 to thereby charge mutual capacitances formed at the intersections of the first touch electrodes TE and the second touch electrodes RE. The touch driver 400 may measure a change in the charge amount of the mutual capacitances formed between the first and second touch electrodes TE and RE through the third touch signal lines RL, and may determine whether there is a touch input based on the change in the charge amount of the mutual capacitances. The touch driving signals may have touch driving pulses.

For another example, when the first touch electrodes TE and the second touch electrodes RE are driven by self-capacitive sensing, the first to third touch signal lines TL1, TL2, and RL may supply the touch driving signals to the first touch electrodes TE as well as to the second touch electrodes RE to thereby charge the self-capacitances of the first touch electrodes TE and the second touch electrodes RE. The touch driver 400 may measure a change in the charge amount of the self-capacitances through the first to third touch signal lines TL1, TL2, and RL, and may determine whether there is a touch input based on the change in the charge amount of the self-capacitances.

In the following description, the touch driver 400 is driven by mutual capacitive sensing, in which touch driving pulses are applied to the first touch electrodes TE, and a change in the charge amount of the mutual capacitances is measured through the third touch signal lines RL connected to the second touch electrodes RE. In the mutual capacitive sensing, the first touch electrodes TE may serve as touch driving electrodes, the second touch electrodes RE may serve as touch sensing electrodes, the first and second touch signal lines TL1 and TL2 may serve as touch driving lines, and the third touch signal lines RL may serve as touch sensing lines.

For example, first to fourth guard lines GL1, GL2, GL3, and GL4, and first and second ground lines GRL1 and GRL2, may be located at the touch peripheral area TPA. The first guard line GL1 may be arranged on the outer side of the outermost one of the third touch signal lines RL. The first ground line GRL1 may be located on the outer side of the first guard line GL1. Accordingly, the first guard line GL1 is located between the outermost one of the third touch signal lines RL and the first ground line GRL1, so that it is possible to reduce the influence caused by a change in the voltage of the first ground line GRL1 on the third touch signal lines RL. One end of the first guard line GL1 and one end of the first ground line GRL1 may be connected to ones of the touch electrode pads TP that are located at the rightmost position, although the present disclosure is not limited thereto.

The second guard line GL2 may be located between the innermost one of the third touch signal lines RL and the rightmost one of the first touch signal lines TL1. Accordingly, the second guard line GL2 may reduce mutual influence between the third touch signal lines RL and the first touch signal lines TL1. One end of the second guard line GL2 may be connected to the touch electrode pads TP.

The third guard line GL3 may be located between the leftmost one of the first touch signal lines TL1 and the innermost one of the second touch signal lines TL2.
Accordingly, the third guard line GL3 may reduce mutual influence between the first touch signal lines TL1 and the second touch signal lines TL2. One end of the third guard line GL3 may be connected to the touch electrode pads TP.

The fourth guard line GL4 may be arranged on the outer side of the outermost one of the second touch signal lines TL2. The second ground line GRL2 may be located on the outer side of the fourth guard line GL4. Accordingly, the fourth guard line GL4 is located between the outermost one of the second touch signal lines TL2 and the second ground line GRL2, so that it is possible to reduce the influence caused by a change in the voltage of the second ground line GRL2 on the second touch signal lines TL2. One end of the fourth guard line GL4 and one end of the second ground line GRL2 may be connected to the leftmost ones of the touch electrode pads TP.

The first ground line GRL1 may be located at the outermost position on the right side of the touch sensing unit 500, and the second ground line GRL2 may be located at the outermost positions on the lower, left, and upper sides of the touch sensing unit 500. The first ground line GRL1 and the second ground line GRL2 may receive a ground voltage. Therefore, when static electricity is applied from the outside, the static electricity may be discharged to the first ground line GRL1 and the second ground line GRL2. For example, when the first touch electrodes TE and the second touch electrodes RE are driven by mutual capacitive sensing, the first to fourth guard lines GL1, GL2, GL3, and GL4 may receive the ground voltage.

FIG. 6 is an enlarged view of the area A1 of FIG. 5, and FIG. 7 is a cross-sectional view taken along the line II-II′ of FIG. 6.

Referring to FIGS. 6 and 7, the first substrate SUB1 may be a base substrate or a base member, and may be made of an insulating material such as a polymer resin. A buffer layer BF may be located on the first substrate SUB1. The buffer layer BF may be formed of an inorganic film that may reduce or prevent the permeation of air or moisture. For example, the buffer layer BF may include a plurality of inorganic films alternatingly stacked on one another. The buffer layer BF may be made up of, but is not limited to, multiple layers in which one or more inorganic layers selected from a silicon nitride layer, a silicon oxynitride layer, a silicon oxide layer, a titanium oxide layer, and an aluminum oxide layer are alternately stacked on one another.

The thin-film transistor layer TFTL may include a thin-film transistor TFT, a gate insulating layer GI, an interlayer dielectric layer ILD, a passivation layer PAS, and a planarization layer OC. The thin-film transistor TFT may be located on the buffer layer BF, and may form a pixel circuit, or a portion thereof, of each of a plurality of pixels. For example, the thin-film transistor TFT may be a driving transistor or a switching transistor of the pixel circuit. The thin-film transistor TFT may include a semiconductor layer ACT, a gate electrode GE, a source electrode SE, and a drain electrode DE.

The semiconductor layer ACT may be located on the buffer layer BF. The semiconductor layer ACT may overlap the gate electrode GE, the source electrode SE, and the drain electrode DE. The semiconductor layer ACT may be in direct contact with the source electrode SE and the drain electrode DE, and may face the gate electrode GE with the gate insulating layer GI therebetween.

The gate electrode GE may be located on the gate insulating layer GI.
The gate electrode GE may overlap the semiconductor layer ACT with the gate insulating layer GI interposed therebetween.

The source electrode SE and the drain electrode DE are located on the interlayer dielectric layer ILD such that they are spaced apart from each other. The source electrode SE may be in contact with one end of the semiconductor layer ACT through a contact hole formed in the gate insulating layer GI and the interlayer dielectric layer ILD. The drain electrode DE may be in contact with the other end of the semiconductor layer ACT through another contact hole formed in the gate insulating layer GI and the interlayer dielectric layer ILD. The drain electrode DE may be connected to a first electrode AND of the light-emitting element EL through a contact hole formed in the passivation layer PAS and the planarization layer OC.

The gate insulating layer GI may be located on the semiconductor layer ACT. For example, the gate insulating layer GI may be located on the semiconductor layer ACT and the buffer layer BF, and may insulate the semiconductor layer ACT from the gate electrode GE. The gate insulating layer GI may include contact holes through which the source electrode SE and the drain electrode DE respectively penetrate.

The interlayer dielectric layer ILD may be located over the gate electrode GE. For example, the interlayer dielectric layer ILD may include a contact hole through which the source electrode SE penetrates, and a contact hole through which the drain electrode DE penetrates. The contact holes of the interlayer dielectric layer ILD may be respectively connected to the contact holes of the gate insulating layer GI.

The passivation layer PAS may be located over the thin-film transistor TFT to protect the thin-film transistor TFT. For example, the passivation layer PAS may include a contact hole through which the first electrode AND passes. For another example, the passivation layer PAS may be omitted from the display device 10. In such a case, the planarization layer OC may be located on the thin-film transistor TFT to provide a flat surface over the thin-film transistor TFT.

The planarization layer OC may be located on the passivation layer PAS to provide a flat surface over the thin-film transistor TFT. For example, the planarization layer OC may include a contact hole through which the first electrode AND of the light-emitting element EL passes. The contact hole of the planarization layer OC may be connected to the contact hole of the passivation layer PAS.

The light-emitting element EL may be located on the thin-film transistor TFT. The light-emitting element EL may include a first electrode AND, an emissive layer E, and a second electrode CAT. The first electrode AND may be located on the planarization layer OC. For example, the first electrode AND may be located to overlap the emission area or the open area defined by the pixel-defining layer. The first electrode AND may be connected to the drain electrode DE of the thin-film transistor TFT.

The emissive layer E may be located on the first electrode AND. The emissive layer E may include a hole injecting layer, a hole transporting layer, a light-emitting layer, an electron blocking layer, an electron transporting layer, an electron injecting layer, etc. For example, the emissive layer E may be, but is not limited to, an organic emissive layer made of an organic material.
If the emissive layer E is an organic emissive layer, when the thin-film transistor applies a voltage (e.g., a predetermined voltage) to the first electrode AND of the light-emitting element EL, and when the second electrode CAT of the light-emitting element EL receives a common voltage or a cathode voltage, holes and electrons may move to the organic emissive layer E through the hole transporting layer and the electron transporting layer, respectively, and they may combine in the organic emissive layer E to emit light.

The second electrode CAT may be located on the emissive layer E. For example, the second electrode CAT may be implemented as an electrode common to all pixels, as opposed to being located as a separate electrode for each of the pixels. The second electrode CAT may be located on the emissive layer E in the emission area, and may be located on the pixel-defining layer in regions other than the emission area. The pixel-defining layer may define the emission area or the open areas. The pixel-defining layer may separate and insulate the first electrode AND of one of the plurality of light-emitting elements EL from the first electrode AND of another one of the light-emitting elements EL.

The second substrate SUB2 may be located on the display unit 100. The second substrate SUB2 may be a base substrate, and may be made of an insulating material, such as a polymer resin. The second substrate SUB2 may reduce or prevent oxygen or moisture from permeating into the light-emitting elements EL.

The touch sensor layer TSL may be located on the second substrate SUB2. The touch sensor layer TSL may include first and second touch electrodes TE and RE, touch island electrodes TEI, connection electrodes CE, and first and second insulating layers IL1 and IL2.

The connection electrodes CE may be located on the second substrate SUB2. Each of the connection electrodes CE may connect a respective first touch electrode TE with a respective touch island electrode TEI. For example, an end of each of the connection electrodes CE may be connected to a respective first touch electrode TE, and the other end thereof may be connected to a respective touch island electrode TEI. The connection electrodes CE may be formed as an opaque metal conductive layer. For example, the connection electrodes CE may be made up of a single layer or multiple layers of one of molybdenum (Mo), aluminum (Al), chromium (Cr), gold (Au), titanium (Ti), nickel (Ni), neodymium (Nd), and copper (Cu), or an alloy thereof. For example, the connection electrodes CE do not overlap with the emission area of the pixel P, so that the aperture ratio of the pixel P is not reduced. It is, however, to be understood that the present disclosure is not limited thereto. The touch island electrode TEI may be located between first touch electrodes TE that are adjacent to each other in the second direction (y-axis direction) to reduce the length of the connection electrodes CE.

The first insulating layer IL1 may cover the connection electrodes CE and the second substrate SUB2. For example, the first insulating layer IL1 may be formed of an inorganic layer, for example, a silicon nitride layer, a silicon oxynitride layer, a silicon oxide layer, a titanium oxide layer, or an aluminum oxide layer.

The first touch electrodes TE, the touch island electrodes TEI, and the second touch electrodes RE may be located on the first insulating layer IL1.
The first touch electrode TE may be connected to the connection electrode CE through a first contact hole CNT1 penetrating through the first insulating layer IL1, and the connection electrode CE may be connected to the touch island electrode TEI through a second contact hole CNT2 penetrating through the first insulating layer IL1. Accordingly, the connection electrode CE may electrically connect the first touch electrode TE with the touch island electrode TEI. Accordingly, the first touch electrodes TE spaced apart from one another in the second direction (y-axis direction) with the second touch electrodes RE therebetween may be electrically connected through the connection electrode CE and the touch island electrode TEI.

For example, the first touch electrodes TE, the touch island electrodes TEI, and the second touch electrodes RE may be made of a transparent conductive oxide (TCO) that may transmit light, such as ITO or IZO. Accordingly, even if the first touch electrodes TE, the touch island electrodes TEI, and the second touch electrodes RE overlap the pixels P, the aperture ratio of the pixel P is not reduced.

The second insulating layer IL2 may cover the first touch electrodes TE, the touch island electrodes TEI, and the second touch electrodes RE. For example, the second insulating layer IL2 may be formed of an inorganic layer, for example, a silicon nitride layer, a silicon oxynitride layer, a silicon oxide layer, a titanium oxide layer, or an aluminum oxide layer.

FIG. 8 is an enlarged plan view showing an example of the area A2 of FIG. 5, FIG. 9 is an example of a cross-sectional view taken along the line III-III′ of FIG. 8, FIG. 10 is a view showing the dummies and the contact dummy shown in FIG. 8, FIG. 11 is an enlarged plan view of the area A3 of FIG. 8, and FIG. 12 is a cross-sectional view taken along the line IV-IV′ of FIG. 11.

Referring to FIGS. 8 to 12, a display unit 100 may include a display area DA and a transmitting portion TU (see FIG. 9). The display area DA may include a plurality of pixels P. Each of the pixels P may be connected to at least one scan line SL, data line DL, and power line PL. Each of the pixels P may include thin-film transistors including a driving transistor and at least one switching transistor, a light-emitting element, and a capacitor. When a scan signal is applied from a scan line SL, corresponding ones of the pixels P receive a data voltage of a corresponding data line DL, and supply a driving current to the light-emitting element according to the data voltage applied to the gate electrode, such that light is emitted.

The transmitting portion TU of the display unit 100 may be surrounded by the display area DA when viewed from the top. The transmitting portion TU may include a transparent material, and may allow the transmission of light entering and exiting a sensor module SM.

The touch sensor layer TSL of the touch sensing unit 500 may include first touch electrodes TE, second touch electrodes RE, an electrode dummy EDM, a transmissive area TA, dummies DM, and a contact dummy CDM.

The electrode dummy EDM may be located between a respective first touch electrode TE and a respective second touch electrode RE to prevent a short circuit between the first touch electrode TE and the second touch electrode RE, and to reduce the basic capacitance of the touch sensor layer TSL.
For example, if the first touch electrodes TE and the second touch electrodes RE are spaced apart from one another (e.g., by a minimum distance) and face each other, the basic capacitance may increase, and the touch sensitivity may deteriorate. The electrode dummy EDM may be located on a different layer from the first touch electrodes TE and the second touch electrodes RE, but might not overlap the first touch electrodes TE and the second touch electrodes RE in a z-axis direction (e.g., a fifth direction that is substantially perpendicular to each of the first and second directions). Accordingly, the electrode dummy EDM may control a separation distance between the first touch electrodes TE and the second touch electrodes RE, and it is possible to precisely detect a change in the charge amount of the mutual capacitances by adjusting the basic capacitance of the touch sensor layer TSL. The electrode dummy EDM may be located between the first touch electrodes TE and the second touch electrodes RE to improve the touch sensitivity of the touch sensing unit 500.

For example, when the first touch electrodes TE and the second touch electrodes RE have a diamond shape when viewed from the top, the electrode dummy EDM may be located between the first touch electrodes TE and the second touch electrodes RE. In such a case, between respective adjacent ones of the first touch electrodes TE and the second touch electrodes RE, the electrode dummy EDM may extend in a third direction (e.g., a diagonal direction in a plan view), which is between the first direction (x-axis direction) and the second direction (y-axis direction), may extend in a direction that is opposite to the third direction, may extend in a fourth direction (e.g., a diagonal direction different from the third direction in a plan view), which is between a direction that is opposite to the first direction (x-axis direction) and the second direction (y-axis direction), or may extend in a direction that is opposite to the fourth direction.

The electrode dummy EDM might not be located between first touch electrodes TE that are adjacent in the second direction (y-axis direction). Alternatively, the electrode dummy EDM may be located away from a region of the first touch electrodes TE that is adjacent to the touch island electrode TEI. Accordingly, the first touch electrodes TE that are adjacent in the first direction (x-axis direction) may be spaced apart from each other by a distance corresponding to the width of the electrode dummy EDM, and may be insulated from each other. Similarly, the second touch electrodes RE that are adjacent in the second direction (y-axis direction) may be spaced apart from each other by a distance corresponding to the width of the electrode dummy EDM, and may be insulated from each other.

For example, the electrode dummy EDM may be floating, and may receive no voltage. It is, however, to be understood that the present disclosure is not limited thereto. For another example, the electrode dummy EDM may receive a voltage that would not substantially affect the capacitance of the touch sensor layer TSL. As the touch sensing unit 500 includes the electrode dummy EDM, it is possible to sensitively measure a change in the charge amount of the mutual capacitances between the first and second touch electrodes TE and RE.

The electrode dummy EDM may be located on the second substrate SUB2, and may be covered by the first insulating layer IL1.
The electrode dummy EDM may be located on a different layer from the first touch electrodes TE and the second touch electrodes RE, but might not overlap the first touch electrodes TE and the second touch electrodes RE in the z-axis direction. The electrode dummy EDM may be formed as, but is not limited to, an opaque metal conductive layer.

The transmissive area TA of the touch sensing unit 500 may be surrounded by at least one of the first touch electrodes TE and at least one of the second touch electrodes RE in the touch sensor area TSA. For example, the transmissive area TA may overlap the transmitting portion TU of the display unit 100. For example, the transmissive area TA may have, but is not limited to, a circular shape when viewed from the top. As another example, the transmissive area TA may have a shape of a polygonal column or an amorphous column. In such a case, the transmissive area TA may have a polygonal shape, including a quadrangle, or an amorphous shape when viewed from the top.

The display unit 100 may include a camera module or the sensor module SM located in line with, or aligned with, the transmissive area TA. The camera module or the sensor module SM may be located below the display unit 100 (e.g., at the back side of the display unit 100). For example, the sensor module SM may include at least one of an illuminance sensor, a proximity sensor, an infrared sensor, and an ultrasonic sensor. Accordingly, the display device 10 includes the transmitting portion TU surrounded by the display area DA, the transmissive area TA overlapping the transmitting portion TU, and the camera module or sensor module SM overlapping the transmissive area TA, and thus dead space may be reduced when compared to other display devices where a camera module or a sensor module is located on one side of the non-display area. In addition, the thickness of the display device 10 may be reduced as the camera module or the sensor module SM overlaps the transmissive area TA.

The dummies DM may overlap the transmitting portion TU of the display unit 100 to surround the transmissive area TA, and may be insulated from the first and second touch electrodes TE and RE. The portions of the transmitting portion TU of the display unit 100 that overlap the dummies DM may fall in the non-display area. The dummies DM may remove external noise introduced through the transmissive area TA, and may reduce or prevent coupling between the first touch electrodes TE and the second touch electrodes RE. For example, signals transmitted or received by the camera module or the sensor module SM may be transmitted through the transmissive area TA, and such signals may affect the capacitance between the first touch electrodes TE or the second touch electrodes RE. The signals transmitted through the transmissive area TA may cause noise in the touch sensor layer TSL. To avoid such noise, the dummies DM surround the transmissive area TA to thereby separate the first touch electrodes TE and the second touch electrodes RE from the transmissive area TA.

The dummies DM may control the capacitance of the touch sensor layer TSL like the electrode dummy EDM, and may reduce or prevent the coupling between the first touch electrodes TE and the second touch electrodes RE. For example, the dummies DM may be located on a different layer from the first touch electrodes TE and the second touch electrodes RE, but might not overlap the first touch electrodes TE and the second touch electrodes RE in the z-axis direction.
Accordingly, the dummies DM may adjust the basic capacitance of the touch sensor layer TSL, and may improve the touch sensitivity of the touch sensing unit 500. For example, the dummies DM may be floating, and may receive no voltage. It is, however, to be understood that the present disclosure is not limited thereto. For another example, the dummies DM may receive a voltage that would not substantially affect the capacitance of the touch sensor layer TSL. As the touch sensing unit 500 includes the dummies DM, it is possible to sensitively measure a change in the charge amount of the mutual capacitances between the first and second touch electrodes TE and RE.

The dummies DM may include a main dummy MDM directly surrounding the transmissive area TA, and at least one sub-dummy surrounding the main dummy MDM. For example, the dummies DM may include the main dummy MDM and first to third sub-dummies DM1, DM2, and DM3. It is to be noted that the number of the sub-dummies is not limited to three.

The main dummy MDM directly surrounds the transmissive area TA, thereby blocking external noise introduced through the transmissive area TA. For example, the main dummy MDM may have a circular shape having a thickness (e.g., a predetermined thickness) in a plan view. The thickness of the main dummy MDM may be greater than the sum of the thicknesses of the first to third sub-dummies DM1, DM2, and DM3 when viewed from the top. Therefore, the main dummy MDM has a thickness (e.g., a predetermined thickness) such that external noise introduced through the transmissive area TA may be efficiently reduced or removed.

The first sub-dummy DM1 may surround the main dummy MDM, the second sub-dummy DM2 may surround the first sub-dummy DM1, and the third sub-dummy DM3 may surround the second sub-dummy DM2. The third sub-dummy DM3 may be an outermost sub-dummy that is located at the outermost position of the dummies DM (e.g., at the outermost position of the sub-dummies).

At least one of the first touch electrodes TE and at least one of the second touch electrodes RE may be partially removed depending on the positions of the transmissive area TA and the dummies DM, and may face each other directly. In addition, the third sub-dummy DM3 may directly face the partially removed first touch electrode and second touch electrode when viewed from the top (e.g., in a plan view).

The first sub-dummy DM1 may include a (1-1) cut CUT11 and a (1-2) cut CUT12 overlapping a first axis Axis1 extending in the first direction (x-axis direction) and passing through the center CP of the transmissive area TA. The first sub-dummy DM1 may include a (1-3) cut CUT13 and a (1-4) cut CUT14 overlapping a second axis Axis2 extending in the second direction (y-axis direction) and passing through the center CP of the transmissive area TA. As the first sub-dummy DM1 includes the (1-1) cut CUT11, the (1-2) cut CUT12, the (1-3) cut CUT13, and the (1-4) cut CUT14, it is possible to reduce or prevent coupling through the first sub-dummy DM1.

The second sub-dummy DM2 may include a (2-1) cut CUT21 and a (2-2) cut CUT22 overlapping a third axis Axis3 extending in the third direction, which is between the first direction (x-axis direction) and the second direction (y-axis direction), and passing through the center CP of the transmissive area TA.
The second sub-dummy DM2 may include a (2-3) cut CUT23 and a (2-4) cut CUT24 overlapping a fourth axis Axis4 extending in the fourth direction, which is between the direction opposite to the first direction (x-axis direction) and the second direction (y-axis direction), and passing through the center CP of the transmissive area TA. As the second sub-dummy DM2 includes the (2-1) cut CUT21, the (2-2) cut CUT22, the (2-3) cut CUT23, and the (2-4) cut CUT24, it is possible to reduce or prevent the coupling through the second sub-dummy DM2.

The third sub-dummy DM3 may include a (3-1) cut CUT31 corresponding to a gap between directly adjacent first and second touch electrodes TE and RE among the first and second touch electrodes TE and RE. The (3-1) cut CUT31 may be formed by cutting the third sub-dummy DM3 so that its size is equal to the gap between the directly adjacent first and second touch electrodes TE and RE. Therefore, both ends of the third sub-dummy DM3 at the (3-1) cut CUT31 may be insulated from each other. The third sub-dummy DM3 might not overlap the first touch electrodes TE and the second touch electrodes RE in the z-axis direction.

For example, if an error occurs during the process of patterning the first touch electrodes TE, the second touch electrodes RE, and the dummies DM, then a directly adjacent first touch electrode TE and the third sub-dummy DM3 may partially overlap, or may be too close, and/or a directly adjacent second touch electrode RE and the third sub-dummy DM3 may partially overlap, or may be too close. When this happens, coupling may occur between the directly adjacent first touch electrode TE and the third sub-dummy DM3, or between the directly adjacent second touch electrode RE and the third sub-dummy DM3. If a part of the third sub-dummy DM3 that is coupled with the first touch electrode TE and another part of the third sub-dummy DM3 that is coupled with the second touch electrode RE are not insulated from each other, undesirable coupling may occur between the first touch electrode TE and the second touch electrode RE, such that the sensitivity of the touch sensing unit 500 may deteriorate, or such that the touch sensing unit 500 may not work at all.

In this regard, as the third sub-dummy DM3 includes the (3-1) cut CUT31, a part of the third sub-dummy DM3 directly facing (e.g., most adjacent, or closest to) the first touch electrode TE may be insulated from another part of the third sub-dummy DM3 directly facing the second touch electrode RE. As the third sub-dummy DM3 includes the (3-1) cut CUT31, it is possible to reduce or eliminate the possibility of the coupling between the first touch electrode TE and the second touch electrode RE.

For example, the (3-1) cut CUT31 of the third sub-dummy DM3 may be located at a shortest distance from (e.g., may be located to be directly adjacent to) the gap between the directly adjacent first and second touch electrodes TE and RE. The length of the (3-1) cut CUT31 may be larger than a corresponding dimension of the gap between the directly adjacent first and second touch electrodes TE and RE. It is, however, to be understood that the present disclosure is not limited thereto. An imaginary straight line connecting the gap between the first and second touch electrodes TE and RE with the (3-1) cut CUT31 of the third sub-dummy DM3 may pass through the center CP of the transmissive area TA. It is, however, to be understood that the present disclosure is not limited thereto.
The design of the (3-1) cut CUT31 of the third sub-dummy DM3 may be altered in a variety of manners, as long as a part of the third sub-dummy DM3 directly facing the first touch electrode TE may be insulated from another part of the third sub-dummy DM3 directly facing the second touch electrode RE.

The outermost sub-dummy DM3 located at the outermost position of the dummies DM may further include a (3-2) cut CUT32 and a (3-3) cut CUT33 respectively corresponding to gaps between an adjacent electrode dummy EDM and a respective one of the first and second touch electrodes TE and RE. In FIG. 8, the lower right side of the dummies DM, collectively, may directly face the electrode dummy EDM. The first touch electrode TE may be spaced apart from the second touch electrode RE by a distance that is equal to a distance between the dummies DM and the electrode dummy EDM at an area where the dummies DM and the electrode dummy EDM directly face each other. A part of the third sub-dummy DM3 may be partially surrounded by the first touch electrode TE and/or the second touch electrode RE, and another part of the third sub-dummy DM3 may be partially surrounded by the electrode dummy EDM. Alternatively, a part of the third sub-dummy DM3 may be surrounded by the contact dummy CDM, and another part of the third sub-dummy DM3 may be surrounded by the electrode dummy EDM.

Therefore, the (3-2) cut CUT32 of the third sub-dummy DM3 may be located between the first touch electrode TE and the electrode dummy EDM, which directly face each other, and the (3-3) cut CUT33 may be located between the second touch electrode RE and the electrode dummy EDM, which directly face each other. As the third sub-dummy DM3 includes the (3-2) cut CUT32 and the (3-3) cut CUT33, a part of the third sub-dummy DM3 directly facing the first touch electrode TE may be insulated from another part of the third sub-dummy DM3 directly facing the second touch electrode RE. As the third sub-dummy DM3 includes the (3-2) cut CUT32 and the (3-3) cut CUT33, it is possible to reduce or entirely eliminate the possibility of the coupling between the first touch electrode TE and the second touch electrode RE.

The dummies DM may be located on the second substrate SUB2, and may be covered by the first insulating layer IL1. The dummies DM may be located on a different layer from the first touch electrodes TE and the second touch electrodes RE, but might not overlap the first touch electrodes TE and the second touch electrodes RE in the z-axis direction. The dummies DM may be formed as, but are not limited to, an opaque metal conductive layer.

The contact dummy CDM may partially or substantially surround the third sub-dummy DM3, which is an outermost sub-dummy that is located at the outermost position of the dummies DM, and may come in contact with the first touch electrode TE or the second touch electrode RE that directly faces the third sub-dummy DM3. At least one of the first touch electrodes TE and at least one of the second touch electrodes RE may be partially removed depending on the positions of the transmissive area TA and the dummies DM. The partially removed first touch electrode TE or second touch electrode RE may directly face the third sub-dummy DM3. The area of the partially removed first touch electrode TE or second touch electrode RE may be smaller than the area of the other first touch electrodes TE or second touch electrodes RE.
The internal resistance of the partially removed first touch electrode TE or second touch electrode RE may be smaller than that of the other electrodes. In this regard, the first touch electrode TE or the second touch electrode RE directly facing the third sub-dummy DM3 may be electrically connected to the contact dummy CDM to thereby increase the internal resistance. The shape and size of the contact dummy CDM may be designed to compensate for the reduced internal resistance. Accordingly, the shape or size of the contact dummy CDM may be increased as the amount of the portions removed from the first touch electrode TE or the second touch electrode RE increases. The internal resistance of the partially removed first touch electrode TE or second touch electrode RE connected to the contact dummy CDM may be equal to the internal resistance of the other first touch electrode TE or second touch electrode RE, which is not partially removed. For example, the width of the contact dummy CDM may be greater than the width of the sub-dummy DM3 located at the outermost position of the dummies DM. It is to be noted that the width of the contact dummy CDM in a plan view may be altered depending on the design of the transmissive area TA, the first touch electrode TE, and the second touch electrode RE, and is not limited to that described above. The contact dummy CDM may include a (4-1) cut CUT41 overlapping the gap between the first and second touch electrodes TE and RE when viewed from the top. A part of the contact dummy CDM may be connected to the first touch electrode TE of the first and second touch electrodes TE and RE facing each other directly, and another part of the contact dummy CDM may be connected to the second touch electrode RE of the first and second touch electrodes TE and RE facing each other directly. As the contact dummy CDM includes the (4-1) cut CUT41, it is possible to prevent coupling between the first touch electrodes TE and the second touch electrodes RE. For example, the (4-1) cut CUT41 of the contact dummy CDM may be located at the shortest distance from, or directly next to, the (3-1) cut CUT31 of the third sub-dummy DM3. The length of the (3-1) cut CUT31 may be larger than the length of the (4-1) cut CUT41 of the contact dummy CDM. It is, however, to be understood that the present disclosure is not limited thereto. An imaginary straight line connecting the (4-1) cut CUT41 of the contact dummy CDM with the (3-1) cut CUT31 of the third sub-dummy DM3 may pass through the center CP of the transmissive area TA. It is, however, to be understood that the present disclosure is not limited thereto. The contact dummy CDM may be removed from a region where the dummies DM and the electrode dummy EDM directly face each other. In FIG. 8, the lower right side of the dummies DM may directly face the electrode dummy EDM. In such case, one end of the contact dummy CDM may be located between the first touch electrode TE and the electrode dummy EDM directly facing each other, and the other end of the contact dummy CDM may be located between the second touch electrode RE and the electrode dummy EDM directly facing each other. An end CDMa of the contact dummy CDM may be located in line with the (3-2) cut CUT32 of the third sub-dummy DM3, and another end CDMb of the contact dummy CDM may be located in line with the (3-3) cut CUT33 of the third sub-dummy DM3.
As the ends CDMa and CDMb of the contact dummy CDM are located in line with the (3-2) cut CUT32 and the (3-3) cut CUT33, respectively, it is possible to eliminate the possibility of the coupling between the first touch electrode TE and the second touch electrode RE (e.g., which might otherwise be possible due to the presence of the contact dummy CDM). The contact dummy CDM may be located on the second substrate SUB2, and may be covered by the first insulating layer IL1. For example, the contact dummy CDM may be connected to the first touch electrode TE or the second touch electrode RE through a contact hole passing through the first insulating layer IL1. The contact dummy CDM may be formed as, but is not limited to, an opaque metal conductive layer. FIG. 13 shows another example of a cross-sectional view taken along the line III-III′ of FIG. 8. The display device of FIG. 13 is substantially identical to the display device of FIG. 9, except for the configuration of the transmitting portion TU, and, therefore, redundant description thereof will be omitted. Referring to FIG. 13, the display unit 100 may further include a non-display area NDA surrounded by a display area DA and a transmitting portion TU surrounded by the non-display area NDA. The transmitting portion TU of the display unit 100 may overlap the transmissive area TA of the touch sensing unit 500. The non-display area NDA surrounding the transmitting portion TU of the display unit 100 may overlap dummies DM of the touch sensing unit 500. The transmitting portion TU may be formed by removing portions of the display unit 100 that correspond to the transmissive area TA of the touch sensing unit 500. For example, the transmitting portion TU may be formed by removing portions of the first substrate SUB1, the thin-film transistor layer TFTL, and the emission material layer EML that would otherwise fall in the transmissive area TA. The transmitting portion TU may accommodate at least a part of the camera module or sensor module SM. Accordingly, the display device 10 may improve the sensitivity of the camera module or sensor module SM by reducing the layers overlapping the camera module or sensor module SM. FIG. 14 is an enlarged plan view showing another example of the area A2 of FIG. 5, FIG. 15 is a view showing the dummies and the contact dummy shown in FIG. 14, FIG. 16 is an enlarged plan view of the area A4 of FIG. 14, and FIG. 17 is a cross-sectional view taken along the line V-V′ of FIG. 16. A touch sensing unit 500 of FIGS. 14 to 17 is substantially identical to the touch sensing unit 500 shown in FIGS. 8 to 12 except that the touch sensing unit 500 further includes a (2-5) cut CUT25, and, therefore, redundant description thereof will be omitted. Referring to FIGS. 14 to 17, the touch sensor layer TSL may include first touch electrodes TE, second touch electrodes RE, an electrode dummy EDM, a transmissive area TA, dummies DM, and a contact dummy CDM. The dummies DM may include a main dummy MDM directly surrounding the transmissive area TA, and at least one sub-dummy surrounding the main dummy MDM. For example, the dummy part DM may include the main dummy MDM and the first to third subsidiary dummies DM1, DM2, and DM3. It is to be noted that the number of the subsidiary dummies is not limited to three. The main dummy MDM directly surrounds the transmissive area TA, thereby blocking external noise through the transmissive area TA. For example, the main dummy MDM may have a circular shape having a predetermined thickness when viewed from the top.
The thickness of the main dummy MDM may be greater than the sum of the thicknesses of the first to third subsidiary dummies DM1, DM2, and DM3 when viewed from the top. Therefore, the main dummy MDM has a predetermined thickness, so that external noise through the transmissive area TA may be efficiently removed. The first sub-dummy DM1 may surround the main dummy MDM, the second sub-dummy DM2 may surround the first sub-dummy DM1, and the third sub-dummy DM3 may surround the second sub-dummy DM2. The third sub-dummy DM3 may be located at the outermost position of the dummies DM. At least one of the first touch electrodes TE and at least one of the second touch electrodes RE may be partially removed depending on the positions of the transmissive area TA and the dummies DM, and may face each other directly. In addition, the third sub-dummy DM3 may directly face the partially removed first touch electrode and second touch electrode when viewed from the top. The first sub-dummy DM1 may include a (1-1) cut CUT11 and a (1-2) cut CUT12 overlapping a first axis Axis1 extended in the first direction (x-axis direction) passing through the center CP of the transmissive area TA. The first sub-dummy DM1 may include a (1-3) cut CUT13 and a (1-4) cut CUT14 overlapping a second axis Axis2 extended in the second direction (y-axis direction) passing through the center CP of the transmissive area TA. As the first sub-dummy DM1 includes the (1-1) cut CUT11, the (1-2) cut CUT12, the (1-3) cut CUT13, and the (1-4) cut CUT14, it is possible to prevent unwanted coupling through the first sub-dummy DM1. The second sub-dummy DM2 may include a (2-1) cut CUT21 and a (2-2) cut CUT22 overlapping a third axis Axis3 extended in the third direction (e.g., a diagonal direction in a plan view) between the first direction (x-axis direction) and the second direction (y-axis direction) passing through the center CP of the transmissive area TA. The second sub-dummy DM2 may include a (2-3) cut CUT23 and a (2-4) cut CUT24 overlapping a fourth axis Axis4 extended in the fourth direction (e.g., a different diagonal direction in a plan view) between the opposite direction to the first direction (x-axis direction) and the second direction (y-axis direction) passing through the center CP of the transmissive area TA. As the second sub-dummy DM2 includes the (2-1) cut CUT21, the (2-2) cut CUT22, the (2-3) cut CUT23, and the (2-4) cut CUT24, it is possible to prevent the coupling through the second sub-dummy DM2. The second sub-dummy DM2 may further include a (2-5) cut CUT25 in line with, or aligned with, the (3-1) cut CUT31. The (2-5) cut CUT25 may be located at the shortest distance from, or directly next to, the (3-1) cut CUT31. The (2-5) cut CUT25 may be formed by cutting a part of the second sub-dummy DM2 in line with the (3-1) cut CUT31. Therefore, both ends of the second sub-dummy DM2 may be insulated from each other at the (2-5) cut CUT25 therebetween. For example, the gap between the directly adjacent first and second touch electrodes TE and RE, the (3-1) cut CUT31, and the (2-5) cut CUT25 may be located on a straight line. As the third sub-dummy DM3 includes the (3-1) cut CUT31, a part of the third sub-dummy DM3 directly facing the first touch electrode TE may be insulated from another part of the third sub-dummy DM3 directly facing the second touch electrode RE.
In addition, as the second sub-dummy DM2 includes the (2-5) cut CUT25, a part of the second sub-dummy DM2 in line with a part of the third sub-dummy DM3 may be insulated from another part of the second sub-dummy DM2 in line with another part of the third sub-dummy DM3. As the third sub-dummy DM3 includes the (3-1) cut CUT31 and the second sub-dummy DM2 includes the (2-5) cut CUT25, it is possible to eliminate the possibility of the coupling between the first touch electrode TE and the second touch electrode RE. The sub-dummy DM3 located at the outermost position of the dummies DM may further include a (3-2) cut CUT32 and a (3-3) cut CUT33 respectively corresponding to a gap between the directly adjacent electrode dummy EDM and a first touch electrode and a gap between the directly adjacent electrode dummy EDM and a second touch electrode. In FIG. 14, the lower right side of the dummies DM may directly face the electrode dummy EDM. The first touch electrode TE may be spaced apart from the second touch electrode RE by a distance that is equal to a distance between the dummies DM and the electrode dummy EDM at an area where the dummies DM and the electrode dummy EDM face each other directly. Therefore, the (3-2) cut CUT32 of the third sub-dummy DM3 may be located between the first touch electrode TE and the electrode dummy EDM directly facing each other, and the (3-3) cut CUT33 may be located between the second touch electrode RE and the electrode dummy EDM directly facing each other. As the third sub-dummy DM3 includes the (3-2) cut CUT32 and the (3-3) cut CUT33, a part of the third sub-dummy DM3 directly facing the first touch electrode TE may be insulated from another part of the third sub-dummy DM3 directly facing the second touch electrode RE. As the third sub-dummy DM3 includes the (3-2) cut CUT32 and the (3-3) cut CUT33, it is possible to eliminate the possibility of the coupling between the first touch electrode TE and the second touch electrode RE due to the third sub-dummy DM3. For example, the (4-1) cut CUT41 of the contact dummy CDM may be located at the shortest distance from the (3-1) cut CUT31 of the third sub-dummy DM3. In addition, the (4-1) cut CUT41 of the contact dummy CDM may be located at the shortest distance from the (2-5) cut CUT25 of the second sub-dummy DM2. An imaginary straight line connecting the (4-1) cut CUT41 of the contact dummy CDM, the (3-1) cut CUT31 of the third sub-dummy DM3, and the (2-5) cut CUT25 of the second sub-dummy DM2 may pass through the center CP of the transmissive area TA. It is, however, to be understood that the present disclosure is not limited thereto. FIG. 18 is an enlarged plan view showing yet another example of the area A2 of FIG. 5, FIG. 19 is a view showing the dummies and the contact dummy shown in FIG. 18, FIG. 20 is an enlarged plan view of the area A5 of FIG. 18, and FIG. 21 is a cross-sectional view taken along the line VI-VI′ of FIG. 20. A touch sensing unit 500 of FIGS. 18 to 21 is substantially identical to the touch sensing unit 500 shown in FIGS. 14 to 17 except that the touch sensing unit 500 of FIGS. 18 to 21 further includes a (1-5) cut CUT15, and, therefore, redundant description thereof will be omitted. Referring to FIGS. 18 to 21, the touch sensor layer TSL may include first touch electrodes TE, second touch electrodes RE, an electrode dummy EDM, a transmissive area TA, dummies DM, and a contact dummy CDM. The dummies DM may include a main dummy MDM directly surrounding the transmissive area TA and at least one sub-dummy surrounding the main dummy MDM.
For example, the dummy part DM may include the main dummy MDM and the first to third subsidiary dummies DM1, DM2, and DM3. It is to be noted that the number of the subsidiary dummies is not limited to three. The first sub-dummy DM1 may surround the main dummy MDM, the second sub-dummy DM2 may surround the first sub-dummy DM1, and the third sub-dummy DM3 may surround the second sub-dummy DM2. The third sub-dummy DM3 may be located at the outermost position of the dummies DM. At least one of the first touch electrodes TE and at least one of the second touch electrodes RE may be partially removed depending on the positions of the transmissive area TA and the dummies DM. The partially removed first touch electrode and second touch electrode may face each other directly. In addition, the third sub-dummy DM3 may directly face the partially removed first touch electrode and second touch electrode when viewed from the top. The first sub-dummy DM1 may include a (1-1) cut CUT11 and a (1-2) cut CUT12 overlapping a first axis Axis1 extended in the first direction (x-axis direction) passing through the center CP of the transmissive area TA. The first sub-dummy DM1 may include a (1-3) cut CUT13 and a (1-4) cut CUT14 overlapping a second axis Axis2 extended in the second direction (y-axis direction) passing through the center CP of the transmissive area TA. As the first sub-dummy DM1 includes the (1-1) cut CUT11, the (1-2) cut CUT12, the (1-3) cut CUT13, and the (1-4) cut CUT14, it is possible to prevent the coupling through the first sub-dummy DM1. The first sub-dummy DM1 may further include a (1-5) cut CUT15 in line with the (2-5) cut CUT25. The (1-5) cut CUT15 may be located at the shortest distance from the (3-1) cut CUT31 or the (2-5) cut CUT25. The (1-5) cut CUT15 may be formed by cutting a part of the first sub-dummy DM1 in line with the (2-5) cut CUT25. Therefore, both ends of the first sub-dummy DM1 may be insulated from each other at the (1-5) cut CUT15 therebetween. For example, the gap between the directly adjacent first and second touch electrodes TE and RE, the (3-1) cut CUT31, the (2-5) cut CUT25, and the (1-5) cut CUT15 may be located on a straight line. The second sub-dummy DM2 may include a (2-1) cut CUT21 and a (2-2) cut CUT22 overlapping a third axis Axis3 extended in the third direction (e.g., a diagonal direction in a plan view) between the first direction (x-axis direction) and the second direction (y-axis direction) passing through the center CP of the transmissive area TA. The second sub-dummy DM2 may include a (2-3) cut CUT23 and a (2-4) cut CUT24 overlapping a fourth axis Axis4 extended in the fourth direction (e.g., a different diagonal direction) between the opposite direction to the first direction (x-axis direction) and the second direction (y-axis direction) passing through the center CP of the transmissive area TA. As the second sub-dummy DM2 includes the (2-1) cut CUT21, the (2-2) cut CUT22, the (2-3) cut CUT23, and the (2-4) cut CUT24, it is possible to prevent the coupling through the second sub-dummy DM2. The second sub-dummy DM2 may further include a (2-5) cut CUT25 in line with the (3-1) cut CUT31. The (2-5) cut CUT25 may be located at the shortest distance from the (3-1) cut CUT31. The (2-5) cut CUT25 may be formed by cutting a part of the second sub-dummy DM2 in line with the (3-1) cut CUT31. Therefore, both ends of the second sub-dummy DM2 may be insulated from each other at the (2-5) cut CUT25 therebetween.
For example, the gap between the directly adjacent first and second touch electrodes TE and RE, the (3-1) cut CUT31, and the (2-5) cut CUT25 may be located on a straight line. As the third sub-dummy DM3 includes the (3-1) cut CUT31, a part of the third sub-dummy DM3 directly facing the first touch electrode TE may be insulated from another part of the third sub-dummy DM3 directly facing the second touch electrode RE. In addition, as the second sub-dummy DM2 includes the (2-5) cut CUT25, a part of the second sub-dummy DM2 in line with a part of the third sub-dummy DM3 may be insulated from another part of the second sub-dummy DM2 in line with another part of the third sub-dummy DM3. As the third sub-dummy DM3 includes the (3-1) cut CUT31 and the second sub-dummy DM2 includes the (2-5) cut CUT25, it is possible to eliminate the possibility of the coupling between the first touch electrode TE and the second touch electrode RE. The sub-dummy DM3 located at the outermost position of the dummies DM may further include a (3-2) cut CUT32 and a (3-3) cut CUT33 respectively corresponding to gaps between the directly adjacent electrode dummy EDM and a first touch electrode or a second touch electrode among the first and second touch electrodes TE and RE. In FIG. 18, the lower right side of the dummies DM may directly face the electrode dummy EDM. The first touch electrode TE may be spaced apart from the second touch electrode RE by the distance that is equal to the distance between the dummies DM and the electrode dummy EDM facing each other directly. Therefore, the (3-2) cut CUT32 of the third sub-dummy DM3 may be located between the first touch electrode TE and the electrode dummy EDM directly facing each other, and the (3-3) cut CUT33 may be located between the second touch electrode RE and the electrode dummy EDM directly facing each other. As the third sub-dummy DM3 includes the (3-2) cut CUT32 and the (3-3) cut CUT33, a part of the third sub-dummy DM3 directly facing the first touch electrode TE may be insulated from another part of the third sub-dummy DM3 directly facing the second touch electrode RE. As the third sub-dummy DM3 includes the (3-2) cut CUT32 and the (3-3) cut CUT33, it is possible to eliminate the possibility of the coupling between the first touch electrode TE and the second touch electrode RE. For example, the (4-1) cut CUT41 of the contact dummy CDM may be located at the shortest distance from the (3-1) cut CUT31 of the third sub-dummy DM3. In addition, the (4-1) cut CUT41 of the contact dummy CDM may be located at the shortest distance from the (2-5) cut CUT25 of the second sub-dummy DM2 or the (1-5) cut CUT15 of the first sub-dummy DM1. An imaginary straight line connecting the (4-1) cut CUT41 of the contact dummy CDM, the (3-1) cut CUT31 of the third sub-dummy DM3, the (2-5) cut CUT25 of the second sub-dummy DM2, and the (1-5) cut CUT15 of the first sub-dummy DM1 may pass through the center CP of the transmissive area TA. It is, however, to be understood that the present disclosure is not limited thereto. FIG. 22 is a plan view showing another example of the touch sensing unit shown in FIG. 3A, FIG. 23 is an enlarged plan view of the area A6 of FIG. 22, FIG. 24 is a view showing the dummies and the contact dummy shown in FIG. 23, and FIG. 25 is an enlarged plan view of the area A7 of FIG. 23. The touch sensing unit of FIGS. 22 to 25 is substantially identical to the above-described touch sensing unit except for the location of a transmissive area, and, therefore, redundant description thereof will be omitted.
Referring to FIGS. 22 to 25, the transmissive area TA may be surrounded by at least one of the first touch electrodes TE and at least one of the second touch electrodes RE in the touch sensor area TSA. For example, the transmissive area TA may overlap the transmitting portion TU of the display unit 100. In FIGS. 22 and 23, the transmissive area TA may be surrounded by two first touch electrodes TE and one second touch electrode RE. Accordingly, there may be several regions where the first and second touch electrodes TE and RE face each other directly depending on the positions of the transmissive area TA and the dummies DM. The dummies DM may include a main dummy MDM directly surrounding the transmissive area TA, and at least one sub-dummy surrounding the main dummy MDM. For example, the dummy part DM may include the main dummy MDM and the first to third subsidiary dummies DM1, DM2, and DM3. It is to be noted that the number of the subsidiary dummies is not limited to three. In FIG. 24, the locations of a (1-1) cut CUT11, a (1-2) cut CUT12, a (1-3) cut CUT13, and a (1-4) cut CUT14 of the first sub-dummy DM1 may correspond to the locations of the (2-1) cut CUT21, the (2-2) cut CUT22, the (2-3) cut CUT23, and the (2-4) cut CUT24 of the second sub-dummy DM2 shown in FIG. 8. In FIG. 24, the locations of a (2-1) cut CUT21, a (2-2) cut CUT22, a (2-3) cut CUT23, and a (2-4) cut CUT24 of the second sub-dummy DM2 may correspond to the locations of the (1-1) cut CUT11, the (1-2) cut CUT12, the (1-3) cut CUT13, and the (1-4) cut CUT14 of the first sub-dummy DM1 shown in FIG. 8. Therefore, the first and second sub-dummies DM1 and DM2 include the above-described cuts, thereby preventing coupling through the first and second sub-dummies DM1 and DM2. In the area A7 of FIG. 25, the third sub-dummy DM3 may include a (3-1) cut CUT31 corresponding to the gap between a pair of first and second touch electrodes TE and RE facing each other directly. The third sub-dummy DM3 may include a (3-2) cut CUT32 corresponding to the gap between another pair of first and second touch electrodes TE and RE facing each other directly. The (3-1) cut CUT31 and the (3-2) cut CUT32 may achieve the same configuration and effects as the (3-1) cut CUT31 described above with reference to FIGS. 8 to 12. As such, the configuration of the cuts formed in the sub-dummy DM3 located at the outermost portion of the dummies DM may be altered depending on the configurations of the transmissive area TA, the dummies DM, and the first and second touch electrodes TE and RE. As described above, the sub-dummy DM3 located at the outermost portion of the dummies DM includes at least one cut in line with the gap between the directly adjacent first and second touch electrodes TE and RE, and thus a part of the dummies DM associated with the first touch electrode TE may be insulated from another part of the dummies DM associated with the second touch electrode RE. Accordingly, it is possible to prevent undesirable coupling between the first and second touch electrodes TE and RE even if unintended coupling occurs between the first touch electrodes TE and the dummies DM or between the second touch electrodes RE and the dummies DM. Accordingly, the sensitivity and reliability of the touch sensing unit 500 of the display device 10 may be improved.
Although described with reference to some embodiments of the present disclosure, it will be understood that various changes and modifications of the present disclosure may be made by one of ordinary skill in the art without departing from the spirit and technical field of the present disclosure as hereinafter claimed. Hence, the technical scope of the present disclosure is not limited to the detailed descriptions in the specification but should be determined only with reference to the claims, with functional equivalents thereof to be included therein. | 93,329 |
11861132 | DETAILED DESCRIPTION The innovation is described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of this innovation. It may be evident, however, that the innovation can be practiced without these specific details. In other instances, well-known structures and components are shown in block diagram form in order to facilitate describing the innovation. By way of introduction, the subject disclosure relates to dynamically determining or inferring content, provided by a content provider (e.g., via a network based platform such as a website or mobile application), that a user will be most receptive to at any given point during a current interaction session with the content provider based on the context and state of the user during the current session. Various content providers and advertisers often determine content to suggest to a user or advertisements to show to the user during a current session with the content provider based on historical analysis of the user's previous history with the content provider and/or other content providers. For example, when browsing the Internet, some advertisement systems will present the user with advertisements for content they viewed/accessed in a previous session or content similar to what they viewed or accessed in a previous session. However, this content may not be relevant to the user during the current session for a variety of reasons. For example, during a current session, a user may not be shopping online as the user was in a prior session but researching a technical subject for a work project. Accordingly, showing content to the user during the current session that includes advertisements for products previously viewed by the user will most likely disturb and distract the user. Similarly, where the user previously accessed content related to a technical subject for a work assignment and the user is now relaxing in the evening and browsing the Internet for entertainment and leisure based content, suggesting content to the user related to the technical work subject at this time will likely annoy the user, and the user will likely discard the content. In view of the above noted drawbacks with existing techniques for identifying content to suggest or target to users, the subject disclosure employs a variety of signals received or extracted during a user's current session with a content provider to determine or infer a context of the current session and the user's mental state during the current session. These signals are then used to identify content that the user will be most receptive to at any given point during the current session. In particular, user research has shown that users are in different ‘modes’ according to various dynamic factors such as time of day, what device they are on, what brought them to a particular content provider's website/application, where the user is located (e.g., home vs. work), how much time the user has available to interact with the website/application, and what the user is doing in association with access of the website/application. In addition, depending on the type of content provided by the content provider, the user's state of mind can also influence characteristics of content that the user will be most receptive to during a current session.
For example, media content (e.g., images, video, music, etc.) can evoke various user emotions. When a user's current frame of mind or mood can be discerned, media content can be identified and provided to the user that reflects or affects the user's mood. Accordingly, systems and mechanisms are provided that will take into account various factors related to a user's mental state and context during a current session of the user with a content provider to determine or infer a particular content item to render to the user during the current session. In an exemplary embodiment, the disclosed techniques are specifically tailored to determine or infer media content that a user will be most receptive to engage with during a current session with a streaming media provider that offers a wide array of different media content of various types and durations for access by the user. The specific media items that a user selects to view or listen to and the manner in which the user engages with media items selected by the user or pushed to the user can provide a strong indication of a user's mental state. For example, when a user is accessing and/or searching for short clips of funny videos, it can be inferred that the user is in a joyful, leisurely state of mind. This information coupled with information related to the context of the current session can provide even greater insight into the user's state of mind. For example, the determination that the user is in a joyful and leisurely state of mind can be held with greater confidence when it is also determined that the user is at a party sharing the videos with friends. In addition, the specific content of the funny videos (e.g., the plot, the characters, the script, the setting, etc.) can give an indication of the type of humor the user enjoys during the current session. In another example, when a user is searching for videos related to information on a specific technical subject, it can be determined that the user's frame of mind is focused, diligent, serious, etc. The user's frame of mind can further be discerned depending on the time of day, location of the user, the user's profession and the specific technical subject searched. For example, the user's intentions and concerns may vary if the technical subject is related to work aspects or personal aspects. Further, the user's navigational tactics can indicate the user's intention of the current session and the user's state of mind regarding fulfilling the intention. For instance, when a user selects videos related to a specific keyword search, watches a few seconds of some of the videos in the query result, then quickly dismisses the respective videos and modifies the keyword search, it can be determined that the user is frustrated, in a hurry, and honed in on a very specific task related to understanding the technical subject. Based on various determined or inferred features related to a user's current mental state and context during a session with the streaming media provider, media content provided by the streaming media provider can be identified and rendered to the user that is relevant to the user's current mental state and context. In an aspect, this media content can include advertisements (e.g., video advertisements or static advertisements).
For example, a video advertisement having a specific content type and duration can be identified based on characteristics of the user's mental state and context during a current session of the user with the media provider and rendered to the user during the current session at a point when the user will be most receptive to it. In another aspect, the media content identified for provision to the user can include a trailer for another video or channel offered by the media provider. Still in yet another aspect, the media content can include other videos, playlists and channels provided by the media provider and suggested to the user in a recommendation list for viewing during the current session. In one or more aspects, a system is provided that includes a state component configured to determine a state of a user during a current session of the user with the media system based on at least one of: navigation of the media system by the user during the current session, media items provided by the media system that are played for watching by the user during the current session, or a manner via which the user interacts with or reacts to the played media items, wherein the state of the user includes a mood of the user. The system further includes a selection component configured to select a media item provided by the media provider based on the state of the user, and a rendering component configured to effectuate rendering of the media item to the user during the current session. In another aspect, a method is disclosed that includes using a processor to execute computer executable instructions stored in a memory to perform various acts. These acts can include: determining user state attributes associated with a user's current state of mind during a current session of the user with a streaming media provider based on at least one of: navigation of the streaming media provider by the user during the current session, media items provided by the streaming media provider that are played during the current session, or a manner via which the user interacts with or reacts to the played media items, wherein the state of the user includes a mood of the user; selecting a media item provided by the streaming media provider based on the user state attributes; and rendering the media item to the user during the current session. Further provided is a tangible computer-readable storage medium comprising computer-readable instructions that, in response to execution, cause a computing system to perform various operations. These operations include determining mood attributes associated with a user's current mood during a session of the user with a streaming media provider and determining context attributes associated with a current context of the session based on at least one of: navigation of the streaming media provider by the user during the current session, media items provided by the streaming media provider that are played during the current session, or an environment of the user. The operations further include selecting a media item provided by the streaming media provider based on the mood attributes and the context attributes, and rendering the media item to the user during the session. Referring now to the drawings, with reference initially to FIG. 1, presented is a diagram of an example system 100 for identifying and rendering content relevant to a user's current mental state and context in accordance with various aspects and embodiments described herein.
Aspects of systems, apparatuses or processes explained in this disclosure can constitute machine-executable components embodied within machine(s), e.g., embodied in one or more computer-readable mediums (or media) associated with one or more machines. Such components, when executed by the one or more machines, e.g., computer(s), computing device(s), virtual machine(s), etc., can cause the machine(s) to perform the operations described. System 100 includes at least a content provider 102 and a client device 122, wherein the content provider is configured to provide content to a user of the client device 122 via one or more networks 120 using a network based platform (e.g., a website or mobile application). The content provider 102 can include dynamic content selection platform 104 to determine or infer characteristics of a user's current mental state and context during a session of the user with the content provider and to identify other content (e.g., an advertisement, a suggested content item, etc.) that is relevant to the user's current mental state and context. The dynamic content selection platform 104 can then facilitate rendering of the identified other content to the user by the content provider 102 during the current session. For example, when the other content is an advertisement, dynamic content selection platform 104 can direct content provider 102 to provide the advertisement to the user during the user's current session. In various aspects, system 100 can also include external sources 128, other client devices 130, and auxiliary input devices 132. Dynamic content selection platform 104 and/or content provider 102 can include memory 116 for storing computer executable components and instructions and processor 114 to facilitate operation of the instructions (e.g., computer executable components and instructions) by the dynamic content selection platform 104. Similarly, client device 122 can include memory for storing computer executable components and instructions and a processor to facilitate operation of the instructions (not shown). The various components and devices of system 100 can be connected either directly or via one or more networks 120. Such networks can include wired and wireless networks, including but not limited to, a cellular network, a wide area network (WAN, e.g., the Internet), a local area network (LAN), or a personal area network (PAN). For example, client device 122 can communicate with content provider 102 (and vice versa) using virtually any desired wired or wireless technology, including, for example, cellular, WAN, wireless fidelity (Wi-Fi), Wi-Max, WLAN, etc. In an aspect, one or more components of system 100 are configured to interact via disparate networks. In addition, although dynamic content selection platform 104 is depicted as being internal to content provider 102, one or more aspects of dynamic content selection platform 104 can be provided locally at client device 122. For example, content provider 102 can include an application service provider and client device 122 can employ a thin client application to interact with and receive various content and services provided by the content provider. The thin client application provided on the client device 122 can include one or more components (e.g., reception component 106, state component 108, context component 110, etc.) of dynamic content selection platform 104. Content provider 102 can include an entity configured to provide content and/or services to a user at a client device (e.g., client device 122) via a network (e.g., the Internet).
For example, content provider 102 can include a website or application service provider configured to provide videos, pictures, articles, blogs, messages, services, etc., or other types of content items to client devices via a network. According to this example, the content provided by the website or application can be configured for downloading, streaming or merely viewing at a client device 122 via the network. In another example, content provider 102 can include an information store that provides access to data included in the information store via a network. In another example, content provider 102 can include an online merchant that provides goods and services. As used herein, the term content item refers to any suitable data object that can be accessed or otherwise shared via a network and includes but is not limited to: documents, articles, messages, webpages, programs, applications, data objects and media items. The term media item or media content can include but is not limited to: video, live video, animations, video advertisements, music, music videos, sound files, pictures, and thumbnails. In an aspect, an owner of a media item is referred to herein as a content creator to indicate that the media item was created by the content creator (i.e., the content creator holds copyright authority to the media item). In some aspects, the term media item or media content refers to a collection of media items, such as a playlist or channel including several videos or songs. A channel can include data content available from a common source or data content having a common topic or theme. A channel can be associated with a curator who can perform management actions on the channel. Management actions can include, for example, adding media items to the channel, removing media items from the channel, defining subscription requirements for the channel, defining presentation attributes for channel content, defining access attributes for channel content, etc. In an aspect, this curator constitutes the channel owner or channel creator and the channel itself can be considered a content or media item owned or created by the channel owner. In an aspect, channel content can include digital content uploaded to an Internet-based content platform that hosts the channel (e.g., content provider 102) by the channel curator and/or digital content selected by the channel curator from other content available on the Internet-based content platform. A channel curator can include a professional content provider (e.g., a professional content creator, a professional content distributor, a content rental service, a television (TV) service, etc.) or an amateur individual. Channel content can include professional content (e.g., movie clips, TV clips, music videos, educational videos) and/or amateur content (e.g., video blogging, short original videos, etc.). Users, other than the curator of the channel, can subscribe to one or more channels in which they are interested. Users in addition to the channel curator can access content provided by a channel. In an exemplary embodiment, content provider 102 includes a streaming media provider configured to provide streaming media and related services to client devices over a network. For example, content provider 102 can include a media provider that has access to a voluminous quantity (and potentially an inexhaustible number) of shared media (e.g., video and/or audio) files.
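By way of non-limiting illustration only, the channel structure described above (a curator who can add and remove media items, define subscription requirements, and accept subscribers) could be modeled along the following lines. This Python sketch is an assumption made for illustration; the class and field names (Channel, MediaItem, subscription_required, etc.) are hypothetical and do not appear in the disclosure.

from dataclasses import dataclass, field
from typing import List

@dataclass
class MediaItem:
    item_id: str
    title: str
    duration_seconds: int

@dataclass
class Channel:
    curator_id: str                      # the channel owner/creator
    media_items: List[MediaItem] = field(default_factory=list)
    subscribers: List[str] = field(default_factory=list)
    subscription_required: bool = False  # hypothetical subscription-requirement flag

    # Management actions available to the curator, per the description above.
    def add_item(self, item: MediaItem):
        self.media_items.append(item)

    def remove_item(self, item_id: str):
        self.media_items = [m for m in self.media_items if m.item_id != item_id]

    def subscribe(self, user_id: str):
        if user_id not in self.subscribers:
            self.subscribers.append(user_id)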
The media provider can further stream these media files to one or more users at respective client devices (e.g., client devices 122) of the one or more users over a network. The media can be stored in memory associated with the media provider (e.g., memory 116) and/or at various servers and caches employed by the media provider and accessed by client devices using a networked platform (e.g., a website platform, a mobile application) employed by the media provider. For example, the media provider can provide and present media content to a user via a website that can be accessed by a client device using a browser. In another example, the media provider can provide and present media to a user via a mobile/cellular application provided on a client device (e.g., where the client device is a smartphone or the like). Client device 122 can include presentation component 124 to generate a user interface (e.g., a graphical user interface or virtual interface) that displays media content provided by the media provider to a user of the client device. In an aspect, presentation component 124 can include an application (e.g., a web browser) for retrieving, presenting and traversing information resources on the World Wide Web. For example, the media provider can provide and/or present media content to a client device 122 via a website that can be accessed using a browser of the client device 122. In another example, the media provider can provide and/or present media content to a client device 122 via a mobile application platform. According to this aspect, presentation component 124 can employ a client application version of the media provider that can access the cellular application platform of the media provider. In an aspect, the media content can be presented and/or played at client device 122 using a video player associated with the media provider and/or the client device 122. Client device 122 can include any suitable computing device associated with a user and configured to interact with content provider 102 via a network. For example, client device 122 can include a desktop computer, a laptop computer, a television, an Internet enabled television, a mobile phone, a smartphone, a tablet personal computer (PC), a personal digital assistant (PDA), or a wearable device. As used in this disclosure, the terms "content consumer" or "user" refer to a person, entity, system, or combination thereof that employs system 100 (or additional systems described in this disclosure) using a client device 122. The various features of dynamic content selection platform 104 are exemplified herein, wherein content provider 102 includes a streaming media provider (as described herein). Accordingly, dynamic content selection platform 104 is discussed in association with determining or inferring, in real-time or substantially real time, media content (e.g., media advertisements, video trailers, channel trailers, other videos provided by the media provider, etc.) that is relevant to a user during the user's current session with the media provider. However, it should be appreciated that dynamic content selection platform 104 can be employed by a variety of content providers and systems to determine or infer content for provision to a user (e.g., advertisements, recommended content) that is relevant to the user's context and mental state at the time of rendering of the content. In accordance with an embodiment, dynamic content selection platform 104 can include reception component 106, state component 108, context component 110 and selection component 112.
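As a minimal sketch of how the components enumerated above could be wired together in software, assuming hypothetical method names (collect, infer, select) that are not part of the disclosure, one possible arrangement is the following:

from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional

@dataclass
class SessionSignals:
    # Raw signals gathered during a session (navigation, playback, interaction).
    navigation_events: List[Dict[str, Any]] = field(default_factory=list)
    played_media: List[Dict[str, Any]] = field(default_factory=list)
    interaction_events: List[Dict[str, Any]] = field(default_factory=list)
    location: Optional[str] = None
    time_of_day: Optional[str] = None

class DynamicContentSelectionPlatform:
    # One possible wiring of components 106 (reception), 108 (state),
    # 110 (context), and 112 (selection), sketched for illustration only.
    def __init__(self, reception, state, context, selection):
        self.reception = reception
        self.state = state
        self.context = context
        self.selection = selection

    def pick_item_for_session(self, session_id):
        signals = self.reception.collect(session_id)   # gather session signals
        state_attrs = self.state.infer(signals)        # mood/state attributes
        context_attrs = self.context.infer(signals)    # context characteristics
        return self.selection.select(state_attrs, context_attrs)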
Reception component 106 is configured to receive and/or extract information in association with a user's current session with content provider 102 that can be employed to determine or infer attributes of the user's current mental state and context. For example, when a user conducts a session with a streaming media provider that includes navigating and consuming media content provided by the streaming media provider, reception component 106 can receive or extract information about the user's current session that relates to the user's state of mind and context. Based on this information, state component 108 is configured to determine a state of the user and context component 110 is configured to determine a context of the user and/or the current session. Selection component 112 can then determine or infer advertisements, video trailers, channel trailers, and/or other media that the user is likely to be most receptive to during the current session based on the user's state and context. For example, reception component 106 can receive information regarding the manner in which the user navigates the media provider's content (e.g., searching via keyword search, searching within a specific media category or channel, browsing, following recommendations, etc.), the specific content accessed and viewed by the user, and the manner in which the user interacts with the content selected by the user for viewing or pushed upon the user (e.g., watching or dismissing a video, controlling the playing of the video, commenting about the video, liking or disliking the video, sharing the video, having the video visible, having the volume of the video audible, interaction with the interface via which the video is included, as based on cursor movement or touch screen interaction, etc.). Reception component 106 can also receive information regarding the watch history of the user during the current session, including durations of media items selected for watching or listening to by the user and respective amounts of the media items actually watched or listened to by the user. In an aspect, information regarding a user's navigation of a media provider, content accessed and viewed/watched, and the manner in which the user interacts with the content can be extracted in real-time as it is generated at the media provider during the user's session. In another aspect, signals regarding user interaction and engagement with the media provider during a current session can be collected at client device 122 via a signal collection component 126 and provided to reception component 106 during the course of the user's session (e.g., in real-time, in substantially real-time, or periodically). For example, signal collection component 126 can receive signals regarding cursor movement, interaction by the user with a graphical user interface via which the media provider's content is accessed, interaction and control of a media player via which video and/or audio content provided by the media provider is played, visibility of media played during a current session at the client device 122 (e.g., whether the media player is minimized/maximized, whether the media player is behind another tab or window, etc.), and volume of media played during the current session.
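For illustration, the implicit signals described above (cursor movement, media player control, player visibility, and volume) could be batched by a client-side collector along the following lines; the event names and fields in this Python sketch are hypothetical assumptions, not an actual API of the disclosure:

import time

# Hypothetical implicit-engagement events a client could batch and upload
# periodically during a session, mirroring the signals described above.
def make_event(kind, **details):
    return {"kind": kind, "timestamp": time.time(), **details}

session_batch = [
    make_event("cursor_move", x=412, y=88),
    make_event("player_control", action="seek", position_seconds=95),
    make_event("visibility", player_visible=False, reason="tab_in_background"),
    make_event("volume", level=0.0, muted=True),
]
# The batch would then be forwarded to the reception component, e.g., periodically
# or in substantially real-time, as the description above suggests.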
In another example, reception component 106 can receive information regarding a mechanism via which the current session was initiated (e.g., in response to a general request by the user to open the network based platform of the streaming media provider, or in response to selection of a link to media content, provided by the streaming media provider, at an external source 128 or received by the user in an electronic message). According to this example, reception component 106 can receive information identifying the specific media item represented by a selected link, the referral source at which the link was located (e.g., an external source 128), and information about the referral source. This information can be provided by the referring source, the client device 122 and/or identified by reception component 106 via metadata associated with the selected link and/or the referral source. For example, reception component 106 can receive information about the content of a website or webpage at which the link was located. This information can provide insight into what mood the user was in at the time of selection of the link. Reception component 106 can also receive information related to the user's environment during a current session (e.g., including location, other people and things at the location, activities or events occurring at the location, etc.), what the user is doing in the environment in association with the current session, time of day of the current session, and the type of device the user is employing to conduct the current session (e.g., mobile, stationary, tablet, phone, desktop, etc.). In an aspect, client device 122 (and other clients 130 as well) can determine its location (e.g., via a global positioning system method, triangulation, or any other suitable locating technique) and provide this location information to reception component 106 over the course of the current session. In another aspect, an external source (e.g., a cellular carrier system) can provide client location information to reception component 106. Based on received location information for client device 122 (and other clients 130), reception component 106 can look up information about the location (e.g., places and things associated with the location, events associated with the location, other clients 130/users at the location, weather at the location, traffic at the location, etc.). In other aspects, information regarding a user's environment can be captured and provided to reception component 106 via an auxiliary input device 132. In an aspect, information regarding what a user is doing in association with a current session can be received by reception component 106 from a user's schedule (e.g., provided on client device 122 or at an external source 128). In another aspect, information regarding a user's movement and motion can be captured by various motion sensors employed by the user (e.g., worn by the user) and/or provided at the client device (e.g., an accelerometer, a gyroscope). This motion/movement information can facilitate determining (e.g., by reception component 106, dynamic content selection platform 104, and/or another system) what the user is doing (e.g., walking, running, sitting, driving a car, etc.) and where the user is going. In other aspects, dynamic content selection platform 104 and/or another system can learn user patterns and behaviors over time based on where the user goes, what the user does, who the user is with, the user's schedule, etc., to determine or infer what the user is doing during a current session.
Similarly, reception component 106 can also receive information related to what the user was doing before initiation of a current session (e.g., where the user was, an activity the user was performing, etc.), an amount of time the user has for conducting the current session (e.g., the user has a one hour lunch break after which the user must return to work and end the current session), and what the user is likely to do or scheduled to do after the current session (e.g., return to work, attend an event, etc.), based on the user's schedule and/or learned user behaviors/patterns. Reception component 106 can also receive information about other users' activity with content provider 102 during a user's current session with the content provider, wherein the other users have some connection with the user (e.g., a social connection, a shared preference, a shared demographic feature, etc.). For example, media content that is being watched, liked, shared, etc., by a user's friends at a streaming media provider while the user is conducting a current session with the streaming media provider can influence what content the user may also be interested in during the user's current session. According to this example, when a number of the user's friends are conducting sessions with the streaming media provider at the same time as the user and watching a particular live sports video, it can be assumed that the user would likely be interested in watching the sports video as well. Accordingly, the live sports video can be recommended to the user during the user's current session. State component 108 is configured to determine or infer state of mind attributes associated with a user's current state of mind during a session with a content provider 102 (e.g., a streaming media provider) based on information received by reception component 106. A user's state of mind can include aspects related to what the user is thinking and/or feeling during a current session. For example, state of mind attributes can correspond to aspects of a user's mood or attitude during a current session. A user's state of mind can also reflect a user's conscious or subconscious intention for performing or conducting a session with a streaming media provider. For example, a user's mood can indicate whether the user wants to be entertained, whether the user is in an educational frame of mind, or whether the user is in a work frame of mind. In another aspect, a user's state of mind can include a level of engagement a user has with a particular content item such as a video or song (e.g., whether the user is actively attentive towards the content item or passively engaged with the content item). In an aspect, state component 108 can determine state attributes representative of a user's current state of mind during a session with a content provider 102 based on how the user navigates about content provided by the content provider 102. For example, state component 108 can determine attributes about a user's state of mind based on how the user navigates about a media provider's website (e.g., what categories the user selects, how the user moves from one interface to another, how the user influences what media items are presented on a particular interface, whether the user is searching or browsing, etc.).
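Returning to the live sports example above, the idea of recommending a media item that several of a user's connections are concurrently watching could be reduced to a simple counting rule, sketched below in Python; the threshold of three friends is an illustrative assumption only:

from collections import Counter

def recommend_from_connections(now_watching_by_friend, min_friends=3):
    # now_watching_by_friend maps a friend id to the media item id they are
    # currently watching; items shared by enough friends are recommended.
    counts = Counter(now_watching_by_friend.values())
    return [item for item, n in counts.most_common() if n >= min_friends]

# Example: four friends watching the same live sports video -> recommend it.
recs = recommend_from_connections(
    {"f1": "live_sports_123", "f2": "live_sports_123",
     "f3": "live_sports_123", "f4": "live_sports_123", "f5": "puppies_42"})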
State component 108 can also determine state attributes based on media items, provided by the media provider, that are played for watching by the user during the current session (e.g., either in response to selection by the user or automatically played/pushed to the user), and a manner via which the user interacts with or reacts to the played media items. For instance, if a user is accessing funny videos about puppies, the user is likely in a happy mood. In addition, if the user is liking, sharing, and providing positive comments about the funny videos, it can be discerned that the content brings the user joy and entertainment. Accordingly, state attributes for the user could include ‘happy,’ ‘joyful,’ ‘entertainment mode,’ ‘humorous content,’ and ‘light hearted.’ In another example, if a user is selecting or searching for music playlists with classical music, state component 108 can determine the user is in a relaxed mood. In another aspect, based on the type of media content a user selects for watching/listening to, state component 108 can determine whether the user is looking to be entertained and how, or whether the user is looking for informational/instructional content. For instance, when a user is selecting videos that are short movies of a thriller genre, state component 108 can determine the user is looking to be entertained with media content that has a thriller theme. Accordingly, state attributes for the user could include ‘entertainment mode,’ ‘movie,’ and ‘thriller.’ In another example, if a user is selecting exercise videos, state component 108 can determine that the user is in a mindset of working out. In an aspect, respective media items provided by a media provider at which a user conducts a current session can be associated with one or more different mood or state of mind attribute values. In an aspect, these mood or state of mind attributes can be associated with the respective media items as metadata associated with the respective media items. In another aspect, these mood or state attributes can be associated with the respective media items in a database correlating the respective media items to mood/state of mind attributes. For example, a funny video about puppies can be associated with mood values corresponding to happy, joyful, sappy, sensitive, and humorous. In another example, workout videos can be associated with moods reflective of exercise, motivation, health and energy. In another example, classical music based content can be associated with relaxation mood values. In yet another example, attributes of media items provided by the media provider (e.g., videos, channels, playlists, songs, etc.) related to a type of the media item (e.g., movie, sitcom, advertisement, music video), and a genre of the media item (e.g., comedy, drama, romance, thriller, reality, instructional, informational, etc.) can be associated with respective mood values. According to this aspect, state component 108 can analyze the various state or mood values associated with media items viewed (e.g., watched and/or listened to) by a user over the course of a current session to determine or infer one or more cumulative state attributes of the user's mood. In addition, information regarding a level of user interaction and engagement with the respective media items can further facilitate determining a user's mood.
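One way to read the cumulative analysis described above is as a weighted tally: mood values attached to each media item as metadata are accumulated with weights reflecting the user's engagement with that item. The following Python sketch illustrates this reading; the specific weights and labels are assumptions for illustration only:

from collections import defaultdict

# Hypothetical engagement weights; items the user shared or liked count more
# toward the cumulative mood estimate than items merely played or dismissed.
ENGAGEMENT_WEIGHT = {"dismissed": 0.1, "played": 1.0, "liked": 2.0, "shared": 3.0}

def cumulative_mood(watched_items):
    # watched_items: iterable of (mood_values, engagement) pairs, where
    # mood_values is the list of labels attached to the media item as metadata.
    scores = defaultdict(float)
    for mood_values, engagement in watched_items:
        w = ENGAGEMENT_WEIGHT.get(engagement, 1.0)
        for mood in mood_values:
            scores[mood] += w
    return sorted(scores, key=scores.get, reverse=True)  # strongest moods first

top_moods = cumulative_mood([
    (["happy", "humorous"], "shared"),
    (["relaxation"], "dismissed"),
])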
For example, where a user engages with and interacts more with media items having mood values of a, b, and c and less with media items having mood values x, y, and z, state component 108 can place a greater weight on mood values a, b, and c when determining the user's mood attributes. In another aspect, state component 108 can determine or infer a user's mood based on the manner in which a user navigates content provided by a media provider during the user's session with the media provider. For example, based on the user's navigational mechanisms, state component 108 can determine whether the user is in a state of haste or whether the user is not in a state of haste. According to this example, state component 108 can determine or infer that a user is in a state of haste or not based on how quickly and frequently a user selects new media items for viewing and the durations of the media items selected for viewing being relatively short or long (e.g., with respect to a threshold duration), as well as the amounts of the durations watched/listened to by the user (e.g., watching more than X % of a video can indicate the user is not in a state of haste while watching less than X % of a video can indicate the user is in a state of haste). In another example, based on the user's navigational mechanisms, state component 108 can determine whether the user is in a leisurely mindset or is focused on a specific agenda or task. According to this example, when a user's navigation mechanisms indicate the user is browsing the various media content provided by a media provider (e.g., via selecting recommended media items or items associated with different media item categories), state component 108 can consider the user in a leisurely mindset. On the other hand, when a user is performing a specific keyword search and looking for videos of a particular subject matter or title, the user can be considered to be in a focused and structured mindset. Accordingly, based on a user's navigational mechanisms, state component 108 can determine or infer state attributes to associate with a user during a current session that include 'state of haste or hurry,' 'browsing mindset,' 'focused searching mindset,' 'structured searching mindset,' and similar attributes. In another aspect, state component 108 can determine or infer attributes associated with a user's state of mind based on the manner via which a user interacts with or reacts to the media items played during the user's current session. For example, if a user stops playing a certain media item or disengages from the media item as it plays, state component 108 can determine that mood values associated with the media item do not reflect the user's current mood. Similarly, where a user engages with a particular media item during a current session, shares the media item, comments on the item, etc., state component 108 can determine or infer that mood values associated with the media item are more reflective of the user's current mood. Thus, in an aspect, state component 108 can weight mood values associated with media items accessed/watched/listened to by a user based on the manner and level of engagement the user has with the respective media items. In another aspect, state component 108 can determine or infer attributes corresponding to a user's current state of mind based on comments provided by the user about the media item ("I love this song," "this was so scary," "this video made me bawl," etc.).
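To make the engagement-based weighting concrete, here is a minimal Python sketch that aggregates the mood values of items viewed during a session, weighted by how the user engaged with each item. The mood labels, engagement categories, and weight values are hypothetical; a deployed system might learn such weights rather than hard-code them.

```python
from collections import defaultdict

# Hypothetical engagement weights; a real system could learn these values.
ENGAGEMENT_WEIGHTS = {
    "shared": 3.0,         # explicit positive engagement
    "liked": 2.5,
    "commented": 2.0,
    "watched_full": 1.5,
    "watched_partial": 0.5,
    "stopped_early": 0.1,  # disengagement discounts the item's mood values
}

def aggregate_mood(session_items):
    """Combine per-item mood values into cumulative state attributes.

    session_items: iterable of (mood_values, engagement) tuples, where
    mood_values is a list of mood attribute strings associated with the
    media item (e.g., via metadata or a correlating database) and
    engagement is one of the ENGAGEMENT_WEIGHTS keys.
    """
    scores = defaultdict(float)
    for mood_values, engagement in session_items:
        weight = ENGAGEMENT_WEIGHTS.get(engagement, 1.0)
        for mood in mood_values:
            scores[mood] += weight
    # Return mood attributes ranked by accumulated, engagement-weighted score.
    return sorted(scores, key=scores.get, reverse=True)

# Example: the user shares a funny puppy video but stops a drama early.
session = [
    (["happy", "joyful", "humorous"], "shared"),
    (["drama", "serious"], "stopped_early"),
]
print(aggregate_mood(session))
# ['happy', 'joyful', 'humorous', 'drama', 'serious']
```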
In another aspect, a user's state of mind can include a level of engagement of a user during a current session with a media provider and/or particular content played during the session. For instance, state component 108 can determine or infer state attributes that indicate whether the user is actively engaged, passively engaged, or disengaged. In an aspect, state component 108 can discern a user's level of engagement during a current session based on explicit engagement signals (e.g., like/dislike, comment, subscribe, seek, etc.) and implicit engagement signals (e.g., continued playback, mouse/keyboard movement, device movement, touchscreen activity, etc.) received by reception component 106. In an aspect, state component 108 can determine a user's level of engagement during a current session based on visibility of the played media items via an interface presented to the user during the current session and volume of the played media items. For example, when a user has video content playing with the volume turned off or low, state component 108 can determine that the user is passively engaged in a 'watching no volume mode.' In another example, when a user has video content playing with the volume turned up yet the video player minimized or provided behind another open window or tab, state component 108 can determine that the user is passively engaged in a 'listening only mode.' However, where the video player is not visible and the volume is turned off or down, state component 108 can determine that the user is disengaged. Further, state component 108 can employ information regarding a user's movement/motion to determine or infer a user's state of mind. For example, if a user is moving or headed somewhere, state component 108 can determine that the user is in a state of 'haste' or 'on the go.' Similarly, if the user is stationary, state component 108 can determine that the user is relaxed and has time on his hands.
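The visibility/volume logic above lends itself to a simple decision procedure; the sketch below is one hypothetical way to encode it in Python. The mode labels and the use of recent input signals to separate active from passive engagement are assumptions layered on the description, not a definitive implementation.

```python
def classify_engagement(player_visible, volume_on, recent_input):
    """Infer a coarse engagement level from playback visibility and volume.

    player_visible: whether the media player/interface is visible to the user
    volume_on:      whether playback volume is audible
    recent_input:   whether implicit signals (mouse/keyboard/touch/device
                    movement) suggest the user is present and attentive
                    (an assumption used here to split active from passive)
    """
    if player_visible and volume_on:
        return "actively engaged" if recent_input else "passively engaged"
    if player_visible and not volume_on:
        return "passively engaged (watching no volume mode)"
    if volume_on and not player_visible:
        return "passively engaged (listening only mode)"
    return "disengaged"  # neither visible nor audible

print(classify_engagement(player_visible=False, volume_on=True,
                          recent_input=False))
# passively engaged (listening only mode)
```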
Context component 110 is configured to determine context characteristics related to the context of a current session between a user and content provider 102. Context refers to the circumstances that form the setting for an event, statement, or idea, and in terms of which it can be fully understood and assessed. With respect to a current session between a content provider (such as a streaming media provider) and a user, context can include the circumstances that form the setting of the current session. In an exemplary embodiment, context component 110 is configured to determine context characteristics associated with a current session between a user and a media provider based on information received by reception component 106. In an aspect, these context characteristics can include but are not limited to: where the user is located or going during the current session (e.g., as determined by context component 110 based on received or determined location information for client device 122 during the current session and/or received user movement/motion information), when (e.g., time of day) the current session is occurring, and who the user is with (e.g., alone, with a group of other users, with a friend, etc.) during the current session (e.g., as determined by context component 110 based on received location information for other clients 130 when authorized by the other clients). In addition, based on a user's location, context component 110 can further employ various external sources to look up current context characteristics about the location (e.g., persons, places, things, events, weather, traffic, etc., associated with the location). For example, context component 110 can determine that a user is located at a restaurant at a time when the restaurant is throwing a special event for alumni of a local college. In another aspect, context component 110 can determine or infer context characteristics that relate to what a user is doing aside from or in association with a current session with a content provider (e.g., walking, exercising, riding a train/bus, working, conducting a work meeting, attending a class, cleaning the house, attending a party with friends, etc.). Context component 110 can also determine characteristics regarding what the user was doing before and/or will do after the current session. For example, context component 110 can determine or infer an activity performed by the user preceding initiation of the current session, a duration of time the user has available for the current session, or an activity for performance by the user at a known point in time after initiation of the current session. Context component 110 can determine these context characteristics related to what a user is doing, was doing, and will do in the near future based on analysis of various received signals related to where the user is located, time of day, what device the user is on, and movement data for the user in view of the user's schedule and/or learned behavior for the user. For example, based on information indicating the user is located at her home around 6 am on a weekday morning, information indicating the user is moving about the house, information indicating the user is conducting a current media session on her Internet enabled television, and the user's work schedule, context component 110 can determine that the user is getting ready to go to work and will be leaving the house at about 7 am. Context component 110 can also determine context characteristics related to a user's purpose for a current session. For example, context component 110 can determine whether a current session with a media provider is for an entertainment purpose, an educational purpose, or a work related purpose. In an aspect, context component 110 can determine or infer a purpose of a current session with a media provider based on the media items provided by the media system that are played for watching by the user during the current session, and the manner via which the user interacts with or reacts to the played media items. For example, context component 110 can determine whether a user is performing the current session to get information about a specific subject, to find a playlist for a party, to watch videos about a particular subject, to get the recent local news, to find a movie to watch, to browse for something entertaining to watch, to get instruction for performing a task, to find a yoga instruction video, etc. In an aspect, context component 110 can determine a purpose for a current session with a media provider based on how the current session was initiated. For example, the purpose of a session could initially include a user's desire to view a media item represented by a selected link at another source. According to this example, context component 110 can determine whether the user was directed to the media system from a referral source, a media item provided by the media provider represented by a link selected by the user at the referral source, and/or information about the referral source.
Further, context component 110 can determine or infer information related to sessions with the content provider that other users related to the user (e.g., friends of the user, users sharing similar preferences or demographics with the user, etc.) are conducting during the current session of the user. For example, context information regarding content the user's friends are accessing and sharing while the user is also accessing the content provider can influence what the user may find interesting during the user's current session, such as what videos and channels the user's friends are watching during the user's current session with a media provider. In various aspects, state component 108 can determine or infer various attributes about a user's state of mind based on characteristics of a user's context as determined by context component 110. In particular, a user's context can greatly influence the user's state of mind. For example, a user's mood can vary depending on the time of day, the user's environment (e.g., where the user is located and current aspects associated with the location at the time of day and day of week, including other persons, things, and events or activities), what the user is doing in association with the current session (e.g., driving, relaxing, at a party, etc.), what the user just did before the session and/or has to do after the session, what device the user is employing, or how the user initiated the session (e.g., in response to selection of a link to a specific video, to conduct a search, or to browse). In various aspects, state component 108 can analyze characteristics about a user's context to facilitate determining or inferring a user's current state of mind (e.g., mood, attitude, emotional state, energy level, etc.). For example, state component 108 can employ various rule based classification schemes and/or machine learning techniques that relate different context attributes associated with a user's current context to particular state of mind attributes (e.g., mood values). Selection component 112 is configured to select a content item for provision to a user during the user's current session with a content provider 102 based on the various state attributes and context characteristics determined or inferred by state component 108 and context component 110, respectively, during the user's current session. In an exemplary embodiment, where content provider 102 is a streaming media provider, selection component 112 is configured to select a media item based on the user's state attributes and context characteristics. For example, the media item can include a media advertisement or video trailer for another media item or channel provided by the media provider. According to this example, the media advertisement or video trailer can be provided (e.g., as an in-stream advertisement/trailer, as a banner advertisement, as an in-video advertisement, etc.) to the user in association with other media content accessed by the user during the current session. In another example, the media item can include another media item provided by the media provider. According to this example, the media item can be provided to the user in a recommendation section or list of media items recommended to the user during the user's current session. In an aspect, selection component 112 can employ a media index 118 that associates a plurality of media items and media advertisements, provided by the media provider, with state attributes and context attributes.
The state and context attributes associated with a particular media item can be selected based on a particular user state of mind and user context under which the media item is considered to be well received. For example, a video advertisement for a fast food breakfast restaurant could be associated with state attributes such as 'hungry,' 'state of haste,' 'tired,' or 'on the go.' Context attributes associated with the advertisement could include 'morning,' 'driving,' and 'location N' (wherein location N can vary and include different locations where the fast food breakfast restaurant is located). In another example, media items in media index 118 can be associated with various mood attributes indicative of moods the media items evoke or complement, such as mood attributes indicating whether the media item is suited for users in an entertainment related or a work related mode. In another example, media items that are relatively short in length (based on a threshold length) can be associated with attributes that indicate they are suited for users who are in a state of haste/hurry or in a distracted/passive state, while media items that are longer in duration can be associated with attributes that indicate they are suited for users who are in a leisurely, calm, or attentive state with time on their hands. In yet another example, a media advertisement can be associated with attributes that indicate whether it has audible branding or not (wherein a media advertisement without audible branding will be ineffective for a user who is passively engaged with a media content session based on failure to have a media player or interface visible to the user). Similarly, a media advertisement can be associated with an attribute indicating it has no or low visible branding (wherein the media advertisement will be ineffective when the user is passively engaged due to no or low volume). According to this aspect, selection component 112 can match or relate user state attributes and user context characteristics with state attributes and context characteristics associated with different media items provided in media index 118 to identify one or more media items that match or relate to the user's current state of mind and context. For example, when a user is relaxing in the evening at home and selecting and watching relatively short videos with funny and light hearted content, selection component 112 can select a media item for provision to the user that is also relatively short and has an entertaining, funny, and light hearted nature. In an aspect, a determination that a media item matches a user's current state and context can be based on a correspondence threshold (e.g., a percentage match threshold) between the user's current state and context attributes and the state and context attributes associated with the media item. The degree of contribution of certain state attributes and context characteristics to a match determination can also vary. For example, a location based context characteristic can provide a greater influence on a match determination than a context characteristic related to why a user's current session was initiated.
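One hypothetical way to realize this weighted, threshold-based matching is sketched below in Python. The attribute names, per-attribute weights, and the 0.6 threshold are illustrative assumptions chosen to mirror the example above (location weighted more heavily than session-initiation context).

```python
# Hypothetical per-attribute weights: location context contributes more to a
# match than the session-initiation (referral) context, as described above.
ATTRIBUTE_WEIGHTS = {"location": 3.0, "mood": 2.0,
                     "time_of_day": 1.0, "referral": 0.5}

def match_score(user_attrs, item_attrs):
    """Weighted fraction of the user's current attributes shared by the item."""
    total = sum(ATTRIBUTE_WEIGHTS.get(k, 1.0) for k in user_attrs)
    shared = sum(
        ATTRIBUTE_WEIGHTS.get(k, 1.0)
        for k, v in user_attrs.items()
        if item_attrs.get(k) == v
    )
    return shared / total if total else 0.0

def matches(user_attrs, item_attrs, threshold=0.6):
    """Apply a correspondence (percentage-match) threshold."""
    return match_score(user_attrs, item_attrs) >= threshold

user = {"location": "home", "mood": "light_hearted",
        "time_of_day": "evening", "referral": "browse"}
item = {"location": "home", "mood": "light_hearted",
        "time_of_day": "evening"}
print(match_score(user, item))  # 6.0 / 6.5, ~0.92 -> a match at threshold 0.6
```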
In another aspect, selection component 112 can be configured to select media items that do not necessarily share the same state/context attributes as a user, but which are associated with state/context attributes that complement those of the user. For example, where a user is in a sad mood in association with cold winter weather, rather than identifying a media advertisement that is reflective of sad and cold winter weather attributes, selection component 112 can identify a media advertisement that is associated with attributes designed to lift the user's spirits. For example, a suitable advertisement could include one for a warm weather vacation. In other aspects, selection component 112 can employ various rule based classification schemes or machine based learning techniques to facilitate selecting a media item that a user will likely be receptive to and engage with during the user's current session with a media provider based on the user's current state of mind and context. In furtherance of the above example, selection component 112 can infer that although a user's state and context characteristics indicate the user is a relatively good match for the fast food breakfast restaurant advertisement, based on a single context characteristic that indicates the user has X amount of time before she has to be at work, the user will not be able to stop at the fast food breakfast restaurant on the way to work. Accordingly, selection component 112 can determine that provision of the fast food breakfast restaurant advertisement to the user during the user's current session would not cause the user to act on the advertisement, and thus select a different media advertisement for provision to the user. Referring now to FIG. 2, presented is another example system 200 for identifying and rendering content relevant to a user's current mental state and context in accordance with various aspects and embodiments described herein. System 200 includes same or similar features and functionalities as system 100 with the additions of scoring component 202 and rendering component 204 to dynamic content selection platform 104. Repetitive description of like elements employed in respective embodiments of systems described herein is omitted for sake of brevity. Scoring component 202 is configured to score and rank content items (e.g., media items) provided by content provider 102 (e.g., a streaming media provider) based on relevance or suitability to a user in association with a current session between the user and the content provider 102, wherein the determination of relevance or suitability is based at least in part on the user's current state of mind and context. For example, scoring component 202 can score media advertisements provided by a media provider based on relevance and suitability of the media advertisements for a user during the user's current session with the media provider based on the user's current state of mind and context. In an aspect, scoring component 202 can analyze a plurality of media advertisements based on correspondence and fixed relationships between the user's current user state and context attributes and state and context attributes respectively associated with the media advertisements to determine a score for the respective media advertisements representative of relevance and suitability of the respective media advertisements to the user. According to this example, relevance and suitability can reflect a probability that the user will view the advertisement and/or the advertisement will have an impression upon the user. Selection component 112 can then select the media advertisement having the highest score (and/or a subset of the media advertisements having the highest scores) for provision to the user during the user's current media session.
For example, scoring component 202 can evaluate each pairing between a user and a potential media advertisement by putting the advertisement state and context attributes and the user's current state and context attributes into a 1×N boolean feature vector F = <f1, f2, . . . , fN>, wherein each attribute or feature (e.g., user is active, user is A, user is B, . . . , advertisement is X, advertisement is Y, . . . , user has mood X, advertisement has mood Y, etc.) is assigned a fixed position. Then scoring component 202 can score a particular advertisement/user pairing by taking the dot product with a weight vector W: F·W = S, where S is the score. The weight vector W can be designed to account for a probability that the media advertisement will be viewed past a billable point and/or a probability that the media advertisement will have a lasting impression upon the user. In an aspect, W can be generated from historical data using a machine learning algorithm. In addition, as attributes for a user's current state of mind and context change over the course of a user's session with a media provider, scoring component 202 can re-score the respective media advertisements based on the new user attribute values. Accordingly, over the course of the user's session, selection component 112 can dynamically select the most relevant and suitable media advertisement for provision to the user at a current point in the user's session based on the user's current mental state and context at that point. In another example, scoring component 202 can score and rank trailers for channels provided by a media provider or trailers for other videos provided by the media provider based on relevance and suitability for a user during the user's current session with the media provider in view of the user's current state of mind and context. According to this example, relevance and suitability can reflect a probability that the user will view the trailer and choose to select the channel or video represented by the trailer for watching, subscribing to, or otherwise showing an affinity for. In an aspect, selection component 112 can then select the trailer having the highest score (and/or a subset of the trailers having the highest scores) for provision to the user during the user's current media session. It is to be appreciated that respective scores associated with trailers can change over the course of a user's media session based on changes to the user's mood or context. Accordingly, at any given time in a user's media session, selection component 112 can select the trailer having the highest score at that time. In yet another example, scoring component 202 can score other media items (e.g., videos, channels, or playlists) provided by a media provider based on relevance and suitability for a user during the user's current session with the media provider based on the user's current state of mind and context. According to this example, relevance and suitability can reflect a probability that the user will view/play the media item or subscribe to the media item. Selection component 112 can then select a subset of media items having the highest scores for provision to the user in a recommendation section/list during the user's current media session.
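As a concrete illustration of this feature-vector scoring, consider the minimal Python sketch below. The feature names, their fixed ordering, and the weight values are hypothetical; as noted above, in practice the weight vector W could be learned from historical data.

```python
# Fixed attribute positions for the 1xN boolean feature vector F.
FEATURES = [
    "user_is_active", "user_mood_happy", "user_in_haste",
    "ad_mood_happy", "ad_has_audible_branding", "ad_is_short",
]

def build_feature_vector(user_attrs, ad_attrs):
    """Encode a user/advertisement pairing as F = <f1, f2, ..., fN>.

    user_attrs and ad_attrs are sets of attribute names that hold true
    for the pairing; each FEATURES entry has a fixed vector position.
    """
    attrs = user_attrs | ad_attrs
    return [1.0 if f in attrs else 0.0 for f in FEATURES]

def score(feature_vector, weights):
    """Compute F . W = S, the dot product used to rank the pairing."""
    return sum(f * w for f, w in zip(feature_vector, weights))

# Hypothetical weights; could be fit to the probability that an advertisement
# is viewed past a billable point or leaves a lasting impression.
W = [0.2, 0.5, -0.3, 0.5, 0.1, 0.4]

F = build_feature_vector({"user_is_active", "user_mood_happy"},
                         {"ad_mood_happy", "ad_is_short"})
print(score(F, W))  # 0.2 + 0.5 + 0.5 + 0.4 = 1.6
```

Re-scoring as the user's state and context attributes change amounts to rebuilding F with the new attribute sets and recomputing the dot product.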
Rendering component 204 is configured to effectuate the rendering of a content item (e.g., a media item) identified by selection component 112 to the user during the user's current session. For example, when the session includes a session with a streaming media provider, rendering component 204 can direct the streaming media provider to stream a selected video advertisement or trailer (e.g., an in-stream video advertisement) to the user in association with another media item accessed by the user. In another example, rendering component 204 can direct the streaming media provider to include a subset of media items identified by selection component 112 in a recommendation menu or list included in a portion of a user interface employed by the streaming media provider. FIG. 3 presents another example system 300 for identifying and rendering content relevant to a user's current mental state and context in accordance with various aspects and embodiments described herein. System 300 includes same or similar features and functionalities as system 200 with the addition of inference component 302 to dynamic content selection platform 104. Repetitive description of like elements employed in respective embodiments of systems described herein is omitted for sake of brevity. Inference component 302 is configured to provide for or aid in various inferences or determinations associated with aspects of dynamic content selection platform 104. For example, inference component 302 can aid state component 108 and context component 110 with inferring attributes about a user's current state of mind and context based on the various information received by reception component 106 discussed herein. In addition, inference component 302 can aid selection component 112 with identifying a content item (e.g., a media item) that is likely to be well received by a user during the user's current session based on the user's current state of mind and context. Similarly, inference component 302 can facilitate scoring component 202 with inferring scores of suitability and relevance for content items based on a user's current state of mind and context and state and context attributes respectively associated with the content items. In order to provide for or aid in the numerous inferences described herein, inference component 302 can examine the entirety or a subset of the data to which it is granted access and can provide for reasoning about or infer states of the system, environment, etc., from a set of observations as captured via events and/or data. An inference can be employed to identify a specific context or action, or can generate a probability distribution over states, for example. The inference can be probabilistic—that is, the computation of a probability distribution over states of interest based on a consideration of data and events. An inference can also refer to techniques employed for composing higher-level events from a set of events and/or data. Such an inference can result in the construction of new events or actions from a set of observed events and/or stored event data, whether or not the events are correlated in close temporal proximity, and whether the events and data come from one or several event and data sources. Various classification (explicitly and/or implicitly trained) schemes and/or systems (e.g., support vector machines, neural networks, expert systems, Bayesian belief networks, fuzzy logic, data fusion engines, etc.) can be employed in connection with performing automatic and/or inferred action in connection with the claimed subject matter. A classifier can map an input attribute vector, x = (x1, x2, x3, x4, . . . , xn), to a confidence that the input belongs to a class, such as by f(x) = confidence(class).
Such classification can employ a probabilistic and/or statistical-based analysis (e.g., factoring into the analysis utilities and costs) to prognose or infer an action that a user desires to be automatically performed. A support vector machine (SVM) is an example of a classifier that can be employed. The SVM operates by finding a hyper-surface in the space of possible inputs, where the hyper-surface attempts to split the triggering criteria from the non-triggering events. Intuitively, this makes the classification correct for testing data that is near, but not identical to, training data. Other directed and undirected model classification approaches (e.g., naïve Bayes, Bayesian networks, decision trees, neural networks, fuzzy logic models, and probabilistic classification models providing different patterns of independence) can also be employed. Classification as used herein is also inclusive of statistical regression that is utilized to develop models of priority. FIG. 4 presents another example system 400 for identifying and rendering content relevant to a user's current mental state and context in accordance with various aspects and embodiments described herein. System 400 includes same or similar features and functionalities as system 300 with the additions of trailer component 402 and recommendation component 404 to dynamic content selection platform 104. Repetitive description of like elements employed in respective embodiments of systems described herein is omitted for sake of brevity. As previously discussed, selection component 112 is configured to select a content item for provision to a user during the user's current session with a content provider 102 based on the user's current state of mind and context. The type of the content item selected can vary depending on the content provider 102. In an aspect, where content provider 102 is a streaming media provider, the content item can include a trailer for another media item provided by the streaming media provider. In video and film terminology, a trailer is a series of short edited clips of selected scenes of a video that are put together into one montage. Trailers are often used as a way to advertise a video or film. Trailers for channels provided by the streaming media provider can include video content associated with the channel configured to provide a video advertisement for the channel. Trailer component 402 is specifically configured to identify video and/or channel trailers for videos and channels provided by a media provider based on a user's current state of mind and/or context during a current session with the media provider. In an aspect, a trailer identified by trailer component 402 can be streamed to a user during the user's current session as an in-stream video in association with another video selected for watching by the user. In another aspect, a trailer identified by trailer component 402 can be streamed to a user during the user's current session when the user is not playing another video (e.g., while the user is searching or browsing for media items provided by the media provider). Recommendation component 404 is configured to generate a list of recommended media items, provided by a streaming media provider, for suggesting or recommending to a user during a user's current session with a streaming media provider based on the user's current state of mind and/or context.
For example, recommendation component 404 can identify videos, channels, and/or playlists that include media content that is reflective of or complementary to a user's current mood and context. FIG. 5 presents another example system 500 for identifying and rendering content relevant to a user's current mental state and context in accordance with various aspects and embodiments described herein. System 500 includes same or similar features and functionalities as system 400 with the additions of advertisement component 502 and advertisement charging component 504 to dynamic content selection platform 104. Repetitive description of like elements employed in respective embodiments of systems described herein is omitted for sake of brevity. Advertisement component 502 is configured to facilitate targeted advertising based on a user's current state of mind and context during a session with a content provider 102. For example, advertisement component 502 can identify video advertisements, banner advertisements, image advertisements, audio advertisements, etc., that are relevant to a user and likely to provide an impression upon the user based on the user's current state of mind and context as determined or inferred by state component 108 and context component 110, respectively. As a result, advertisement component 502 can target users with tailored advertisements at the right time, increasing user happiness and advertiser return on investment. Advertisement charging component 504 can facilitate dynamically charging an advertiser for provision of an advertisement based on a degree to which the advertisement capitalizes on a user's current state of mind and context. For example, in an aspect, scoring component 202 is configured to dynamically score advertisements based on relevance and suitability of the advertisements for a user given various attributes associated with the user's state of mind and context, wherein relevance and suitability can reflect a probability that the user will view the advertisement and/or the advertisement will have an impression upon the user. Advertisement charging component 504 can factor in an advertisement's score in association with charging for provision of the advertisement. In view of the example systems and/or devices described herein, example methods that can be implemented in accordance with the disclosed subject matter can be further appreciated with reference to the flowcharts in FIGS. 6-8. For purposes of simplicity of explanation, example methods disclosed herein are presented and described as a series of acts; however, it is to be understood and appreciated that the disclosed subject matter is not limited by the order of acts, as some acts may occur in different orders and/or concurrently with other acts than those shown and described herein. For example, a method disclosed herein could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, interaction diagram(s) may represent methods in accordance with the disclosed subject matter when disparate entities enact disparate portions of the methods. Furthermore, not all illustrated acts may be required to implement a method in accordance with the subject specification. It should be further appreciated that the methods disclosed throughout the subject specification are capable of being stored on an article of manufacture to facilitate transporting and transferring such methods to computers for execution by a processor or for storage in a memory.
FIG. 6 illustrates a flow chart of an example method 600 for identifying and rendering content relevant to a user's current mental state and context in accordance with various aspects and embodiments described herein. At 602, user state attributes associated with a user's current state of mind during a current session of the user with a streaming media provider are determined based on at least one of: navigation of the streaming media provider by the user during the current session, media items provided by the streaming media provider that are played during the current session, or a manner via which the user interacts with or reacts to the played media items, wherein the state of the user includes a mood of the user (e.g., via state component 108). For example, when a user is employing a search mechanism to find and watch videos related to repairing a broken cell phone and the user repeatedly watches the beginnings of the found videos before starting a new one, state component 108 can determine that the user is probably frustrated, annoyed, focused on finding a solution to a specific problem as opposed to looking for entertainment content, and attentive to the current session. At 604, a media item provided by the streaming media provider is selected based on the user state attributes (e.g., via selection component 112). For example, selection component 112 can select a video trailer for a channel provided by the streaming media provider that includes several short do-it-yourself repair videos for different cell phone types and issues. In another example, selection component 112 can select a media advertisement for a service that repairs broken cell phones. At 606, the selected media item is then rendered to the user during the current session. FIG. 7 illustrates a flow chart of another example method 700 for identifying and rendering content relevant to a user's current mental state and context in accordance with various aspects and embodiments described herein. At 702, mood attributes associated with a user's current mood during a session of the user with a streaming media provider and context attributes associated with a current context of the session are determined based on at least one of: navigation of the streaming media provider by the user during the current session, media items provided by the streaming media provider that are played during the current session, or an environment of the user (e.g., via state component 108 and context component 110). At 704, a media item provided by the streaming media provider is selected based on the mood attributes and the context attributes (e.g., via selection component 112). At 706, the media item is rendered to the user during the current session (e.g., via rendering component 204). FIG. 8 illustrates a flow chart of another example method 800 for identifying and rendering content relevant to a user's current mental state and context in accordance with various aspects and embodiments described herein. At 802, current mood attributes associated with a user's current mood during a session of the user with a streaming media provider and current context attributes associated with a current context of the session are determined based on at least one of: navigation of the streaming media provider by the user during the current session, media items provided by the streaming media provider that are played during the current session, or an environment of the user (e.g., via state component 108 and context component 110).
At 804, a plurality of media advertisements provided by the streaming media provider are scored based on relevance and suitability for the user during the user's session, wherein the relevance and suitability are based on a correspondence between the current mood and current context attributes and state and context attributes respectively associated with the plurality of media advertisements (e.g., via scoring component 202). At 806, one of the plurality of media advertisements associated with a score exceeding a threshold score is selected, wherein scores above the threshold score indicate a high degree of relevance and suitability for the user based on the user's current mood and the current context of the session (e.g., via selection component 112). At 808, streaming of the media advertisement to the user is effectuated during the user's current session (e.g., via rendering component 204).

Example Operating Environments

The systems and processes described below can be embodied within hardware, such as a single integrated circuit (IC) chip, multiple ICs, an application specific integrated circuit (ASIC), or the like. Further, the order in which some or all of the process blocks appear in each process should not be deemed limiting. Rather, it should be understood that some of the process blocks can be executed in a variety of orders, not all of which may be explicitly illustrated in this disclosure. With reference to FIG. 10, a suitable environment 1000 for implementing various aspects of the claimed subject matter includes a computer 1002. The computer 1002 includes a processing unit 1004, a system memory 1006, a codec 1005, and a system bus 1008. The system bus 1008 couples system components including, but not limited to, the system memory 1006 to the processing unit 1004. The processing unit 1004 can be any of various available processors. Dual microprocessors and other multiprocessor architectures also can be employed as the processing unit 1004. The system bus 1008 can be any of several types of bus structure(s) including the memory bus or memory controller, a peripheral bus or external bus, and/or a local bus using any variety of available bus architectures including, but not limited to, Industrial Standard Architecture (ISA), Micro-Channel Architecture (MCA), Extended ISA (EISA), Intelligent Drive Electronics (IDE), VESA Local Bus (VLB), Peripheral Component Interconnect (PCI), Card Bus, Universal Serial Bus (USB), Advanced Graphics Port (AGP), Personal Computer Memory Card International Association bus (PCMCIA), Firewire (IEEE 1394), and Small Computer Systems Interface (SCSI). The system memory 1006 includes volatile memory 1010 and non-volatile memory 1012. The basic input/output system (BIOS), containing the basic routines to transfer information between elements within the computer 1002, such as during start-up, is stored in non-volatile memory 1012. In addition, according to present innovations, codec 1005 may include at least one of an encoder or decoder, wherein the at least one of an encoder or decoder may consist of hardware, a combination of hardware and software, or software. Although codec 1005 is depicted as a separate component, codec 1005 may be contained within non-volatile memory 1012. By way of illustration, and not limitation, non-volatile memory 1012 can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
Volatile memory 1010 includes random access memory (RAM), which acts as external cache memory. According to present aspects, the volatile memory may store the write operation retry logic (not shown in FIG. 10) and the like. By way of illustration and not limitation, RAM is available in many forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), and enhanced SDRAM (ESDRAM). Computer 1002 may also include removable/non-removable, volatile/non-volatile computer storage media. FIG. 10 illustrates, for example, disk storage 1014. Disk storage 1014 includes, but is not limited to, devices like a magnetic disk drive, solid state disk (SSD), floppy disk drive, tape drive, Jaz drive, Zip drive, LS-70 drive, flash memory card, or memory stick. In addition, disk storage 1014 can include storage media separately or in combination with other storage media including, but not limited to, an optical disk drive such as a compact disk ROM device (CD-ROM), CD recordable drive (CD-R Drive), CD rewritable drive (CD-RW Drive), or a digital versatile disk ROM drive (DVD-ROM). To facilitate connection of the disk storage devices 1014 to the system bus 1008, a removable or non-removable interface is typically used, such as interface 1016. It is to be appreciated that FIG. 10 describes software that acts as an intermediary between users and the basic computer resources described in the suitable operating environment 1000. Such software includes an operating system 1018. Operating system 1018, which can be stored on disk storage 1014, acts to control and allocate resources of the computer system 1002. Applications 1020 take advantage of the management of resources by operating system 1018 through program modules 1024 and program data 1026, such as the boot/shutdown transaction table and the like, stored either in system memory 1006 or on disk storage 1014. It is to be appreciated that the claimed subject matter can be implemented with various operating systems or combinations of operating systems. A user enters commands or information into the computer 1002 through input device(s) 1028. Input devices 1028 include, but are not limited to, a pointing device such as a mouse, trackball, stylus, touch pad, keyboard, microphone, joystick, game pad, satellite dish, scanner, TV tuner card, digital camera, digital video camera, web camera, and the like. These and other input devices connect to the processing unit 1004 through the system bus 1008 via interface port(s) 1030. Interface port(s) 1030 include, for example, a serial port, a parallel port, a game port, and a universal serial bus (USB). Output device(s) 1036 use some of the same types of ports as input device(s). Thus, for example, a USB port may be used to provide input to computer 1002 and to output information from computer 1002 to an output device 1036. Output adapter 1034 is provided to illustrate that there are some output devices 1036, like monitors, speakers, and printers, among other output devices 1036, which require special adapters. The output adapters 1034 include, by way of illustration and not limitation, video and sound cards that provide a means of connection between the output device 1036 and the system bus 1008. It should be noted that other devices and/or systems of devices provide both input and output capabilities, such as remote computer(s) 1038. Computer 1002 can operate in a networked environment using logical connections to one or more remote computers, such as remote computer(s) 1038.
The remote computer(s) 1038 can be a personal computer, a server, a router, a network PC, a workstation, a microprocessor based appliance, a peer device, a smart phone, a tablet, or other network node, and typically includes many of the elements described relative to computer 1002. For purposes of brevity, only a memory storage device 1040 is illustrated with remote computer(s) 1038. Remote computer(s) 1038 is logically connected to computer 1002 through a network interface 1042 and then connected via communication connection(s) 1044. Network interface 1042 encompasses wire and/or wireless communication networks such as local-area networks (LAN), wide-area networks (WAN), and cellular networks. LAN technologies include Fiber Distributed Data Interface (FDDI), Copper Distributed Data Interface (CDDI), Ethernet, Token Ring, and the like. WAN technologies include, but are not limited to, point-to-point links, circuit switching networks like Integrated Services Digital Networks (ISDN) and variations thereon, packet switching networks, and Digital Subscriber Lines (DSL). Communication connection(s) 1044 refers to the hardware/software employed to connect the network interface 1042 to the bus 1008. While communication connection 1044 is shown for illustrative clarity inside computer 1002, it can also be external to computer 1002. The hardware/software necessary for connection to the network interface 1042 includes, for exemplary purposes only, internal and external technologies such as modems (including regular telephone grade modems, cable modems, and DSL modems), ISDN adapters, and wired and wireless Ethernet cards, hubs, and routers. Referring now to FIG. 11, there is illustrated a schematic block diagram of a computing environment 1100 in accordance with this disclosure. The system 1100 includes one or more client(s) 1102 (e.g., laptops, smart phones, PDAs, media players, computers, portable electronic devices, tablets, and the like). The client(s) 1102 can be hardware and/or software (e.g., threads, processes, computing devices). The system 1100 also includes one or more server(s) 1104. The server(s) 1104 can also be hardware or hardware in combination with software (e.g., threads, processes, computing devices). The servers 1104 can house threads to perform transformations by employing aspects of this disclosure, for example. One possible communication between a client 1102 and a server 1104 can be in the form of a data packet transmitted between two or more computer processes, wherein the data packet may include video data. The data packet can include metadata, e.g., associated contextual information. The system 1100 includes a communication framework 1106 (e.g., a global communication network such as the Internet, or mobile network(s)) that can be employed to facilitate communications between the client(s) 1102 and the server(s) 1104. Communications can be facilitated via a wired (including optical fiber) and/or wireless technology. The client(s) 1102 include or are operatively connected to one or more client data store(s) 1108 that can be employed to store information local to the client(s) 1102 (e.g., associated contextual information). Similarly, the server(s) 1104 include or are operatively connected to one or more server data store(s) 1110 that can be employed to store information local to the servers 1104. In one embodiment, a client 1102 can transfer an encoded file, in accordance with the disclosed subject matter, to server 1104. Server 1104 can store the file, decode the file, or transmit the file to another client 1102.
It is to be appreciated that a client 1102 can also transfer an uncompressed file to a server 1104, and server 1104 can compress the file in accordance with the disclosed subject matter. Likewise, server 1104 can encode video information and transmit the information via communication framework 1106 to one or more clients 1102. The illustrated aspects of the disclosure may also be practiced in distributed computing environments where certain tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules can be located in both local and remote memory storage devices. Moreover, it is to be appreciated that various components described in this description can include electrical circuit(s) that can include components and circuitry elements of suitable value in order to implement the embodiments of the subject innovation(s). Furthermore, it can be appreciated that many of the various components can be implemented on one or more integrated circuit (IC) chips. For example, in one embodiment, a set of components can be implemented in a single IC chip. In other embodiments, one or more of respective components are fabricated or implemented on separate IC chips. What has been described above includes examples of the embodiments of the present invention. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the claimed subject matter, but it is to be appreciated that many further combinations and permutations of the subject innovation are possible. Accordingly, the claimed subject matter is intended to embrace all such alterations, modifications, and variations that fall within the spirit and scope of the appended claims. Moreover, the above description of illustrated embodiments of the subject disclosure, including what is described in the Abstract, is not intended to be exhaustive or to limit the disclosed embodiments to the precise forms disclosed. While specific embodiments and examples are described in this disclosure for illustrative purposes, various modifications are possible that are considered within the scope of such embodiments and examples, as those skilled in the relevant art can recognize. In particular and in regard to the various functions performed by the above described components, devices, circuits, systems, and the like, the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., a functional equivalent), even though not structurally equivalent to the disclosed structure, which performs the function in the herein illustrated exemplary aspects of the claimed subject matter. In this regard, it will also be recognized that the innovation includes a system as well as a computer-readable storage medium having computer-executable instructions for performing the acts and/or events of the various methods of the claimed subject matter. The aforementioned systems/circuits/modules have been described with respect to interaction between several components/blocks. It can be appreciated that such systems/circuits and components/blocks can include those components or specified sub-components, some of the specified components or sub-components, and/or additional components, and according to various permutations and combinations of the foregoing.
Sub-components can also be implemented as components communicatively coupled to other components rather than included within parent components (hierarchical). Additionally, it should be noted that one or more components may be combined into a single component providing aggregate functionality or divided into several separate sub-components, and any one or more middle layers, such as a management layer, may be provided to communicatively couple to such sub-components in order to provide integrated functionality. Any components described in this disclosure may also interact with one or more other components not specifically described in this disclosure but known by those of skill in the art. In addition, while a particular feature of the subject innovation may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the terms "includes," "including," "has," "contains," variants thereof, and other similar words are used in either the detailed description or the claims, these terms are intended to be inclusive in a manner similar to the term "comprising" as an open transition word without precluding any additional or other elements. As used in this application, the terms "component," "module," "system," or the like are generally intended to refer to a computer-related entity, either hardware (e.g., a circuit), a combination of hardware and software, software, or an entity related to an operational machine with one or more specific functionalities. For example, a component may be, but is not limited to being, a process running on a processor (e.g., digital signal processor), a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers. Further, a "device" can come in the form of specially designed hardware; generalized hardware made specialized by the execution of software thereon that enables the hardware to perform a specific function; software stored on a computer readable storage medium; software transmitted on a computer readable transmission medium; or a combination thereof. Moreover, the words "example" or "exemplary" are used in this disclosure to mean serving as an example, instance, or illustration. Any aspect or design described in this disclosure as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words "example" or "exemplary" is intended to present concepts in a concrete fashion. As used in this application, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or". That is, unless specified otherwise, or clear from context, "X employs A or B" is intended to mean any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then "X employs A or B" is satisfied under any of the foregoing instances. In addition, the articles "a" and "an" as used in this application and the appended claims should generally be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form.
Computing devices typically include a variety of media, which can include computer-readable storage media and/or communications media, in which these two terms are used in this description differently from one another as follows. Computer-readable storage media can be any available storage media that can be accessed by the computer, is typically of a non-transitory nature, and can include both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable storage media can be implemented in connection with any method or technology for storage of information such as computer-readable instructions, program modules, structured data, or unstructured data. Computer-readable storage media can include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other tangible and/or non-transitory media which can be used to store desired information. Computer-readable storage media can be accessed by one or more local or remote computing devices, e.g., via access requests, queries, or other data retrieval protocols, for a variety of operations with respect to the information stored by the medium. On the other hand, communications media typically embody computer-readable instructions, data structures, program modules, or other structured or unstructured data in a data signal that can be transitory, such as a modulated data signal, e.g., a carrier wave or other transport mechanism, and include any information delivery or transport media. The term "modulated data signal" or signals refers to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in one or more signals. By way of example, and not limitation, communication media include wired media, such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. In view of the exemplary systems described above, methodologies that may be implemented in accordance with the described subject matter will be better appreciated with reference to the flowcharts of the various figures. For simplicity of explanation, the methodologies are depicted and described as a series of acts. However, acts in accordance with this disclosure can occur in various orders and/or concurrently, and with other acts not presented and described in this disclosure. Furthermore, not all illustrated acts may be required to implement the methodologies in accordance with certain aspects of this disclosure. In addition, those skilled in the art will understand and appreciate that the methodologies could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Additionally, it should be appreciated that the methodologies disclosed in this disclosure are capable of being stored on an article of manufacture to facilitate transporting and transferring such methodologies to computing devices. The term article of manufacture, as used in this disclosure, is intended to encompass a computer program accessible from any computer-readable device or storage media.
11861133 | While various embodiments are amenable to various modifications and alternative forms, specifics thereof have been shown by way of example in the drawings and will be described in detail. It should be understood, however, that the intention is not to limit the disclosure to the particular examples and embodiments described. On the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the disclosure. DETAILED DESCRIPTION Aspects of the present disclosure are believed to be applicable to a variety of different types of apparatuses, systems, and methods involving computing servers. While not necessarily so limited, various aspects may be appreciated through a discussion of examples within this context. Various example implementations are directed to circuits, apparatuses, and methods for monitoring and/or analysis of computing servers. The disclosed embodiments are applicable to various types of computing servers including physical and/or virtual servers, which may provide remote services including, for example, file servers, email servers, web hosting, domain name resolution and routing, virtual meeting services (e.g., VoIP), billing, and/or remote computing services (e.g., virtual desktops, virtual private servers, and/or virtual enterprise services). While the disclosed embodiments are not necessarily limited to such applications, various aspects of the present disclosure may be appreciated through a discussion of various examples in this context. For example, in some embodiments, an apparatus includes a processing circuit configured to retrieve operating status data that describes the operational state of each of a plurality of computing servers. For a plurality of time periods, the processing circuit determines an operating state of each of the plurality of servers. For example, in some implementations, the determined operating state may be one of three states including an “up” state, a “warning” state, and a “down” state. The processing circuit may use various criteria to determine whether a server is operating in the various operating states. In some implementations, a server is determined to be in the up state when the server is fully operational and in the down state when the server is non-responsive. The server may be determined to be in the warning state when the server is responsive, but exhibits one or more indications of incorrect operation or excessive load. As one example, a server may be identified by the processing circuit as operating in the warning state if a traffic load on the server exceeds a threshold value. As another example, a server may be identified as operating in the warning state if latency of the server exceeds a threshold value. As yet another example, a server may be identified as operating in the warning state if processing time or memory usage of the server exceeds a threshold value. Other criteria may additionally or alternatively be used. Some implementations may use more or fewer operating states to categorize the operating states of a server. For example, multiple warning states may be used in addition to the up and down states. The different warning states may indicate the severity, number, or frequency of the indication(s) that the server is not operating correctly. For ease of explanation, the examples are primarily described with reference to the three operating states indicated above (i.e., up, warning, and down). 
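As an illustration of the state-determination criteria described above, the following minimal sketch classifies a server's operating state from sampled metrics. It is offered only as an example: the metric names ("traffic_load", "latency_ms") and the threshold values are assumptions made for illustration, not values taken from the disclosure.

```python
# Illustrative only: classify a server's operating state ("up", "warning",
# or "down") from sampled metrics, per the criteria discussed above.
# Metric names and thresholds are hypothetical.

def classify_state(metrics, load_limit=0.85, latency_limit_ms=500):
    if metrics is None:                       # server was non-responsive
        return "down"
    if metrics.get("traffic_load", 0.0) > load_limit:
        return "warning"                      # responsive but overloaded
    if metrics.get("latency_ms", 0.0) > latency_limit_ms:
        return "warning"                      # responsive but slow
    return "up"                               # fully operational

# Example: classify_state({"traffic_load": 0.95}) returns "warning".
```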
In some embodiments, an apparatus includes a processing circuit configured to provide a graphical user interface (GUI) for display and time-based assessment of the operating state of multiple servers. In some implementations, the GUI is configured to display a respective timeline for each of a plurality of servers. Each timeline has a graphical time block for each of the plurality of time periods. Each time block has a graphical indication that describes the operating state of the server during the corresponding time period. In some implementations, the time blocks are color-coded, with a respective color for each of the possible states of operation. In some implementations, the time blocks are texture-coded, with a respective texture for each of the three states of operation. Other types of markers, such as icons, may also be used to provide a visual indication of the operating state at each time block in the timelines. In some implementations, the timelines of the different servers are vertically aligned, such that time blocks associated with the different servers and that correspond to the same time period are vertically aligned. Such alignment may allow timelines to be visually compared/analyzed to distinguish between network-side events affecting multiple servers, server-side events affecting only a single server, or user-side events affecting a single user. In some embodiments, the GUI includes a mechanism that allows a user to modify the order and/or vertical placement of the timelines in the list. For instance, the GUI may be configured to allow a user to reorder the timelines using a drag-and-drop control. Reordering of timelines may be helpful to allow a user to more closely compare the timelines of two or more servers. In some embodiments, the GUI is configured to mark ones of the timelines with a graphical marker in response to the timeline being selected by a user. For instance, in some implementations, the graphical marker may be an image of a push pin. The GUI may display the marked/pinned timelines in a separate area of the display. For instance, marked/pinned timelines may be displayed in a first display area and other ones of the timelines in a second display area. In some embodiments, the GUI is configured to allow a user to save various arrangements/orders of the server timelines. This may allow a user to quickly compare the same servers at a later time without having to repeat the previous rearrangement/ordering operations performed by the user. In some implementations, the GUI includes a first button to save an arrangement/order, a second button to restore the original arrangement/order, and/or a third button to load a previously saved arrangement/order. The processing circuit that provides the GUI may also be configured to monitor the servers and/or determine the operating states of servers. In some implementations, the operating states of the servers may be determined by a separate processing circuit, such as a network monitor. In some implementations, the servers may be configured to determine their operating state and provide the determined operating state data to the GUI. As indicated above, various criteria may be used to determine the state in which a server is operating. In some embodiments, a single criterion may be used to determine the operating states of the servers. 
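To make the timeline display concrete, the sketch below models each server's timeline as one state per time period, maps states to the graphical indications described above, and reduces fine-grained periods into coarser time blocks by keeping the worst state in each block (one plausible way to re-bucket the data when a longer timescale is selected). The class, function, and pattern names are illustrative assumptions.

```python
# Illustrative model of the per-server timelines described above.
from dataclasses import dataclass
from typing import List

SEVERITY = {"up": 0, "warning": 1, "down": 2}   # ordering for "worst state"
PATTERN = {"up": "solid white", "warning": "lined", "down": "solid black"}

@dataclass
class Timeline:
    server: str
    states: List[str]    # one determined state per time period, oldest first

def render_row(timeline: Timeline) -> str:
    # One graphical indication (here, a pattern name) per time block.
    return timeline.server + " | " + " | ".join(PATTERN[s] for s in timeline.states)

def rebucket(states: List[str], factor: int) -> List[str]:
    # Merge `factor` fine-grained periods into one coarser time block,
    # keeping the most severe state observed within each block.
    return [max(states[i:i + factor], key=SEVERITY.get)
            for i in range(0, len(states), factor)]
```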
In some other embodiments, the operating state of each of the servers may be determined according to a respective set of criteria indicated in an account settings file associated with a customer account. The sets of criteria may be configured, based on user requirements, to include a number of different conditions to detect various operating states. In some embodiments, the apparatus may provide a web-based GUI that may be used to adjust the criteria indicated in the settings file. Turning now to the figures, FIG. 1 shows a telecommunication network including a plurality of computing servers (140, 142, 146), each configured to provide remote services to various end-point devices including, for example, mobile devices 120, plain-old telephones (POTS) 122, computer(s) 126, and IP phones 128. The computing servers may provide a variety of different remote services. In this example, the network includes a VoIP server 140, a virtual desktop server 142, and an application server 146. The application server may be a virtual private server or an enterprise service, for example. Data transactions related to the remote services are communicated between the computing servers and the remote users over various data networks including, for example, the Internet 112, public service telephone networks 102, wireless networks 104 (e.g., GSM, CDMA, or LTE), and private data networks, including, but not limited to, LAN 130, WiFi network 124, and/or Private Branch Exchange servers (not shown). In this example, the computing servers (140, 142, and 146) are monitored by a processing circuit 150, which is communicatively coupled thereto. The communicative coupling of the computing servers (140, 142, and 146) and the processing circuit 150 may include either a direct connection or an indirect connection having, e.g., multiple connections, relay nodes, and/or networks in a communication path between the computing servers and the processing circuit 150. The processing circuit 150 is configured to determine/retrieve an operating state of each of the computing servers for multiple time periods. The determination/retrieval of the operating state(s) may be performed, for example, by an analysis circuit 154 included in the processing circuit 150. The processing circuit 150 also includes an interface circuit 152 configured to provide a GUI that is configured to display a timeline for each of the servers. The timelines include graphical indications of the operating states of the servers in the multiple time periods and are displayed simultaneously. The GUI may also provide a mechanism for a user to rearrange or reorder the displayed timelines for visual comparison/analysis. In some embodiments, the analysis circuit 154 and/or the interface circuit 152 may be implemented as processes executed by one or more processors. FIG. 2A shows a first graphical user interface (GUI), in accordance with one or more implementations. The GUI includes a main/primary container 201 for the simultaneous display of operating state timelines of multiple servers. Each timeline includes a set of time blocks in a respective row. As explained above, each time block includes a graphical indication of the operating status of the server for the corresponding time period. In this example, the time blocks indicate one of three possible operating states: up, warning, and down. Time block 202 shows a time block having a color/pattern (lined) indicative of the warning state. Time block 203 shows a time block having a color/pattern (solid black) indicative of the down state. 
Time block 204 shows a time block having a color/pattern (solid white) indicative of the up state. The color/pattern coding in this example is provided for explanation purposes only. The examples and embodiments may be adapted to use other color/pattern codings to indicate the various possible operation states of the servers. In this example, each timeline is preceded by a title cell 209 indicating the name of the server whose status is displayed in the timeline. In this example, the GUI includes a time-scale configurator dropdown 211 that can be used to select different timescales (e.g., 1 hour, 24 hours, 1 week, 1 month) to display. When a different timescale is selected, the time-series data is passed through a map-reduce algorithm, and the reference header time labels change to resize the time blocks corresponding to the selected time period for display. The GUI includes a clickable/draggable handle 208 for each row that may be used by a user to reorder the displayed timelines via a drag-and-drop mechanism. Sorting rows makes it possible for a network operator to see correlation visually across a very large set of aligned time-series data. The GUI includes a set of buttons for saving, restoring, and resetting the arrangement/view of the timelines in the displayed list of timelines. In some implementations, the GUI includes a clickable button 205, which invokes a drop-down dialog allowing the user to name the current arrangement/view and save it for later use. The GUI also includes a second clickable button 206, which invokes a drop-down for selecting and opening a saved arrangement/view. In this example, the GUI also includes a clickable button 207 that resets the view back to the original order and scale. In some implementations, the GUI is also configured to mark title cells 209 that are selected by a user. For instance, when the user hovers over a title cell, the color changes slightly and a clickable push pin icon 210 appears in the cell. When the push pin icon 210 is clicked, the corresponding row is moved to the top of the list of timelines. FIG. 2B shows a second GUI, in accordance with one or more implementations. The second GUI is similar to the GUI shown in FIG. 2A but includes a second display area 230 for display of timelines selected by clicking the push pin icon 210 discussed with reference to FIG. 2A. The display area 230 always appears at the top of the table and expands as additional timelines are selected. In some implementations, as the user scrolls down the page, the display area 230 remains at the top of the browser window so that the user can quickly compare the pinned timelines in area 230 to timelines in the original set. When a timeline is selected/pinned, it may be hidden from the original set displayed in a second area below area 230. A timeline may be added back to the original set when it is unpinned by the user again clicking on the push pin icon 210. The user can sort the items inside the pinned set. FIG. 3 shows a block diagram of an example system (Z_system) configured to provide respective groups of virtual services for a number of accounts and to provide customizable billing for each account. FIG. 3 shows the various subsystems included in the example system. The system includes multiple subsystems configured to provide respective virtual servers/services for various user accounts. For example, the system includes a first subsystem, virtual desktop interface (VDI) 315, which is configured to provide virtual desktops for accounts subscribing to the service. 
Virtual desktops allow end-users to remotely connect to and run applications on a virtual desktop computer. The VDI subsystem provides a non-technical interface for authorized end-users of an account to provision virtual resources. In some implementations, the VDI subsystem 315 uses a subsystem VBROKER to issue commands to a VMWARE View Horizon Environment. VBROKER is a full VMWARE View software development kit (SDK) that provides the ability to run custom Windows Powershell scripts against a VMWARE View Horizon server in order to create, manage, and synchronize information about desktop pool resources of the system. VBROKER may also be applicable to other subsystems shown in FIG. 3, as well as various other applications utilizing VMWARE. The system also includes a second subsystem, Virtual Private Servers (VPS) 320, which can be configured to virtualize various servers for an account. In some implementations, the VPS subsystem 320 automates deployment of resources allocated for an account. For instance, the VPS may provide various virtual servers/services including, but not limited to, file servers, email servers, web hosting, and virtual meeting services (e.g., VoIP). In some scenarios, the VPS may be accessible by virtual desktops (via VDI), by external computers (via the internet), or both. In some implementations, the virtual servers/services provided by the VPS system 320 may be configured using an SDK such as XEN. The SDK may be used, for example, to customize and/or maintain virtual services provided by the VPS system for an account. The system shown in FIG. 3 also includes a third subsystem (Enterprise Cloud) 330 that is configured to provide a virtual data center for an account. The Enterprise Cloud subsystem 330 allows users to dynamically subscribe to and provision resources (e.g., virtual servers/services). Users may create a virtual data center having a pool of resources, which may include a number of VPS-like servers/services. For each account, a respective virtual data center may be configured to include a number of VPS and/or virtual desktops connected in any number of different configurations. For instance, a virtual data center may include a plurality of redundant virtual file servers and a virtual load balancer that routes traffic to balance the traffic load of the virtual file servers. The virtual data center may include a firewall between a network and the virtual data center. Additionally or alternatively, the virtual data center may include firewalls to protect individual virtual servers/desktops in the virtual data center. In some implementations, the virtual data center for an account includes a group of virtual desktops and/or virtual servers indicated in respective settings files for the account. The virtual desktops and/or virtual servers in the virtual data center may be provided by the VDI and VPS subsystems 315 and 320 via a shared user interface. The settings file for each account may include server settings for each virtual desktop and/or virtual server included in the respective virtual data center. The server settings may include a pointer to a VMWARE image and also specify computing resources to dedicate to execution of the corresponding virtual desktops and/or virtual servers. The virtual servers may provide various types of services including, for example, file servers, email servers, web hosting, virtual meeting services (e.g., VoIP), billing, and/or remote computing services, routing, load balancing, and/or switchboard services (e.g., Private Branch Exchange). 
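The per-account settings file described above might, purely as an illustration, take a shape like the following, with a pointer to a VM image and the dedicated computing resources for each virtual server; every field name here is an assumption made for this sketch rather than a format defined by the disclosure.

```python
# Hypothetical per-account settings for a virtual data center.
account_settings = {
    "account_id": "acct-0001",
    "servers": [
        {
            "name": "web-1",
            "image": "vmware://images/web-server",   # pointer to a VM image
            "resources": {"cpu_cores": 4, "memory_gb": 16, "storage_gb": 200},
        },
        {
            "name": "db-1",
            "image": "vmware://images/db-server",
            "resources": {"cpu_cores": 8, "memory_gb": 64, "storage_gb": 1000},
        },
    ],
    # Data center configuration settings (the interconnections discussed
    # next) might minimally be a list of virtual links between servers.
    "connections": [["web-1", "db-1"]],
}
```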
The virtual desktops and/or virtual servers are interconnected in the virtual data center according to data center configuration settings included in the respective settings files for the account. During operation, the computing services emulate the virtual data center by emulating the virtual desktops and/or virtual servers indicated in the server settings and also emulating the virtual connections specified in the data center configuration settings. In some implementations, emulation of the virtual data center includes execution of a resource management process configured to assign computing resources allocated for the data center for emulation of the virtual desktops, virtual servers, and connections of the data center. In some implementations, the virtual data center provides a perimeter firewall between an internal network of the virtual data center and an external network. The perimeter firewall may provide network protection for the virtual data center with stateful packet inspection, access-control lists, NAT, and VPN. In some implementations, the virtual data center may also include individual firewalls isolating one or more virtual servers/desktops from other virtual servers/desktops in the virtual data center. In some implementations, a web-based graphical user interface (GUI) is provided for configuration of access rules enforced by the firewall(s), which may include, for example, whitelists or blacklists of services to pass/block and/or users or IP addresses to allow access. The GUI may also be used to configure internet access rules for public-facing applications, or to create one or more VPN tunnels connecting one or more end-user networks to the virtual data center. In some implementations, the virtual data centers run on a VMWARE platform leveraging a fault-tolerant storage area network (SAN). In some implementations, the Enterprise Cloud subsystem 330 uses VBROKER to issue commands to VMWARE hosting the virtual servers/desktops. VBROKER provides an application program interface (API) to communicate with VMWARE. For example, VBROKER may translate VPS API calls into commands/scripts against VBLOCK. VBROKER may be used as middleware to issue commands to various platforms (e.g., VMWARE or OPENSTACK). VMWARE vSphere availability features may be employed to keep the virtual network, and/or the virtual servers and virtual desktops therein, running in the event of a server failure. Features such as vMotion and storage vMotion may also be used to protect against interruption of service due to hardware failure. In some implementations, the servers providing the virtual data center may include fault-tolerant hard-disk storage. For example, each disk may have two serial attached small computer system interface (SAS) connectors attaching it to diverse storage processors inside the storage area network. The dual SAS connections allow the storage area network to see the disks on separate data paths and, in the event of a failure, reroute the storage operations through an available path with no noticeable performance impact. In addition, the potential for data loss or corruption due to a bus reset is completely eliminated. The disks themselves reside in storage shelves with redundant power supplies and cabling attaching the disks to the multiple storage processors. As redundancy is built into the system, redundant virtual servers are not needed to achieve system fault tolerance in the virtual data center. 
In some implementations, each account may be allocated a dedicated amount of computing resources of a plurality of computing servers (e.g., in a cloud). For instance, each account may be provided with a certain number of CPU cores, memory, storage, and/or bandwidth, which are dedicated to the account. The pre-allocation of dedicated resources improves reliability in high-traffic conditions. In some implementations, the plurality of computing servers is also configured to provide a GUI for adjusting configuration settings of the data center. For example, the GUI may provide an interface for an authorized user of the account to configure virtual desktops, virtual servers, connections, and/or settings of the virtual data center. For instance, the GUI may provide an interface to assign a subset of available computing resources (e.g., processing cores/time, memory, or storage) for the account to particular virtual desktops and/or virtual servers in the data center. The GUI may also provide a mechanism to import and/or replicate virtual machines in the data center. In some implementations, the GUI may provide the ability to save a backup or snapshot of the layout and configuration of the virtual data center. The system shown in FIG. 3 also includes a domain name server (DNS) subsystem 340. The DNS subsystem is configured to dynamically map each domain name associated with an account to an IP address of a select virtual server or service provided for the account. For each account, the mapping of domain names is performed according to a respective set of mapping criteria indicated in a settings file of the account. As indicated above, various mapping criteria may be utilized by the various accounts to map the domain names to the virtual servers/services. For example, the mapping criteria may map domain names as a function of the operating status of the virtual servers/services, processing load of the virtual servers (e.g., load balancing), network traffic conditions (e.g., latency and bandwidth), quality-of-service requirements, geographical location of an end user submitting a DNS query, permissions of the end user, date or time of the DNS query, type of virtual server associated with the domain name, and/or number of servers associated with the domain name. In some implementations, the system provides a web-based GUI configured and arranged to allow one or more authorized users for the account to adjust the mapping criteria indicated in the settings file. In some implementations, the DNS subsystem 340 performs the mapping of the domain name associated with an account according to a respective set of mapping criteria indicated in a settings file of the account. For each account, the DNS subsystem 340 may map domain names to IP addresses of the virtual servers according to various mapping criteria. For example, in some implementations, the mapping criteria may cause the DNS subsystem 340 to map domain names based on the operating status of the virtual servers. For instance, the mapping criteria may map a domain name to a first virtual server while the first virtual server is operating correctly. In response to the first server going down, the mapping criteria may map the domain name to a backup virtual server. As another example, the mapping criteria may cause the DNS subsystem 340 to map domain names based on processing load of the virtual servers. For instance, domain names may be mapped to balance processing load between a plurality of virtual servers. 
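As a sketch of the status-based and load-based mapping criteria just described (further criteria, such as geographical location, are discussed next), a resolver might select among an account's virtual servers as follows; the data shapes and names are illustrative assumptions, not a format defined by the disclosure.

```python
# Illustrative only: map a domain name to the IP address of one of an
# account's virtual servers using criteria from a settings file.

def resolve(domain, settings, statuses, loads):
    candidates = settings["domains"][domain]          # servers for this domain
    healthy = [s for s in candidates if statuses.get(s) != "down"]
    if not healthy:
        return settings.get("fallback_ip")            # e.g., a backup server
    # Load-balancing criterion: choose the least-loaded healthy server.
    chosen = min(healthy, key=lambda s: loads.get(s, 0.0))
    return settings["addresses"][chosen]

# Example shapes:
# settings = {"domains": {"shop.example": ["web-1", "web-2"]},
#             "addresses": {"web-1": "10.0.0.11", "web-2": "10.0.0.12"}}
```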
In some implementations, the mapping criteria may cause the DNS subsystem 340 to map domain names based on the geographical location of the user submitting a domain name query to the DNS subsystem 340. Various implementations may additionally or alternatively use other criteria for mapping of domain names including, but not limited to, date or time of the DNS query, type of virtual server associated with the domain name, number of servers associated with the domain name, and/or permissions of the user submitting the DNS query. In various implementations, a respective set of mapping criteria may be used for each account. This allows the domain name mapping criteria to be customized for the particular needs and services of each account. In some implementations, the DNS subsystem 340 provides a representational state transfer (REST) API for configuration of DNS mapping for an account. In some implementations, domain templates having various preconfigured mapping criteria may be provided for easy configuration of the DNS subsystem 340 for an account. In some implementations, the DNS subsystem 340 auto-configures mapping based on virtual services provided for the account by the other subsystems (e.g., 315, 320, and/or 330). In some implementations, the DNS subsystem 340 provides a web-based GUI configured and arranged to allow one or more authorized users of the account to adjust the mapping criteria indicated in the settings file. An authorized user may specify a single set of mapping criteria for all virtual servers associated with the account or may specify a different set of mapping criteria for different types of virtual servers or for different virtual servers of the same type. Further, an authorized user may specify different sets of mapping criteria for different departments or users associated with an account. In this example, the system also includes a fourth subsystem (Watchdog) 350 configured to monitor the status of the virtual servers/services provided for the various accounts. The Watchdog subsystem 350 is configured to determine the operating status of the virtual servers/services provided for each account. For instance, the Watchdog subsystem 350 may be configured to monitor services provided by the other subsystems (e.g., 315, 320, and/or 330) for failover. The Watchdog subsystem 350 may provide domain monitoring across multiple services. The Watchdog subsystem 350 may provide real-time event tracking for the services for each account. In some implementations, the Watchdog subsystem 350 provides a GUI for display and analysis of the operating status of virtual servers/services provided for an account. In some implementations, the GUI is configured to display a respective timeline for each of a plurality of servers. Each timeline may have a graphical time block for each of the plurality of time periods. Each time block has a graphical indication that describes the operating state of the server during the corresponding time period. The timelines may be rearranged by a user for visual comparison and analysis of the operating state of the virtual servers/services. Such visual analysis may be useful, for instance, for distinguishing between network events affecting multiple servers and server events affecting an individual server. 
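The following is a minimal sketch of the kind of periodic status collection a monitor such as the Watchdog subsystem 350 might perform, appending one determined state per time period to each server's timeline; probe() (which returns sampled metrics, or None when a server is unresponsive) and classify() are hypothetical helpers, not interfaces defined by the disclosure.

```python
# Illustrative monitoring loop: one state per server per time period.
import time

def watch(servers, timelines, probe, classify, period_s=60):
    while True:                                  # run for the life of the monitor
        for server in servers:
            state = classify(probe(server))      # "up", "warning", or "down"
            timelines.setdefault(server, []).append(state)
        time.sleep(period_s)                     # wait for the next time period
```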
Consistent with the above-described examples, in some implementations, the Watchdog subsystem 350 may be configured to provide an alert to one or more authorized users of the account if the operating status of the virtual servers/services satisfies alert criteria indicated in an alert policy for the account. The Watchdog subsystem 350 may provide alerts using various messaging mechanisms including, for example, SMS text messages, automated phone calls, emails, and/or other messaging services (e.g., Facebook, Myspace, Twitter, and/or instant messengers). In some implementations, multiple notifications are sent to multiple recipients and/or use multiple types of messages. In some implementations, the GUI for adjusting mapping criteria may also be used to adjust the trigger conditions and/or alert message options. In some implementations, the DNS subsystem 340 is configured to map domain names to the virtual servers/services based on the operating statuses of the virtual servers/services, as determined by the Watchdog subsystem 350. For example, the DNS subsystem may be configured to remap a domain name from a first virtual server to a backup virtual server in response to the first virtual server becoming unresponsive. The system shown in FIG. 3 also includes a subsystem (Z Common) 360 configured to provide billing for various services provided for an account. The subsystem handles rate plans, usage statistics, and billing for various services of the system. The Z Common subsystem 360 may bill services using flat rates for specified time periods (e.g., a monthly rate), or using usage rates indicating a specified billing rate for a specified amount of use (e.g., time, amount of data, and/or number of users). The Z Common subsystem 360 is configurable as a plug-and-play component to provide billing services for various independent systems. In some implementations, a GUI is provided for authorized users to manage services, billing options, payment options, account-specific alerts, and/or various administrative options. In some implementations, the GUI provides an interface for a user to configure subscription and billing. The system includes a subsystem (Z Control) 310, which configures settings of one or more of the subsystems for respective accounts of the system. In some implementations, the Z Control subsystem 310 stores data indicating the services, provided by the various subsystems (e.g., 315, 320, 330, 340, and/or 350), that are subscribed to for each account. The Z Control subsystem 310 may further store user-configurable settings for the subscribed-to services for each respective account. For example, the settings for an account may indicate settings for one or more virtual servers provided for the account by the VPS subsystem 320. In some implementations, the Z Control subsystem 310 may provide a GUI for authorized users of an account to manage virtual services subscriptions and/or various administrative options. Various blocks, modules, or other circuits may be implemented to carry out one or more of the operations and activities described herein and/or shown in the figures. In these contexts, a “block” (also sometimes “logic circuitry” or “module”) is a circuit that carries out one or more of these or related operations/activities (e.g., a computing server, a network monitor, and/or a GUI). 
For example, in certain of the above-discussed embodiments, one or more modules are discrete logic circuits or programmable logic circuits configured and arranged for implementing these operations/activities, as in the blocks shown in FIG. 1. In certain embodiments, such a programmable circuit is one or more computer circuits programmed to execute a set (or sets) of instructions (and/or configuration data). The instructions (and/or configuration data) can be in the form of firmware or software stored in and accessible from a memory (circuit). As an example, first and second modules include a combination of a CPU hardware-based circuit and a set of instructions in the form of firmware, where the first module includes a first CPU hardware circuit with one set of instructions and the second module includes a second CPU hardware circuit with another set of instructions. Certain embodiments are directed to a computer program product (e.g., nonvolatile memory device), which includes a machine or computer-readable medium having stored thereon instructions which may be executed by a computer (or other electronic device) to perform these operations/activities. The various embodiments described above are provided by way of illustration only and should not be construed to limit the disclosure. Based upon the above discussion and illustrations, those skilled in the art will readily recognize that various modifications and changes may be made without strictly following the exemplary embodiments and applications illustrated and described herein. For instance, although implementations may in some cases be described in individual figures, it will be appreciated that features from one figure can be combined with features from another figure even though the combination is not explicitly shown or explicitly described as a combination. The disclosure may also be implemented using a variety of approaches such as those involving a number of different circuits, operating systems, and/or software programs/packages. Such modifications and changes do not depart from the true spirit and scope of the present disclosure, including that set forth in the following claims. | 33,158 |
11861134 | Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of embodiments of the present invention. The apparatus and method components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein. DETAILED DESCRIPTION Machine vision system owners/operators periodically have a need to visually evaluate images captured by the system's imaging equipment. In doing so, there arises a need to zoom in on certain elements (like barcodes) within the captured images for closer evaluation. This can be difficult to accomplish in an automatic manner, especially if multiple elements of the same kind are present on the screen. Approaches described herein address these difficulties and provide a solution which helps automate and simplify the zooming process. FIG. 1 illustrates an example imaging system 100 configured to enhance image content captured by a machine vision camera, in accordance with various embodiments disclosed herein. In the example embodiment of FIG. 1, the imaging system 100 includes a user computing device 102 and an imaging device 104 communicatively coupled to the user computing device 102 via a network 106. Generally speaking, the user computing device 102 and the imaging device 104 may be capable of executing instructions to, for example, implement operations of the example methods described herein, as may be represented by the flowcharts of the drawings that accompany this description. The user computing device 102 is generally configured to enable a user/operator to create a machine vision job for execution on the imaging device 104. When created, the user/operator may then transmit/upload the machine vision job to the imaging device 104 via the network 106, where the machine vision job is then interpreted and executed. The user computing device 102 may comprise one or more operator workstations, and may include one or more processors 108, one or more memories 110, a networking interface 112, an input/output (I/O) interface 114, a smart imaging application 116, and an image enhancement application 128. It is to be understood that a “machine vision job” as referenced herein may be or include any suitable imaging job including any suitable executable tasks, such as machine vision tasks, barcode decoding tasks, and/or any other tasks or combinations thereof. The imaging device 104 is connected to the user computing device 102 via the network 106, and is configured to interpret and execute machine vision jobs received from the user computing device 102. Generally, the imaging device 104 may obtain a job file containing one or more job scripts from the user computing device 102 across the network 106 that may define the machine vision job and may configure the imaging device 104 to capture and/or analyze images in accordance with the machine vision job. For example, the imaging device 104 may include flash memory used for determining, storing, or otherwise processing imaging data/datasets and/or post-imaging data. 
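To give a concrete sense of the job file referenced above (which, as noted below, may be a JSON representation of one or more job scripts), a hypothetical job file might look like the following; every field name here is an illustrative assumption rather than a format defined by the disclosure.

```python
# Hypothetical JSON job file: device configuration settings plus tools,
# each with a region of interest (ROI).
import json

job_file = json.dumps({
    "job_name": "inspect-and-decode",
    "device_config": {"aperture": "large", "exposure_ms": 8},
    "tools": [
        {"type": "barcode_decode", "roi": {"x": 120, "y": 80, "w": 300, "h": 160}},
        {"type": "presence_check", "roi": {"x": 500, "y": 60, "w": 200, "h": 200}},
    ],
})
```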
The imaging device 104 may then receive, recognize, and/or otherwise interpret a trigger that causes the imaging device 104 to capture an image of the target object in accordance with the configuration established via the one or more job scripts. Once captured and/or analyzed, the imaging device 104 may transmit the images and any associated data across the network 106 to the user computing device 102 for further analysis and/or storage. In various embodiments, the imaging device 104 may be a “smart” camera and/or may otherwise be configured to automatically perform sufficient functionality of the imaging device 104 in order to obtain, interpret, and execute job scripts that define machine vision jobs, such as any one or more job scripts contained in one or more job files as obtained, for example, from the user computing device 102. Broadly, the job file may be a JSON representation/data format of the one or more job scripts transferrable from the user computing device 102 to the imaging device 104. The job file may further be loadable/readable by a C++ runtime engine, or other suitable runtime engine, executing on the imaging device 104. Moreover, the imaging device 104 may run a server (not shown) configured to listen for and receive job files across the network 106 from the user computing device 102. Additionally, or alternatively, the server configured to listen for and receive job files may be implemented as one or more cloud-based servers, such as a cloud-based computing platform. For example, the server may be any one or more cloud-based platform(s) such as MICROSOFT AZURE, AMAZON AWS, or the like. In any event, the imaging device 104 may include one or more processors 118, one or more memories 120, a networking interface 122, an I/O interface 124, and an imaging assembly 126. The imaging assembly 126 may include a digital camera and/or digital video camera for capturing or taking digital images and/or frames. Each digital image may comprise pixel data that may be analyzed by one or more tools each configured to perform an image analysis task. The digital camera and/or digital video camera of, e.g., the imaging assembly 126 may be configured, as disclosed herein, to take, capture, or otherwise generate digital images and, at least in some embodiments, may store such images in a memory (e.g., one or more memories 110, 120) of a respective device (e.g., user computing device 102, imaging device 104). For example, the imaging assembly 126 may include a photo-realistic camera (not shown) for capturing, sensing, or scanning 2D image data. The photo-realistic camera may be an RGB (red, green, blue) based camera for capturing 2D images having RGB-based pixel data. In various embodiments, the imaging assembly may additionally include a three-dimensional (3D) camera (not shown) for capturing, sensing, or scanning 3D image data. The 3D camera may include an Infra-Red (IR) projector and a related IR camera for capturing, sensing, or scanning 3D image data/datasets. In some embodiments, the photo-realistic camera of the imaging assembly 126 may capture 2D images, and related 2D image data, at the same or similar point in time as the 3D camera of the imaging assembly 126 such that the imaging device 104 can have both sets of 3D image data and 2D image data available for a particular surface, object, area, or scene at the same or similar instance in time. 
In various embodiments, the imaging assembly 126 may include the 3D camera and the photo-realistic camera as a single imaging apparatus configured to capture 3D depth image data simultaneously with 2D image data. Consequently, the captured 2D images and the corresponding 2D image data may be depth-aligned with the 3D images and 3D image data. In embodiments, the imaging assembly 126 may be configured to capture images of surfaces or areas of a predefined search space or target objects within the predefined search space. For example, each tool included in a job script may additionally include a region of interest (ROI) corresponding to a specific region or a target object imaged by the imaging assembly 126. The composite area defined by the ROIs for all tools included in a particular job script may thereby define the predefined search space which the imaging assembly 126 may capture in order to facilitate the execution of the job script. However, the predefined search space may be user-specified to include a field of view (FOV) featuring more or less than the composite area defined by the ROIs of all tools included in the particular job script. It should be noted that the imaging assembly 126 may capture 2D and/or 3D image data/datasets of a variety of areas, such that additional areas in addition to the predefined search spaces are contemplated herein. Moreover, in various embodiments, the imaging assembly 126 may be configured to capture other sets of image data in addition to the 2D/3D image data, such as grayscale image data or amplitude image data, each of which may be depth-aligned with the 2D/3D image data. The imaging device 104 may also process the 2D image data/datasets and/or 3D image datasets for use by other devices (e.g., the user computing device 102, an external server). For example, the one or more processors 118 may process the image data or datasets captured, scanned, or sensed by the imaging assembly 126. The processing of the image data may generate post-imaging data that may include metadata, simplified data, normalized data, result data, status data, or alert data as determined from the original scanned or sensed image data. The image data and/or the post-imaging data may be sent to the user computing device 102 executing the smart imaging application 116 for viewing, manipulation, and/or other interaction. In other embodiments, the image data and/or the post-imaging data may be sent to a server for storage or for further manipulation. As described herein, the user computing device 102, imaging device 104, and/or external server or other centralized processing unit and/or storage may store such data, and may also send the image data and/or the post-imaging data to another application implemented on a user device, such as a mobile device, a tablet, a handheld device, or a desktop device. Each of the one or more memories 110, 120 may include one or more forms of volatile and/or non-volatile, fixed and/or removable memory, such as read-only memory (ROM), erasable programmable read-only memory (EPROM), random access memory (RAM), electrically erasable programmable read-only memory (EEPROM), and/or other hard drives, flash memory, MicroSD cards, and others. 
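Returning to the tool ROIs described above, the composite area that may define the predefined search space can be illustrated as the union bounding region of all ROIs in a job script; this sketch treats each ROI as a hypothetical (x, y, w, h) rectangle in pixel coordinates.

```python
# Illustrative only: the bounding region covering every tool ROI.

def composite_area(rois):
    x0 = min(x for x, y, w, h in rois)
    y0 = min(y for x, y, w, h in rois)
    x1 = max(x + w for x, y, w, h in rois)
    y1 = max(y + h for x, y, w, h in rois)
    return (x0, y0, x1 - x0, y1 - y0)

# Example: two ROIs reduce to one composite search space.
print(composite_area([(120, 80, 300, 160), (500, 60, 200, 200)]))
# -> (120, 60, 580, 200)
```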
In general, a computer program or computer based product, application, or code (e.g., smart imaging application 116, or other computing instructions described herein) may be stored on a computer usable storage medium, or tangible, non-transitory computer-readable medium (e.g., standard random access memory (RAM), an optical disc, a universal serial bus (USB) drive, or the like) having such computer-readable program code or computer instructions embodied therein, wherein the computer-readable program code or computer instructions may be installed on or otherwise adapted to be executed by the one or more processors 108, 118 (e.g., working in connection with the respective operating system in the one or more memories 110, 120) to facilitate, implement, or perform the machine readable instructions, methods, processes, elements or limitations, as illustrated, depicted, or described for the various flowcharts, illustrations, diagrams, figures, and/or other disclosure herein. In this regard, the program code may be implemented in any desired program language, and may be implemented as machine code, assembly code, byte code, interpretable source code or the like (e.g., via Golang, Python, C, C++, C#, Objective-C, Java, Scala, ActionScript, JavaScript, HTML, CSS, XML, etc.). The one or more memories 110, 120 may store an operating system (OS) (e.g., Microsoft Windows, Linux, Unix, etc.) capable of facilitating the functionalities, apps, methods, or other software as discussed herein. The one or more memories 110 may also store the smart imaging application 116 and/or the image enhancement application 128, which may be configured to enable machine vision job construction/execution, as described further herein. Additionally, or alternatively, the smart imaging application 116 and/or the image enhancement application 128 may also be stored in the one or more memories 120 of the imaging device 104, and/or in an external database (not shown), which is accessible or otherwise communicatively coupled to the user computing device 102 via the network 106. The one or more memories 110, 120 may also store machine readable instructions, including any of one or more application(s), one or more software component(s), and/or one or more application programming interfaces (APIs), which may be implemented to facilitate or perform the features, functions, or other disclosure described herein, such as any methods, processes, elements or limitations, as illustrated, depicted, or described for the various flowcharts, illustrations, diagrams, figures, and/or other disclosure herein. For example, at least some of the applications, software components, or APIs may be, include, or otherwise be part of a machine vision based imaging application, such as the smart imaging application 116 and/or the image enhancement application 128, where each may be configured to facilitate their various functionalities discussed herein. It should be appreciated that one or more other applications executed by the one or more processors 108, 118 may be envisioned. 
The one or more processors 108, 118 may be connected to the one or more memories 110, 120 via a computer bus responsible for transmitting electronic data, data packets, or otherwise electronic signals to and from the one or more processors 108, 118 and one or more memories 110, 120 in order to implement or perform the machine readable instructions, methods, processes, elements or limitations, as illustrated, depicted, or described for the various flowcharts, illustrations, diagrams, figures, and/or other disclosure herein. The one or more processors 108, 118 may interface with the one or more memories 110, 120 via the computer bus to execute the operating system (OS). The one or more processors 108, 118 may also interface with the one or more memories 110, 120 via the computer bus to create, read, update, delete, or otherwise access or interact with the data stored in the one or more memories 110, 120 and/or external databases (e.g., a relational database, such as Oracle, DB2, MySQL, or a NoSQL based database, such as MongoDB). The data stored in the one or more memories 110, 120 and/or an external database may include all or part of any of the data or information described herein, including, for example, machine vision job images (e.g., images captured by the imaging device 104 in response to execution of a job script) and/or other suitable information. The networking interfaces 112, 122 may be configured to communicate (e.g., send and receive) data via one or more external/network port(s) to one or more networks or local terminals, such as network 106, described herein. In some embodiments, networking interfaces 112, 122 may include a client-server platform technology such as ASP.NET, Java J2EE, Ruby on Rails, Node.js, or a web service or online API, responsible for receiving and responding to electronic requests. The networking interfaces 112, 122 may implement the client-server platform technology that may interact, via the computer bus, with the one or more memories 110, 120 (including the application(s), component(s), API(s), data, etc. stored therein) to implement or perform the machine readable instructions, methods, processes, elements or limitations, as illustrated, depicted, or described for the various flowcharts, illustrations, diagrams, figures, and/or other disclosure herein. According to some embodiments, the networking interfaces 112, 122 may include, or interact with, one or more transceivers (e.g., WWAN, WLAN, and/or WPAN transceivers) functioning in accordance with IEEE standards, 3GPP standards, or other standards, and that may be used in receipt and transmission of data via external/network ports connected to network 106. In some embodiments, network 106 may comprise a private network or local area network (LAN). Additionally, or alternatively, network 106 may comprise a public network such as the Internet. In some embodiments, the network 106 may comprise routers, wireless switches, or other such wireless connection points communicating to the user computing device 102 (via the networking interface 112) and the imaging device 104 (via networking interface 122) via wireless communications based on any one or more of various wireless standards, including by non-limiting example, IEEE 802.11a/b/c/g (WIFI), the BLUETOOTH standard, or the like. The I/O interfaces 114, 124 may include or implement operator interfaces configured to present information to an administrator or operator and/or receive inputs from the administrator or operator. 
An operator interface may provide a display screen (e.g., via the user computing device 102 and/or imaging device 104) which a user/operator may use to visualize any images, graphics, text, data, features, pixels, and/or other suitable visualizations or information. For example, the user computing device 102 and/or imaging device 104 may comprise, implement, have access to, render, or otherwise expose, at least in part, a graphical user interface (GUI) for displaying images, graphics, text, data, features, pixels, and/or other suitable visualizations or information on the display screen. The I/O interfaces 114, 124 may also include I/O components (e.g., ports, capacitive or resistive touch sensitive input panels, keys, buttons, lights, LEDs, any number of keyboards, mice, USB drives, optical drives, screens, touchscreens, etc.), which may be directly/indirectly accessible via or attached to the user computing device 102 and/or the imaging device 104. According to some embodiments, an administrator or user/operator may access the user computing device 102 and/or imaging device 104 to construct jobs, review images or other information, make changes, input responses and/or selections, and/or perform other functions. As described above herein, in some embodiments, the user computing device 102 may perform the functionalities as discussed herein as part of a “cloud” network or may otherwise communicate with other hardware or software components within the cloud to send, retrieve, or otherwise analyze data or information described herein. FIG. 2A is a perspective view of the imaging device 104 of FIG. 1, in accordance with embodiments described herein. The imaging device 104 includes a housing 202, an imaging aperture 204, a user interface label 206, a dome switch/button 208, one or more light emitting diodes (LEDs) 210, and mounting point(s) 212. As previously mentioned, the imaging device 104 may obtain job files from a user computing device (e.g., user computing device 102) which the imaging device 104 thereafter interprets and executes. The instructions included in the job file may include device configuration settings (also referenced herein as “imaging settings”) operable to adjust the configuration of the imaging device 104 prior to capturing images of a target object. For example, the device configuration settings may include instructions to adjust one or more settings related to the imaging aperture 204. As an example, assume that at least a portion of the intended analysis corresponding to a machine vision job requires the imaging device 104 to maximize the brightness of any captured image. To accommodate this requirement, the job file may include device configuration settings to increase the aperture size of the imaging aperture 204. The imaging device 104 may interpret these instructions (e.g., via one or more processors 118) and accordingly increase the aperture size of the imaging aperture 204. Thus, the imaging device 104 may be configured to automatically adjust its own configuration to optimally conform to a particular machine vision job. Additionally, the imaging device 104 may include or otherwise be adaptable to include, for example but without limitation, one or more bandpass filters, one or more polarizers, one or more DPM diffusers, one or more C-mount lenses, and/or one or more C-mount liquid lenses over or otherwise influencing the received illumination through the imaging aperture 204. 
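As an illustration of applying such device configuration settings prior to capture, a device-side sketch might look like the following; the class and attribute names are assumptions, and a real imaging device would expose vendor-specific interfaces rather than this simplified one.

```python
# Illustrative only: interpret job-file device configuration settings
# (e.g., a larger aperture to maximize brightness) before capturing.

class ImagingDevice:
    def __init__(self):
        self.aperture = "medium"
        self.exposure_ms = 4

    def apply_config(self, device_config: dict) -> None:
        # Adjust only the settings the job file specifies; keep the rest.
        self.aperture = device_config.get("aperture", self.aperture)
        self.exposure_ms = device_config.get("exposure_ms", self.exposure_ms)
```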
The user interface label 206 may include the dome switch/button 208 and one or more LEDs 210, and may thereby enable a variety of interactive and/or indicative features. Generally, the user interface label 206 may enable a user to trigger and/or tune the imaging device 104 (e.g., via the dome switch/button 208) and to recognize when one or more functions, errors, and/or other actions have been performed or taken place with respect to the imaging device 104 (e.g., via the one or more LEDs 210). For example, the trigger function of a dome switch/button (e.g., dome switch/button 208) may enable a user to capture an image using the imaging device 104 and/or to display a trigger configuration screen of a user application (e.g., smart imaging application 116, image enhancement application 128). The trigger configuration screen may allow the user to configure one or more triggers for the imaging device 104 that may be stored in memory (e.g., one or more memories 110, 120) for use in later developed machine vision jobs, as discussed herein. As another example, the tuning function of a dome switch/button (e.g., dome switch/button 208) may enable a user to automatically and/or manually adjust the configuration of the imaging device 104 in accordance with a preferred/predetermined configuration and/or to display an imaging configuration screen of a user application (e.g., smart imaging application 116, image enhancement application 128). The imaging configuration screen may allow the user to configure one or more configurations of the imaging device 104 (e.g., aperture size, exposure length, etc.) that may be stored in memory (e.g., one or more memories 110, 120) for use in later developed machine vision jobs, as discussed herein. To further this example, and as discussed further herein, a user may utilize the imaging configuration screen (or more generally, the smart imaging application 116 and/or the image enhancement application 128) to establish two or more configurations of imaging settings for the imaging device 104. The user may then save these two or more configurations of imaging settings as part of a machine vision job that is then transmitted to the imaging device 104 in a job file containing one or more job scripts. The one or more job scripts may then instruct the imaging device 104 processors (e.g., one or more processors 118) to automatically and sequentially adjust the imaging settings of the imaging device in accordance with one or more of the two or more configurations of imaging settings after each successive image capture. The mounting point(s) 212 may enable a user to connect and/or removably affix the imaging device 104 to a mounting device (e.g., imaging tripod, camera mount, etc.), a structural surface (e.g., a warehouse wall, a warehouse ceiling, structural support beam, etc.), other accessory items, and/or any other suitable connecting devices, structures, or surfaces. For example, the imaging device 104 may be optimally placed on a mounting device in a distribution center, manufacturing plant, warehouse, and/or other facility to image and thereby monitor the quality/consistency of products, packages, and/or other items as they pass through the FOV of the imaging device 104. Moreover, the mounting point(s) 212 may enable a user to connect the imaging device 104 to a myriad of accessory items including, but without limitation, one or more external illumination devices, one or more mounting devices/brackets, and the like. 
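Building on the simplified device above, the sequential adjustment of imaging settings just described (two or more saved configurations applied across successive image captures) might be sketched as follows; apply_config() and capture() are hypothetical callables standing in for device-specific operations.

```python
# Illustrative only: cycle through saved configurations of imaging
# settings, adjusting the device before each successive capture.
from itertools import cycle

def run_job(apply_config, capture, configs, num_images):
    images = []
    for _, config in zip(range(num_images), cycle(configs)):
        apply_config(config)        # adjust imaging settings first
        images.append(capture())    # then capture the next image
    return images
```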
In addition, the imaging device 104 may include several hardware components contained within the housing 202 that enable connectivity to a computer network (e.g., network 106). For example, the imaging device 104 may include a networking interface (e.g., networking interface 122) that enables the imaging device 104 to connect to a network, such as via a Gigabit Ethernet connection and/or a Dual Gigabit Ethernet connection. Further, the imaging device 104 may include transceivers and/or other communication components as part of the networking interface to communicate with other devices (e.g., the user computing device 102) via, for example, Ethernet/IP, PROFINET, Modbus TCP, CC-Link, USB 3.0, RS-232, and/or any other suitable communication protocol or combinations thereof. FIG. 2B is a block diagram representative of an example logic circuit capable of implementing, for example, one or more components of the example imaging device 104 of FIG. 2A. The example logic circuit of FIG. 2B is a processing platform 230 capable of executing instructions to, for example, implement operations of the example methods described herein, as may be represented by the flowcharts of the drawings that accompany this description. Other example logic circuits capable of, for example, implementing operations of the example methods described herein include field programmable gate arrays (FPGAs) and application specific integrated circuits (ASICs). The example processing platform 230 of FIG. 2B includes a processor 232 such as, for example, one or more microprocessors, controllers, and/or any suitable type of processor. The example processing platform 230 of FIG. 2B includes memory (e.g., volatile memory, non-volatile memory) 234 accessible by the processor 232 (e.g., via a memory controller). The example processor 232 interacts with the memory 234 to obtain, for example, machine-readable instructions stored in the memory 234 corresponding to, for example, the operations represented by the flowcharts of this disclosure. The memory 234 also includes the smart imaging application 116 and, optionally, the image enhancement application 128 that are each accessible by the example processor 232. The smart imaging application 116 and/or the image enhancement application 128 may comprise rule-based instructions, an artificial intelligence (AI) and/or machine learning-based model, and/or any other suitable algorithm architecture or combination thereof configured to, for example, enhance image content captured by a machine vision camera (e.g., imaging device 104). To illustrate, the example processor 232 may access the memory 234 to execute the smart imaging application 116 and/or the image enhancement application 128 when the imaging device 104 (via the imaging assembly 126) captures an image that includes a plurality of indicia that each encode a payload. Additionally, or alternatively, machine-readable instructions corresponding to the example operations described herein may be stored on one or more removable media (e.g., a compact disc, a digital versatile disc, removable flash memory, etc.) that may be coupled to the processing platform 230 to provide access to the machine-readable instructions stored thereon. The example processing platform 230 of FIG. 2B also includes a networking interface 236 to enable communication with other machines via, for example, one or more networks. 
The example networking interface236includes any suitable type of communication interface(s) (e.g., wired and/or wireless interfaces) configured to operate in accordance with any suitable protocol(s) (e.g., Ethernet for wired communications and/or IEEE 802.11 for wireless communications). The example processing platform230ofFIG.2Balso includes input/output (I/O) interfaces238to enable receipt of user input and communication of output data to the user. Such input and output may be provided via, for example, any number of keyboards, mice, USB drives, optical drives, screens, touchscreens, etc. FIG.3Adepicts an example application interface300which includes an example image314captured by an imaging device104. The example application interface300may be displayed as part of a smart imaging application (e.g., smart imaging application116), an image enhancement application (e.g., image enhancement application128), and/or any other suitable application or combinations thereof. For example, the example application interface300may be rendered on an interface of a user computing device (e.g., user computing device102) as a result of the imaging device executing a machine vision job, and may be formatted in accordance with instructions included as part of the smart imaging application116. The particular image renderings and/or other aspects of the example application interface300may be determined and displayed in accordance with instructions included as part of the image enhancement application128, as described herein. In some instances, each image displayed in the example application interface300will include a barcode, such as barcode302and barcode304. Depending on how a machine vision job is configured, the application (e.g., image enhancement application128) may receive, from the imaging device, decoded barcode data associated with either barcode302or barcode304(this may also be referred to as "barcode result data"). This information can be displayed in an appropriate location within the interface. In the depicted interface300, the data decoded from barcode302is displayed as a string306and data decoded from barcode304is displayed as a string308within the entry window (noted as "View Results" inFIGS.3A and3B)310. As illustrated inFIG.3A, the example application interface300additionally includes a settings portion316and a filmstrip portion318. The settings portion316may generally allow a user to configure particular actions performed as part of the machine vision job, barcode decoding job, and/or any other suitable executable job that is executed by the imaging device. For example, the user may enable the "Decode All" option illustrated inFIG.3Ato configure the indicia decoder included as part of the machine vision job to decode any decodable indicia that is identifiable within the example image314. As a result, the indicia decoder may decode each of the payloads from barcodes302,304, and display the results of the decoding within the entry window310. The filmstrip portion318may include all images captured by the imaging device during an individual execution of the machine vision job. For example, as a target object passes by the imaging device, the imaging device may capture one or more images of the target object, and each of those captured images may be displayed within the filmstrip portion318. FIG.3Bdepicts another example application interface330which includes another example image320captured by an imaging device104.
This example image320includes multiple indicia (e.g., quick response (QR) codes), and several of these indicia322,324may be decoded by the indicia decoder included as part of the machine vision job executed by the imaging device. The data decoded from indicia322is displayed as a string326, and the data decoded from indicia324is displayed as a string328within the entry window310. Thus, the indicia decoding performed as part of machine vision jobs described in the present disclosure may be configured to decode a payload from any suitable indicia, such as barcodes, QR codes, data matrices, etc. According to some aspects of the present disclosure, the application (e.g., image enhancement application128) provides a means for a user to select a desired indicia and have the application automatically center the image about that indicia and zoom in on that indicia to a predetermined zoom level. Achieving this functionality can be particularly difficult due to the fact that indicia may come in a wide variety of shapes and sizes (e.g., barcodes302,304and indicia322,324), and that depending on the operating environment, the dimensions of the indicia as they appear in the captured image may vary greatly. In some implementations, the application may achieve this via the following. To identify which indicia the user wishes to zoom in on, the user may select the particular indicia by selecting a particular entry from the entry window310. This can be done by hovering the pointer of a mouse over a line having the payload associated with the indicia of interest and then making a selection by clicking a mouse. Similar functionality may, for example, be achieved via a keyboard or any other input device that could allow for a selection of a specific entry associated with a desired indicia in the entry window310. In other instances, the user may make a selection by hovering the mouse (or any other input device) over the desired indicia in the display region312and then executing the selection by clicking a mouse button. It should be appreciated that throughout this disclosure, references to input devices like a mouse should not be seen as limiting and other input devices should be considered to be within the scope of this disclosure. For example, it should be appreciated that in the event of the application being executed on a mobile device like a tablet or a notebook having touch-screen capabilities, a user's finger and the respective input functions via a screen may function just like the input functions of a computer mouse. Prior or subsequent to the selection of a desired indicia, the application determines the bounds of a bounding box that substantially encompasses at least some of the indicia in the image. In some implementations where the determination is made prior to the selection of the desired indicia, the application may be configured to display at least one bounding box around each corresponding indicia visible in the image shown in the display region312. Generally, each bounding box may be comprised of a series of pixel points which correspond to the outer edges of each respective indicia. Consequently, each point of the bounding box will have an x,y pixel coordinate that is within the coordinate system of the image. 
From this, and as discussed herein, for each desired bounding box, the application (e.g., image enhancement application128) can determine the highest pixel coordinate value in the vertical direction (y-axis) (also referred to as the upper pixel coordinate limit), the lowest pixel coordinate value in the vertical direction (y-axis) (also referred to as the lower pixel coordinate limit), the farthest side (e.g., left side) pixel coordinate value in the horizontal direction (x-axis) (also referred to as the first side pixel coordinate limit), and the farthest other side (e.g., right side) pixel coordinate value in the horizontal direction (x-axis) (also referred to as the second side pixel coordinate limit). Having this data allows the application to derive a secondary box for any desired indicia, where the secondary box is derived not from the shape or the orientation of the indicia, but from its coordinate limits. This provides the benefit of being able to orient the secondary box in a manner that is consistent with the display region, which in most implementations would result in the secondary box being approximately square or rectangular in shape with the top and bottom sides extending along a respective single height coordinate, and the two vertical sides extending along a respective single width coordinate. An example of such a secondary box is illustrated in both ofFIGS.4A and4Bas412and422, respectively.FIG.4Adepicts an example application interface410that includes the secondary box412. As illustrated inFIG.4A, the displayed image may include multiple indicia, and more than one (e.g., the barcode contained within the secondary box412and the barcode below the secondary box412) may be decodable. In this case, the image enhancement application may automatically recognize and decode each indicia included within the image, and may display the payloads resulting from the decoding. In certain aspects, the image enhancement application may determine the secondary box412in response to a user selection of the indicia included within the secondary box412. Additionally, or alternatively, the image enhancement application may automatically determine the secondary box412and a secondary box substantially encompassing the other indicia in response to decoding each indicia. Of course, in either case, the image enhancement application may determine the secondary box(es) based on the coordinate limits corresponding to the decoded indicia. Alternatively, in certain aspects, the displayed image may include multiple indicia, but only one (e.g., the barcode contained within the secondary box412) may be decodable. In this case, the image enhancement application may automatically recognize and decode the indicia illustrated within the secondary box412. Thereafter, the image enhancement application may determine the secondary box412based on the coordinate limits corresponding to the decoded indicia. As yet another example,FIG.4Bdepicts an example application interface420that includes the secondary box422. As illustrated inFIG.4B, the displayed image may include portions of multiple indicia, but only one (e.g., the QR code contained within the secondary box422) may be fully decodable. In this case, the image enhancement application may automatically recognize and decode the indicia illustrated within the secondary box422. Thereafter, the image enhancement application may determine the secondary box422based on the coordinate limits corresponding to the decoded indicia.
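As a rough illustration of the coordinate-limit derivation described above, the following Python sketch computes an upright secondary box from the pixel points of an indicia's bounding box. The point-list representation is an assumption made for illustration; it is not the disclosed data format.

```python
def secondary_box(bounding_points):
    """bounding_points: iterable of (x, y) pixel coordinates outlining an indicia.

    Returns (left, top, right, bottom) coordinate limits in image
    coordinates (y increasing downward), i.e., the first side, upper,
    second side, and lower pixel coordinate limits.
    """
    xs = [x for x, y in bounding_points]
    ys = [y for x, y in bounding_points]
    return min(xs), min(ys), max(xs), max(ys)

# A rotated outline still yields an upright, display-aligned box.
box = secondary_box([(120, 40), (300, 90), (270, 180), (95, 130)])
# box == (95, 40, 300, 180)
```

Because only the minimum and maximum coordinates are used, the resulting box is aligned with the display region regardless of the shape or rotation of the underlying indicia.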
In any event, once the secondary box has been determined, the application (e.g., image enhancement application128) may further be configured to reposition the image in the display region such that the center point of the secondary box associated with the indicia of interest (e.g., the indicia that was previously selected for viewing/analysis) is positioned within some threshold distance from the center point of the display region. In some aspects, the threshold distance may be zero and the center point of the secondary box can overlay the center point of the display region. In other implementations, a non-zero distance threshold (that could be expressed, for example, in terms of pixels) may be implemented. Ultimately, the outcome of such positioning is that the secondary box (and thus the indicia associated with it) will be centered at or visually near the center of the display region. For example, and as illustrated inFIG.5A, the image enhancement application may display the example application interface500in response to receiving a selection of an indicia from a user and repositioning the image within the display region such that the indicia is substantially centered within the display region. The image enhancement application may receive an indication that a user has selected the indicia502within the image, and may proceed to determine the secondary box504based on the various coordinate limits (e.g., upper pixel coordinate limit, lower pixel coordinate limit, first side pixel coordinate limit, second side pixel coordinate limit) corresponding to the indicia502. Generally, as previously mentioned, the image enhancement application may reposition the image containing the indicia502such that the distance506cbetween the two center-points506a,506bis less than a threshold distance. When the image enhancement application repositions the image, the application may also compare the center-point506aof the secondary box to the center-point506bof the display region to determine whether to further reposition the image to decrease the distance506cbetween the two center-points506a,506b. In this manner, the image enhancement application may position the indicia502chosen by the user as close to the center-point of the display region as possible (e.g., the distance506cbetween the two center-points506a,506bis eliminated). Further, when the image enhancement application repositions the image, the application may also scale the viewpoint (up or down) such that the secondary box504occupies some predetermined amount of the display window. For instance, the scaling performed by the image enhancement application may be expressed as a ratio of the pixels occupied by the secondary box504in a vertical direction to the vertical pixel count of the display region. Thus, if the desired ratio is 1:2, and the secondary box504has a height of 200 pixels while the display region has a height of 1000 pixels, the viewpoint will be scaled 2.5 times such that the secondary box504occupies 500 pixels. The image enhancement application may apply the same or a similar approach along the horizontal axis. Of course, it should be appreciated that unless the aspect ratio of the secondary box (e.g., secondary box504) matches the aspect ratio of the display region, the horizontal secondary box to display region ratio may not be equal to the vertical secondary box to display region ratio.
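A minimal Python sketch of the centering and vertical-ratio scaling described above follows; the function name, the box tuple layout, and the choice to compute the translation in unscaled image coordinates are illustrative assumptions.

```python
def center_and_scale(box, region_w, region_h, ratio=0.5):
    """Return (pan_dx, pan_dy, scale) that centers the secondary box in the
    display region and scales it to `ratio` of the region height."""
    left, top, right, bottom = box
    # Translation that brings the box center onto the region center
    # (computed in unscaled image coordinates for simplicity).
    pan_dx = region_w / 2 - (left + right) / 2
    pan_dy = region_h / 2 - (top + bottom) / 2
    # Scale so the box height occupies `ratio` of the region height.
    scale = (region_h * ratio) / (bottom - top)
    return pan_dx, pan_dy, scale

# The 1:2 example from the text: a 200-pixel-tall secondary box in a
# 1000-pixel-tall display region is scaled 2.5x to occupy 500 pixels.
_, _, s = center_and_scale((0, 0, 300, 200), 800, 1000)
assert s == 2.5
```

Because this sketch considers only the vertical axis, an indicia whose aspect ratio differs greatly from that of the display region could still overflow horizontally, which motivates the dual-axis constraint discussed next.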
Thus, when the image enhancement application scales the image in certain aspects, the application will scale the viewpoint such that the ratio of both the horizontal secondary box to display region and the vertical secondary box to display region is at or below the desired threshold. This can help prevent instances where a relatively narrow but tall viewpoint is scaled based on a horizontal ratio without regard for a vertical ratio. For instance, with a display region of 1000×2000 pixels and a secondary box of 400×100 pixels (both expressed as height×width), scaling the image to where the horizontal secondary box to display region ratio is 1:2 requires scaling the image by 10. However, such an increase would cause the secondary box (and thus the indicia that is displayed therein) to increase to a height of 4000 pixels, which is beyond the display capabilities of the display region. Thus, in this instance the image enhancement application may limit scaling the image to 1.25, causing the secondary box to increase to dimensions of 500×125 pixels, meeting the 1:2 threshold requirement along the vertical axis. Alternatively, in certain aspects, a user may configure the image enhancement application to scale the image such that a portion of the selected indicia is not featured within the display region after the image is scaled. For example, as illustrated inFIG.5A, the image enhancement application may determine horizontal distances508between the vertical sides of the secondary box504and the vertical edges of the display region and vertical distances509between the horizontal sides of the secondary box504and the horizontal edges of the display region. Assume that the desired ratio for the indicia502relative to the display region is 1:2. In this example, the image enhancement application may scale the image such that the total length of the horizontal distances508is equal to the length of one horizontal side of the secondary box504and the total length of the vertical distances509is equal to the length of one vertical side of the secondary box504. Of course, it is to be understood that the image enhancement application may scale the image to any suitable ratio, as previously described. In some instances, as shown for example inFIGS.5B and5C, before or after scaling, the region outside of the selected indicia may be masked to highlight the selected indicia. This mask may be applied outside of the secondary box or outside the bounding box of the selected indicia. The mask may be transparent, opaque, color-changing, sharpness decreasing, or of any other nature that would call the indicia of interest to the forefront of the user's attention. For example, as illustrated inFIG.5B, the image enhancement application may display an example application interface510that features a selected indicia512and a masked background portion514. The user may configure the image enhancement application to automatically mask the masked background portion514upon completion of the image scaling, and/or at any other point after the user selects the selected indicia512. Additionally, or alternatively, the image enhancement application may provide a variety of graphical indications to allow a user to understand what indicia is currently displayed, and where the indicia is located within the displayed image. As an example,FIG.5Cdepicts an example application interface520depicting the selected indicia512, the masked background portion514, and a miniature image522that includes a scaled image indicator524.
The miniature image522may generally represent the original image from which the displayed image was generated (e.g., via repositioning, scaling, masking), and the scaled image indicator524may provide a graphical indication of the displayed image as part of the original image for a user's reference. The image enhancement application may automatically, upon selection of the indicia, scale the image to generate the displayed image and display the miniature image522and scaled image indicator524so that the user does not lose track of where the displayed image and selected indicia512are relative to the original image. Further, in certain aspects, the image enhancement application may render an animation, upon the user selection of the selected indicia512, within the display region that minimizes the original image into the miniature image522and thereafter generate the scaled image indicator524as an overlay over the miniature image522. FIG.6is a flowchart representative of a method600for enhancing image content captured by a machine vision camera, in accordance with embodiments described herein. The method600includes receiving an image captured by the machine vision camera (block602). The image may be received at an application executing on a user computing device communicatively coupled to a machine vision camera, and the image may include a plurality of indicia (e.g., barcodes, QR codes, etc.). Moreover, each of the plurality of indicia may encode a payload. The method600may also include identifying, in the received image, each of the indicia (block604). Accordingly, for each respective indicia in the image, the method600may include determining bounds of a respective bounding box that substantially encompass each respective indicia in the image (block606). For example, the image enhancement application may determine the bounds of a respective bounding box corresponding to a single indicia by identifying the extreme coordinates of the indicia that define the outermost boundary of the indicia within the image. Thus, in certain aspects, the bounding box may be any suitable shape in order to substantially encompass the indicia. The method600may also include displaying a plurality of entries, wherein each of the plurality of entries corresponds to a respective indicia of the plurality of indicia (block608). In certain aspects, each of the plurality of entries includes a payload of the corresponding indicia. A user may interact with the interface in order to indicate a selection of one or more of the displayed entries and/or indicia. As such, the method600may include receiving, at the interface, a selection of one of the entries resulting in a selected entry that has a corresponding indicia (block610). In certain aspects, the application (e.g., image enhancement application) may mask a region of the display region upon selection of an entry/indicia by a user. Thus, the method600may include masking a masked region in the display region responsive to receiving the selection. In these aspects, the masked region may be a region outside of at least one of (i) the secondary box or (ii) the respective bounding box of the corresponding indicia. The method600may also include determining an upper pixel coordinate limit, a lower pixel coordinate limit, a first side pixel coordinate limit, and a second side pixel coordinate limit (block612). The image enhancement application may determine these coordinate limits based on the respective bounding box of the corresponding indicia.
Based on these coordinate limits, the image enhancement application may determine a secondary box having an upper bound, a lower bound, a first side bound, and a second side bound (block614). The method600may also include displaying the image in a display region of the interface such that a center-point of the secondary box is positioned within a predetermined distance threshold from a center-point of the display region (block616). In certain aspects, for each respective indicia in the image, the image enhancement application may display the respective bounding box on an interface of the application. In some aspects, the image enhancement application may display the respective bounding box corresponding to each of the plurality of indicia appearing within a visible portion of the image displayed in the display region. The method600may also include scaling the image such that at least one of (i) a first vertical pixel count between the upper bound and the lower bound is within a first predetermined ratio threshold of a second vertical pixel count of the display region, or (ii) a first horizontal pixel count between the first side bound and the second side bound is within a second predetermined ratio threshold of a second horizontal pixel count of the display region (block618). In certain aspects, at least one of the first predetermined ratio threshold or the second predetermined ratio threshold is inclusively between 1:2 and 2:3. In some aspects, the image enhancement application may scale the image such that (i) the first vertical pixel count does not exceed the first predetermined ratio threshold of the second vertical pixel count, and (ii) the first horizontal pixel count does not exceed the second predetermined ratio threshold of the second horizontal pixel count. Moreover, in certain aspects, at least one of the first predetermined ratio threshold and the second predetermined ratio threshold is user-definable such that a portion of the corresponding indicia is excluded from the scaled image. Additionally, or alternatively, the image enhancement application may display a miniature version of the image in the display region as an overlay covering a portion of the scaled image. In these aspects, the miniature version of the image includes an indicated portion representing the scaled image. ADDITIONAL CONSIDERATIONS The above description refers to a block diagram of the accompanying drawings. Alternative implementations of the example represented by the block diagram include one or more additional or alternative elements, processes and/or devices. Additionally, or alternatively, one or more of the example blocks of the diagram may be combined, divided, re-arranged or omitted. Components represented by the blocks of the diagram are implemented by hardware, software, firmware, and/or any combination of hardware, software and/or firmware. In some examples, at least one of the components represented by the blocks is implemented by a logic circuit. As used herein, the term "logic circuit" is expressly defined as a physical device including at least one hardware component configured (e.g., via operation in accordance with a predetermined configuration and/or via execution of stored machine-readable instructions) to control one or more machines and/or perform operations of one or more machines.
Examples of a logic circuit include one or more processors, one or more coprocessors, one or more microprocessors, one or more controllers, one or more digital signal processors (DSPs), one or more application specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), one or more microcontroller units (MCUs), one or more hardware accelerators, one or more special-purpose computer chips, and one or more system-on-a-chip (SoC) devices. Some example logic circuits, such as ASICs or FPGAs, are specifically configured hardware for performing operations (e.g., one or more of the operations described herein and represented by the flowcharts of this disclosure, if such are present). Some example logic circuits are hardware that executes machine-readable instructions to perform operations (e.g., one or more of the operations described herein and represented by the flowcharts of this disclosure, if such are present). Some example logic circuits include a combination of specifically configured hardware and hardware that executes machine-readable instructions. The above description refers to various operations described herein and flowcharts that may be appended hereto to illustrate the flow of those operations. Any such flowcharts are representative of example methods disclosed herein. In some examples, the methods represented by the flowcharts implement the apparatus represented by the block diagrams. Alternative implementations of example methods disclosed herein may include additional or alternative operations. Further, operations of alternative implementations of the methods disclosed herein may be combined, divided, re-arranged or omitted. In some examples, the operations described herein are implemented by machine-readable instructions (e.g., software and/or firmware) stored on a medium (e.g., a tangible machine-readable medium) for execution by one or more logic circuits (e.g., processor(s)). In some examples, the operations described herein are implemented by one or more configurations of one or more specifically designed logic circuits (e.g., ASIC(s)). In some examples, the operations described herein are implemented by a combination of specifically designed logic circuit(s) and machine-readable instructions stored on a medium (e.g., a tangible machine-readable medium) for execution by logic circuit(s). As used herein, each of the terms "tangible machine-readable medium," "non-transitory machine-readable medium" and "machine-readable storage device" is expressly defined as a storage medium (e.g., a platter of a hard disk drive, a digital versatile disc, a compact disc, flash memory, read-only memory, random-access memory, etc.) on which machine-readable instructions (e.g., program code in the form of, for example, software and/or firmware) are stored for any suitable duration of time (e.g., permanently, for an extended period of time (e.g., while a program associated with the machine-readable instructions is executing), and/or a short period of time (e.g., while the machine-readable instructions are cached and/or during a buffering process)). Further, as used herein, each of the terms "tangible machine-readable medium," "non-transitory machine-readable medium" and "machine-readable storage device" is expressly defined to exclude propagating signals.
That is, as used in any claim of this patent, none of the terms "tangible machine-readable medium," "non-transitory machine-readable medium," and "machine-readable storage device" can be read to be implemented by a propagating signal. In the foregoing specification, specific embodiments have been described. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of present teachings. Additionally, the described embodiments/examples/implementations should not be interpreted as mutually exclusive, and should instead be understood as potentially combinable if such combinations are permissive in any way. In other words, any feature disclosed in any of the aforementioned embodiments/examples/implementations may be included in any of the other aforementioned embodiments/examples/implementations. The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all the claims. The claimed invention is defined solely by the appended claims including any amendments made during the pendency of this application and all equivalents of those claims as issued. Moreover, in this document, relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," "has", "having," "includes", "including," "contains", "containing" or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises, has, includes, contains a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by "comprises . . . a", "has . . . a", "includes . . . a", "contains . . . a" does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, contains the element. The terms "a" and "an" are defined as one or more unless explicitly stated otherwise herein. The terms "substantially", "essentially", "approximately", "about" or any other version thereof, are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting embodiment the term is defined to be within 10%, in another embodiment within 5%, in another embodiment within 1% and in another embodiment within 0.5%. The term "coupled" as used herein is defined as connected, although not necessarily directly and not necessarily mechanically. A device or structure that is "configured" in a certain way is configured in at least that way, but may also be configured in ways that are not listed. The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure.
It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter may lie in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter. | 58,384 |
11861135 | DETAILED DESCRIPTION The illustrative embodiments recognize and take into account one or more different considerations. For example, when modifying or reconfiguring an aircraft to match another aircraft, identifying discrepancies or differences between the two aircraft can be important to determine whether a modification or a reconfiguration has been performed as desired. This comparison of configurations between the two aircraft can be a time-consuming and tedious process. For example, the configurations of two aircraft can be compared by an analysis of the bills of materials (BOMs) for parts used to manufacture the two aircraft. Further, spatial positions for these parts are also needed to determine whether the parts are installed in the same positions in both aircraft. Additionally, the bills of materials are analyzed to determine whether a particular part meets the same specifications. For example, the bills of materials for the two aircraft can be examined to determine whether the same types of fasteners are used in both aircraft. This analysis is time-consuming and tedious. As a result, it may be difficult to determine whether the specified change in configuration has been properly implemented. As another example, when examining configuration differences between two airplane lines, the configuration differences can be determined by examining the part structures. With this type of comparison, thousands of differences can be present between the airplanes being manufactured on the two airplane lines. These numerous differences in part structures can be difficult for a human operator to visualize or comprehend by viewing part structures. The illustrative embodiments recognize and take into account that another solution can involve determining the weight of the aircraft in different spatial zones based on the weight of the parts in those spatial zones. These weight differences are displayed in a visual representation of the aircraft on a graphical user interface on a display system to a human operator. In this manner, the human operator can compare two or more aircraft configurations by viewing the visual representation of the weight differences. This visualization of the weight differences can enable a human operator to more easily visualize and understand differences between airplane configurations. For example, the human operator can visualize where differences are present between configurations of aircraft based on the visualization of weight differences between spatial zones. Thus, the illustrative embodiments provide a method, apparatus, computer system, and computer program product for visualizing aircraft configurations. These visualizations of aircraft configurations can be based on weight differences between weights in different spatial zones for the aircraft being compared. In one illustrative example, a computer system determines a set of weight differences between weights for spatial zones in an aircraft and corresponding spatial zones in a comparison aircraft. The computer system displays a visual representation of the aircraft in a graphical user interface on a display system. The computer system displays the spatial zones in the visual representation of the aircraft in association with a set of graphical indicators identifying the set of weight differences between the spatial zones and the corresponding spatial zones in the graphical user interface on the display system.
With reference now to the figures and, in particular, with reference toFIG.1, a pictorial representation of a network of data processing systems is depicted in which illustrative embodiments may be implemented. Network data processing system100is a network of computers in which the illustrative embodiments may be implemented. Network data processing system100contains network102, which is the medium used to provide communications links between various devices and computers connected together within network data processing system100. Network102may include connections, such as wire, wireless communication links, or fiber optic cables. In the depicted example, server computer104and server computer106connect to network102along with storage unit108. In addition, client devices110connect to network102. As depicted, client devices110include client computer112, client computer114, and client computer116. Client devices110can be, for example, computers, workstations, or network computers. In the depicted example, server computer104provides information, such as boot files, operating system images, and applications to client devices110. Further, client devices110can also include other types of client devices such as mobile phone118, tablet computer120, and smart glasses122. In this illustrative example, server computer104, server computer106, storage unit108, and client devices110are network devices that connect to network102in which network102is the communications media for these network devices. Some or all of client devices110may form an Internet of things (IoT) in which these physical devices can connect to network102and exchange information with each other over network102. Client devices110are clients to server computer104in this example. Network data processing system100may include additional server computers, client computers, and other devices not shown. Client devices110connect to network102utilizing at least one of wired, optical fiber, or wireless connections. Program instructions located in network data processing system100can be stored on a computer-recordable storage medium and downloaded to a data processing system or other device for use. For example, program instructions can be stored on a computer-recordable storage medium on server computer104and downloaded to client devices110over network102for use on client devices110. In the depicted example, network data processing system100is the Internet with network102representing a worldwide collection of networks and gateways that use the Transmission Control Protocol/Internet Protocol (TCP/IP) suite of protocols to communicate with one another. At the heart of the Internet is a backbone of high-speed data communication lines between major nodes or host computers consisting of thousands of commercial, governmental, educational, and other computer systems that route data and messages. Of course, network data processing system100also may be implemented using a number of different types of networks. For example, network102can be comprised of at least one of the Internet, an intranet, a local area network (LAN), a metropolitan area network (MAN), or a wide area network (WAN).FIG.1is intended as an example, and not as an architectural limitation for the different illustrative embodiments. As used herein, “a number of” when used with reference to items, means one or more items. For example, “a number of different types of networks” is one or more different types of networks. 
Further, the phrase "at least one of," when used with a list of items, means different combinations of one or more of the listed items can be used, and only one of each item in the list may be needed. In other words, "at least one of" means any combination of items and number of items may be used from the list, but not all of the items in the list are required. The item can be a particular object, a thing, or a category. For example, without limitation, "at least one of item A, item B, or item C" may include item A, item A and item B, or item B. This example also may include item A, item B, and item C or item B and item C. Of course, any combinations of these items can be present. In some illustrative examples, "at least one of" can be, for example, without limitation, two of item A; one of item B; and ten of item C; four of item B and seven of item C; or other suitable combinations. In this illustrative example, human operator130at client computer112can compare configurations for different aircraft to determine configuration differences. These differences can be displayed on graphical user interface132in client computer112to human operator130. For example, human operator130can select aircraft138and comparison aircraft142for comparison. As depicted, this comparison of aircraft configurations can be made using configuration manager134running on server computer104. In one illustrative example, configuration manager134can compare weights135for spatial zones136for aircraft138with weights139in corresponding spatial zones140in comparison aircraft142. With the weight comparison, configuration manager134creates visual representation133of spatial zones136in aircraft138that includes graphical indicators137identifying whether weight differences are present between spatial zones136and corresponding spatial zones140. In this example, graphical indicators137can also indicate an amount of weight difference between spatial zones136and corresponding spatial zones140. This visualization of weight differences between spatial zones136for aircraft138and corresponding spatial zones140for comparison aircraft142enables human operator130to understand and evaluate differences between configurations of aircraft138and comparison aircraft142more easily than with current comparison techniques. In addition to these visualizations of weight differences in spatial zones, human operator130can obtain more information about a spatial zone that has a weight difference between aircraft138and comparison aircraft142. For example, configuration manager134can display to human operator130a visualization of parts with weight differences, a bill of materials identifying part differences, a table of parts with weight differences, or other types of visualizations. As a result, human operator130can more quickly determine where differences are present in particular parts such as fasteners, modules, or other parts. Additionally, human operator130can focus on particular spatial zones in aircraft138. For example, if a reconfiguration of aircraft138has been performed to update a galley to match the galley in comparison aircraft142, human operator130can identify the spatial zone in aircraft138where the galley reconfiguration was performed. The visualization can enable human operator130to quickly determine whether configuration differences are present in the spatial zone containing the galley. With reference now toFIG.2, a block diagram of a visualization environment is depicted in accordance with an illustrative embodiment.
In this illustrative example, visualization environment200includes components that can be implemented in hardware such as the hardware shown in network data processing system100inFIG.1. As depicted, configuration visualization system202in visualization environment200can operate to provide a visualization of platform configurations for platform204such as aircraft206. Aircraft206can take a number of different forms. For example, aircraft206can be selected from one of a commercial airplane, a cargo aircraft, a refueling airplane, a tilt-rotor aircraft, a tilt wing aircraft, a vertical takeoff and landing aircraft, an electrical vertical takeoff and landing vehicle, a personal air vehicle, and other types of aircraft. In this illustrative example, configuration visualization system202comprises computer system208and configuration manager210. As depicted, configuration manager210is located in computer system208. Configuration manager134inFIG.1is an example of configuration manager210. Configuration manager210can be implemented in software, hardware, firmware or a combination thereof. When software is used, the operations performed by configuration manager210can be implemented in program instructions configured to run on hardware, such as a processor unit. When firmware is used, the operations performed by configuration manager210can be implemented in program instructions and data and stored in persistent memory to run on a processor unit. When hardware is employed, the hardware may include circuits that operate to perform the operations in configuration manager210. In the illustrative examples, the hardware may take a form selected from at least one of a circuit system, an integrated circuit, an application specific integrated circuit (ASIC), a programmable logic device, or some other suitable type of hardware configured to perform a number of operations. With a programmable logic device, the device can be configured to perform the number of operations. The device can be reconfigured at a later time or can be permanently configured to perform the number of operations. Programmable logic devices include, for example, a programmable logic array, a programmable array logic, a field programmable logic array, a field programmable gate array, and other suitable hardware devices. Additionally, the processes can be implemented in organic components integrated with inorganic components and can be comprised entirely of organic components excluding a human being. For example, the processes can be implemented as circuits in organic semiconductors. Computer system208is a physical hardware system and includes one or more data processing systems. When more than one data processing system is present in computer system208, those data processing systems are in communication with each other using a communications medium. The communications medium may be a network. The data processing systems may be selected from at least one of a computer, a server computer, a tablet, or some other suitable data processing system. As depicted, computer system208includes a number of processor units212that are capable of executing program instructions214implementing processes in the illustrative examples. In other words, program instructions214are computer readable program instructions. As used herein, a processor unit in the number of processor units212is a hardware device and is comprised of hardware circuits such as those on an integrated circuit that respond and process instructions and program code that operate a computer.
When the number of processor units212execute program instructions214for a process, the number of processor units212can be one or more processor units that are on the same computer or on different computers. In other words, the process can be distributed between processor units on the same or different computers in a computer system208. Further, the number of processor units212can be of the same type or different type of processor units. For example, a number of processor units212can be selected from at least one of a single core processor, a dual-core processor, a multi-processor core, a general-purpose central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), or some other type of processor unit. In this illustrative example, configuration manager210determines weights218for spatial zones220in aircraft206and corresponding spatial zones222in comparison aircraft224. Weights218can be selected from one of actual part weights and estimated part weights. The comparisons can be made when the same type of weights is used in the different illustrative examples. Actual part weights can be weights in a specification for actual parts such as a bill of materials. The estimated part weights can be weights from a model of the aircraft that are selected for design or estimation purposes. Spatial zones220can be selected in a number of different ways. For example, spatial zones220can be selected based on at least one of structure, user selection, or other parameters. Additionally, spatial zones220can be a portion of aircraft206. In other words, spatial zones220do not have to make up the entire volume of aircraft206. When spatial zones220are based on structure, the structure can be, for example, a fuselage, a wing, an engine, a horizontal stabilizer, a tail section, a cockpit, or some other suitable structure. In this illustrative example, the determination of weights218can be performed in a number of different ways. For example, configuration manager210can determine weights218for spatial zones220and corresponding spatial zones222using parts data226. Parts data226can be comprised of at least one of a bill of materials identifying parts used in the aircraft and the comparison aircraft, spatial part information identifying part locations for the parts, weight data for the parts, a parts list, a part location table, a weight list for the parts, or other types of parts data226. From the determination of weights218, configuration manager210determines a set of weight differences216between weights218for spatial zones220in an aircraft and corresponding spatial zones222in comparison aircraft224. Configuration manager210creates visual representation228of aircraft206. Configuration manager210associates a set of graphical indicators231with one or more of spatial zones220in visual representation228. In this illustrative example, the set of graphical indicators231identify the set of weight differences216between spatial zones220and corresponding spatial zones222. The set of graphical indicators231can take a number of different forms. For example, the set of graphical indicators can be selected from at least one of an icon, a pictogram, an ideogram, a graphic, an image, text, animation, bolding, a color, a line, an arrow, or other suitable graphic. A graphical indicator can be associated with a spatial zone by drawing attention to the spatial zone. 
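As a loose illustration of this weight-difference determination, the following Python sketch aggregates part weights by spatial zone and differences them against a comparison aircraft. The (zone, weight) pair representation of parts data is an assumption made for illustration; the disclosed parts data226may be organized differently.

```python
from collections import defaultdict

def zone_weights(parts):
    """parts: iterable of (zone_id, part_weight) pairs drawn from a bill
    of materials combined with spatial part information."""
    totals = defaultdict(float)
    for zone_id, weight in parts:
        totals[zone_id] += weight
    return totals

def weight_differences(aircraft_parts, comparison_parts):
    """Map each spatial zone to (aircraft weight - comparison weight)."""
    a = zone_weights(aircraft_parts)
    b = zone_weights(comparison_parts)
    return {zone: a.get(zone, 0.0) - b.get(zone, 0.0)
            for zone in set(a) | set(b)}

diffs = weight_differences(
    [("Z1", 12.0), ("Z1", 3.5), ("Z2", 40.0)],
    [("Z1", 12.0), ("Z2", 55.0)],
)
# diffs == {"Z1": 3.5, "Z2": -15.0}
# Nonzero entries would be flagged with graphical indicators231.
```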
In this illustrative example, configuration manager210displays visual representation228of aircraft206in graphical user interface230on display system232. Configuration manager210also displays spatial zones220in visual representation228of aircraft206in association with the set of graphical indicators231identifying the set of weight differences216between spatial zones220and corresponding spatial zones222in graphical user interface230on display system232. As depicted, display system232is a physical hardware system and includes one or more display devices on which graphical user interface230can be displayed. The display devices can include at least one of a light emitting diode (LED) display, an organic light emitting diode (OLED) display, a computer monitor, a projector, a flat panel display, a heads-up display (HUD), a head-mounted display (HMD), smart glasses, augmented reality glasses, or some other suitable device that can output information for the visual presentation of information. Human operator234is a person that can interact with graphical user interface230through user input236generated by input system238for computer system208. Input system238is a physical hardware system and can be selected from at least one of a mouse, a keyboard, a touch pad, a trackball, a touchscreen, a stylus, a motion sensing input device, a gesture detection device, a data glove, a cyber glove, a haptic feedback device, or some other suitable type of input device. Display system232and input system238form human machine interface (HMI)240. In this illustrative example, human operator234can interact with visual representation228of aircraft206displayed in graphical user interface230on display system232in human machine interface240. This interaction can be made through user input236. Turning next toFIG.3, an illustration of a block diagram of user interactions with a visual representation of aircraft is depicted in accordance with an illustrative embodiment. In the illustrative examples, the same reference numeral may be used in more than one figure. This reuse of a reference numeral in different figures represents the same element in the different figures. In this illustrative example, aircraft206and spatial zones220are displayed in visual representation228. In this example, graphical indicators231are displayed in association with spatial zones220to indicate weight differences that may be present between aircraft206and comparison aircraft224. Graphical indicators231can indicate different types of information regarding the weight comparison between aircraft206and comparison aircraft224. For example, graphical indicators231can indicate at least one of a presence of a weight difference, an absence of a weight difference, an amount of a weight difference, a part category, or other suitable information. This type of information can be conveyed by graphical indicators231in first visual representation317displayed on graphical user interface230using graphical indicators such as, for example, text, bolding, color, icons, animation, legends, pictograms, tables, or other suitable types of graphical indicators. For example, a graphical indicator such as color can be used in a spatial zone to indicate the presence or absence of a weight difference between the spatial zone and a corresponding spatial zone. Another graphical indicator can be used to indicate the amount of weight difference. This amount of weight difference can be, for example and without limitation, based on ranges of weight differences of actual weights.
For example, the ranges of weight differences can be identified using text or icons indicating a particular range for the weight difference. In another example, numbers can be used to represent the actual weight differences identified. These numbers can be displayed in a manner that draws attention to the spatial zone in which the weight difference is present. In this depicted example, human operator234can interact with visual representation228to obtain additional information about weight differences in spatial zones220displayed in visual representation228. For example, human operator234can generate spatial zone selection300in user input236. Spatial zone selection300is a selection of spatial zone302in spatial zones220in aircraft206as displayed in visual representation228. In response to receiving spatial zone selection300in user input236, configuration manager210can identify a group of parts306in spatial zone302and in the corresponding spatial zone having the weight difference. Configuration manager210can display the group of parts306in association with spatial zone302in graphical user interface230. In this illustrative example, parts306can be, for example, one of a module, an assembly, components, or some other type of part. The display of the group of parts in association with spatial zone302can be within spatial zone302or in a location in visual representation228that indicates that the group of parts306are within spatial zone302. In this manner, human operator234can visualize the group of parts306within spatial zone302. As a result, human operator234can more easily identify and evaluate differences in configuration between aircraft206and comparison aircraft224through the visualization of these differences within visual representation228. In this illustrative example, the display of the group of parts306can take a number of different forms. For example, the group of parts306can be displayed as graphical three-dimensional objects, a parts list, a bill of materials containing the group of parts, a table of parts, or in some other suitable manner. Additionally, human operator234can select part312from the display of parts306for spatial zone302in visual representation228displayed in graphical user interface230. This selection of part312by human operator234can result in part selection314in user input236. In response to receiving part selection314, configuration manager210can determine hierarchical structure315of components316containing part312. In this illustrative example, components316are parts that can be assembled to form hierarchical structure315. In this example, components316in one level in hierarchical structure315can be assembled or connected to form another component at a higher level in hierarchical structure315. Configuration manager210can display hierarchical structure315of components316with graphical indicator322identifying part312in components316in hierarchical structure315having the weight difference. In this example, components316are a grouping of parts having hierarchical structure315. For example, components316can be parts for a structure such as a fan, a power module, an engine housing, a landing gear system, an antenna array, or some other structure. As depicted, graphical indicator322can identify part312that has the weight difference in components316and hierarchical structure315.
As a result, human operator234can more easily identify one or more components in a hierarchical structure of components that have the weight difference. In another illustrative example, configuration manager210identifies a group of parts310in spatial zone302that is absent from corresponding spatial zone320in comparison aircraft224. In this example, configuration manager210can display the group of parts310in spatial zone302with graphical indicator322that indicates the group of parts is absent from corresponding spatial zone320in comparison aircraft224. In another illustrative example, configuration manager210can display additional visualizations in response to a group of parts310being missing from corresponding spatial zone320. For example, visual representation228of aircraft206is first visual representation317. With this example, configuration manager210can display second visual representation326in graphical user interface230. As depicted, second visual representation326includes comparison aircraft224with corresponding spatial zones222. In this illustrative example, configuration manager210displays the group of parts310in a number of corresponding spatial zones222in second visual representation326of comparison aircraft224in graphical user interface230on display system232. With second visual representation326of comparison aircraft224, human operator234can more easily visualize the differences in configurations between aircraft206and comparison aircraft224. This type of visualization enables human operator234to see parts that are present in both aircraft but in different locations. Thus, human operator234can see where corresponding parts ended up in comparison aircraft224. As a result, human operator234can more easily perceive and understand where differences are present between aircraft206and comparison aircraft224based on weight differences displayed in the different spatial zones for these aircraft. Further, when a spatial zone of interest is identified, additional information can be displayed such as details about parts with weight differences between the two aircraft. The additional information can also include an identification of the location of these parts in the two aircraft. In one illustrative example, one or more technical solutions are present that overcome a technical problem with comparing and visualizing configurations of aircraft. As a result, one or more technical solutions may provide a technical effect providing visualizations of aircraft configurations based on weights of parts. In one or more illustrative examples, weights for parts are identified for spatial zones in an aircraft and a comparison aircraft. The comparison aircraft can be an aircraft having a desired configuration for comparison. In this example, the weight differences between spatial zones can be displayed in a visual representation that enables a user to more quickly and easily comprehend and understand whether differences are present in configurations based on the identification of weight differences. Further, one or more illustrative examples also enable a user to select a spatial zone with weight differences and identify one or more parts that may have a weight difference between the aircraft and the comparison aircraft. In another illustrative example, the visual representation can also identify when parts in the aircraft are absent from the comparison aircraft. Other visualizations can include identifying particular components in a hierarchy of components in a structure or assembly.
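A minimal sketch of the part-level drill-down described above might walk two matching component hierarchies and report where weights differ. The tuple representation, the field ordering, and the assumption that both hierarchies list children in the same order are illustrative only.

```python
def find_weight_differences(node_a, node_b, path=()):
    """Recursively compare two component hierarchies, yielding the path to
    each part whose weight differs between the two aircraft.

    Each node is assumed to be (part_id, weight, children); children are
    assumed to appear in the same order in both hierarchies.
    """
    part_id, weight_a, children_a = node_a
    _, weight_b, children_b = node_b
    if weight_a != weight_b:
        yield path + (part_id,), weight_a - weight_b
    for child_a, child_b in zip(children_a, children_b):
        yield from find_weight_differences(child_a, child_b, path + (part_id,))

engine_a = ("engine", 905.0, [("fan", 120.0, []), ("housing", 300.0, [])])
engine_b = ("engine", 900.0, [("fan", 115.0, []), ("housing", 300.0, [])])
for parts_path, delta in find_weight_differences(engine_a, engine_b):
    print(parts_path, delta)   # ('engine',) 5.0 then ('engine', 'fan') 5.0
```

The yielded paths correspond to the components that a graphical indicator (e.g., graphical indicator322) would highlight within the displayed hierarchy.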
In the illustrative example, computer system 208 can be configured to perform at least one of the steps, operations, or actions described in the different illustrative examples using software, hardware, firmware, or a combination thereof. As a result, computer system 208 operates as a special purpose computer system in which configuration manager 210 in computer system 208 enables a human operator to see weight differences in spatial zones between an aircraft and a comparison aircraft. In particular, configuration manager 210 transforms computer system 208 into a special purpose computer system as compared to currently available general computer systems that do not have configuration manager 210. In the illustrative example, the use of configuration manager 210 in computer system 208 integrates processes into a practical application for visualizing aircraft configurations. In other words, configuration manager 210 in computer system 208 is directed to a practical application of processes integrated into configuration manager 210 in computer system 208 that determine weight differences between weights in spatial zones in an aircraft and a comparison aircraft and display a visual representation of the aircraft in a graphical user interface. Graphical indicators are associated with the spatial zones in the visual representation to indicate whether weight differences are present. In one or more illustrative examples, graphical user interface 230 solves problems present with current graphical user interface (GUI) devices in the context of data validation, relating to speed, accuracy, and usability. Rather than reciting a mathematical algorithm, a fundamental economic or longstanding commercial practice, or a challenge in business, graphical user interface 230 improves on existing graphical user interface devices that do not utilize configuration manager 134 in FIG. 1 or configuration manager 210 in FIG. 2. The illustrative examples of graphical user interface 230 provide significantly more than prior graphical user interface devices that merely allow for setting, displaying, and selecting data or information that is visible on a graphical user interface device. Instead, graphical user interface 230 utilizes a specific, structured interface directly related to a prescribed functionality that resolves a specifically identified problem of automatic data retrieval and comparison. For example, in the illustrative examples, visualization of configuration differences between aircraft can be more easily perceived by a user through displaying graphical indicators that indicate weight differences in different spatial zones in an aircraft and a comparison aircraft. Graphical user interface 230 is improved from prior interfaces that merely allow a user to review the bill of materials or parts lists between an aircraft and a comparison aircraft. Further, this graphical user interface is improved over other graphical user interfaces that display computer-aided design (CAD) models between an aircraft and a comparison aircraft. Furthermore, the specific structure and concordant functionality of graphical user interface 230 distinguishes this system as compared to conventional computer implementations of known procedures. The function of graphical user interface 230 is not simply the generalized use of computer system 208 as a tool to conduct a known or obvious process.
Instead, graphical user interface 230 provides an inventive concept that allows for determining weights of spatial zones for identifying differences in aircraft configurations and displaying those weight differences as a visual representation of spatial zones with graphical indicators that identify the weight differences between the aircraft and the comparison aircraft.
The illustration of visualization environment 200 in FIG. 2 is not meant to imply physical or architectural limitations to the manner in which an illustrative embodiment may be implemented. Other components in addition to or in place of the ones illustrated may be used. Some components may be unnecessary. Also, the blocks are presented to illustrate some functional components. One or more of these blocks may be combined, divided, or combined and divided into different blocks when implemented in an illustrative embodiment. For example, graphical user interface 230 displays first visual representation 317 of aircraft 206 and second visual representation 326 of comparison aircraft 224. In other illustrative examples, a single visual representation can contain both aircraft. As another example, one or more aircraft in addition to comparison aircraft 224 can be compared to aircraft 206 using configuration manager 210. In this example, graphical indicators 231 in visual representation 228 can be used to indicate for which aircraft weight differences are present when comparing those aircraft to aircraft 206. In another example, aircraft 206 and comparison aircraft 224 can be designs of aircraft, and the comparison of weights can be between spatial zones in the designs of the two aircraft. For example, a comparison can be made between the current design and a new design that is being developed. In yet another illustrative example, aircraft 206 and comparison aircraft 224 can be the same aircraft at different points in time or different locations in an assembly line. For example, aircraft 206 can be the configuration of aircraft 206 at an earlier point in time in an assembly line as compared to the configuration of the same aircraft, which is comparison aircraft 224, at a later time farther down the assembly line. In this manner, weight differences can be determined to enable a human operator to evaluate whether parts have been installed as desired during different phases of aircraft manufacturing.
With reference next to FIG. 4, an illustration of a block diagram of generating output identifying weight differences between an aircraft and a comparison aircraft is depicted in accordance with an illustrative embodiment. In this illustrative example, configuration manager 400 is an example of an implementation for configuration manager 210 in FIG. 2. As depicted, configuration manager 400 can determine the weight for each spatial zone in an aircraft for a particular aircraft configuration using parts data 402. In this illustrative example, parts data 402 is an example of parts data 226 in FIG. 2. As depicted, parts data 402 comprises a number of different types of data regarding the parts in an aircraft. As depicted, parts data 402 comprises bill of materials (BOM) 404, spatial part information 406, and weight data 408. Bill of materials 404 is a list of the materials, parts, and the quantities of parts needed to manufacture an aircraft or parts for an aircraft. These parts can be, for example, assemblies, subassemblies, intermediate assemblies, intermediate structures, or other objects. Bill of materials 404 includes part identifiers that are associated with the parts in this example.
Spatial part information 406 identifies locations for parts. The locations can be described using a three-dimensional coordinate system, such as (x, y, z) coordinates in a Cartesian coordinate system. In the illustrative example, the parts in spatial part information 406 are correlated to parts in bill of materials 404 by using part identifiers. Weight data 408 is the weight of the parts. In some illustrative examples, weight data 408 may be located in bill of materials 404. As depicted, configuration manager 400 can determine weight from parts data 402 for each spatial zone in an aircraft and a comparison aircraft. With the determination of weight in spatial zones for the two aircraft, the difference between the weight of the two aircraft in the different spatial zones can be determined. Configuration manager 400 uses these weight differences to generate output 410. In this example, output 410 comprises tables and reports 412 and visual representation 414. Tables and reports 412 are tables and documents that contain weight difference information. Tables and reports 412 can identify spatial zones in the aircraft and provide numbers for the weight differences between spatial zones. Additionally, these tables and reports can also identify weight differences between corresponding components found in both the spatial zone for the aircraft and the corresponding spatial zone for the comparison aircraft. As another example, these tables and reports can also include information about parts in a spatial zone in the aircraft that are not found in a corresponding spatial zone in the comparison aircraft. As depicted, visual representation 414 in output 410 can be displayed in a graphical user interface for viewing by a human operator. In this example, visual representation 414 is a graphical depiction of the aircraft and spatial zones in the aircraft. This visual representation also identifies any weight differences that are present in the spatial zones. In some cases, the weight difference may be zero, indicating no difference in weight.
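As an illustration of the per-zone weight determination described for configuration manager 400, the following Python sketch sums part weights by zone and takes the difference between the two aircraft. The part records and field names are hypothetical stand-ins; an actual implementation would draw these values from bill of materials 404, spatial part information 406, and weight data 408.

```python
# Minimal sketch of the per-zone weight computation described above.
from collections import defaultdict

def zone_weights(parts):
    """Sum part weights per spatial zone for one aircraft."""
    totals = defaultdict(float)
    for part in parts:
        totals[part["zone"]] += part["weight"]
    return totals

def zone_weight_differences(aircraft_parts, comparison_parts):
    """Weight difference per zone between an aircraft and a comparison aircraft."""
    a = zone_weights(aircraft_parts)
    b = zone_weights(comparison_parts)
    return {zone: a.get(zone, 0.0) - b.get(zone, 0.0)
            for zone in set(a) | set(b)}

# Example usage with hypothetical part records:
plane_a = [{"id": "XXH", "zone": 1, "weight": 4.0},
           {"id": "XXI", "zone": 1, "weight": 2.5}]
plane_b = [{"id": "XXH", "zone": 1, "weight": 4.0},
           {"id": "XXJ", "zone": 1, "weight": 3.0}]
print(zone_weight_differences(plane_a, plane_b))  # {1: -0.5}
```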
With reference next to FIG. 5, an illustration of a diagram of a weight difference for parts in a hierarchical structure is depicted in accordance with an illustrative embodiment. In this illustrative example, a comparison is made between hierarchical structure 500 in airplane #1 502 and hierarchical structure 504 in airplane #2 506. In this example, hierarchical structure 500 is a part in airplane #1 502 and hierarchical structure 504 is the corresponding part in airplane #2 506. As part of the comparison, these two parts are compared to determine whether a weight difference is present. In making this determination, the components forming hierarchical structure 500 and hierarchical structure 504 are compared to determine whether any components forming hierarchical structure 500 and hierarchical structure 504 have weight differences. As depicted, a difference is present at level 0 510, at level 1 512, and at level 3 514 between the components in hierarchical structure 500 and hierarchical structure 504. At level 4 516, two components are present in hierarchical structure 500 and hierarchical structure 504. As depicted, component D1 520 in hierarchical structure 500 and corresponding component D1 522 in hierarchical structure 504 have the same weight. At level 4 516, component D2 524 in hierarchical structure 500 and corresponding component D3 526 in hierarchical structure 504 do not have the same weight. A weight difference is present between these two parts. For example, if component D2 524 and corresponding component D3 526 are fasteners, then these fasteners may be comprised of different alloys, resulting in a weight difference. In another illustrative example, different dimensions for these fasteners can also result in a weight difference. The weight difference of these two parts causes a weight difference to be present in the upper hierarchical levels of these two structures. For example, the weight difference of component D2 524 and corresponding component D3 526 causes a weight difference to be present at level 3 514. In the different illustrative examples, this identification of the weight difference between one or more components in hierarchical structures can be presented to the user in a visual representation. This additional detail can be displayed in response to a user input requesting the additional detail.
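The propagation of a leaf-level weight difference to the upper levels, as described for FIG. 5, can be sketched with a recursive comparison. This is a minimal illustration under the assumption that structures are represented as nested dictionaries; pairing children by position is a simplification, since an implementation would match corresponding components by part identifier.

```python
# Sketch of a recursive weight comparison over two hierarchical
# structures such as hierarchical structure 500 and hierarchical
# structure 504. A node's total weight is its own weight plus the total
# weights of its children, so a difference at a leaf (component D2
# versus component D3) propagates to every enclosing level.

def total_weight(node):
    return node.get("weight", 0.0) + sum(
        total_weight(c) for c in node.get("children", []))

def differing_levels(node_a, node_b, level=0):
    """Yield the hierarchy levels at which the two structures differ in total weight."""
    if total_weight(node_a) != total_weight(node_b):
        yield level, node_a["name"], node_b["name"]
    for child_a, child_b in zip(node_a.get("children", []),
                                node_b.get("children", [])):
        yield from differing_levels(child_a, child_b, level + 1)
```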
With reference next to FIG. 6, an illustration of a visual representation of weight differences in spatial zones in an aircraft is depicted in accordance with an illustrative embodiment. In this illustrative example, visual representation 600 is an example of one implementation for visual representation 228 in FIG. 2 and FIG. 3. As depicted, airplane 602 is displayed in visual representation 600 with spatial zones in airplane 602. In this example, the spatial zones in airplane 602 include horizontal stabilizers 606, vertical stabilizer 608, wings 610, engines 612, nose section 614, body 616, and tail section 618. In this illustrative example, graphical indicators are displayed in association with some spatial zones in airplane 602. As depicted, the graphical indicators include graphical indicator 620, graphical indicator 622, and graphical indicator 624. In this example, graphical indicator 620 and graphical indicator 622 indicate that weight differences are present in body 616. As shown, graphical indicator 624 indicates that a weight difference is present in horizontal stabilizers 606. In this illustrative example, these graphical indicators are positioned with respect to spatial zones to indicate a location in the spatial zones where the weight differences are found. In this example, two locations are present in body 616 in which weight differences are present. These graphical indicators can be selected to present more information. For example, the selection of graphical indicator 624 can result in a display of the parts in which weight differences are present in horizontal stabilizers 606. This presentation of the weight differences can be in a table, a three-dimensional view of the parts, or in other forms. In another illustrative example, the table can include information about the values for the weight differences, part identifiers, and other information.
With reference next to FIG. 7, an illustration of a visual representation of weight differences in spatial zones in an aircraft is depicted in accordance with an illustrative embodiment. In this illustrative example, visual representation 700 is another example of an implementation for visual representation 228 in FIG. 2 and FIG. 3. As depicted, visual representation 700 provides a visualization of airplane 702 with four spatial zones. The spatial zones are zone 1 704, zone 2 706, zone 3 708, and zone 4 710. In this illustrative example, these four spatial zones are only a portion of airplane 702. In other words, the spatial zones do not have to make up the entire volume within an aircraft. In this illustrative example, visual representation 700 also includes table 720. Table 720 identifies information about parts in an aircraft and a comparison aircraft. Table 720 comprises columns for zone number 722, part number 724, Plane A 726, Plane B 728, location 730, and weight 732. Zone number 722 identifies the spatial zone for the aircraft and the comparison aircraft. Part number 724 identifies a part number for a part. Plane A 726 is the aircraft and indicates whether the part is present in the aircraft. Plane B 728 is the comparison aircraft and indicates whether the part is present in the comparison aircraft. Location 730 identifies the location in three-dimensional space of the part identified by the part number within the zone. Weight 732 identifies the weight of the part identified by the part number. In this illustrative example, zone 1 704 contains part XXH and part XXI in Plane A, and contains part XXH and part XXJ in Plane B. In this example, part XXI is not present in Plane B and part XXJ is not present in Plane A. The sum of the weights for the parts in Plane A is WX+WY and the sum of the weights for the parts in Plane B is WY+WZ. In this example, the weight difference for zone 1 704 is WX-WZ. As a result, zone 1 704 is associated with graphical indicator 731, which is a first color, such as red. Also in this example, zone 2 706 contains part XXK and part XXL. Both of these parts are present in both Plane A and Plane B. These parts both have the same weight. As a result, the weight difference for zone 2 706 is zero. In other words, there is no difference in weight between the parts in zone 2 706 in Plane A and Plane B. In this example, zone 2 706 is associated with graphical indicator 733, which is a second color, such as green. In this illustrative example, zone 3 708 has graphical indicator 734. In this example, graphical indicator 734 is a third color, such as yellow. Zone 3 708 in both Plane A and Plane B may have no weight difference. However, in this case, one of the parts in Plane A may be in a different location in zone 3 708 from the corresponding part in Plane B. In this example, zone 4 710 is associated with graphical indicator 736, which is the same color as graphical indicator 731.
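The indicator assignment just described for FIG. 7 — a first color when a weight difference is present, a second color when no difference is detected, and a third color when weights match but a part is in a different location — can be sketched as follows. The part records and the equality test on locations are hypothetical simplifications, not the patented implementation.

```python
# Sketch of the zone indicator selection implied by FIG. 7.

def zone_indicator(parts_a, parts_b):
    weight_a = sum(p["weight"] for p in parts_a)
    weight_b = sum(p["weight"] for p in parts_b)
    if weight_a != weight_b:
        return "red"            # weight difference present
    locations_a = {p["id"]: p["location"] for p in parts_a}
    locations_b = {p["id"]: p["location"] for p in parts_b}
    if locations_a != locations_b:
        return "yellow"         # same weight, different part locations
    return "green"              # no difference detected
```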
The graphical representations illustrated in FIG. 6 and FIG. 7 are provided as example implementations for visual representation 228 displayed in graphical user interface 230 on display system 232 in FIG. 2 and FIG. 3. These example illustrations are not meant to limit the manner in which other visual representations can be presented. For example, in another illustrative example, visual representation 700 can omit table 720. In other illustrative examples, table 720 can take other forms. For example, table 720 can use a single column in place of Plane A 726 and Plane B 728 to indicate if a part is in a particular airplane. For example, the column can indicate whether a part is in "both" airplanes, in "A only", or in "B only". In yet another example, table 720 can include another column for Plane C such that a comparison of more than two airplanes can be made. In yet another illustrative example, the visual representation can display a hierarchy of parts when hierarchical structures are present in an aircraft. This display can identify parts within the aircraft structure that have a weight difference between the two aircraft. In yet another illustrative example, graphical indicators can also include numbers identifying the weight difference between a part in an aircraft and the corresponding part in the comparison aircraft. Additionally, these graphical representations can be three-dimensional graphical representations that can be manipulated by a human operator. For example, a human operator can rotate or turn the airplane in the graphical representation. In yet another illustrative example, the graphical indicators can include links that are selectable to provide more detailed information about parts. For example, a graphical indicator can be selected in a spatial zone to identify parts that have weight differences between the aircraft and the comparison aircraft. The identification of parts can be presented, in response to a selection of the spatial zone, in a table, a graphical display of the part, a display of a part number, or other information.
Turning next to FIG. 8, an illustration of a flowchart of a process for visualizing aircraft configurations is depicted in accordance with an illustrative embodiment. The process in FIG. 8 can be implemented in hardware, software, or both. When implemented in software, the process can take the form of program instructions that are run by one or more processor units located in one or more hardware devices in one or more computer systems. For example, the process can be implemented in configuration manager 210 in computer system 208 in FIG. 2. The process begins by determining a set of weight differences between weights for spatial zones in an aircraft and corresponding spatial zones in a comparison aircraft (operation 800). The process displays a visual representation of the aircraft in a graphical user interface on a display system (operation 802). The process displays the spatial zones in the visual representation of the aircraft in association with a set of graphical indicators identifying the set of weight differences between the spatial zones and the corresponding spatial zones in the graphical user interface on the display system (operation 804). The process terminates thereafter.
With reference to FIG. 9, an illustration of a flowchart of a process for determining weights for spatial zones in aircraft is depicted in accordance with an illustrative embodiment. The operation in FIG. 9 is an example of an additional operation that can be used with the operations in the process in FIG. 8. The process determines the weights for the spatial zones in the aircraft and the corresponding spatial zones in the comparison aircraft (operation 900). The process terminates thereafter.
Turning to FIG. 10, an illustration of a flowchart of a process for determining weights for spatial zones in aircraft is depicted in accordance with an illustrative embodiment. The process in FIG. 10 is an example of an implementation of operation 900 in FIG. 9. The process determines the weights for the spatial zones and the corresponding spatial zones using parts data (operation 1000). The process terminates thereafter.
Turning next to FIG. 11, an illustration of a flowchart of a process for determining a weight difference between a selected spatial zone and a corresponding spatial zone in an aircraft is depicted in accordance with an illustrative embodiment. The process in FIG. 11 is an example of an implementation of operation 800 in FIG. 8. The process begins by identifying parts located in a selected spatial zone in the spatial zones (operation 1100). The process determines first weights for the parts in the selected spatial zone (operation 1102). The process sums the first weights to obtain a first summed weight for the selected spatial zone (operation 1104).
The process identifies the parts in a selected corresponding spatial zone that corresponds to the selected spatial zone (operation 1106). The process determines the weights for the parts in the selected corresponding spatial zone (operation 1108). The process sums second weights for the parts in the selected corresponding spatial zone to obtain a second summed weight for the selected corresponding spatial zone (operation 1110). The process determines a difference between the first summed weight and the second summed weight, wherein the difference is for the selected spatial zone and the selected corresponding spatial zone (operation 1112). The process terminates thereafter.
With reference to FIG. 12, an illustration of a flowchart of a process for displaying a group of parts in a spatial zone and a corresponding spatial zone for aircraft is depicted in accordance with an illustrative embodiment. The operations in FIG. 12 are examples of additional operations that can be used with the operations in the process in FIG. 8. The process begins by receiving a spatial zone selection of a spatial zone in the spatial zones having a weight difference from a corresponding spatial zone in the corresponding spatial zones (operation 1200). The process identifies a group of parts in the spatial zone and in the corresponding spatial zone having the weight difference in response to the spatial zone selection of the spatial zone (operation 1202). The process displays the group of parts in the spatial zone in the graphical user interface (operation 1204). The process terminates thereafter.
Turning to FIG. 13, an illustration of a flowchart of a process for displaying a hierarchical structure of components containing a part is depicted in accordance with an illustrative embodiment. The operations in FIG. 13 are examples of additional operations that can be used with the operations in the process in FIG. 12. The process begins by receiving a part selection of a part in the group of parts (operation 1300). The process determines a hierarchical structure of components containing the part (operation 1302). The process displays the hierarchical structure of components with a graphical indicator identifying the part in the components in the hierarchical structure having the weight difference (operation 1304). The process terminates thereafter.
Turning next to FIG. 14, an illustration of a flowchart of a process for displaying a group of parts in a spatial zone that is absent from a corresponding spatial zone is depicted in accordance with an illustrative embodiment. The operations in FIG. 14 are examples of additional operations that can be used with the operations in the process in FIG. 8. The process begins by receiving a spatial zone selection of a spatial zone in the spatial zones having a weight difference from a corresponding spatial zone in the corresponding spatial zones (operation 1400). The process identifies a group of parts in the spatial zone that is absent from the corresponding spatial zone (operation 1402). The process displays the group of parts in the spatial zone with a graphical indicator indicating that the group of parts is absent from the corresponding spatial zone (operation 1404). The process terminates thereafter.
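Operations 1400-1404 can be illustrated with a short sketch that finds parts present in the selected spatial zone but absent from the corresponding spatial zone. The part records are hypothetical; a real implementation would compare part identifiers drawn from the bill of materials for each aircraft.

```python
# Sketch of identifying parts present in a selected spatial zone of the
# aircraft but absent from the corresponding zone of the comparison
# aircraft, as in operations 1400-1404.

def absent_parts(zone_parts, corresponding_zone_parts):
    """Parts in the selected zone with no counterpart in the corresponding zone."""
    present_ids = {p["id"] for p in corresponding_zone_parts}
    return [p for p in zone_parts if p["id"] not in present_ids]
```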
With reference to FIG. 15, an illustration of a flowchart of a process for displaying parts in corresponding spatial zones in a second visual representation is depicted in accordance with an illustrative embodiment. The operations in FIG. 15 are examples of additional operations that can be used with the operations in the process in FIG. 14. The process begins by displaying a second visual representation of the comparison aircraft in the graphical user interface on the display system (operation 1500). The process displays the group of parts in a number of the corresponding spatial zones in the second visual representation of the comparison aircraft in the graphical user interface on the display system (operation 1502). The process terminates thereafter.
Turning to FIG. 16, an illustration of a flowchart of a process for visualizing platform configurations is depicted in accordance with an illustrative embodiment. The process in FIG. 16 can be implemented in hardware, software, or both. When implemented in software, the process can take the form of program instructions that are run by one or more processor units located in one or more hardware devices in one or more computer systems. For example, the process can be implemented in configuration manager 210 in computer system 208 in FIG. 2. The process begins by determining a set of weight differences between weights for spatial zones in a platform and corresponding spatial zones in a comparison platform (operation 1600). The process displays a visual representation of the platform in a graphical user interface on a display system (operation 1602). The process displays the spatial zones in the visual representation of the platform in association with a set of graphical indicators identifying the set of weight differences between the spatial zones and the corresponding spatial zones in the graphical user interface on the display system (operation 1604). The process terminates thereafter.
The flowcharts and block diagrams in the different depicted embodiments illustrate the architecture, functionality, and operation of some possible implementations of apparatuses and methods in an illustrative embodiment. In this regard, each block in the flowcharts or block diagrams can represent at least one of a module, a segment, a function, or a portion of an operation or step. For example, one or more of the blocks can be implemented as program instructions, hardware, or a combination of the program instructions and hardware. When implemented in hardware, the hardware can, for example, take the form of integrated circuits that are manufactured or configured to perform one or more operations in the flowcharts or block diagrams. When implemented as a combination of program instructions and hardware, the implementation may take the form of firmware. Each block in the flowcharts or the block diagrams can be implemented using special purpose hardware systems that perform the different operations or combinations of special purpose hardware and program instructions run by the special purpose hardware. In some alternative implementations of an illustrative embodiment, the function or functions noted in the blocks may occur out of the order noted in the figures. For example, in some cases, two blocks shown in succession may be performed substantially concurrently, or the blocks may sometimes be performed in the reverse order, depending upon the functionality involved. Also, other blocks may be added in addition to the illustrated blocks in a flowchart or block diagram.
Turning now to FIG. 17, a block diagram of a data processing system is depicted in accordance with an illustrative embodiment.
Data processing system 1700 can be used to implement server computer 104, server computer 106, and client devices 110 in FIG. 1. Data processing system 1700 can also be used to implement computer system 208 in FIG. 2. In this illustrative example, data processing system 1700 includes communications framework 1702, which provides communications between processor unit 1704, memory 1706, persistent storage 1708, communications unit 1710, input/output (I/O) unit 1712, and display 1714. In this example, communications framework 1702 takes the form of a bus system. Processor unit 1704 serves to execute instructions for software that can be loaded into memory 1706. Processor unit 1704 includes one or more processors. For example, processor unit 1704 can be selected from at least one of a multicore processor, a central processing unit (CPU), a graphics processing unit (GPU), a physics processing unit (PPU), a digital signal processor (DSP), a network processor, or some other suitable type of processor. Further, processor unit 1704 can be implemented using one or more heterogeneous processor systems in which a main processor is present with secondary processors on a single chip. As another illustrative example, processor unit 1704 can be a symmetric multi-processor system containing multiple processors of the same type on a single chip. Memory 1706 and persistent storage 1708 are examples of storage devices 1716. A storage device is any piece of hardware that is capable of storing information, such as, for example, without limitation, at least one of data, program instructions in functional form, or other suitable information either on a temporary basis, a permanent basis, or both on a temporary basis and a permanent basis. Storage devices 1716 may also be referred to as computer-readable storage devices in these illustrative examples. Memory 1706, in these examples, can be, for example, a random-access memory or any other suitable volatile or non-volatile storage device. Persistent storage 1708 may take various forms, depending on the particular implementation. For example, persistent storage 1708 may contain one or more components or devices. For example, persistent storage 1708 can be a hard drive, a solid-state drive (SSD), a flash memory, a rewritable optical disk, a rewritable magnetic tape, or some combination of the above. The media used by persistent storage 1708 also can be removable. For example, a removable hard drive can be used for persistent storage 1708. Communications unit 1710, in these illustrative examples, provides for communications with other data processing systems or devices. In these illustrative examples, communications unit 1710 is a network interface card. Input/output unit 1712 allows for input and output of data with other devices that can be connected to data processing system 1700. For example, input/output unit 1712 may provide a connection for user input through at least one of a keyboard, a mouse, or some other suitable input device. Further, input/output unit 1712 may send output to a printer. Display 1714 provides a mechanism to display information to a user. Instructions for at least one of the operating system, applications, or programs can be located in storage devices 1716, which are in communication with processor unit 1704 through communications framework 1702. The processes of the different embodiments can be performed by processor unit 1704 using computer-implemented instructions, which may be located in a memory, such as memory 1706.
These instructions are referred to as program instructions, computer usable program instructions, or computer-readable program instructions that can be read and executed by a processor in processor unit 1704. The program instructions in the different embodiments can be embodied on different physical or computer-readable storage media, such as memory 1706 or persistent storage 1708. Program instructions 1718 are located in a functional form on computer-readable media 1720 that is selectively removable and can be loaded onto or transferred to data processing system 1700 for execution by processor unit 1704. Program instructions 1718 and computer-readable media 1720 form computer program product 1722 in these illustrative examples. In the illustrative example, computer-readable media 1720 is computer-readable storage media 1724. Computer-readable storage media 1724 is a physical or tangible storage device used to store program instructions 1718 rather than a medium that propagates or transmits program instructions 1718. Computer-readable storage media 1724 can be at least one of an electronic storage medium, a magnetic storage medium, an optical storage medium, an electromagnetic storage medium, a semiconductor storage medium, a mechanical storage medium, or other physical storage medium. Some known types of storage devices that include these mediums include: a diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device, such as punch cards or pits/lands formed in a major surface of a disc, or any suitable combination thereof. Computer-readable storage media 1724, as that term is used in the present disclosure, is not to be construed as storage in the form of transitory signals per se, such as at least one of radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide, light pulses passing through a fiber optic cable, electrical signals communicated through a wire, or other transmission media. Further, data can be moved at some occasional points in time during normal operations of a storage device. These normal operations include access, de-fragmentation, or garbage collection. However, these operations do not render the storage device transitory because the data is not transitory while the data is stored in the storage device. Alternatively, program instructions 1718 can be transferred to data processing system 1700 using computer-readable signal media. The computer-readable signal media are signals and can be, for example, a propagated data signal containing program instructions 1718. For example, the computer-readable signal media can be at least one of an electromagnetic signal, an optical signal, or any other suitable type of signal. These signals can be transmitted over connections, such as wireless connections, optical fiber cable, coaxial cable, a wire, or any other suitable type of connection. Further, as used herein, "computer-readable media 1720" can be singular or plural. For example, program instructions 1718 can be located in computer-readable media 1720 in the form of a single storage device or system. In another example, program instructions 1718 can be located in computer-readable media 1720 that is distributed in multiple data processing systems.
In other words, some instructions in program instructions 1718 can be located in one data processing system while other instructions in program instructions 1718 can be located in another data processing system. For example, a portion of program instructions 1718 can be located in computer-readable media 1720 in a server computer while another portion of program instructions 1718 can be located in computer-readable media 1720 located in a set of client computers. The different components illustrated for data processing system 1700 are not meant to provide architectural limitations to the manner in which different embodiments can be implemented. In some illustrative examples, one or more of the components may be incorporated in, or otherwise form a portion of, another component. For example, memory 1706, or portions thereof, may be incorporated in processor unit 1704 in some illustrative examples. The different illustrative embodiments can be implemented in a data processing system including components in addition to or in place of those illustrated for data processing system 1700. Other components shown in FIG. 17 can be varied from the illustrative examples shown. The different embodiments can be implemented using any hardware device or system capable of running program instructions 1718.
Illustrative embodiments of the disclosure may be described in the context of aircraft manufacturing and service method 1800 as shown in FIG. 18 and aircraft 1900 as shown in FIG. 19. Turning first to FIG. 18, an illustration of an aircraft manufacturing and service method is depicted in accordance with an illustrative embodiment. During pre-production, aircraft manufacturing and service method 1800 may include specification and design 1802 of aircraft 1900 in FIG. 19 and material procurement 1804. During production, component and subassembly manufacturing 1806 and system integration 1808 of aircraft 1900 in FIG. 19 take place. Thereafter, aircraft 1900 in FIG. 19 can go through certification and delivery 1810 in order to be placed in service 1812. While in service 1812 by a customer, aircraft 1900 in FIG. 19 is scheduled for routine maintenance and service 1814, which may include modification, reconfiguration, refurbishment, and other maintenance or service. Each of the processes of aircraft manufacturing and service method 1800 may be performed or carried out by a system integrator, a third party, an operator, or some combination thereof. In these examples, the operator may be a customer. For the purposes of this description, a system integrator may include, without limitation, any number of aircraft manufacturers and major-system subcontractors; a third party may include, without limitation, any number of vendors, subcontractors, and suppliers; and an operator may be an airline, a leasing company, a military entity, a service organization, and so on.
With reference now to FIG. 19, an illustration of an aircraft is depicted in which an illustrative embodiment may be implemented. In this example, aircraft 1900 is produced by aircraft manufacturing and service method 1800 in FIG. 18 and may include airframe 1902 with a plurality of systems 1904 and interior 1906. Examples of systems 1904 include one or more of propulsion system 1908, electrical system 1910, hydraulic system 1912, and environmental system 1914. Any number of other systems may be included. Although an aerospace example is shown, different illustrative embodiments may be applied to other industries, such as the automotive industry.
Apparatuses and methods embodied herein may be employed during at least one of the stages of aircraft manufacturing and service method 1800 in FIG. 18. In one illustrative example, components or subassemblies produced in component and subassembly manufacturing 1806 in FIG. 18 can be fabricated or manufactured in a manner similar to components or subassemblies produced while aircraft 1900 is in service 1812 in FIG. 18. As yet another example, one or more apparatus embodiments, method embodiments, or a combination thereof can be utilized during production stages, such as component and subassembly manufacturing 1806 and system integration 1808 in FIG. 18. One or more apparatus embodiments, method embodiments, or a combination thereof may be utilized while aircraft 1900 is in service 1812, during maintenance and service 1814 in FIG. 18, or both. The use of a number of the different illustrative embodiments may substantially expedite the assembly of aircraft 1900, reduce the cost of aircraft 1900, or both expedite the assembly of aircraft 1900 and reduce the cost of aircraft 1900. For example, configuration manager 134 in FIG. 1 and configuration manager 210 in FIG. 2 can be used during specification and design 1802 to compare designs of aircraft with each other. For example, a comparison can be made between the current design and a new design that is being developed. As another example, these configuration managers can be used during system integration 1808 and certification and delivery 1810 to compare the state of aircraft during manufacturing and prior to delivery. In another illustrative example, configuration manager 134 in FIG. 1 and configuration manager 210 in FIG. 2 can be used during maintenance and service 1814 for planning and executing modification, reconfiguration, refurbishment, and other maintenance or service. Configuration manager 210 in FIG. 2 can also be used to compare aircraft to plan configuration changes, as well as to perform reconfigurations and compare reconfigured aircraft. As another example, when maintenance and service 1814 involves replacing or upgrading parts, identifying weight differences in spatial zones in aircraft can be used to determine whether the replacement or upgrade of parts was performed as desired according to the specification for the maintenance plan. Some features of the illustrative examples are described in the following clauses. These clauses are examples of features and are not intended to limit other illustrative examples. Clause 1 A method for visualizing aircraft configurations, the method comprising:determining, by a computer system, a set of weight differences between weights for spatial zones in an aircraft and corresponding spatial zones in a comparison aircraft;displaying, by the computer system, a visual representation of the aircraft in a graphical user interface on a display system; anddisplaying, by the computer system, the spatial zones in the visual representation of the aircraft in association with a set of graphical indicators identifying the set of weight differences between the spatial zones and the corresponding spatial zones in the graphical user interface on the display system. Clause 2 The method according to clause 1 further comprising:determining, by the computer system, the weights for the spatial zones in the aircraft and the corresponding spatial zones in the comparison aircraft.
Clause 3 The method according to clause 2, wherein determining, by the computer system, the weights for the spatial zones and the corresponding spatial zones comprises:determining, by the computer system, the weights for the spatial zones and the corresponding spatial zones using parts data. Clause 4 The method according to clause 3, wherein the parts data comprises at least one of a bill of materials identifying parts used in the aircraft and the comparison aircraft, spatial part information identifying part locations for the parts, weight data for the parts, a parts list, a part location table, or a weight list for the parts. Clause 5 The method according to one of clauses 1, 2, 3, or 4, wherein determining, by the computer system, the set of weight differences between the weights for the spatial zones in the aircraft and the corresponding spatial zones in the comparison aircraft comprises:identifying, by the computer system, parts located in a selected spatial zone in the spatial zones;determining, by the computer system, first weights for the parts in the selected spatial zone;summing, by the computer system, the first weights to obtain a first summed weight for the selected spatial zone;identifying, by the computer system, the parts in a selected corresponding spatial zone that corresponds to the selected spatial zone;determining, by the computer system, the weights for the parts in the selected corresponding spatial zone;summing, by the computer system, second weights for the parts in the selected corresponding spatial zone to obtain a second summed weight for the selected corresponding spatial zone; anddetermining, by the computer system, a difference between the first summed weight and the second summed weight, wherein the difference is for the selected spatial zone and the selected corresponding spatial zone. Clause 6 The method according to one of clauses 1, 2, 3, 4, or 5 further comprising:receiving, by the computer system, a spatial zone selection of a spatial zone in the spatial zones having a weight difference from a corresponding spatial zone in the corresponding spatial zones;identifying, by the computer system, a group of parts in the spatial zone and in the corresponding spatial zone having the weight difference in response to the spatial zone selection of the spatial zone; anddisplaying, by the computer system, the group of parts in the spatial zone in the graphical user interface. Clause 7 The method according to clause 6 further comprising:receiving, by the computer system, a part selection of a part in the group of parts;determining, by the computer system, a hierarchical structure of components containing the part; anddisplaying, by the computer system, the hierarchical structure of the components with a graphical indicator identifying the part in the components in the hierarchical structure having the weight difference. Clause 8 The method according to one of clauses 1, 2, 3, 4, 5, 6, or 7 further comprising:receiving, by the computer system, a spatial zone selection of a spatial zone in the spatial zones having a weight difference from a corresponding spatial zone in the corresponding spatial zones;identifying, by the computer system, a group of parts in the spatial zone that is absent from the corresponding spatial zone; anddisplaying, by the computer system, the group of parts in the spatial zone with a graphical indicator indicating that the group of parts is absent from the corresponding spatial zone. 
Clause 9 The method according to clause 8, wherein the visual representation of the aircraft is a first visual representation and further comprising:displaying, by the computer system, a second visual representation of the comparison aircraft in the graphical user interface on the display system; anddisplaying, by the computer system, the group of parts in a number of the corresponding spatial zones in the second visual representation of the comparison aircraft in the graphical user interface on the display system. Clause 10 The method according to one of clauses 1, 2, 3, 4, 5, 6, 7, 8, or 9, wherein the weights are selected from one of actual part weights and estimated part weights. Clause 11 The method according to one of clauses 5, 6, 7, 8, 9, or 10, wherein the parts are one of a module, an assembly, and components. Clause 12 The method according to one of clauses 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, or 11, wherein the aircraft and the comparison aircraft are both of a same model or of a same model variant. Clause 13 A configuration visualization system comprising:a computer system that executes program instructions to:determine a set of weight differences between weights for spatial zones in an aircraft and corresponding spatial zones in a comparison aircraft;display a visual representation of the aircraft in a graphical user interface on a display system; anddisplay the spatial zones in the visual representation of the aircraft in association with a set of graphical indicators identifying the set of weight differences between the spatial zones and the corresponding spatial zones in the graphical user interface on the display system. Clause 14 The configuration visualization system according to clause 13, wherein the computer system executes the program instructions to:determine the weights for the spatial zones in the aircraft and the corresponding spatial zones in the comparison aircraft. Clause 15 The configuration visualization system according to clause 14, wherein in determining the weights for the spatial zones and the corresponding spatial zones, the computer system executes the program instructions to:determine the weights for the spatial zones and the corresponding spatial zones using parts data. Clause 16 The configuration visualization system according to clause 15, wherein the parts data comprises at least one of a bill of materials identifying parts used in the aircraft and the comparison aircraft, spatial part information identifying part locations for the parts, weight data for the parts, a parts list, a part location table, or a weight list for the parts.
Clause 17 The configuration visualization system according to one of clauses 13, 14, 15, or 16, wherein in determining the set of weight differences between the weights for the spatial zones in the aircraft and the corresponding spatial zones in the comparison aircraft, the computer system executes the program instructions to:identify parts located in a selected spatial zone in the spatial zones;determine first weights for the parts in the selected spatial zone;sum the first weights to obtain a first summed weight for the selected spatial zone;identify the parts in a selected corresponding spatial zone that corresponds to the selected spatial zone;determine the weights for the parts in the selected corresponding spatial zone;sum second weights for the parts in the selected corresponding spatial zone to obtain a second summed weight for the selected corresponding spatial zone; anddetermine a difference between the first summed weight and the second summed weight, wherein the difference is for the selected spatial zone and the selected corresponding spatial zone. Clause 18 The configuration visualization system according to one of clauses 13, 14, 15, 16, or 17, wherein the computer system executes the program instructions to:receive a spatial zone selection of a spatial zone in the spatial zones having a weight difference from a corresponding spatial zone in the corresponding spatial zones;identify a group of parts in the spatial zone and in the corresponding spatial zone having the weight difference in response to the spatial zone selection of the spatial zone; anddisplay the group of parts in the spatial zone in the graphical user interface. Clause 19 The configuration visualization system according to clause 18, wherein the computer system executes the program instructions to:receive a part selection of a part in the group of parts;determine a hierarchical structure of components containing the part; anddisplay the hierarchical structure of the components with a graphical indicator identifying the part in the components in the hierarchical structure having the weight difference. Clause 20 The configuration visualization system according to one of clauses 13, 14, 15, 16, 17, 18, or 19, wherein the computer system executes the program instructions to:receive a spatial zone selection of a spatial zone in the spatial zones having a weight difference from a corresponding spatial zone in the corresponding spatial zones;identify a group of parts in the spatial zone that is absent from the corresponding spatial zone; anddisplay the group of parts in the spatial zone with a graphical indicator indicating that the group of parts is absent from the corresponding spatial zone. Clause 21 The configuration visualization system according to clause 20, wherein the visual representation of the aircraft is a first visual representation and wherein the computer system executes the program instructions to:display a second visual representation of the comparison aircraft in the graphical user interface on the display system; anddisplay the group of parts in a number of the corresponding spatial zones in the second visual representation of the comparison aircraft in the graphical user interface on the display system. Clause 22 The configuration visualization system according to one of clauses 13, 14, 15, 16, 17, 18, 19, or 21, wherein the weights are selected from one of actual part weights and estimated part weights. 
Clause 23 The configuration visualization system according to one of clauses 17, 18, 19, 20, 21, or 22, wherein the parts are one of a module, an assembly, and components. Clause 24 The configuration visualization system according to one of clauses 13, 14, 15, 16, 17, 18, 19, 21, 22, or 23, wherein the aircraft and the comparison aircraft are both of a same model or of a same model variant. Clause 25 A computer program product for visualizing aircraft configurations, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a computer system to cause the computer system to perform a method of:determining, by the computer system, a set of weight differences between weights for spatial zones in an aircraft and corresponding spatial zones in a comparison aircraft;displaying, by the computer system, a visual representation of the aircraft in a graphical user interface on a display system; anddisplaying, by the computer system, the spatial zones in the visual representation of the aircraft in association with a set of graphical indicators identifying the set of weight differences between the spatial zones and the corresponding spatial zones in the graphical user interface on the display system. Clause 26 A method for visualizing platform configurations, the method comprising:determining, by a computer system, a set of weight differences between weights for spatial zones in a platform and corresponding spatial zones in a comparison platform;displaying, by the computer system, a visual representation of the platform in a graphical user interface on a display system; anddisplaying, by the computer system, the spatial zones in the visual representation of the platform in association with a set of graphical indicators identifying the set of weight differences between the spatial zones and the corresponding spatial zones in the graphical user interface on the display system. Thus, the illustrative embodiments provide a method, apparatus, system, and computer program product for visualizing aircraft configurations. In one illustrative example, a computer system determines a set of weight differences between weights for spatial zones in an aircraft and corresponding spatial zones in a comparison aircraft. The computer system displays a visual representation of the aircraft in a graphical user interface on a display system. The computer system displays the spatial zones in the visual representation of the aircraft in association with a set of graphical indicators identifying the set of weight differences between the spatial zones and the corresponding spatial zones in the graphical user interface on the display system. With these visualizations of configuration differences, a human operator can more easily compare individualized airplane configurations as compared to current techniques. In the illustrative examples, the use of weights for spatial zones determined from part weights for parts in the spatial zones enables displaying differences between a spatial zone in an aircraft and the corresponding spatial zone in the comparison aircraft. A human operator can more easily identify differences between two configurations of an airplane. For example, a human operator may desire to convert an airplane from a passenger airplane to a cargo airplane. With this conversion, a comparison is used to determine what changes are needed to reconfigure the passenger airplane to a cargo airplane.
In this example, a cargo airplane having a desired configuration can be an aircraft that is compared to the passenger airplane, which is the comparison aircraft in this example. The cargo airplane can be compared to a number of different passenger airplanes to determine which passenger airplanes may be most suitable for conversion. Based on the visualization of weight differences in spatial zones, what changes are needed and the cost of those changes can be determined more easily using a configuration manager in the illustrative examples. The description of the different illustrative embodiments has been presented for purposes of illustration and description and is not intended to be exhaustive or limited to the embodiments in the form disclosed. For example, although platform 204 in FIG. 2 has been described as an aircraft, the illustrative examples can be applied to comparing configurations of other types of platforms. A platform for comparison can be, for example, a mobile platform, a stationary platform, a land-based structure, an aquatic-based structure, and a space-based structure. More specifically, the platform can be a surface ship, a tank, a personnel carrier, a train, a spacecraft, a space station, a satellite, a submarine, an automobile, a power plant, a bridge, a dam, a house, a manufacturing facility, a building, and other suitable platforms. In another illustrative example, the determination of weights for spatial zones in aircraft can be made to determine a center of gravity for the aircraft. With the weights for the spatial zones, the placement of cargo within the aircraft can be selected to balance the aircraft with respect to the center of gravity and can provide a more neutral trim. With this implementation of the configuration manager, weight balancing of cargo in the aircraft can improve the performance of the aircraft for each flight of the aircraft. In other words, as different types of cargo are to be carried by the aircraft, the placement of the cargo to obtain a desired balance with respect to the center of gravity can be tailored towards the cargo that is to be transported. Additionally, by balancing the cargo using the weights in the spatial zones, fewer issues with vibrations or trim adjustments can occur. As a result, reduced maintenance from wear on parts and the aircraft can occur. Further, less compensation by flight control surfaces can be used when the cargo is loaded to balance the aircraft for flight. With reduced trim adjustments and control adjustments, quicker responses can be obtained from the aircraft. This type of balancing can also improve safety in undesired environmental conditions, such as thunderstorms or turbulence. The different illustrative examples describe components that perform actions or operations. In an illustrative embodiment, a component can be configured to perform the action or operation described. For example, the component can have a configuration or design for a structure that provides the component an ability to perform the action or operation that is described in the illustrative examples as being performed by the component. Further, to the extent that terms "includes", "including", "has", "contains", and variants thereof are used herein, such terms are intended to be inclusive in a manner similar to the term "comprises" as an open transition word without precluding any additional or other elements. Many modifications and variations will be apparent to those of ordinary skill in the art.
Further, different illustrative embodiments may provide different features as compared to other desirable embodiments. The embodiment or embodiments selected are chosen and described in order to best explain the principles of the embodiments, the practical application, and to enable others of ordinary skill in the art to understand the disclosure for various embodiments with various modifications as are suited to the particular use contemplated.
DESCRIPTION OF EMBODIMENTS

A virtual reality environment is an immersive three-dimensional environment that is experienced through sensory stimuli such as sights and sounds and that provides additional information and experiences to a user that are not available in the physical world. Conventional methods of interacting with virtual reality environments often require multiple separate inputs (e.g., a sequence of gestures and button presses, etc.) to achieve an intended outcome. Further, conventional methods of interacting with virtual reality environments do not allow a user to interact with other electronic devices in the physical world while still immersed in the virtual reality environment. The embodiments herein allow the user to interact with and control a virtual device in the virtual reality environment as if the user were interacting with the corresponding real device in the physical world and provide an intuitive and efficient way for the user to access functions of the real device while still immersed in the virtual reality environment (e.g., without requiring the user to remove equipment such as a virtual reality headset and headphones). Additionally, the embodiments herein enhance the interaction with the virtual device and provide a more immersive and intuitive way to interact with the virtual device by displaying the virtual device in a simulated three-dimensional environment that includes additional information and experiences that are not available in the physical world when interacting with the real device.

The systems, methods, and GUIs described herein improve user interface interactions with virtual reality environments in multiple ways. For example, they make it easier to: interact with and control a virtual device in the virtual reality environment that corresponds to a real device in the physical world, change input modes, manipulate two-dimensional and three-dimensional representations of content, distinguish hover inputs on an input device from contact inputs on the input device in the virtual reality environment, and interact with displayed virtual user interfaces.

Below, FIGS. 1A-1B, 2, and 3A-3C provide a description of example devices. FIGS. 4A-4B and 5A1-5A48 illustrate example user interfaces for interacting with virtual reality environments, in accordance with some embodiments. FIGS. 6A-6E are flow diagrams of a process for displaying and adjusting an appearance of a virtual user interface object in a virtual reality environment based on user inputs in the physical world, in accordance with some embodiments. FIGS. 7A-7C are flow diagrams of a process for selecting a mode of operation of an input device in accordance with movement of and changes in pose of the input device, in accordance with some embodiments. FIGS. 8A-8C are flow diagrams of a process for displaying and performing navigation operations within corresponding two-dimensional and three-dimensional user interfaces, in accordance with some embodiments. FIGS. 9A-9B are flow diagrams of a process for displaying and adjusting an appearance of a focus indicator on a virtual user interface object in a virtual reality environment based on user inputs in the physical world, in accordance with some embodiments. FIGS. 10A-10C are flow diagrams of a process for updating display of virtual user interface objects and associated virtual user interfaces in accordance with movement of and changes in pose of an input device, in accordance with some embodiments.
The user interfaces in FIGS. 5A1-5A48 are used to illustrate the processes in FIGS. 6A-6E, 7A-7C, 8A-8C, 9A-9B, and 10A-10C.

Example Devices

Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the various described embodiments. However, it will be apparent to one of ordinary skill in the art that the various described embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.

It will also be understood that, although the terms first, second, etc. are, in some instances, used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact could be termed a second contact, and, similarly, a second contact could be termed a first contact, without departing from the scope of the various described embodiments. The first contact and the second contact are both contacts, but they are not the same contact, unless the context clearly indicates otherwise.

The terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

As used herein, the term “if” is, optionally, construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” is, optionally, construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event],” depending on the context.

Computer systems for virtual/augmented reality include electronic devices that produce virtual/augmented reality environments. Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are described. In some embodiments, the device is a portable communications device, such as a mobile telephone, that also contains other functions, such as PDA and/or music player functions. Example embodiments of portable multifunction devices include, without limitation, the iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, California.
Other portable electronic devices, such as laptops or tablet computers with touch-sensitive surfaces (e.g., touch-screen displays and/or touchpads), are, optionally, used. It should also be understood that, in some embodiments, the device is not a portable communications device, but is a desktop computer with a touch-sensitive surface (e.g., a touch-screen display and/or a touchpad) that also includes, or is in communication with, one or more cameras.

In the discussion that follows, a computer system that includes an electronic device that has (and/or is in communication with) a display and a touch-sensitive surface is described. It should be understood, however, that the computer system optionally includes one or more other physical user-interface devices, such as a physical keyboard, a mouse, a joystick, a wand controller, and/or cameras tracking the position of one or more features of the user such as the user's hands.

The device typically supports a variety of applications, such as one or more of the following: a gaming application, a note taking application, a drawing application, a presentation application, a word processing application, a spreadsheet application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.

The various applications that are executed on the device optionally use at least one common physical user-interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface as well as corresponding information displayed by the device are, optionally, adjusted and/or varied from one application to the next and/or within a respective application. In this way, a common physical architecture (such as the touch-sensitive surface) of the device optionally supports the variety of applications with user interfaces that are intuitive and transparent to the user.

Attention is now directed toward embodiments of portable devices with touch-sensitive displays. FIG. 1A is a block diagram illustrating portable multifunction device 100 with touch-sensitive display system 112 in accordance with some embodiments. Touch-sensitive display system 112 is sometimes called a “touch screen” for convenience, and is sometimes simply called a touch-sensitive display. Device 100 includes memory 102 (which optionally includes one or more computer readable storage mediums), memory controller 122, one or more processing units (CPUs) 120, peripherals interface 118, RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, input/output (I/O) subsystem 106, other input or control devices 116, and external port 124. Device 100 optionally includes one or more optical sensors 164 (e.g., as part of one or more cameras). Device 100 optionally includes one or more intensity sensors 165 for detecting intensities of contacts on device 100 (e.g., a touch-sensitive surface such as touch-sensitive display system 112 of device 100). Device 100 optionally includes one or more tactile output generators 163 for generating tactile outputs on device 100 (e.g., generating tactile outputs on a touch-sensitive surface such as touch-sensitive display system 112 of device 100 or touchpad 355 of device 300). These components optionally communicate over one or more communication buses or signal lines 103.
As used in the specification and claims, the term “tactile output” refers to physical displacement of a device relative to a previous position of the device, physical displacement of a component (e.g., a touch-sensitive surface) of a device relative to another component (e.g., housing) of the device, or displacement of the component relative to a center of mass of the device that will be detected by a user with the user's sense of touch. For example, in situations where the device or the component of the device is in contact with a surface of a user that is sensitive to touch (e.g., a finger, palm, or other part of a user's hand), the tactile output generated by the physical displacement will be interpreted by the user as a tactile sensation corresponding to a perceived change in physical characteristics of the device or the component of the device. For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or trackpad) is, optionally, interpreted by the user as a “down click” or “up click” of a physical actuator button. In some cases, a user will feel a tactile sensation such as a “down click” or “up click” even when there is no movement of a physical actuator button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movements. As another example, movement of the touch-sensitive surface is, optionally, interpreted or sensed by the user as “roughness” of the touch-sensitive surface, even when there is no change in smoothness of the touch-sensitive surface. While such interpretations of touch by a user will be subject to the individualized sensory perceptions of the user, there are many sensory perceptions of touch that are common to a large majority of users. Thus, when a tactile output is described as corresponding to a particular sensory perception of a user (e.g., an “up click,” a “down click,” “roughness”), unless otherwise stated, the generated tactile output corresponds to physical displacement of the device or a component thereof that will generate the described sensory perception for a typical (or average) user. Using tactile outputs to provide haptic feedback to a user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, a tactile output pattern specifies characteristics of a tactile output, such as the amplitude of the tactile output, the shape of a movement waveform of the tactile output, the frequency of the tactile output, and/or the duration of the tactile output. When tactile outputs with different tactile output patterns are generated by a device (e.g., via one or more tactile output generators that move a moveable mass to generate tactile outputs), the tactile outputs may invoke different haptic sensations in a user holding or touching the device. While the sensation of the user is based on the user's perception of the tactile output, most users will be able to identify changes in waveform, frequency, and amplitude of tactile outputs generated by the device. Thus, the waveform, frequency, and amplitude can be adjusted to indicate to the user that different operations have been performed.
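As one concrete, purely illustrative reading of the preceding paragraph, a tactile output pattern can be modeled as a small record of the characteristics listed above. The field names, value ranges, and the two example patterns are assumptions made for exposition; this description does not define a data layout.

# Hypothetical sketch of a tactile output pattern record; values are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class TactileOutputPattern:
    amplitude: float     # normalized 0.0-1.0 displacement of the moveable mass
    waveform: str        # shape of the movement waveform, e.g., "sine" or "square"
    frequency_hz: float  # oscillation frequency of the tactile output
    duration_ms: float   # how long the tactile output lasts

# Distinct patterns can signal different operations, e.g., a short, crisp
# "click" versus a longer, softer warning buzz.
CLICK = TactileOutputPattern(amplitude=1.0, waveform="sine", frequency_hz=230.0, duration_ms=10.0)
WARNING = TactileOutputPattern(amplitude=0.6, waveform="sine", frequency_hz=80.0, duration_ms=120.0)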
As such, tactile outputs with tactile output patterns that are designed, selected, and/or engineered to simulate characteristics (e.g., size, material, weight, stiffness, smoothness, etc.); behaviors (e.g., oscillation, displacement, acceleration, rotation, expansion, etc.); and/or interactions (e.g., collision, adhesion, repulsion, attraction, friction, etc.) of objects in a given environment (e.g., a user interface that includes graphical features and objects, a simulated physical environment with virtual boundaries and virtual objects, a real physical environment with physical boundaries and physical objects, and/or a combination of any of the above) will, in some circumstances, provide helpful feedback to users that reduces input errors and increases the efficiency of the user's operation of the device. Additionally, tactile outputs are, optionally, generated to correspond to feedback that is unrelated to a simulated physical characteristic, such as an input threshold or a selection of an object. Such tactile outputs will, in some circumstances, provide helpful feedback to users that reduces input errors and increases the efficiency of the user's operation of the device. In some embodiments, a tactile output with a suitable tactile output pattern serves as a cue for the occurrence of an event of interest in a user interface or behind the scenes in a device. Examples of the events of interest include activation of an affordance (e.g., a real or virtual button, or toggle switch) provided on the device or in a user interface, success or failure of a requested operation, reaching or crossing a boundary in a user interface, entry into a new state, switching of input focus between objects, activation of a new mode, reaching or crossing an input threshold, detection or recognition of a type of input or gesture, etc. In some embodiments, tactile outputs are provided to serve as a warning or an alert for an impending event or outcome that would occur unless a redirection or interruption input is timely detected. Tactile outputs are also used in other contexts to enrich the user experience, improve the accessibility of the device to users with visual or motor difficulties or other accessibility needs, and/or improve efficiency and functionality of the user interface and/or the device. Tactile outputs are optionally accompanied with audio outputs and/or visible user interface changes, which further enhance a user's experience when the user interacts with a user interface and/or the device, and facilitate better conveyance of information regarding the state of the user interface and/or the device, and which reduce input errors and increase the efficiency of the user's operation of the device. It should be appreciated that device100is only one example of a portable multifunction device, and that device100optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of the components. The various components shown in FIG.1A are implemented in hardware, software, firmware, or a combination thereof, including one or more signal processing and/or application specific integrated circuits. Memory102optionally includes high-speed random access memory and optionally also includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. 
Access to memory102by other components of device100, such as CPU(s)120and the peripherals interface118, is, optionally, controlled by memory controller122. Peripherals interface118can be used to couple input and output peripherals of the device to CPU(s)120and memory102. The one or more processors120run or execute various software programs and/or sets of instructions stored in memory102to perform various functions for device100and to process data. In some embodiments, peripherals interface118, CPU(s)120, and memory controller122are, optionally, implemented on a single chip, such as chip104. In some other embodiments, they are, optionally, implemented on separate chips. RF (radio frequency) circuitry108receives and sends RF signals, also called electromagnetic signals. RF circuitry108converts electrical signals to/from electromagnetic signals and communicates with communications networks and other communications devices via the electromagnetic signals. RF circuitry108optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chipset, a subscriber identity module (SIM) card, memory, and so forth. RF circuitry108optionally communicates with networks, such as the Internet, also referred to as the World Wide Web (WWW), an intranet and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN) and/or a metropolitan area network (MAN), and other devices by wireless communication. The wireless communication optionally uses any of a plurality of communications standards, protocols and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), Evolution, Data-Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPA), long term evolution (LTE), near field communication (NFC), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11ac, IEEE 802.11ax, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), voice over Internet Protocol (VoIP), Wi-MAX, a protocol for e-mail (e.g., Internet message access protocol (IMAP) and/or post office protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document. Audio circuitry110, speaker111, and microphone113provide an audio interface between a user and device100. Audio circuitry110receives audio data from peripherals interface118, converts the audio data to an electrical signal, and transmits the electrical signal to speaker111. Speaker111converts the electrical signal to human-audible sound waves. Audio circuitry110also receives electrical signals converted by microphone113from sound waves. Audio circuitry110converts the electrical signal to audio data and transmits the audio data to peripherals interface118for processing. 
Audio data is, optionally, retrieved from and/or transmitted to memory102and/or RF circuitry108by peripherals interface118. In some embodiments, audio circuitry110also includes a headset jack (e.g.,212,FIG.2). The headset jack provides an interface between audio circuitry110and removable audio input/output peripherals, such as output-only headphones or a headset with both output (e.g., a headphone for one or both ears) and input (e.g., a microphone). I/O subsystem106couples input/output peripherals on device100, such as touch-sensitive display system112and other input or control devices116, with peripherals interface118. I/O subsystem106optionally includes display controller156, optical sensor controller158, intensity sensor controller159, haptic feedback controller161, and one or more input controllers160for other input or control devices. The one or more input controllers160receive/send electrical signals from/to other input or control devices116. The other input or control devices116optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and so forth. In some alternate embodiments, input controller(s)160are, optionally, coupled with any (or none) of the following: a keyboard, infrared port, USB port, stylus, and/or a pointer device such as a mouse. The one or more buttons (e.g.,208,FIG.2) optionally include an up/down button for volume control of speaker111and/or microphone113. The one or more buttons optionally include a push button (e.g.,206,FIG.2). Touch-sensitive display system112provides an input interface and an output interface between the device and a user. Display controller156receives and/or sends electrical signals from/to touch-sensitive display system112. Touch-sensitive display system112displays visual output to the user. The visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively termed “graphics”). In some embodiments, some or all of the visual output corresponds to user interface objects. As used herein, the term “affordance” refers to a user-interactive graphical user interface object (e.g., a graphical user interface object that is configured to respond to inputs directed toward the graphical user interface object). Examples of user-interactive graphical user interface objects include, without limitation, a button, slider, icon, selectable menu item, switch, hyperlink, or other user interface control. Touch-sensitive display system112has a touch-sensitive surface, sensor or set of sensors that accepts input from the user based on haptic and/or tactile contact. Touch-sensitive display system112and display controller156(along with any associated modules and/or sets of instructions in memory102) detect contact (and any movement or breaking of the contact) on touch-sensitive display system112and converts the detected contact into interaction with user-interface objects (e.g., one or more soft keys, icons, web pages or images) that are displayed on touch-sensitive display system112. In some embodiments, a point of contact between touch-sensitive display system112and the user corresponds to a finger of the user or a stylus. Touch-sensitive display system112optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. 
Touch-sensitive display system 112 and display controller 156 optionally detect contact and any movement or breaking thereof using any of a plurality of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch-sensitive display system 112. In some embodiments, projected mutual capacitance sensing technology is used, such as that found in the iPhone®, iPod Touch®, and iPad® from Apple Inc. of Cupertino, California.

Touch-sensitive display system 112 optionally has a video resolution in excess of 100 dpi. In some embodiments, the touch screen video resolution is in excess of 400 dpi (e.g., 500 dpi, 800 dpi, or greater). The user optionally makes contact with touch-sensitive display system 112 using any suitable object or appendage, such as a stylus, a finger, and so forth. In some embodiments, the user interface is designed to work with finger-based contacts and gestures, which can be less precise than stylus-based input due to the larger area of contact of a finger on the touch screen. In some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the actions desired by the user.

In some embodiments, in addition to the touch screen, device 100 optionally includes a touchpad (not shown) for activating or deactivating particular functions. In some embodiments, the touchpad is a touch-sensitive area of the device that, unlike the touch screen, does not display visual output. The touchpad is, optionally, a touch-sensitive surface that is separate from touch-sensitive display system 112 or an extension of the touch-sensitive surface formed by the touch screen.

Device 100 also includes power system 162 for powering the various components. Power system 162 optionally includes a power management system, one or more power sources (e.g., battery, alternating current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light-emitting diode (LED)), and any other components associated with the generation, management and distribution of power in portable devices.

Device 100 optionally also includes one or more optical sensors 164 (e.g., as part of one or more cameras). FIG. 1A shows an optical sensor coupled with optical sensor controller 158 in I/O subsystem 106. Optical sensor(s) 164 optionally include charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) phototransistors. Optical sensor(s) 164 receive light from the environment, projected through one or more lenses, and convert the light to data representing an image. In conjunction with imaging module 143 (also called a camera module), optical sensor(s) 164 optionally capture still images and/or video. In some embodiments, an optical sensor is located on the back of device 100, opposite touch-sensitive display system 112 on the front of the device, so that the touch screen is enabled for use as a viewfinder for still and/or video image acquisition. In some embodiments, another optical sensor is located on the front of the device so that the user's image is obtained (e.g., for selfies, for videoconferencing while the user views the other video conference participants on the touch screen, etc.).
Device 100 optionally also includes one or more contact intensity sensors 165. FIG. 1A shows a contact intensity sensor coupled with intensity sensor controller 159 in I/O subsystem 106. Contact intensity sensor(s) 165 optionally include one or more piezoresistive strain gauges, capacitive force sensors, electric force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors used to measure the force (or pressure) of a contact on a touch-sensitive surface). Contact intensity sensor(s) 165 receive contact intensity information (e.g., pressure information or a proxy for pressure information) from the environment. In some embodiments, at least one contact intensity sensor is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112). In some embodiments, at least one contact intensity sensor is located on the back of device 100, opposite touch-screen display system 112, which is located on the front of device 100.

Device 100 optionally also includes one or more proximity sensors 166. FIG. 1A shows proximity sensor 166 coupled with peripherals interface 118. Alternately, proximity sensor 166 is coupled with input controller 160 in I/O subsystem 106. In some embodiments, the proximity sensor turns off and disables touch-sensitive display system 112 when the multifunction device is placed near the user's ear (e.g., when the user is making a phone call).

Device 100 optionally also includes one or more tactile output generators 163. FIG. 1A shows a tactile output generator coupled with haptic feedback controller 161 in I/O subsystem 106. In some embodiments, tactile output generator(s) 163 include one or more electroacoustic devices such as speakers or other audio components and/or electromechanical devices that convert energy into linear motion such as a motor, solenoid, electroactive polymer, piezoelectric actuator, electrostatic actuator, or other tactile output generating component (e.g., a component that converts electrical signals into tactile outputs on the device). Tactile output generator(s) 163 receive tactile feedback generation instructions from haptic feedback module 133 and generate tactile outputs on device 100 that are capable of being sensed by a user of device 100. In some embodiments, at least one tactile output generator is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112) and, optionally, generates a tactile output by moving the touch-sensitive surface vertically (e.g., in/out of a surface of device 100) or laterally (e.g., back and forth in the same plane as a surface of device 100). In some embodiments, at least one tactile output generator sensor is located on the back of device 100, opposite touch-sensitive display system 112, which is located on the front of device 100.

Device 100 optionally also includes one or more accelerometers 167, gyroscopes 168, and/or magnetometers 169 (e.g., as part of an inertial measurement unit (IMU)) for obtaining information concerning the pose (e.g., position and orientation or attitude) of the device. FIG. 1A shows sensors 167, 168, and 169 coupled with peripherals interface 118. Alternately, sensors 167, 168, and 169 are, optionally, coupled with an input controller 160 in I/O subsystem 106. In some embodiments, information is displayed on the touch-screen display in a portrait view or a landscape view based on an analysis of data received from the one or more accelerometers.
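As a hedged sketch of the accelerometer-based analysis mentioned in the preceding sentence, one plausible approach compares the gravity components along the screen axes, with hysteresis so the chosen view does not flap near the diagonal. The axis convention and threshold are assumptions; this description does not specify the analysis.

# Hypothetical orientation selection from accelerometer data; thresholds assumed.
def display_orientation(ax, ay, current="portrait", hysteresis=0.2):
    # ax, ay: gravity components along the screen's short (x) and long (y) axes.
    if abs(ay) > abs(ax) + hysteresis:
        return "portrait"   # gravity lies mostly along the long axis
    if abs(ax) > abs(ay) + hysteresis:
        return "landscape"  # gravity lies mostly along the short axis
    return current          # near the diagonal: keep the current view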
Device100optionally includes a GPS (or GLONASS or other global navigation system) receiver (not shown) for obtaining information concerning the location of device100. In some embodiments, the software components stored in memory102include operating system126, communication module (or set of instructions)128, contact/motion module (or set of instructions)130, graphics module (or set of instructions)132, haptic feedback module (or set of instructions)133, text input module (or set of instructions)134, Global Positioning System (GPS) module (or set of instructions)135, and applications (or sets of instructions)136. Furthermore, in some embodiments, memory102stores device/global internal state157, as shown inFIGS.1A and3. Device/global internal state157includes one or more of: active application state, indicating which applications, if any, are currently active; display state, indicating what applications, views or other information occupy various regions of touch-sensitive display system112; sensor state, including information obtained from the device's various sensors and other input or control devices116; and location and/or positional information concerning the device's pose (e.g., position and orientation). Operating system126(e.g., iOS, Android, Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components. Communication module128facilitates communication with other devices over one or more external ports124and also includes various software components for handling data received by RF circuitry108and/or external port124. External port124(e.g., Universal Serial Bus (USB), FIREWIRE, etc.) is adapted for coupling directly to other devices or indirectly over a network (e.g., the Internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with the 30-pin connector used in some iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, California. In some embodiments, the external port is a Lightning connector that is the same as, or similar to and/or compatible with the Lightning connector used in some iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, California. In some embodiments, the external port is a USB Type-C connector that is the same as, or similar to and/or compatible with the USB Type-C connector used in some electronic devices from Apple Inc. of Cupertino, California. Contact/motion module130optionally detects contact with touch-sensitive display system112(in conjunction with display controller156) and other touch-sensitive devices (e.g., a touchpad or physical click wheel). 
Contact/motion module130includes various software components for performing various operations related to detection of contact (e.g., by a finger or by a stylus), such as determining if contact has occurred (e.g., detecting a finger-down event), determining an intensity of the contact (e.g., the force or pressure of the contact or a substitute for the force or pressure of the contact), determining if there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining if the contact has ceased (e.g., detecting a finger-up event or a break in contact). Contact/motion module130receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or an acceleration (a change in magnitude and/or direction) of the point of contact. These operations are, optionally, applied to single contacts (e.g., one finger contacts or stylus contacts) or to multiple simultaneous contacts (e.g., “multitouch”/multiple finger contacts). In some embodiments, contact/motion module130and display controller156detect contact on a touchpad. Contact/motion module130optionally detects a gesture input by a user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different motions, timings, and/or intensities of detected contacts). Thus, a gesture is, optionally, detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes detecting a finger-down event followed by detecting a finger-up (lift off) event at the same position (or substantially the same position) as the finger-down event (e.g., at the position of an icon). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event followed by detecting one or more finger-dragging events, and subsequently followed by detecting a finger-up (lift off) event. Similarly, tap, swipe, drag, and other gestures are optionally detected for a stylus by detecting a particular contact pattern for the stylus. In some embodiments, detecting a finger tap gesture depends on the length of time between detecting the finger-down event and the finger-up event, but is independent of the intensity of the finger contact between detecting the finger-down event and the finger-up event. In some embodiments, a tap gesture is detected in accordance with a determination that the length of time between the finger-down event and the finger-up event is less than a predetermined value (e.g., less than 0.1, 0.2, 0.3, 0.4 or 0.5 seconds), independent of whether the intensity of the finger contact during the tap meets a given intensity threshold (greater than a nominal contact-detection intensity threshold), such as a light press or deep press intensity threshold. Thus, a finger tap gesture can satisfy particular input criteria that do not require that the characteristic intensity of a contact satisfy a given intensity threshold in order for the particular input criteria to be met. For clarity, the finger contact in a tap gesture typically needs to satisfy a nominal contact-detection intensity threshold, below which the contact is not detected, in order for the finger-down event to be detected. A similar analysis applies to detecting a tap gesture by a stylus or other contact. 
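The tap criteria described above can be restated as a short, purely illustrative predicate: a finger-down event followed by a finger-up event at substantially the same position within a predetermined time, with no requirement on peak contact intensity once the nominal contact-detection threshold is satisfied. The event representation and the specific threshold values below are assumptions.

# Hypothetical tap detector; event fields and thresholds are assumptions.
NOMINAL_DETECTION_INTENSITY = 0.05  # below this, no contact is detected at all
TAP_MAX_DURATION_S = 0.3            # e.g., one of the 0.1-0.5 second values above
TAP_MAX_MOVEMENT_PX = 10.0          # "substantially the same position"

def is_tap(down, up):
    # down/up: dicts with 't' (seconds), 'x', 'y' (pixels), and 'intensity'.
    if down["intensity"] < NOMINAL_DETECTION_INTENSITY:
        return False  # the finger-down event itself is never detected
    moved = ((up["x"] - down["x"]) ** 2 + (up["y"] - down["y"]) ** 2) ** 0.5
    quick = (up["t"] - down["t"]) < TAP_MAX_DURATION_S
    # Note: no light/deep press check; a tap is recognized without regard to
    # whether the contact's intensity exceeded such thresholds.
    return quick and moved <= TAP_MAX_MOVEMENT_PX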
In cases where the device is capable of detecting a finger or stylus contact hovering over a touch sensitive surface, the nominal contact-detection intensity threshold optionally does not correspond to physical contact between the finger or stylus and the touch sensitive surface. The same concepts apply in an analogous manner to other types of gestures. For example, a swipe gesture, a pinch gesture, a depinch gesture, and/or a long press gesture are optionally detected based on the satisfaction of criteria that are either independent of intensities of contacts included in the gesture, or do not require that contact(s) that perform the gesture reach intensity thresholds in order to be recognized. For example, a swipe gesture is detected based on an amount of movement of one or more contacts; a pinch gesture is detected based on movement of two or more contacts towards each other; a depinch gesture is detected based on movement of two or more contacts away from each other; and a long press gesture is detected based on a duration of the contact on the touch-sensitive surface with less than a threshold amount of movement. As such, the statement that particular gesture recognition criteria do not require that the intensity of the contact(s) meet a respective intensity threshold in order for the particular gesture recognition criteria to be met means that the particular gesture recognition criteria are capable of being satisfied if the contact(s) in the gesture do not reach the respective intensity threshold, and are also capable of being satisfied in circumstances where one or more of the contacts in the gesture do reach or exceed the respective intensity threshold. In some embodiments, a tap gesture is detected based on a determination that the finger-down and finger-up event are detected within a predefined time period, without regard to whether the contact is above or below the respective intensity threshold during the predefined time period, and a swipe gesture is detected based on a determination that the contact movement is greater than a predefined magnitude, even if the contact is above the respective intensity threshold at the end of the contact movement. Even in implementations where detection of a gesture is influenced by the intensity of contacts performing the gesture (e.g., the device detects a long press more quickly when the intensity of the contact is above an intensity threshold or delays detection of a tap input when the intensity of the contact is higher), the detection of those gestures does not require that the contacts reach a particular intensity threshold so long as the criteria for recognizing the gesture can be met in circumstances where the contact does not reach the particular intensity threshold (e.g., even if the amount of time that it takes to recognize the gesture changes). Contact intensity thresholds, duration thresholds, and movement thresholds are, in some circumstances, combined in a variety of different combinations in order to create heuristics for distinguishing two or more different gestures directed to the same input element or region so that multiple different interactions with the same input element are enabled to provide a richer set of user interactions and responses. 
The statement that a particular set of gesture recognition criteria do not require that the intensity of the contact(s) meet a respective intensity threshold in order for the particular gesture recognition criteria to be met does not preclude the concurrent evaluation of other intensity-dependent gesture recognition criteria to identify other gestures that do have criteria that are met when a gesture includes a contact with an intensity above the respective intensity threshold. For example, in some circumstances, first gesture recognition criteria for a first gesture—which do not require that the intensity of the contact(s) meet a respective intensity threshold in order for the first gesture recognition criteria to be met—are in competition with second gesture recognition criteria for a second gesture—which are dependent on the contact(s) reaching the respective intensity threshold. In such competitions, the gesture is, optionally, not recognized as meeting the first gesture recognition criteria for the first gesture if the second gesture recognition criteria for the second gesture are met first. For example, if a contact reaches the respective intensity threshold before the contact moves by a predefined amount of movement, a deep press gesture is detected rather than a swipe gesture. Conversely, if the contact moves by the predefined amount of movement before the contact reaches the respective intensity threshold, a swipe gesture is detected rather than a deep press gesture. Even in such circumstances, the first gesture recognition criteria for the first gesture still do not require that the intensity of the contact(s) meet a respective intensity threshold in order for the first gesture recognition criteria to be met because if the contact stayed below the respective intensity threshold until an end of the gesture (e.g., a swipe gesture with a contact that does not increase to an intensity above the respective intensity threshold), the gesture would have been recognized by the first gesture recognition criteria as a swipe gesture. As such, particular gesture recognition criteria that do not require that the intensity of the contact(s) meet a respective intensity threshold in order for the particular gesture recognition criteria to be met will (A) in some circumstances ignore the intensity of the contact with respect to the intensity threshold (e.g. for a tap gesture) and/or (B) in some circumstances still be dependent on the intensity of the contact with respect to the intensity threshold in the sense that the particular gesture recognition criteria (e.g., for a long press gesture) will fail if a competing set of intensity-dependent gesture recognition criteria (e.g., for a deep press gesture) recognize an input as corresponding to an intensity-dependent gesture before the particular gesture recognition criteria recognize a gesture corresponding to the input (e.g., for a long press gesture that is competing with a deep press gesture for recognition). Pose module131, in conjunction with accelerometers167, gyroscopes168, and/or magnetometers169, optionally detects pose information concerning the device, such as the device's pose (e.g., roll, pitch, yaw and/or position) in a particular frame of reference. Pose module131includes software components for performing various operations related to detecting the position of the device and detecting changes to the pose of the device. 
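Returning to the deep press versus swipe competition described above, the "first criteria met wins" behavior can be restated as a short loop over a contact's samples. The sample format, threshold values, and tie-breaking order here are illustrative assumptions.

# Hypothetical competition between a swipe and a deep press recognizer.
DEEP_PRESS_INTENSITY = 0.8     # assumed respective intensity threshold
SWIPE_MIN_MOVEMENT_PX = 30.0   # assumed predefined amount of movement

def recognize(samples):
    # samples: time-ordered (x, y, intensity) tuples for a single contact.
    x0, y0, _ = samples[0]
    for x, y, intensity in samples:
        if intensity >= DEEP_PRESS_INTENSITY:
            return "deep press"  # intensity threshold reached first
        moved = ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5
        if moved >= SWIPE_MIN_MOVEMENT_PX:
            return "swipe"       # movement threshold reached first
    return None  # neither competing set of criteria was satisfied
# (If both thresholds are crossed in the same sample, this sketch arbitrarily
# favors the deep press; the description does not dictate a tie-break.)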
Graphics module132includes various known software components for rendering and displaying graphics on touch-sensitive display system112or other display, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast or other visual property) of graphics that are displayed. As used herein, the term “graphics” includes any object that can be displayed to a user, including without limitation text, web pages, icons (such as user-interface objects including soft keys), digital images, videos, animations and the like. In some embodiments, graphics module132stores data representing graphics to be used. Each graphic is, optionally, assigned a corresponding code. Graphics module132receives, from applications etc., one or more codes specifying graphics to be displayed along with, if necessary, coordinate data and other graphic property data, and then generates screen image data to output to display controller156. Haptic feedback module133includes various software components for generating instructions (e.g., instructions used by haptic feedback controller161) to produce tactile outputs using tactile output generator(s)163at one or more locations on device100in response to user interactions with device100. Text input module134, which is, optionally, a component of graphics module132, provides soft keyboards for entering text in various applications (e.g., contacts137, e-mail140, IM141, browser147, and any other application that needs text input). GPS module135determines the location of the device and provides this information for use in various applications (e.g., to telephone138for use in location-based dialing, to camera143as picture/video metadata, and to applications that provide location-based services such as weather widgets, local yellow page widgets, and map/navigation widgets). Virtual/augmented reality module145provides virtual and/or augmented reality logic to applications136that implement augmented reality, and in some embodiments virtual reality, features. Virtual/augmented reality module145facilitates superposition of virtual content, such as a virtual user interface object, on a representation of at least a portion of a field of view of the one or more cameras. For example, with assistance from the virtual/augmented reality module145, the representation of at least a portion of a field of view of the one or more cameras may include a respective physical object and the virtual user interface object may be displayed at a location, in a displayed augmented reality environment, that is determined based on the respective physical object in the field of view of the one or more cameras or a virtual reality environment that is determined based on the pose of at least a portion of a computer system (e.g., a pose of a display device that is used to display the user interface to a user of the computer system). 
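The code-based dispatch described above for graphics module 132 (each graphic assigned a corresponding code, applications supplying codes together with coordinate and other graphic property data, and the module generating screen image data) can be illustrated with a minimal registry sketch; all names here are assumptions rather than the module's actual structure.

# Hypothetical code-to-graphic registry; not the structure of graphics module 132.
GRAPHICS = {}  # graphic code -> render function

def register(code, render_fn):
    GRAPHICS[code] = render_fn

def render_screen(requests):
    # requests: (code, properties) pairs received from applications, where
    # properties carry coordinate data and other graphic property data.
    draw_ops = []
    for code, props in requests:
        draw_ops.append(GRAPHICS[code](**props))
    return draw_ops  # screen image data handed to the display controller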
Applications 136 optionally include the following modules (or sets of instructions), or a subset or superset thereof:
contacts module 137 (sometimes called an address book or contact list);
telephone module 138;
video conferencing module 139;
e-mail client module 140;
instant messaging (IM) module 141;
workout support module 142;
camera module 143 for still and/or video images;
image management module 144;
browser module 147;
calendar module 148;
widget modules 149, which optionally include one or more of: weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, dictionary widget 149-5, and other widgets obtained by the user, as well as user-created widgets 149-6;
widget creator module 150 for making user-created widgets 149-6;
search module 151;
video and music player module 152, which is, optionally, made up of a video player module and a music player module;
notes module 153;
map module 154; and/or
online video module 155.

Examples of other applications 136 that are, optionally, stored in memory 102 include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.

In conjunction with touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, contacts module 137 includes executable instructions to manage an address book or contact list (e.g., stored in application internal state 192 of contacts module 137 in memory 102 or memory 370), including: adding name(s) to the address book; deleting name(s) from the address book; associating telephone number(s), e-mail address(es), physical address(es) or other information with a name; associating an image with a name; categorizing and sorting names; providing telephone numbers and/or e-mail addresses to initiate and/or facilitate communications by telephone 138, video conference 139, e-mail 140, or IM 141; and so forth.

In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, telephone module 138 includes executable instructions to enter a sequence of characters corresponding to a telephone number, access one or more telephone numbers in address book 137, modify a telephone number that has been entered, dial a respective telephone number, conduct a conversation and disconnect or hang up when the conversation is completed. As noted above, the wireless communication optionally uses any of a plurality of communications standards, protocols and technologies.

In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch-sensitive display system 112, display controller 156, optical sensor(s) 164, optical sensor controller 158, contact module 130, graphics module 132, text input module 134, contact list 137, and telephone module 138, videoconferencing module 139 includes executable instructions to initiate, conduct, and terminate a video conference between a user and one or more other participants in accordance with user instructions.

In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, e-mail client module 140 includes executable instructions to create, send, receive, and manage e-mail in response to user instructions.
In conjunction with image management module144, e-mail client module140makes it very easy to create and send e-mails with still or video images taken with camera module143. In conjunction with RF circuitry108, touch-sensitive display system112, display controller156, contact module130, graphics module132, and text input module134, the instant messaging module141includes executable instructions to enter a sequence of characters corresponding to an instant message, to modify previously entered characters, to transmit a respective instant message (for example, using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for telephony-based instant messages or using XMPP, SIMPLE, Apple Push Notification Service (APNs) or IMPS for Internet-based instant messages), to receive instant messages, and to view received instant messages. In some embodiments, transmitted and/or received instant messages optionally include graphics, photos, audio files, video files and/or other attachments as are supported in a MMS and/or an Enhanced Messaging Service (EMS). As used herein, “instant messaging” refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and Internet-based messages (e.g., messages sent using XMPP, SIMPLE, APNs, or IMPS). In conjunction with RF circuitry108, touch-sensitive display system112, display controller156, contact module130, graphics module132, text input module134, GPS module135, map module154, and video and music player module152, workout support module142includes executable instructions to create workouts (e.g., with time, distance, and/or calorie burning goals); communicate with workout sensors (in sports devices and smart watches); receive workout sensor data; calibrate sensors used to monitor a workout; select and play music for a workout; and display, store and transmit workout data. In conjunction with touch-sensitive display system112, display controller156, optical sensor(s)164, optical sensor controller158, contact module130, graphics module132, and image management module144, camera module143includes executable instructions to capture still images or video (including a video stream) and store them into memory102, modify characteristics of a still image or video, and/or delete a still image or video from memory102. In conjunction with touch-sensitive display system112, display controller156, contact module130, graphics module132, text input module134, and camera module143, image management module144includes executable instructions to arrange, modify (e.g., edit), or otherwise manipulate, label, delete, present (e.g., in a digital slide show or album), and store still and/or video images. In conjunction with RF circuitry108, touch-sensitive display system112, display system controller156, contact module130, graphics module132, and text input module134, browser module147includes executable instructions to browse the Internet in accordance with user instructions, including searching, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages. In conjunction with RF circuitry108, touch-sensitive display system112, display system controller156, contact module130, graphics module132, text input module134, e-mail client module140, and browser module147, calendar module148includes executable instructions to create, display, modify, and store calendars and data associated with calendars (e.g., calendar entries, to do lists, etc.) in accordance with user instructions. 
In conjunction with RF circuitry108, touch-sensitive display system112, display system controller156, contact module130, graphics module132, text input module134, and browser module147, widget modules149are mini-applications that are, optionally, downloaded and used by a user (e.g., weather widget149-1, stocks widget149-2, calculator widget149-3, alarm clock widget149-4, and dictionary widget149-5) or created by the user (e.g., user-created widget149-6). In some embodiments, a widget includes an HTML (Hypertext Markup Language) file, a CSS (Cascading Style Sheets) file, and a JavaScript file. In some embodiments, a widget includes an XML (Extensible Markup Language) file and a JavaScript file (e.g., Yahoo! Widgets). In conjunction with RF circuitry108, touch-sensitive display system112, display system controller156, contact module130, graphics module132, text input module134, and browser module147, the widget creator module150includes executable instructions to create widgets (e.g., turning a user-specified portion of a web page into a widget). In conjunction with touch-sensitive display system112, display system controller156, contact module130, graphics module132, and text input module134, search module151includes executable instructions to search for text, music, sound, image, video, and/or other files in memory102that match one or more search criteria (e.g., one or more user-specified search terms) in accordance with user instructions. In conjunction with touch-sensitive display system112, display system controller156, contact module130, graphics module132, audio circuitry110, speaker111, RF circuitry108, and browser module147, video and music player module152includes executable instructions that allow the user to download and play back recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, and executable instructions to display, present or otherwise play back videos (e.g., on touch-sensitive display system112, or on an external display connected wirelessly or via external port124). In some embodiments, device100optionally includes the functionality of an MP3 player, such as an iPod (trademark of Apple Inc.). In conjunction with touch-sensitive display system112, display controller156, contact module130, graphics module132, and text input module134, notes module153includes executable instructions to create and manage notes, to do lists, and the like in accordance with user instructions. In conjunction with RF circuitry108, touch-sensitive display system112, display system controller156, contact module130, graphics module132, text input module134, GPS module135, and browser module147, map module154includes executable instructions to receive, display, modify, and store maps and data associated with maps (e.g., driving directions; data on stores and other points of interest at or near a particular location; and other location-based data) in accordance with user instructions. 
In conjunction with touch-sensitive display system112, display system controller156, contact module130, graphics module132, audio circuitry110, speaker111, RF circuitry108, text input module134, e-mail client module140, and browser module147, online video module155includes executable instructions that allow the user to access, browse, receive (e.g., by streaming and/or download), play back (e.g., on the touch screen112, or on an external display connected wirelessly or via external port124), send an e-mail with a link to a particular online video, and otherwise manage online videos in one or more file formats, such as H.264. In some embodiments, instant messaging module141, rather than e-mail client module140, is used to send a link to a particular online video. Each of the above identified modules and applications corresponds to a set of executable instructions for performing one or more functions described above and the methods described in this application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules are, optionally, combined or otherwise re-arranged in various embodiments. In some embodiments, memory102optionally stores a subset of the modules and data structures identified above. Furthermore, memory102optionally stores additional modules and data structures not described above. In some embodiments, device100is a device where operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or a touchpad. By using a touch screen and/or a touchpad as the primary input control device for operation of device100, the number of physical input control devices (such as push buttons, dials, and the like) on device100is, optionally, reduced. The predefined set of functions that are performed exclusively through a touch screen and/or a touchpad optionally includes navigation between user interfaces. In some embodiments, the touchpad, when touched by the user, navigates device100to a main, home, or root menu from any user interface that is displayed on device100. In such embodiments, a “menu button” is implemented using a touch-sensitive surface. In some other embodiments, the menu button is a physical push button or other physical input control device instead of a touch-sensitive surface. FIG.1Bis a block diagram illustrating example components for event handling in accordance with some embodiments. In some embodiments, memory102(inFIG.1A) or370(FIG.3A) includes event sorter170(e.g., in operating system126) and a respective application136-1(e.g., any of the aforementioned applications136,137-155,380-390). Event sorter170receives event information and determines the application136-1and application view191of application136-1to which to deliver the event information. Event sorter170includes event monitor171and event dispatcher module174. In some embodiments, application136-1includes application internal state192, which indicates the current application view(s) displayed on touch-sensitive display system112when the application is active or executing. In some embodiments, device/global internal state157is used by event sorter170to determine which application(s) is (are) currently active, and application internal state192is used by event sorter170to determine application views191to which to deliver event information.
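The event-sorting flow just described can be summarized in code. The following is a minimal Swift sketch, not code from this disclosure: all type and member names (EventSorter, SubEvent, ApplicationView, and so on) are hypothetical, and the routing is reduced to its essentials, consulting a stand-in for device/global internal state to find the active application and a stand-in for application internal state to find a target view.

import Foundation

// All names here are hypothetical; this sketches the routing described
// above, it is not code from the disclosed system.
struct Point { var x: Double; var y: Double }

struct SubEvent {
    var location: Point          // e.g., a user touch on the touch-sensitive display
    var timestamp: TimeInterval
}

protocol ApplicationView: AnyObject {
    func contains(_ point: Point) -> Bool
    func handle(_ subEvent: SubEvent)
}

final class Application {
    // Stands in for application internal state: the currently displayed view(s).
    var visibleViews: [ApplicationView] = []
}

final class EventSorter {
    // Stands in for device/global internal state: the currently active application.
    var activeApplication: Application?

    // Receives event information and determines the application view to
    // which the event information should be delivered.
    func deliver(_ subEvent: SubEvent) {
        guard let application = activeApplication else { return }
        for view in application.visibleViews where view.contains(subEvent.location) {
            view.handle(subEvent)
            return
        }
    }
}

The point of the sketch is the division of labor: the sorter pushes event information to the appropriate view, rather than each view polling for input.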
In some embodiments, application internal state192includes additional information, such as one or more of: resume information to be used when application136-1resumes execution, user interface state information that indicates information being displayed or that is ready for display by application136-1, a state queue for enabling the user to go back to a prior state or view of application136-1, and a redo/undo queue of previous actions taken by the user. Event monitor171receives event information from peripherals interface118. Event information includes information about a sub-event (e.g., a user touch on touch-sensitive display system112, as part of a multi-touch gesture). Peripherals interface118transmits information it receives from I/O subsystem106or a sensor, such as proximity sensor166, accelerometer(s)167, and/or microphone113(through audio circuitry110). Information that peripherals interface118receives from I/O subsystem106includes information from touch-sensitive display system112or a touch-sensitive surface. In some embodiments, event monitor171sends requests to the peripherals interface118at predetermined intervals. In response, peripherals interface118transmits event information. In other embodiments, peripheral interface118transmits event information only when there is a significant event (e.g., receiving an input above a predetermined noise threshold and/or for more than a predetermined duration). In some embodiments, event sorter170also includes a hit view determination module172and/or an active event recognizer determination module173. Hit view determination module172provides software procedures for determining where a sub-event has taken place within one or more views, when touch-sensitive display system112displays more than one view. Views are made up of controls and other elements that a user can see on the display. Another aspect of the user interface associated with an application is a set of views, sometimes herein called application views or user interface windows, in which information is displayed and touch-based gestures occur. The application views (of a respective application) in which a touch is detected optionally correspond to programmatic levels within a programmatic or view hierarchy of the application. For example, the lowest level view in which a touch is detected is, optionally, called the hit view, and the set of events that are recognized as proper inputs are, optionally, determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture. Hit view determination module172receives information related to sub-events of a touch-based gesture. When an application has multiple views organized in a hierarchy, hit view determination module172identifies a hit view as the lowest view in the hierarchy which should handle the sub-event. In most circumstances, the hit view is the lowest level view in which an initiating sub-event occurs (i.e., the first sub-event in the sequence of sub-events that form an event or potential event). Once the hit view is identified by the hit view determination module, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view. Active event recognizer determination module173determines which view or views within a view hierarchy should receive a particular sequence of sub-events. 
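The hit-view rule described above (the hit view is the lowest view in the hierarchy in which the initiating sub-event occurs) can be illustrated as a depth-first hit test. In this hypothetical Swift sketch, the View type and its members are illustrative only, and all frames are assumed to be expressed in a single shared coordinate space.

// Hypothetical View type; not the disclosed implementation.
final class View {
    var frame: (x: Double, y: Double, width: Double, height: Double)
    var subviews: [View] = []

    init(frame: (x: Double, y: Double, width: Double, height: Double)) {
        self.frame = frame
    }

    func contains(x: Double, y: Double) -> Bool {
        x >= frame.x && x < frame.x + frame.width &&
        y >= frame.y && y < frame.y + frame.height
    }

    // Depth-first descent: the hit view is the lowest (deepest) view that
    // contains the location of the initiating sub-event; if no subview
    // contains the point, this view itself is the hit view.
    func hitView(x: Double, y: Double) -> View? {
        guard contains(x: x, y: y) else { return nil }
        for subview in subviews {
            if let hit = subview.hitView(x: x, y: y) { return hit }
        }
        return self
    }
}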
In some embodiments, active event recognizer determination module173determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module173determines that all views that include the physical location of a sub-event are actively involved views, and therefore determines that all actively involved views should receive a particular sequence of sub-events. In other embodiments, even if touch sub-events were entirely confined to the area associated with one particular view, views higher in the hierarchy would still remain as actively involved views. Event dispatcher module174dispatches the event information to an event recognizer (e.g., event recognizer180). In embodiments including active event recognizer determination module173, event dispatcher module174delivers the event information to an event recognizer determined by active event recognizer determination module173. In some embodiments, event dispatcher module174stores in an event queue the event information, which is retrieved by a respective event receiver module182. In some embodiments, operating system126includes event sorter170. Alternatively, application136-1includes event sorter170. In yet other embodiments, event sorter170is a stand-alone module, or a part of another module stored in memory102, such as contact/motion module130. In some embodiments, application136-1includes a plurality of event handlers190and one or more application views191, each of which includes instructions for handling touch events that occur within a respective view of the application's user interface. Each application view191of the application136-1includes one or more event recognizers180. Typically, a respective application view191includes a plurality of event recognizers180. In other embodiments, one or more of event recognizers180are part of a separate module, such as a user interface kit (not shown) or a higher level object from which application136-1inherits methods and other properties. In some embodiments, a respective event handler190includes one or more of: data updater176, object updater177, GUI updater178, and/or event data179received from event sorter170. Event handler190optionally utilizes or calls data updater176, object updater177or GUI updater178to update the application internal state192. Alternatively, one or more of the application views191includes one or more respective event handlers190. Also, in some embodiments, one or more of data updater176, object updater177, and GUI updater178are included in a respective application view191. A respective event recognizer180receives event information (e.g., event data179) from event sorter170, and identifies an event from the event information. Event recognizer180includes event receiver182and event comparator184. In some embodiments, event recognizer180also includes at least a subset of: metadata183, and event delivery instructions188(which optionally include sub-event delivery instructions). Event receiver182receives event information from event sorter170. The event information includes information about a sub-event, for example, a touch or a touch movement. Depending on the sub-event, the event information also includes additional information, such as location of the sub-event. When the sub-event concerns motion of a touch, the event information optionally also includes speed and direction of the sub-event. 
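The structural relationship described above, in which an application view owns event recognizers and event handlers, and a handler in turn drives data, object, and GUI updaters, might be sketched as follows. The names are hypothetical, and the sketch reuses the SubEvent type from the event-sorter sketch above.

// Hypothetical names throughout; SubEvent comes from the sketch above.
struct EventData { var name: String }

protocol EventRecognizer: AnyObject {
    // Returns event data when the accumulated sub-events match a definition.
    func process(_ subEvent: SubEvent) -> EventData?
}

final class EventHandler {
    let dataUpdater: (EventData) -> Void    // update application data
    let objectUpdater: (EventData) -> Void  // update user-interface objects
    let guiUpdater: (EventData) -> Void     // prepare and send display information

    init(dataUpdater: @escaping (EventData) -> Void,
         objectUpdater: @escaping (EventData) -> Void,
         guiUpdater: @escaping (EventData) -> Void) {
        self.dataUpdater = dataUpdater
        self.objectUpdater = objectUpdater
        self.guiUpdater = guiUpdater
    }

    func handle(_ event: EventData) {
        dataUpdater(event); objectUpdater(event); guiUpdater(event)
    }
}

final class ApplicationViewNode {
    var recognizers: [EventRecognizer] = []
    var handlers: [EventHandler] = []

    // Delivers a sub-event to each recognizer; when one recognizes an event,
    // the view's handlers are activated with the resulting event data.
    func dispatch(_ subEvent: SubEvent) {
        for recognizer in recognizers {
            if let event = recognizer.process(subEvent) {
                handlers.forEach { $0.handle(event) }
            }
        }
    }
}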
In some embodiments, events include rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about the current pose (e.g., position and orientation) of the device. Event comparator184compares the event information to predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event. In some embodiments, event comparator184includes event definitions186. Event definitions186contain definitions of events (e.g., predefined sequences of sub-events), for example, event1(187-1), event2(187-2), and others. In some embodiments, sub-events in an event187include, for example, touch begin, touch end, touch movement, touch cancellation, and multiple touching. In one example, the definition for event1(187-1) is a double tap on a displayed object. The double tap, for example, comprises a first touch (touch begin) on the displayed object for a predetermined phase, a first lift-off (touch end) for a predetermined phase, a second touch (touch begin) on the displayed object for a predetermined phase, and a second lift-off (touch end) for a predetermined phase. In another example, the definition for event2(187-2) is a dragging on a displayed object. The dragging, for example, comprises a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across touch-sensitive display system112, and lift-off of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers190. In some embodiments, event definition187includes a definition of an event for a respective user-interface object. In some embodiments, event comparator184performs a hit test to determine which user-interface object is associated with a sub-event. For example, in an application view in which three user-interface objects are displayed on touch-sensitive display system112, when a touch is detected on touch-sensitive display system112, event comparator184performs a hit test to determine which of the three user-interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler190, the event comparator uses the result of the hit test to determine which event handler190should be activated. For example, event comparator184selects an event handler associated with the sub-event and the object triggering the hit test. In some embodiments, the definition for a respective event187also includes delayed actions that delay delivery of the event information until after it has been determined whether the sequence of sub-events does or does not correspond to the event recognizer's event type. When a respective event recognizer180determines that the series of sub-events do not match any of the events in event definitions186, the respective event recognizer180enters an event impossible, event failed, or event ended state, after which it disregards subsequent sub-events of the touch-based gesture. In this situation, other event recognizers, if any, that remain active for the hit view continue to track and process sub-events of an ongoing touch-based gesture. 
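An event definition as a predefined sequence of sub-events, as in the double-tap example above, can be made concrete. The following hypothetical Swift sketch encodes a definition as an ordered list of sub-event kinds with a per-step time limit standing in for the "predetermined phase"; the 0.3-second value is illustrative and is not specified in the text.

import Foundation

enum SubEventKind { case touchBegin, touchEnd, touchMove, touchCancel }

struct TimedSubEvent {
    var kind: SubEventKind
    var timestamp: TimeInterval
}

struct EventDefinition {
    var name: String
    var sequence: [SubEventKind]
    var maxPhaseDuration: TimeInterval   // stand-in for the "predetermined phase"

    // True when the observed sub-events match the definition in order and
    // each step falls within the allowed phase duration.
    func matches(_ observed: [TimedSubEvent]) -> Bool {
        guard observed.count == sequence.count else { return false }
        for (index, subEvent) in observed.enumerated() {
            guard subEvent.kind == sequence[index] else { return false }
            if index > 0,
               subEvent.timestamp - observed[index - 1].timestamp > maxPhaseDuration {
                return false
            }
        }
        return true
    }
}

// "Event 1" from the example above: two touch begin/end pairs in quick succession.
let doubleTap = EventDefinition(
    name: "double tap",
    sequence: [.touchBegin, .touchEnd, .touchBegin, .touchEnd],
    maxPhaseDuration: 0.3   // illustrative; the text does not give a value
)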
In some embodiments, a respective event recognizer180includes metadata183with configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively involved event recognizers. In some embodiments, metadata183includes configurable properties, flags, and/or lists that indicate how event recognizers interact, or are enabled to interact, with one another. In some embodiments, metadata183includes configurable properties, flags, and/or lists that indicate whether sub-events are delivered to varying levels in the view or programmatic hierarchy. In some embodiments, a respective event recognizer180activates event handler190associated with an event when one or more particular sub-events of an event are recognized. In some embodiments, a respective event recognizer180delivers event information associated with the event to event handler190. Activating an event handler190is distinct from sending (and deferred sending) sub-events to a respective hit view. In some embodiments, event recognizer180throws a flag associated with the recognized event, and event handler190associated with the flag catches the flag and performs a predefined process. In some embodiments, event delivery instructions188include sub-event delivery instructions that deliver event information about a sub-event without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the series of sub-events or to actively involved views. Event handlers associated with the series of sub-events or with actively involved views receive the event information and perform a predetermined process. In some embodiments, data updater176creates and updates data used in application136-1. For example, data updater176updates the telephone number used in contacts module137, or stores a video file used in video and music player module152. In some embodiments, object updater177creates and updates objects used in application136-1. For example, object updater177creates a new user-interface object or updates the position of a user-interface object. GUI updater178updates the GUI. For example, GUI updater178prepares display information and sends it to graphics module132for display on a touch-sensitive display. In some embodiments, event handler(s)190includes or has access to data updater176, object updater177, and GUI updater178. In some embodiments, data updater176, object updater177, and GUI updater178are included in a single module of a respective application136-1or application view191. In other embodiments, they are included in two or more software modules. It shall be understood that the foregoing discussion regarding event handling of user touches on touch-sensitive displays also applies to other forms of user inputs to operate multifunction devices100with input-devices, not all of which are initiated on touch screens. For example, mouse movement and mouse button presses, optionally coordinated with single or multiple keyboard presses or holds; contact movements such as taps, drags, scrolls, etc., on touch-pads; pen stylus inputs; inputs based on real-time analysis of video images obtained by one or more cameras; movement of the device; oral instructions; detected eye movements; biometric inputs; and/or any combination thereof are optionally utilized as inputs corresponding to sub-events which define an event to be recognized. 
FIG.2illustrates a portable multifunction device100having a touch screen (e.g., touch-sensitive display system112,FIG.1A) in accordance with some embodiments. The touch screen optionally displays one or more graphics within user interface (UI)200. In these embodiments, as well as others described below, a user is enabled to select one or more of the graphics by making a gesture on the graphics, for example, with one or more fingers202(not drawn to scale in the figure) or one or more styluses203(not drawn to scale in the figure). In some embodiments, selection of one or more graphics occurs when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (from left to right, right to left, upward and/or downward) and/or a rolling of a finger (from right to left, left to right, upward and/or downward) that has made contact with device100. In some implementations or circumstances, inadvertent contact with a graphic does not select the graphic. For example, a swipe gesture that sweeps over an application icon optionally does not select the corresponding application when the gesture corresponding to selection is a tap. Device100optionally also includes one or more physical buttons, such as “home” or menu button204. As described previously, menu button204is, optionally, used to navigate to any application136in a set of applications that are, optionally, executed on device100. Alternatively, in some embodiments, the menu button is implemented as a soft key in a GUI displayed on the touch-screen display. In some embodiments, device100includes the touch-screen display, menu button204(sometimes called home button204), push button206for powering the device on/off and locking the device, volume adjustment button(s)208, Subscriber Identity Module (SIM) card slot210, headset jack212, and docking/charging external port124. Push button206is, optionally, used to turn the power on/off on the device by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlock process. In some embodiments, device100also accepts verbal input for activation or deactivation of some functions through microphone113. Device100also, optionally, includes one or more contact intensity sensors165for detecting intensities of contacts on touch-sensitive display system112and/or one or more tactile output generators163for generating tactile outputs for a user of device100. FIG.3Ais a block diagram of an example multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. Device300need not be portable. In some embodiments, device300is a gaming system, a laptop computer, a desktop computer, a tablet computer, a multimedia player device, a navigation device, an educational device (such as a child's learning toy), or a control device (e.g., a home or industrial controller). Device300typically includes one or more processing units (CPUs)310, one or more network or other communications interfaces360, memory370, and one or more communication buses320for interconnecting these components. Communication buses320optionally include circuitry (sometimes called a chipset) that interconnects and controls communications between system components.
Device300includes input/output (I/O) interface330comprising display340, which is optionally a touch-screen display. I/O interface330also optionally includes a keyboard and/or mouse (or other pointing device)350and touchpad355, tactile output generator357for generating tactile outputs on device300(e.g., similar to tactile output generator(s)163described above with reference toFIG.1A), and sensors359(e.g., optical, acceleration, proximity, touch-sensitive, and/or contact intensity sensors similar to contact intensity sensor(s)165described above with reference toFIG.1A). Memory370includes high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory370optionally includes one or more storage devices remotely located from CPU(s)310. In some embodiments, memory370stores programs, modules, and data structures analogous to the programs, modules, and data structures stored in memory102of portable multifunction device100(FIG.1A), or a subset thereof. Furthermore, memory370optionally stores additional programs, modules, and data structures not present in memory102of portable multifunction device100. For example, memory370of device300optionally stores drawing module380, presentation module382, word processing module384, website creation module386, disk authoring module388, and/or spreadsheet module390, while memory102of portable multifunction device100(FIG.1A) optionally does not store these modules. Each of the above identified elements inFIG.3Ais, optionally, stored in one or more of the previously mentioned memory devices. Each of the above identified modules corresponds to a set of instructions for performing a function described above. The above identified modules or programs (e.g., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules are, optionally, combined or otherwise re-arranged in various embodiments. In some embodiments, memory370optionally stores a subset of the modules and data structures identified above. Furthermore, memory370optionally stores additional modules and data structures not described above. FIGS.3B-3Care block diagrams of example computer systems301in accordance with some embodiments.
In some embodiments, computer system301includes and/or is in communication with:
input device(s) (302and/or307, e.g., a touch-sensitive surface, such as a touch-sensitive remote control, or a touch-screen display that also serves as the display generation component, a mouse, a joystick, a wand controller, and/or cameras tracking the position of one or more features of the user such as the user's hands);
virtual/augmented reality logic303(e.g., virtual/augmented reality module145);
display generation component(s) (304and/or308, e.g., a display, a projector, a heads-up display, or the like) for displaying virtual user interface elements to the user;
camera(s) (e.g.,305and/or311) for capturing images of a field of view of the device, e.g., images that are used to determine placement of virtual user interface elements, determine a pose of the device, and/or display a portion of the physical environment in which the camera(s) are located; and
pose sensor(s) (e.g.,306and/or311) for determining a pose of the device relative to the physical environment and/or changes in pose of the device.
In some computer systems (e.g.,301-ainFIG.3B), input device(s)302, virtual/augmented reality logic303, display generation component(s)304, camera(s)305, and pose sensor(s)306are all integrated into the computer system (e.g., portable multifunction device100inFIGS.1A-1Bor device300inFIG.3such as a smartphone or tablet). In some computer systems (e.g.,301-b), in addition to integrated input device(s)302, virtual/augmented reality logic303, display generation component(s)304, camera(s)305, and pose sensor(s)306, the computer system is also in communication with additional devices that are separate from the computer system, such as separate input device(s)307(e.g., a touch-sensitive surface, a wand, a remote control, or the like) and/or separate display generation component(s)308(e.g., a virtual reality headset or augmented reality glasses that overlay virtual objects on a physical environment). In some computer systems (e.g.,301-cinFIG.3C), the input device(s)307, display generation component(s)309, camera(s)311, and/or pose sensor(s)312are separate from the computer system and are in communication with the computer system. In some embodiments, other combinations of components in computer system301and in communication with the computer system are used. For example, in some embodiments, display generation component(s)309, camera(s)311, and pose sensor(s)312are incorporated in a headset that is either integrated with or in communication with the computer system. In some embodiments, all of the operations described below with reference to FIGS.5A1-5A48are performed on a single computing device with virtual/augmented reality logic303(e.g., computer system301-adescribed below with reference toFIG.3B). However, it should be understood that frequently multiple different computing devices are linked together to perform the operations described below with reference to FIGS.5A1-5A48(e.g., a computing device with virtual/augmented reality logic303communicates with a separate computing device with a display450and/or a separate computing device with a touch-sensitive surface451). In any of these embodiments, the computing device that is described below with reference to FIGS.5A1-5A48is the computing device (or devices) that contain(s) the virtual/augmented reality logic303.
Additionally, it should be understood that the virtual/augmented reality logic303could be divided between a plurality of distinct modules or computing devices in various embodiments; however, for the purposes of the description herein, the virtual/augmented reality logic303will be primarily referred to as residing in a single computing device so as not to unnecessarily obscure other aspects of the embodiments. In some embodiments, the virtual/augmented reality logic303includes one or more modules (e.g., one or more event handlers190, including one or more object updaters177and one or more GUI updaters178as described in greater detail above with reference toFIG.1B) that receive interpreted inputs and, in response to these interpreted inputs, generate instructions for updating a graphical user interface in accordance with the interpreted inputs which are subsequently used to update the graphical user interface on a display. In some embodiments, an interpreted input for an input that has been detected (e.g., by a contact motion module130inFIGS.1A and3), recognized (e.g., by an event recognizer180inFIG.1B) and/or distributed (e.g., by event sorter170inFIG.1B) is used to update the graphical user interface on a display. In some embodiments, the interpreted inputs are generated by modules at the computing device (e.g., the computing device receives raw contact input data so as to identify gestures from the raw contact input data). In some embodiments, some or all of the interpreted inputs are received by the computing device as interpreted inputs (e.g., a computing device that includes the touch-sensitive surface451processes raw contact input data so as to identify gestures from the raw contact input data and sends information indicative of the gestures to the computing device that includes the virtual/augmented reality logic303). In some embodiments, both a display and a touch-sensitive surface are integrated with the computer system (e.g.,301-ainFIG.3B) that contains the virtual/augmented reality logic303. For example, the computer system may be a desktop computer or laptop computer with an integrated display (e.g.,340inFIG.3) and touchpad (e.g.,355inFIG.3). As another example, the computing device may be a portable multifunction device100(e.g., a smartphone, PDA, tablet computer, etc.) with a touch screen (e.g.,112inFIG.2). In some embodiments, a touch-sensitive surface is integrated with the computer system while a display is not integrated with the computer system that contains the virtual/augmented reality logic303. For example, the computer system may be a device300(e.g., a desktop computer or laptop computer) with an integrated touchpad (e.g.,355inFIG.3) connected (via wired or wireless connection) to a separate display (e.g., a computer monitor, television, etc.). As another example, the computer system may be a portable multifunction device100(e.g., a smartphone, PDA, tablet computer, etc.) with a touch screen (e.g.,112inFIG.2) connected (via wired or wireless connection) to a separate display (e.g., a computer monitor, television, etc.). In some embodiments, a display is integrated with the computer system while a touch-sensitive surface is not integrated with the computer system that contains the virtual/augmented reality logic303. 
For example, the computer system may be a device300(e.g., a desktop computer, laptop computer, television with integrated set-top box) with an integrated display (e.g.,340inFIG.3) connected (via wired or wireless connection) to a separate touch-sensitive surface (e.g., a remote touchpad, a portable multifunction device, etc.). As another example, the computer system may be a portable multifunction device100(e.g., a smartphone, PDA, tablet computer, etc.) with a touch screen (e.g.,112inFIG.2) connected (via wired or wireless connection) to a separate touch-sensitive surface (e.g., a remote touchpad, another portable multifunction device with a touch screen serving as a remote touchpad, etc.). In some embodiments, neither a display nor a touch-sensitive surface is integrated with the computer system (e.g.,301-cinFIG.3C) that contains the virtual/augmented reality logic303. For example, the computer system may be a stand-alone computing device300(e.g., a set-top box, gaming console, etc.) connected (via wired or wireless connection) to a separate touch-sensitive surface (e.g., a remote touchpad, a portable multifunction device, etc.) and a separate display (e.g., a computer monitor, television, etc.). In some embodiments, the computer system has an integrated audio system (e.g., audio circuitry110and speaker111in portable multifunction device100). In some embodiments, the computing device is in communication with an audio system that is separate from the computing device. In some embodiments, the audio system (e.g., an audio system integrated in a television unit) is integrated with a separate display. In some embodiments, the audio system (e.g., a stereo system) is a stand-alone system that is separate from the computer system and the display. Attention is now directed towards embodiments of user interfaces (“UI”) that are, optionally, implemented on portable multifunction device100. FIG.4Aillustrates an example user interface for a menu of applications on portable multifunction device100in accordance with some embodiments. Similar user interfaces are, optionally, implemented on device300. 
In some embodiments, user interface400includes the following elements, or a subset or superset thereof:
Signal strength indicator(s) for wireless communication(s), such as cellular and Wi-Fi signals;
Time;
a Bluetooth indicator;
a Battery status indicator;
Tray408with icons for frequently used applications, such as:
    Icon416for telephone module138, labeled “Phone,” which optionally includes an indicator414of the number of missed calls or voicemail messages;
    Icon418for e-mail client module140, labeled “Mail,” which optionally includes an indicator410of the number of unread e-mails;
    Icon420for browser module147, labeled “Browser”; and
    Icon422for video and music player module152, labeled “Music”; and
Icons for other applications, such as:
    Icon424for IM module141, labeled “Messages”;
    Icon426for calendar module148, labeled “Calendar”;
    Icon428for image management module144, labeled “Photos”;
    Icon430for camera module143, labeled “Camera”;
    Icon432for online video module155, labeled “Online Video”;
    Icon434for stocks widget149-2, labeled “Stocks”;
    Icon436for map module154, labeled “Maps”;
    Icon438for weather widget149-1, labeled “Weather”;
    Icon440for alarm clock widget149-4, labeled “Clock”;
    Icon442for workout support module142, labeled “Workout Support”;
    Icon444for notes module153, labeled “Notes”; and
    Icon446for a settings application or module, labeled “Settings,” which provides access to settings for device100and its various applications136.
It should be noted that the icon labels illustrated inFIG.4Aare merely examples. For example, other labels are, optionally, used for various application icons. In some embodiments, a label for a respective application icon includes a name of an application corresponding to the respective application icon. In some embodiments, a label for a particular application icon is distinct from a name of an application corresponding to the particular application icon. FIG.4Billustrates an example user interface on a device (e.g., device300,FIG.3A) with a touch-sensitive surface451(e.g., a tablet or touchpad355,FIG.3A) that is separate from the display450. Although many of the examples that follow will be given with reference to inputs on touch screen display112(where the touch-sensitive surface and the display are combined), in some embodiments, the device detects inputs on a touch-sensitive surface that is separate from the display, as shown inFIG.4B. In some embodiments, the touch-sensitive surface (e.g.,451inFIG.4B) has a primary axis (e.g.,452inFIG.4B) that corresponds to a primary axis (e.g.,453inFIG.4B) on the display (e.g.,450). In accordance with these embodiments, the device detects contacts (e.g.,460and462inFIG.4B) with the touch-sensitive surface451at locations that correspond to respective locations on the display (e.g., inFIG.4B,460corresponds to468and462corresponds to470). In this way, user inputs (e.g., contacts460and462, and movements thereof) detected by the device on the touch-sensitive surface (e.g.,451inFIG.4B) are used by the device to manipulate the user interface on the display (e.g.,450inFIG.4B) of the multifunction device when the touch-sensitive surface is separate from the display. It should be understood that similar methods are, optionally, used for other user interfaces described herein.
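The correspondence between locations on a separate touch-sensitive surface and locations on the display, described above with reference toFIG.4B, amounts to a change of coordinates. A minimal sketch, assuming both the surface and the display are axis-aligned rectangles with parallel primary axes; the function and type names are hypothetical:

struct SurfaceSize { var width: Double; var height: Double }

// Maps a contact location on the touch-sensitive surface (e.g., 451) to the
// corresponding location on the display (e.g., 450): normalize on the
// surface, then scale to the display, so that surface locations correspond
// to respective display locations.
func displayLocation(of contact: (x: Double, y: Double),
                     surface: SurfaceSize,
                     display: SurfaceSize) -> (x: Double, y: Double) {
    (x: contact.x / surface.width * display.width,
     y: contact.y / surface.height * display.height)
}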
Additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contacts, finger tap gestures, finger swipe gestures, etc.), it should be understood that, in some embodiments, one or more of the finger inputs are replaced with input from another input device (e.g., a mouse based input or a stylus input or input through movement of the user, such as a user's head, hands, or arms, optionally as tracked using one or more cameras). For example, a swipe gesture is, optionally, replaced with a mouse click (e.g., instead of a contact) followed by movement of the cursor along the path of the swipe (e.g., instead of movement of the contact), or by the user waving his hand in substantially one direction (e.g., left, right, up, or down). As another example, a tap gesture is, optionally, replaced with a mouse click while the cursor is located over the location of the tap gesture (e.g., instead of detection of the contact followed by ceasing to detect the contact), or by the user performing a gesture. Similarly, when multiple user inputs are simultaneously detected, it should be understood that multiple computer mice are, optionally, used simultaneously, or a mouse and finger contacts are, optionally, used simultaneously, or optionally, the user simultaneously moves multiple parts of his body such as his head and his hand, or two hands simultaneously. As used herein, the term “focus selector” (sometimes called a “focus indicator”) refers to an input element that indicates a current part of a user interface with which a user is interacting. In some implementations that include a cursor or other location marker, the cursor acts as a “focus selector,” so that when an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touchpad355inFIG.3Aor touch-sensitive surface451inFIG.4B) while the cursor is over a particular user interface element (e.g., a button, window, slider or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations that include a touch-screen display (e.g., touch-sensitive display system112inFIG.1Aor the touch screen inFIG.4A) that enables direct interaction with user interface elements on the touch-screen display, a detected contact on the touch-screen acts as a “focus selector,” so that when an input (e.g., a press input by the contact) is detected on the touch-screen display at a location of a particular user interface element (e.g., a button, window, slider or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations, focus is moved from one region of a user interface to another region of the user interface without corresponding movement of a cursor or movement of a contact on a touch-screen display (e.g., by using a tab key or arrow keys to move focus from one button to another button); in these implementations, the focus selector moves in accordance with movement of focus between different regions of the user interface. Without regard to the specific form taken by the focus selector, the focus selector is generally the user interface element (or contact on a touch-screen display) that is controlled by the user so as to communicate the user's intended interaction with the user interface (e.g., by indicating, to the device, the element of the user interface with which the user is intending to interact). 
For example, the location of a focus selector (e.g., a cursor, a contact, or a selection box) over a respective button while a press input is detected on the touch-sensitive surface (e.g., a touchpad or touch screen) will indicate that the user is intending to activate the respective button (as opposed to other user interface elements shown on a display of the device). In some embodiments, a focus indicator (e.g., a cursor or selection indicator) is displayed via the display device to indicate a current portion of the user interface that will be affected by inputs received from the one or more input devices. As used in the specification and claims, the term “intensity” of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of a contact (e.g., a finger contact or a stylus contact) on the touch-sensitive surface, or to a substitute (proxy) for the force or pressure of a contact on the touch-sensitive surface. The intensity of a contact has a range of values that includes at least four distinct values and more typically includes hundreds of distinct values (e.g., at least 256). Intensity of a contact is, optionally, determined (or measured) using various approaches and various sensors or combinations of sensors. For example, one or more force sensors underneath or adjacent to the touch-sensitive surface are, optionally, used to measure force at various points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., a weighted average or a sum) to determine an estimated force of a contact. Similarly, a pressure-sensitive tip of a stylus is, optionally, used to determine a pressure of the stylus on the touch-sensitive surface. Alternatively, the size of the contact area detected on the touch-sensitive surface and/or changes thereto, the capacitance of the touch-sensitive surface proximate to the contact and/or changes thereto, and/or the resistance of the touch-sensitive surface proximate to the contact and/or changes thereto are, optionally, used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, the substitute measurements for contact force or pressure are used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the substitute measurements). In some implementations, the substitute measurements for contact force or pressure are converted to an estimated force or pressure and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). Using the intensity of a contact as an attribute of a user input allows for user access to additional device functionality that may otherwise not be readily accessible by the user on a reduced-size device with limited real estate for displaying affordances (e.g., on a touch-sensitive display) and/or receiving user input (e.g., via a touch-sensitive display, a touch-sensitive surface, or a physical/mechanical control such as a knob or a button). In some embodiments, contact/motion module130uses a set of one or more intensity thresholds to determine whether an operation has been performed by a user (e.g., to determine whether a user has “clicked” on an icon). 
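As one concrete reading of the sensor-combination approach mentioned above, the following hypothetical sketch combines force readings from multiple sensors into a single estimated contact force using a weighted average. How the weights are chosen (for example, weighting sensors nearer the contact more heavily) is an assumption; the text says only that measurements are combined, e.g., as a weighted average or a sum.

// Combines per-sensor force readings into one estimated contact force.
func estimatedForce(readings: [Double], weights: [Double]) -> Double {
    precondition(!readings.isEmpty && readings.count == weights.count)
    let weightedSum = zip(readings, weights).reduce(0.0) { $0 + $1.0 * $1.1 }
    let totalWeight = weights.reduce(0.0, +)
    return totalWeight > 0 ? weightedSum / totalWeight : 0
}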
In some embodiments, at least a subset of the intensity thresholds are determined in accordance with software parameters (e.g., the intensity thresholds are not determined by the activation thresholds of particular physical actuators and can be adjusted without changing the physical hardware of device100). For example, a mouse “click” threshold of a trackpad or touch-screen display can be set to any of a large range of predefined threshold values without changing the trackpad or touch-screen display hardware. Additionally, in some implementations a user of the device is provided with software settings for adjusting one or more of the set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting a plurality of intensity thresholds at once with a system-level click “intensity” parameter). As used in the specification and claims, the term “characteristic intensity” of a contact refers to a characteristic of the contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on multiple intensity samples. The characteristic intensity is, optionally, based on a predefined number of intensity samples, or a set of intensity samples collected during a predetermined time period (e.g., 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10 seconds) relative to a predefined event (e.g., after detecting the contact, prior to detecting liftoff of the contact, before or after detecting a start of movement of the contact, prior to detecting an end of the contact, before or after detecting an increase in intensity of the contact, and/or before or after detecting a decrease in intensity of the contact). A characteristic intensity of a contact is, optionally, based on one or more of: a maximum value of the intensities of the contact, a mean value of the intensities of the contact, an average value of the intensities of the contact, a top 10 percentile value of the intensities of the contact, a value at the half maximum of the intensities of the contact, a value at the 90 percent maximum of the intensities of the contact, a value produced by low-pass filtering the intensity of the contact over a predefined period or starting at a predefined time, or the like. In some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether an operation has been performed by a user. For example, the set of one or more intensity thresholds may include a first intensity threshold and a second intensity threshold. In this example, a contact with a characteristic intensity that does not exceed the first intensity threshold results in a first operation, a contact with a characteristic intensity that exceeds the first intensity threshold and does not exceed the second intensity threshold results in a second operation, and a contact with a characteristic intensity that exceeds the second intensity threshold results in a third operation. In some embodiments, a comparison between the characteristic intensity and one or more intensity thresholds is used to determine whether or not to perform one or more operations (e.g., whether to perform a respective operation or forgo performing the respective operation) rather than being used to determine whether to perform a first operation or a second operation.
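The two-threshold example above maps directly to a small decision procedure. In this hypothetical sketch, the characteristic intensity is taken to be the mean of the sampled intensities (one of the several options named above), and the strict and non-strict comparisons follow the "does not exceed" and "exceeds" wording:

enum Operation { case first, second, third }

// The mean of the sampled intensities, one of the characteristic-intensity
// options listed above.
func characteristicIntensity(of samples: [Double]) -> Double {
    precondition(!samples.isEmpty)
    return samples.reduce(0, +) / Double(samples.count)
}

// "Does not exceed the first threshold" -> first operation; above the first
// but not above the second -> second operation; above the second -> third.
func operation(for intensity: Double,
               firstThreshold: Double,
               secondThreshold: Double) -> Operation {
    if intensity <= firstThreshold { return .first }
    if intensity <= secondThreshold { return .second }
    return .third
}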
In some embodiments, a portion of a gesture is identified for purposes of determining a characteristic intensity. For example, a touch-sensitive surface may receive a continuous swipe contact transitioning from a start location and reaching an end location (e.g., a drag gesture), at which point the intensity of the contact increases. In this example, the characteristic intensity of the contact at the end location may be based on only a portion of the continuous swipe contact, and not the entire swipe contact (e.g., only the portion of the swipe contact at the end location). In some embodiments, a smoothing algorithm may be applied to the intensities of the swipe contact prior to determining the characteristic intensity of the contact. For example, the smoothing algorithm optionally includes one or more of: an unweighted sliding-average smoothing algorithm, a triangular smoothing algorithm, a median filter smoothing algorithm, and/or an exponential smoothing algorithm. In some circumstances, these smoothing algorithms eliminate narrow spikes or dips in the intensities of the swipe contact for purposes of determining a characteristic intensity. The user interface figures described herein optionally include various intensity diagrams that show the current intensity of the contact on the touch-sensitive surface relative to one or more intensity thresholds (e.g., a contact detection intensity threshold IT0, a light press intensity threshold ITL, a deep press intensity threshold ITD(e.g., that is at least initially higher than ITL), and/or one or more other intensity thresholds (e.g., an intensity threshold ITHthat is lower than ITL)). This intensity diagram is typically not part of the displayed user interface, but is provided to aid in the interpretation of the figures. In some embodiments, the light press intensity threshold corresponds to an intensity at which the device will perform operations typically associated with clicking a button of a physical mouse or a trackpad. In some embodiments, the deep press intensity threshold corresponds to an intensity at which the device will perform operations that are different from operations typically associated with clicking a button of a physical mouse or a trackpad. In some embodiments, when a contact is detected with a characteristic intensity below the light press intensity threshold (e.g., and above a nominal contact-detection intensity threshold IT0below which the contact is no longer detected), the device will move a focus selector in accordance with movement of the contact on the touch-sensitive surface without performing an operation associated with the light press intensity threshold or the deep press intensity threshold. Generally, unless otherwise stated, these intensity thresholds are consistent between different sets of user interface figures. In some embodiments, the response of the device to inputs detected by the device depends on criteria based on the contact intensity during the input. For example, for some “light press” inputs, the intensity of a contact exceeding a first intensity threshold during the input triggers a first response. In some embodiments, the response of the device to inputs detected by the device depends on criteria that include both the contact intensity during the input and time-based criteria. 
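Of the smoothing options named above, the unweighted sliding-average is the simplest to show. A minimal sketch; the window size and the handling of sample runs shorter than the window are illustrative choices, not prescribed by the text:

// Unweighted sliding-average filter over intensity samples; each output
// value is the mean of `window` consecutive inputs, so narrow spikes or
// dips shorter than the window are attenuated.
func slidingAverage(of samples: [Double], window: Int) -> [Double] {
    guard window > 1, samples.count >= window else { return samples }
    return (0...(samples.count - window)).map { start in
        samples[start..<(start + window)].reduce(0, +) / Double(window)
    }
}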
For example, for some “deep press” inputs, the intensity of a contact exceeding a second intensity threshold during the input, greater than the first intensity threshold for a light press, triggers a second response only if a delay time has elapsed between meeting the first intensity threshold and meeting the second intensity threshold. This delay time is typically less than 200 ms (milliseconds) in duration (e.g., 40, 100, or 120 ms, depending on the magnitude of the second intensity threshold, with the delay time increasing as the second intensity threshold increases). This delay time helps to avoid accidental recognition of deep press inputs. As another example, for some “deep press” inputs, there is a reduced-sensitivity time period that occurs after the time at which the first intensity threshold is met. During the reduced-sensitivity time period, the second intensity threshold is increased. This temporary increase in the second intensity threshold also helps to avoid accidental deep press inputs. For other deep press inputs, the response to detection of a deep press input does not depend on time-based criteria. In some embodiments, one or more of the input intensity thresholds and/or the corresponding outputs vary based on one or more factors, such as user settings, contact motion, input timing, application running, rate at which the intensity is applied, number of concurrent inputs, user history, environmental factors (e.g., ambient noise), focus selector position, and the like. Example factors are described in U.S. patent application Ser. Nos. 14/399,606 and 14/624,296, which are incorporated by reference herein in their entireties. For example,FIG.4Cillustrates a dynamic intensity threshold480that changes over time based in part on the intensity of touch input476over time. Dynamic intensity threshold480is a sum of two components, first component474that decays over time after a predefined delay time p1from when touch input476is initially detected, and second component478that trails the intensity of touch input476over time. The initial high intensity threshold of first component474reduces accidental triggering of a “deep press” response, while still allowing an immediate “deep press” response if touch input476provides sufficient intensity. Second component478reduces unintentional triggering of a “deep press” response by gradual intensity fluctuations in a touch input. In some embodiments, when touch input476satisfies dynamic intensity threshold480(e.g., at point481inFIG.4C), the “deep press” response is triggered. FIG.4Dillustrates another dynamic intensity threshold486(e.g., intensity threshold ITD).FIG.4Dalso illustrates two other intensity thresholds: a first intensity threshold ITHand a second intensity threshold ITL. InFIG.4D, although touch input484satisfies the first intensity threshold ITHand the second intensity threshold ITLprior to time p2, no response is provided until delay time p2has elapsed at time482. Also inFIG.4D, dynamic intensity threshold486decays over time, with the decay starting at time488after a predefined delay time p1has elapsed from time482(when the response associated with the second intensity threshold ITLwas triggered). This type of dynamic intensity threshold reduces accidental triggering of a response associated with the dynamic intensity threshold ITD immediately after, or concurrently with, triggering a response associated with a lower intensity threshold, such as the first intensity threshold ITHor the second intensity threshold ITL.
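The two-component construction of dynamic intensity threshold480can be written down directly: a first component that holds an initial value and then decays after delay p1, plus a second component that trails the touch intensity. In the following sketch the exponential decay curve, the trailing lag, and all constants are illustrative assumptions; the text specifies only the qualitative behavior.

import Foundation

// Evaluates a dynamic threshold at time t, given the touch intensity as a
// function of time. All parameter values are illustrative defaults.
func dynamicThreshold(time t: Double,
                      intensity: (Double) -> Double,   // touch intensity over time
                      initialThreshold: Double = 1.0,  // illustrative
                      delay p1: Double = 0.1,          // predefined delay time p1
                      decayRate: Double = 4.0,
                      trailFactor: Double = 0.5,
                      trailLag: Double = 0.05) -> Double {
    // First component: holds its initial value until p1, then decays.
    let first = t <= p1 ? initialThreshold
                        : initialThreshold * exp(-decayRate * (t - p1))
    // Second component: trails the touch intensity with a short lag, so
    // gradual fluctuations do not accidentally trigger a "deep press".
    let second = trailFactor * intensity(max(0, t - trailLag))
    return first + second
}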
FIG.4Eillustrates yet another dynamic intensity threshold492(e.g., intensity threshold ITD). InFIG.4E, a response associated with the intensity threshold ITLis triggered after the delay time p2has elapsed from when touch input490is initially detected. Concurrently, dynamic intensity threshold492decays after the predefined delay time p1has elapsed from when touch input490is initially detected. So a decrease in intensity of touch input490after triggering the response associated with the intensity threshold ITL, followed by an increase in the intensity of touch input490, without releasing touch input490, can trigger a response associated with the intensity threshold ITD (e.g., at time494) even when the intensity of touch input490is below another intensity threshold, for example, the intensity threshold ITL. An increase of characteristic intensity of the contact from an intensity below the light press intensity threshold ITLto an intensity between the light press intensity threshold ITLand the deep press intensity threshold ITDis sometimes referred to as a “light press” input. An increase of characteristic intensity of the contact from an intensity below the deep press intensity threshold ITDto an intensity above the deep press intensity threshold ITDis sometimes referred to as a “deep press” input. An increase of characteristic intensity of the contact from an intensity below the contact-detection intensity threshold IT0to an intensity between the contact-detection intensity threshold IT0and the light press intensity threshold ITLis sometimes referred to as detecting the contact on the touch-surface. A decrease of characteristic intensity of the contact from an intensity above the contact-detection intensity threshold IT0to an intensity below the contact-detection intensity threshold IT0is sometimes referred to as detecting liftoff of the contact from the touch-surface. In some embodiments, IT0is zero. In some embodiments, IT0is greater than zero. In some illustrations a shaded circle or oval is used to represent intensity of a contact on the touch-sensitive surface. In some illustrations, a circle or oval without shading is used to represent a respective contact on the touch-sensitive surface without specifying the intensity of the respective contact. In some embodiments described herein, one or more operations are performed in response to detecting a gesture that includes a respective press input or in response to detecting the respective press input performed with a respective contact (or a plurality of contacts), where the respective press input is detected based at least in part on detecting an increase in intensity of the contact (or plurality of contacts) above a press-input intensity threshold. In some embodiments, the respective operation is performed in response to detecting the increase in intensity of the respective contact above the press-input intensity threshold (e.g., the respective operation is performed on a “down stroke” of the respective press input). In some embodiments, the press input includes an increase in intensity of the respective contact above the press-input intensity threshold and a subsequent decrease in intensity of the contact below the press-input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the press-input threshold (e.g., the respective operation is performed on an “up stroke” of the respective press input).
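The four threshold crossings defined above (contact detection, light press, deep press, and liftoff) can be restated as comparisons between consecutive intensity samples. A hypothetical sketch, which reports only the most significant crossing when a single step crosses several thresholds at once:

enum IntensityEvent { case contactDetected, lightPress, deepPress, liftoff }

// Classifies the transition between two consecutive intensity samples
// against ordered thresholds it0 < itl < itd (contact-detection, light
// press, and deep press thresholds, respectively).
func classify(previous: Double, current: Double,
              it0: Double, itl: Double, itd: Double) -> IntensityEvent? {
    if previous < itd && current >= itd { return .deepPress }
    if previous < itl && current >= itl { return .lightPress }
    if previous < it0 && current >= it0 { return .contactDetected }
    if previous >= it0 && current < it0 { return .liftoff }
    return nil
}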
In some embodiments, the device employs intensity hysteresis to avoid accidental inputs sometimes termed “jitter,” where the device defines or selects a hysteresis intensity threshold with a predefined relationship to the press-input intensity threshold (e.g., the hysteresis intensity threshold is X intensity units lower than the press-input intensity threshold or the hysteresis intensity threshold is 75%, 90%, or some reasonable proportion of the press-input intensity threshold). Thus, in some embodiments, the press input includes an increase in intensity of the respective contact above the press-input intensity threshold and a subsequent decrease in intensity of the contact below the hysteresis intensity threshold that corresponds to the press-input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the hysteresis intensity threshold (e.g., the respective operation is performed on an “up stroke” of the respective press input). Similarly, in some embodiments, the press input is detected only when the device detects an increase in intensity of the contact from an intensity at or below the hysteresis intensity threshold to an intensity at or above the press-input intensity threshold and, optionally, a subsequent decrease in intensity of the contact to an intensity at or below the hysteresis intensity, and the respective operation is performed in response to detecting the press input (e.g., the increase in intensity of the contact or the decrease in intensity of the contact, depending on the circumstances). For ease of explanation, the description of operations performed in response to a press input associated with a press-input intensity threshold or in response to a gesture including the press input are, optionally, triggered in response to detecting: an increase in intensity of a contact above the press-input intensity threshold, an increase in intensity of a contact from an intensity below the hysteresis intensity threshold to an intensity above the press-input intensity threshold, a decrease in intensity of the contact below the press-input intensity threshold, or a decrease in intensity of the contact below the hysteresis intensity threshold corresponding to the press-input intensity threshold. Additionally, in examples where an operation is described as being performed in response to detecting a decrease in intensity of a contact below the press-input intensity threshold, the operation is, optionally, performed in response to detecting a decrease in intensity of the contact below a hysteresis intensity threshold corresponding to, and lower than, the press-input intensity threshold. As described above, in some embodiments, the triggering of these responses also depends on time-based criteria being met (e.g., a delay time has elapsed between a first intensity threshold being met and a second intensity threshold being met). 
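The hysteresis behavior described above is, in effect, a two-state machine: arm on the rise above the press-input threshold, fire on the fall below the lower hysteresis threshold. A minimal sketch, using the 90% proportion mentioned above as an illustrative default; the type and member names are hypothetical:

// Recognizes a press with hysteresis: a press is armed when intensity rises
// above the press-input threshold, and the operation fires exactly once per
// press, on the decrease below the hysteresis threshold (the "up stroke").
final class PressDetector {
    let pressThreshold: Double
    let hysteresisThreshold: Double
    private var pressed = false

    init(pressThreshold: Double, hysteresisRatio: Double = 0.9) {
        self.pressThreshold = pressThreshold
        self.hysteresisThreshold = pressThreshold * hysteresisRatio
    }

    // Feed successive intensity samples; returns true when the press input
    // completes. Brief dips between the two thresholds (jitter) are ignored.
    func update(intensity: Double) -> Bool {
        if !pressed && intensity >= pressThreshold {
            pressed = true
        } else if pressed && intensity < hysteresisThreshold {
            pressed = false
            return true
        }
        return false
    }
}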
User Interfaces and Associated Processes

Attention is now directed towards embodiments of user interfaces ("UI") and associated processes that may be implemented on a computer system (e.g., portable multifunction device 100 or device 300) that includes (and/or is in communication with) one or more display generation components (e.g., a display, a projector, a heads-up display, or the like), one or more input devices (e.g., a touch-sensitive surface, such as a touch-sensitive remote control, or a touch-screen display that also serves as the display generation component, a mouse, a joystick, a wand controller, and/or cameras tracking the position of one or more features of the user such as the user's hands), optionally one or more pose sensors (e.g., one or more pose sensors for detecting respective poses of the one or more input devices and/or the one or more display generation components, including one or more cameras, gyroscopes, inertial measurement units, or other sensors that enable the computer system to detect changes in an orientation and/or position of the computer system or parts thereof relative to a physical environment of the computer system), optionally one or more sensors to detect intensities of contacts with an input device, optionally one or more sensors to detect proximity of an input object (e.g., a user's fingertip) above an input element (e.g., the touch-sensitive surface) of the input device, optionally one or more cameras (e.g., video cameras that continuously provide a live view of at least a portion of the contents that are within the field of view of the cameras and optionally generate video outputs including one or more streams of image frames capturing the contents within the field of view of the cameras), and optionally one or more tactile output generators.

FIGS. 5A1-5A48 illustrate example user interfaces for interacting with virtual reality environments, in accordance with some embodiments. In particular, FIGS. 5A2-5A48 illustrate example user interfaces for displaying a simulated three-dimensional space (e.g., a virtual reality environment) and, in response to different inputs (e.g., on device 100), adjusting the appearance of the simulated three-dimensional space and/or the appearance of objects in the simulated three-dimensional space, in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes described herein with reference to FIGS. 6A-6E, 7A-7C, 8A-8C, 9A-9B, and 10A-10C.

For convenience of explanation, some of the embodiments will be discussed with reference to operations performed on a device (e.g., device 100) with a touch-sensitive display system 112. In such embodiments, the focus selector is, optionally: a respective finger or stylus contact, a representative point corresponding to a finger or stylus contact (e.g., a centroid of a respective contact or a point associated with a respective contact), or a centroid of two or more contacts detected on the touch-sensitive display system 112. In some embodiments, a location of the focus selector in the user interface is visually indicated by a displayed focus indicator. However, analogous operations are, optionally, performed on a device with a display 450 and a separate touch-sensitive surface 451 in response to detecting the contacts on the touch-sensitive surface 451 while displaying the user interfaces shown in the figures on the display 450, along with a focus indicator.
Similarly, analogous operations are, optionally, performed on a computer system (e.g., as shown in FIG. 5A1) with a headset 5008 and a separate input device (e.g., device 100 or device 5010) with a touch-sensitive surface in response to detecting the contacts on the touch-sensitive surface of the input device while displaying the user interfaces shown in the figures on the display of headset 5008, along with a focus indicator.

FIG. 5A1 illustrates a context in which the user interfaces described with regard to FIGS. 5A2-5A48 are used. FIG. 5A1 illustrates a physical space 5004 in which user 5002 is located. User 5002 views a virtual reality environment using a computer system that includes a headset 5008, a separate input device with a touch-sensitive surface (e.g., device 100 with a touch-sensitive display system or device 5010 with a touch-sensitive surface that does not include a display), and optionally an additional input device (e.g., watch 5012 with a touch-sensitive display). In this example, headset 5008 displays the virtual reality environment and user 5002 uses the separate input device (e.g., device 100 or device 5010) to interact with the virtual reality environment. In some embodiments, device 100 is used as the separate input device. In some embodiments, device 100 is inserted into a headset (e.g., headset 5008-b) and the separate input device is a touch-sensitive remote control that does not have a display (e.g., device 5010). In some embodiments, the separate input device 5010 is a touch-sensitive remote control, a mouse, a joystick, a wand controller, or the like. In some embodiments, the separate input device (e.g., device 100 or device 5010) includes one or more cameras that track the position and movement of one or more features of user 5002, such as the user's hands.

In some embodiments, headset 5008 displays a virtual reality environment that includes at least a portion of a simulated three-dimensional space (e.g., simulated 3D space 5006, FIGS. 5A2-5A48) and one or more user interface objects that are located within the simulated three-dimensional space (e.g., virtual device 5016, FIG. 5A2, that corresponds to the input device held by user 5002, and/or virtual user interface 5126, FIG. 5A33, that corresponds to the user interface of watch 5012 worn by user 5002, etc.). In some embodiments, the display of device 100 (e.g., on touch-sensitive display system 112, sometimes referred to as "touch-screen display 112," "touch screen 112," "display 112," or "touch-sensitive display 112," of device 100, as shown in FIGS. 1A, 4A, and 5A2) is synchronized with the display of a virtual device (e.g., virtual device 5016, FIG. 5A2). In some embodiments, one or more cameras of device 100 (sometimes referred to as "a camera" of device 100) continuously provide a live view of the contents that are within the field of view of the cameras (e.g., when a camera application is launched).

FIGS. 5A2-5A9 illustrate example user interfaces for displaying a simulated three-dimensional space 5006 (sometimes referred to as "simulated 3D space 5006") (e.g., a virtual reality environment) and a user interface object that is located within the simulated 3D space 5006 (e.g., virtual device 5016 that corresponds to an input device, such as device 100, held by user 5002) and, in response to different inputs (e.g., on device 100), adjusting the appearance of the simulated 3D space 5006 and/or the appearance of user interface objects in the simulated 3D space, in accordance with some embodiments.
In some embodiments, device 100 displays (or causes headset 5008 to display) the simulated 3D space 5006 and user interface objects in the simulated 3D space 5006. In some embodiments, device 100 updates/adjusts (or causes headset 5008 to update/adjust) the simulated 3D space 5006 and user interface objects in the simulated 3D space 5006. In some embodiments, virtual device 5016 is displayed at a location within the simulated 3D space 5006 that corresponds to a location of the input device (e.g., device 100) in the user's hand in the physical space 5004.

FIGS. 5A2-5A5 illustrate example user interfaces for displaying the simulated 3D space 5006 and virtual device 5016 for a maps application. In FIG. 5A2, device 100 detects an input on an icon for the maps application, such as a tap gesture by contact 5022 (on home screen user interface 5020) in the physical space 5004, and displays (or causes headset 5008 to display) a focus indicator on virtual device 5016 (e.g., focus indicator 5024 on virtual user interface 5018 that corresponds to contact 5022 on device 100) in the simulated 3D space 5006. In response to detecting the input on the icon for the maps application, device 100 launches the maps application (e.g., displaying user interface 5032 for the maps application on device 100), and displays (or causes headset 5008 to display) a virtual user interface 5030 for the maps application on virtual device 5016 and updates the appearance of the simulated 3D space 5006 accordingly (e.g., to display a 3D map corresponding to the launched maps application), as shown in FIG. 5A3.

In FIGS. 5A4-5A5, device 100 detects an input to return to the home screen, such as a swipe up gesture by contact 5034 on device 100 (which is displayed as a swipe up gesture by focus indicator 5036 on virtual device 5016, FIG. 5A4) (or alternatively, a press input on the home button of device 100), and in response, displays the home screen (e.g., home screen user interface 5020, FIG. 5A5), which is displayed as virtual user interface 5018 on virtual device 5016 in the simulated 3D space 5006. In some embodiments, when the home screen is displayed on the device, the simulated 3D space 5006 is empty, as shown in FIG. 5A5. Although not shown here, in some embodiments, when the home screen is displayed on the device, the simulated 3D space 5006 includes a stationary background object (e.g., an apple icon).

FIGS. 5A6-5A9 illustrate example user interfaces for displaying the simulated 3D space 5006 and virtual device 5016 for a video player application. In FIG. 5A6, device 100 detects an input on an icon for the video player application, such as a tap gesture by contact 5038 (on home screen user interface 5020) in the physical space 5004, and displays (or causes headset 5008 to display) a focus indicator on virtual device 5016 (e.g., focus indicator 5039 on virtual user interface 5018 that corresponds to contact 5038 on device 100) in the simulated 3D space 5006. In response to detecting the input on the icon for the video player application, device 100 launches the video player application (e.g., displaying user interface 5042 for the video player application on device 100), and displays (or causes headset 5008 to display) a virtual user interface 5040 for the video player application on virtual device 5016 and updates the appearance of the simulated 3D space 5006 accordingly (e.g., to display a plurality of selectable representations of videos available to be launched via the video player application), as shown in FIG. 5A7.
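The synchronization pattern in these figures — a tap launches an application, the virtual device mirrors the new user interface, and the simulated 3D space is updated to match the launched application — can be summarized as follows. This is a minimal Swift sketch with hypothetical names; it stands in for whatever rendering and communication machinery an actual embodiment would use:

```swift
import Foundation

// Each application supplies its own update to the simulated 3D space
// (e.g., a 3D map for maps, video posters for the video player).
protocol SceneUpdate { func apply() }

struct MapScene: SceneUpdate {
    func apply() { print("3D space: displaying 3D map for the maps application") }
}
struct VideoScene: SceneUpdate {
    func apply() { print("3D space: displaying selectable video representations") }
}

struct MirroredSystem {
    private(set) var deviceUI = "home screen user interface"
    private(set) var virtualUI = "home screen user interface"  // synchronized copy

    mutating func launch(app: String, scene: any SceneUpdate) {
        deviceUI = "\(app) user interface"  // shown on the physical device
        virtualUI = deviceUI                // mirrored on the virtual device
        scene.apply()                       // 3D space updated accordingly
        print("device and virtual device both show: \(virtualUI)")
    }
}

var system = MirroredSystem()
system.launch(app: "maps", scene: MapScene())
system.launch(app: "video player", scene: VideoScene())
```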
In FIGS. 5A8-5A9, device 100 detects an input to return to the home screen, such as a swipe up gesture by contact 5044 on device 100 (which is displayed as a swipe up gesture by focus indicator 5046 on virtual device 5016, FIG. 5A8) (or alternatively, a press input on the home button of device 100), and in response, displays the home screen (e.g., home screen user interface 5020, FIG. 5A9), which is displayed as virtual user interface 5018 on virtual device 5016 in the simulated 3D space 5006. In some embodiments, when the home screen is displayed on the device, the simulated 3D space 5006 is empty, as shown in FIG. 5A9. Although not shown here, in some embodiments, when the home screen is displayed on the device, the simulated 3D space 5006 includes a stationary background object (e.g., an apple icon).

In FIGS. 5A10-5A11, device 100 detects a swipe left gesture by contact 5048 on device 100 in the physical space 5004 (which is displayed as a swipe left gesture by focus indicator 5049 on virtual device 5016 in the simulated 3D space 5006, FIG. 5A10), and in response, displays a second page of the home screen (e.g., home screen user interface 5021 on device 100, FIG. 5A11), which is displayed as virtual user interface 5019 on virtual device 5016 in the simulated 3D space 5006.

FIGS. 5A12-5A15 illustrate displaying and adjusting an appearance of a focus indicator on virtual device 5016 in the simulated 3D space 5006 as the user's finger moves closer to (and eventually touches) device 100 in the physical space 5004. In FIGS. 5A12-5A14, when the user's finger is within a threshold distance (e.g., 4 millimeters) from touch screen 112, but not touching touch screen 112, focus indicator 5050 is displayed with a first appearance (e.g., each of focus indicators 5050-a, 5050-b, and 5050-c is displayed as an outline of a circular indicator with shading). In FIG. 5A15, when the user's finger is touching touch screen 112, focus indicator 5050 is displayed with a second appearance (e.g., focus indicator 5050-d is displayed as a solid circular indicator with no shading). In some embodiments, focus indicator 5050 changes in appearance as the user's finger moves closer to touch screen 112. For example, as shown in FIGS. 5A12-5A14, the size of focus indicator 5050 grows larger as the user's finger moves closer to touch screen 112. As shown in FIG. 5A12, when the user's finger is relatively far away from touch screen 112, focus indicator 5050-a is relatively small. As the user's finger moves closer to touch screen 112, as shown in FIGS. 5A13-5A14, focus indicator 5050 grows larger (e.g., increasing in diameter from focus indicator 5050-a to focus indicator 5050-b and increasing further in diameter from focus indicator 5050-b to focus indicator 5050-c).

Although not shown in FIGS. 5A12-5A15, in some embodiments, representations of one or more of the user's fingers are displayed in the simulated 3D space 5006 (e.g., as "virtual fingers"). In some embodiments, a representation of the user's finger is displayed in addition to displaying the focus indicator (e.g., focus indicator 5050). In some embodiments, contact touches (e.g., as shown in FIG. 5A15) are displayed as translucent contact points (and not virtual fingers) so that the user interface of virtual device 5016 is not obscured.
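A minimal sketch of the proximity-dependent focus indicator follows. The 4-millimeter hover threshold comes from the example above; the diameters, the linear interpolation, and all names are assumptions for illustration:

```swift
import Foundation

// Focus indicator that grows as the finger approaches the touch screen within
// a hover threshold and switches from a shaded outline to a solid style once
// the finger touches, loosely following FIGS. 5A12-5A15.
struct FocusIndicatorModel {
    let hoverThreshold: Double = 4.0  // millimeters, from the example above
    let minDiameter: Double = 4.0     // points, illustrative
    let maxDiameter: Double = 12.0    // points, illustrative

    enum Style { case hidden, shadedOutline, solid }

    // `distance` is the finger's height above the touch-sensitive surface in
    // millimeters; 0 means the finger is touching.
    func appearance(atDistance distance: Double) -> (style: Style, diameter: Double) {
        if distance <= 0 { return (.solid, maxDiameter) }             // touching: second appearance
        guard distance <= hoverThreshold else { return (.hidden, 0) } // too far: no indicator
        // Linear interpolation (an assumption): closer finger -> larger indicator.
        let closeness = 1 - distance / hoverThreshold
        return (.shadedOutline, minDiameter + (maxDiameter - minDiameter) * closeness)
    }
}

let model = FocusIndicatorModel()
print(model.appearance(atDistance: 3.5))  // small shaded outline (cf. 5050-a)
print(model.appearance(atDistance: 1.0))  // larger shaded outline (cf. 5050-c)
print(model.appearance(atDistance: 0.0))  // solid indicator (cf. 5050-d)
```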
FIGS. 5A15-5A19 illustrate example user interfaces for displaying the simulated 3D space 5006 and virtual device 5016 for a weather application. In FIG. 5A15, device 100 detects an input on an icon for the weather application, such as a tap input by the user's finger (on home screen user interface 5021) in the physical space 5004, and displays (or causes headset 5008 to display) a focus indicator on virtual device 5016 (e.g., as focus indicator 5050-d on virtual user interface 5019 that corresponds to a contact by the user's finger on device 100) in the simulated 3D space 5006. In response to detecting the input on the icon for the weather application, device 100 launches the weather application (e.g., displaying user interface 5062 for the weather application on device 100), and displays (or causes headset 5008 to display) a virtual user interface 5060 for the weather application on virtual device 5016 and updates the appearance of the simulated 3D space 5006 in accordance with the current weather for the selected city (e.g., to display clouds based on the current "mostly cloudy" weather in Cupertino), as shown in FIG. 5A16.

In some embodiments, virtual user interface 5060 (of virtual device 5016 in the simulated 3D space 5006) includes additional depth compared to user interface 5062 (of device 100 in the physical space 5004). For example, as seen more clearly in FIG. 5A17 (e.g., when device 100 is rotated to the side), the text displayed in virtual user interface 5060 of the weather application is displayed so as to appear to float off virtual device 5016 in the simulated 3D space 5006.

In FIGS. 5A18-5A19, device 100 detects a swipe left gesture by contact 5064 on device 100 in the physical space 5004 (which is displayed as a swipe left gesture by focus indicator 5066 on virtual device 5016 in the simulated 3D space 5006, FIG. 5A18), and in response, displays a second page of the weather application (e.g., user interface 5063 on device 100, FIG. 5A19, showing the weather for San Jose), which is displayed as virtual user interface 5061 on virtual device 5016 in the simulated 3D space 5006. As shown in FIG. 5A19, the simulated 3D space 5006 is updated to show a representation of the current weather for the selected city (e.g., ceasing to display the representation of cloudy weather shown for Cupertino in FIG. 5A18 and displaying a representation of sunny weather for San Jose in FIG. 5A19).

In FIGS. 5A20-5A21, device 100 detects an activation of a button of device 100 (e.g., a long press of the home button of device 100, as shown in FIG. 5A20), and in response, displays an indication of a virtual assistant (e.g., user interface 5072 on device 100, FIG. 5A21), which is displayed as virtual user interface 5070 on virtual device 5016 in the simulated 3D space 5006. Although not shown here, in some embodiments, activation of one or more other buttons of device 100 (e.g., a button on the side or top of device 100) launches display of the virtual assistant and/or causes an update to the appearance of the user interface displayed on virtual device 5016 and/or causes an update to the appearance of the simulated 3D space 5006.
In some embodiments, different devices have different buttons or gestures for performing different functions (e.g., a home button on the face of one device is mapped to starting an interaction with a virtual assistant in response to detecting a long press on the home button, displaying a home user interface in response to detecting a short press on the home button, and displaying a multitasking user interface in response to detecting a double press on the home button, whereas a different device starts an interaction with a virtual assistant in response to detecting a long press on a side button, and displays a home user interface or a multitasking user interface in response to detecting a swipe up from a bottom edge of the device, depending on parameters of the movement during the gesture). In these embodiments, the interactions with the virtual device are selected to match interactions with a device that the user is holding to use as an input device, a device that is associated with a user profile of the user, or a device that is being used as a display for displaying the simulated 3D space. For example, if the user is using (or is otherwise associated with) a device that has a home button on a same side of the device as a display of the device, the virtual device in the simulated 3D space has a virtual home button that is used to go home, go to multitasking, and/or invoke a virtual assistant, whereas if the user is using (or is otherwise associated with) a device without a home button on the same side of the device as the display of the device, the virtual device in the simulated 3D space does not have a virtual button and instead uses gestures and/or a side button to go home, go to multitasking, and/or invoke a virtual assistant. More generally, the buttons and/or functions of the device that is associated with the user (e.g., being used as an input device, being used as an output device, or associated with a user account of the user) are adopted by the virtual device to help provide the user with a familiar set of buttons and/or functions for interacting with the virtual device in the simulated 3D space.
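One way to express this adoption of a familiar control scheme is a mapping from physical inputs to system actions that depends on the capabilities of the associated device. A minimal Swift sketch with hypothetical names (which swipe parameter maps to home versus multitasking is an assumption, since the description leaves it to "parameters of the movement"):

```swift
import Foundation

enum SystemAction { case home, multitasking, virtualAssistant }

// The virtual device adopts the control scheme of the physical device the
// user is holding or is otherwise associated with.
struct ControlScheme {
    let hasFrontHomeButton: Bool

    func action(forInput input: String) -> SystemAction? {
        if hasFrontHomeButton {
            switch input {
            case "homeButton.shortPress":  return .home
            case "homeButton.doublePress": return .multitasking
            case "homeButton.longPress":   return .virtualAssistant
            default:                       return nil
            }
        } else {
            switch input {
            // Which swipe variant goes home vs. to multitasking is an
            // assumption; the text says it depends on movement parameters.
            case "bottomEdgeSwipe.full":  return .home
            case "bottomEdgeSwipe.short": return .multitasking
            case "sideButton.longPress":  return .virtualAssistant
            default:                      return nil
            }
        }
    }
}

let withButton = ControlScheme(hasFrontHomeButton: true)
let withoutButton = ControlScheme(hasFrontHomeButton: false)
print(withButton.action(forInput: "homeButton.longPress")!)    // virtualAssistant
print(withoutButton.action(forInput: "bottomEdgeSwipe.full")!) // home
```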
FIG. 5A22 is similar to FIG. 5A3. In particular, FIG. 5A22 illustrates user interface 5032 for the maps application displayed on device 100 in physical space 5004, and, accordingly, virtual user interface 5030 for the maps application displayed on virtual device 5016 in simulated 3D space 5006, as well as virtual 3D model 5104 displayed in simulated 3D space 5006. Virtual 3D model 5104 is a three-dimensional representation of content displayed in virtual user interface 5030 and includes a plurality of virtual buildings, including virtual building 5104a. In addition, FIG. 5A22 illustrates input 5102 detected on touch screen 112 of device 100, at a location corresponding to user interface 5032 on device 100. In the example shown in FIG. 5A22, input 5102 includes a contact and movement of the contact (e.g., a drag gesture or a swipe gesture) in a downward direction. In response to detecting input 5102, focus indicator 5103 is displayed at a corresponding location in virtual user interface 5030 displayed on virtual device 5016.

FIGS. 5A23-5A24 illustrate a transition from FIG. 5A22 in response to input 5102. In particular, FIG. 5A23 illustrates that, in response to input 5102, user interface 5032 displayed on device 100 is shifted downward. In accordance with user interface 5032 being shifted downward, virtual user interface 5030 displayed on virtual device 5016 in simulated 3D space 5006 is shifted downward (e.g., by an amount corresponding to the amount of shift in user interface 5032, so that content displayed in user interface 5032 and content displayed in virtual user interface 5030 remain synchronized). In addition, virtual 3D model 5104 is shifted so as to display a three-dimensional representation of a different portion of virtual user interface 5030 (and of user interface 5032), in accordance with virtual user interface 5030 (and user interface 5032) being shifted downward. That is, virtual 3D model 5104 is shifted so as to appear as if the viewer (e.g., user 5002) moved forward along an aerial flyover view of the three-dimensional representation of virtual user interface 5030. For example, virtual building 5104a appears to move closer and closer to the viewer through the sequence of FIGS. 5A22-5A24.

In some embodiments, in accordance with a determination that input 5102 includes a drag gesture (e.g., a contact and movement of the contact, and optionally liftoff of the contact with a velocity below a predefined threshold), user interface 5032, virtual user interface 5030, and virtual 3D model 5104 are shifted by an amount corresponding to the amount of movement of input 5102 (e.g., the displacement of the contact) without being continuously shifted after liftoff of input 5102. In some embodiments, in accordance with a determination that input 5102 includes a swipe gesture (e.g., a contact, movement of the contact, and liftoff of the contact with a velocity above a predefined threshold), user interface 5032, virtual user interface 5030, and virtual 3D model 5104 are continuously shifted in response to input 5102. Accordingly, FIG. 5A24 illustrates further shifting of user interface 5032, virtual user interface 5030, and virtual 3D model 5104 in response to input 5102 (e.g., without receiving an additional, intervening input). In some embodiments, the shifting of user interface 5032, virtual user interface 5030, and virtual 3D model 5104 occurs with gradual deceleration (e.g., from an initial velocity to a lower or zero velocity).
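The drag/swipe distinction and the post-liftoff deceleration can be sketched as follows. The velocity threshold and deceleration constant are illustrative assumptions; the closed-form momentum distance v^2/(2a) is simply the distance traveled under constant deceleration:

```swift
import Foundation

// Distinguish a drag from a swipe by liftoff velocity. A drag shifts content
// by the contact's displacement and stops; a swipe keeps shifting after
// liftoff, decelerating gradually to zero velocity.
struct ScrollGesture {
    static let swipeVelocityThreshold = 300.0  // points/second, illustrative
    static let deceleration = 900.0            // points/second^2, illustrative

    let displacement: Double     // total movement of the contact before liftoff
    let liftoffVelocity: Double  // contact velocity at liftoff

    // Total amount the content is shifted, including post-liftoff momentum.
    var totalShift: Double {
        guard liftoffVelocity > Self.swipeVelocityThreshold else {
            return displacement  // drag: shift tracks the contact, no momentum
        }
        // Swipe: distance traveled while velocity decays to zero at constant
        // deceleration is v^2 / (2 * a).
        let momentum = (liftoffVelocity * liftoffVelocity) / (2 * Self.deceleration)
        return displacement + momentum
    }
}

print(ScrollGesture(displacement: 120, liftoffVelocity: 100).totalShift) // 120.0 (drag)
print(ScrollGesture(displacement: 120, liftoffVelocity: 600).totalShift) // 320.0 (swipe)
```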
FIG. 5A25 illustrates a transition from FIG. 5A24. In particular, FIG. 5A25 illustrates input 5106 detected on touch screen 112 of device 100, at a location corresponding to user interface 5032. In the example shown in FIG. 5A25, input 5106 includes a plurality of contacts and movement of the contacts away from each other (e.g., a depinch gesture). In response to detecting input 5106, focus indicator 5107, in this example represented by a pair of indicators, is displayed at a corresponding location in virtual user interface 5030 displayed on virtual device 5016.

FIG. 5A26 illustrates a transition from FIG. 5A25 in response to input 5106. In particular, FIG. 5A26 illustrates user interface 5032 after zooming in to user interface 5032 as shown in FIG. 5A25, in response to input 5106. In accordance with zooming in to user interface 5032, virtual user interface 5030 is also zoomed in (e.g., by an amount corresponding to the amount of zooming in user interface 5032, so that content displayed in user interface 5032 and content displayed in virtual user interface 5030 remain synchronized). In addition, virtual 3D model 5104 is updated so as to display a three-dimensional representation of the zoomed-in portion of virtual user interface 5030 (and of user interface 5032), in accordance with zooming in to virtual user interface 5030 (and user interface 5032). That is, virtual 3D model 5104 is enlarged and shifted so as to appear as if the viewer moved closer (e.g., forward along an aerial flyover view and/or lower in altitude) to the three-dimensional representation of virtual user interface 5030. For example, virtual building 5104b, as displayed in the center region of virtual 3D model 5104 in FIG. 5A25, is displayed larger and closer to the viewer in FIG. 5A26. In addition, details of virtual user interface 5030 and virtual 3D model 5104 that were not visible at the zoom scale shown in FIG. 5A25, such as landmarks 5030a and 5030b of virtual user interface 5030 and virtual buildings 5104c and 5104d of virtual 3D model 5104, are visible at the zoom scale shown in FIG. 5A26.

FIG. 5A27, like FIG. 5A7, illustrates user interface 5042 for the video player application displayed on device 100 in physical space 5004, and, accordingly, virtual user interface 5040 for the video player application displayed on virtual device 5016 in simulated 3D space 5006, as well as a plurality of representations of videos, including videos 5108 and 5116, displayed in simulated 3D space 5006. FIG. 5A28 is similar to FIG. 5A27 in that FIG. 5A28 shows the same display environment shown in simulated 3D space 5006 of FIG. 5A27 but shows, in physical space 5004, a view of user 5002 holding device 100 at a pose that is within a particular range of poses (e.g., greater than 30 degrees from the horizontal direction, but less than, for example, 60 degrees from the horizontal direction).

FIG. 5A29 illustrates a transition from FIG. 5A28. In particular, FIG. 5A29 illustrates a change in simulated 3D space 5006 in response to user 5002 moving device 100 such that the pose of device 100 is within a different particular range of poses (e.g., less than 30 degrees from the horizontal direction and within a predefined region in front of the user). In the example shown in FIG. 5A29, user 5002 has lowered device 100 so that device 100 is substantially horizontal. In response to the change in the pose of device 100, the pose of virtual device 5016 is changed so as to correspond to the pose of device 100 (e.g., virtual device 5016 is displayed so as to appear substantially horizontal from the viewer's perspective). In addition, in FIG. 5A29, virtual device 5016 is in a pointing mode of operation, used to point at and direct focus to particular virtual objects in simulated 3D space 5006 using focus indicator 5112 (e.g., displayed as a simulated beam of light). In particular, virtual device 5016 is used to direct focus to video 5108 (e.g., a poster representing associated video content), as indicated by highlighting 5110. Virtual device 5016 displays virtual user interface 5114 to indicate that virtual device 5016 is in a pointing mode of operation, and to indicate that an input (e.g., a tap gesture) on device 100 (e.g., on touch screen 112 of device 100), which corresponds to virtual device 5016, will cause a selection operation to be performed with respect to selected video 5108.
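A minimal sketch of selecting the virtual device's mode of operation from its pose, using the illustrative angle ranges given above (the names and the handling of poses outside those ranges are assumptions):

```swift
import Foundation

// Mode selection from the physical device's pitch above horizontal:
// below 30 degrees and in front of the user -> pointing mode (simulated
// light beam); 30-60 degrees -> normal touch interaction with the virtual
// device's user interface.
enum DeviceMode { case pointing, touchInteraction, other }

func mode(forPitchDegrees pitch: Double, inFrontOfUser: Bool) -> DeviceMode {
    switch pitch {
    case ..<30 where inFrontOfUser:
        return .pointing          // device held near horizontal
    case 30..<60:
        return .touchInteraction  // device raised toward the user
    default:
        return .other             // outside the ranges discussed here
    }
}

print(mode(forPitchDegrees: 10, inFrontOfUser: true))  // pointing
print(mode(forPitchDegrees: 45, inFrontOfUser: true))  // touchInteraction
```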
FIG. 5A30 illustrates a transition from FIG. 5A29. In particular, FIG. 5A30 illustrates a change in simulated 3D space 5006 in response to user 5002 moving device 100. In the example shown in FIG. 5A30, user 5002 has rotated device 100 slightly counterclockwise so as to move focus indicator 5112, and accordingly highlighting 5110, to video 5116 (e.g., a poster representing associated video content that is different from the video content associated with video 5108). Virtual device 5016 continues to display virtual user interface 5114 to indicate that virtual device 5016 is in the pointing mode of operation and that an input on the input device will cause a selection operation to be performed with respect to selected video 5116.

FIG. 5A31 illustrates a transition from FIG. 5A30. In particular, FIG. 5A31 illustrates a change in simulated 3D space 5006 in response to detecting an input by user 5002 on device 100 while displaying the display environment as shown in FIG. 5A30. In response to the input, video 5116 is selected and played on screen 5118 in simulated 3D space 5006. Virtual device 5016 displays virtual user interface 5120, which includes a poster representing video 5116 and a plurality of video playback control buttons for controlling video playback (e.g., a scrub bar for displaying and controlling video progress, a volume control bar, a pause button (which changes appearance between a pause button and a play button when selected/toggled), and fast-forward and rewind buttons). In addition, virtual user interface 5120 includes a second plurality of control buttons 5124 for controlling the appearance of screen 5118 in simulated 3D space 5006 and/or the appearance of simulated 3D space 5006 itself. In some embodiments, while playing video 5116, device 100 displays user interface 5122, which includes the poster representing video 5116 and the video playback controls. In some embodiments, the second plurality of control buttons is not displayed in user interface 5122 on device 100 (e.g., because the second plurality of control buttons controls features specific to viewing content in a virtual environment, such as simulated 3D space 5006). In some embodiments, while playing video 5116, device 100 displays a blank user interface or switches the display to a sleep state (e.g., because device 100 in physical space 5004 is not visible to user 5002 while user 5002 is viewing simulated 3D space 5006) (e.g., to conserve battery power).

FIG. 5A32 is similar to FIG. 5A31 in that FIG. 5A32 shows the same display environment shown in simulated 3D space 5006 of FIG. 5A31 but shows, in physical space 5004, a view of user 5002 holding device 100 at a pose that is at least 30 degrees from the horizontal direction, but less than approximately 60 degrees from the horizontal direction.

FIG. 5A33 illustrates a transition from FIG. 5A32. In FIG. 5A33, user 5002 has raised his left arm, on which user 5002 is wearing watch 5012. In response to detecting the lifting of watch 5012, virtual user interface 5126 is displayed in simulated 3D space 5006. Virtual user interface 5126 in simulated 3D space 5006 is a virtual representation of the user interface of watch 5012 as displayed in physical space 5004.

FIG. 5A34 illustrates device 100 receiving a notification of an incoming call while video content is being played on screen 5118 in simulated 3D space 5006. In response to receiving the incoming call, virtual device 5016 displays virtual user interface 5130 as shown in simulated 3D space 5006 (and, in some embodiments, device 100 accordingly displays user interface 5128 as shown in physical space 5004 of FIG. 5A34).

FIG. 5A35 illustrates a transition from FIG. 5A34. In particular, FIG. 5A35 shows input 5132 detected on touch screen 112 of device 100 at a location corresponding to answer button 5134 in user interface 5128, to answer the incoming call. Accordingly, focus indicator 5136 is displayed on virtual answer button 5138 in virtual user interface 5130.

FIG. 5A36 illustrates a transition from FIG. 5A35.
In response to detecting input 5132 to answer the incoming call, virtual device 5016 displays virtual user interface 5142 for an ongoing call (and, in some embodiments, device 100 accordingly displays user interface 5140 for an ongoing call, as shown in FIG. 5A36). In addition, playback of video content on screen 5118 is paused during the ongoing call. FIG. 5A36 illustrates, in physical space 5004, that user 5002 is holding device 100 in his hand and away from his body. In some embodiments, while user 5002 holds device 100 as shown in FIG. 5A36, audio from the ongoing call is output from one or more audio output devices of headset 5008 (e.g., headphones, earbuds, or speakers of headset 5008) on both sides of headset 5008 (e.g., so that user 5002 hears the audio in both ears).

FIG. 5A37 illustrates a transition from FIG. 5A36. In FIG. 5A37, user 5002 has moved device 100 up to his ear to continue the ongoing call. Accordingly, because device 100 is no longer in the field of view of user 5002 (or of headset 5008, such as one or more outward-facing cameras of headset 5008), virtual device 5016 ceases to be displayed in simulated 3D space 5006.

FIG. 5A38 illustrates playback of video content on screen 5118, and, in some embodiments, a transition from FIG. 5A37 (e.g., resuming playback of video content after ending the ongoing call illustrated in FIG. 5A37). FIG. 5A38 is similar to FIG. 5A31, except that the displayed video content corresponds to a different (e.g., later) point in the video being played.

FIG. 5A39 illustrates a transition from FIG. 5A38. In particular, FIG. 5A39 illustrates device 100 receiving a notification 5144 of an incoming message, displayed over user interface 5122. Accordingly, a virtual notification 5146 is displayed over virtual user interface 5120 in simulated 3D space 5006.

FIG. 5A40 illustrates a transition from FIG. 5A39. In particular, FIG. 5A40 shows input 5148 detected on touch screen 112 of device 100 at a location corresponding to notification 5144 displayed over user interface 5122. Accordingly, focus indicator 5150 is displayed on virtual notification 5146, which is displayed over virtual user interface 5120 in simulated 3D space 5006.

FIG. 5A41 illustrates a transition from FIG. 5A40. In response to detecting input 5148, virtual device 5016 displays virtual user interface 5154 for a messaging application (and, in some embodiments, device 100 accordingly displays user interface 5152 for the messaging application, as shown in FIG. 5A41). Virtual user interface 5154 is displayed at a pose that corresponds to a pose of virtual device 5016. In addition, playback of video content on screen 5118 is paused while virtual user interface 5154 for the messaging application is displayed on virtual device 5016.

FIG. 5A42 illustrates a transition from FIG. 5A41. In particular, FIG. 5A42 illustrates that user 5002 has raised device 100 so as to increase the pose of device 100 to a threshold pose (e.g., 60 degrees above horizontal and within a predefined region in front of the user). In response to detecting the increase in pose of device 100 to the threshold pose, virtual user interface 5154 is displayed at a slightly increased scale and as if slightly lifted away from the surface of virtual device 5016. Virtual device 5016 continues to be displayed at a location in simulated 3D space 5006 that corresponds to the position of device 100 in physical space 5004.

FIG. 5A43 illustrates a transition from FIG. 5A42. In particular, FIG. 5A43 illustrates that user 5002 has raised device 100 further so as to increase the pose of device 100 to significantly above the threshold pose (e.g., nearly vertical).
Virtual device 5016 continues to be displayed at a location in simulated 3D space 5006 that corresponds to the position of device 100 in physical space 5004. In addition, virtual user interface 5154 is displayed at a predefined zoom scale (e.g., a maximum zoom scale) and at a predefined location in simulated 3D space 5006 away from the location of virtual device 5016 (e.g., so as to improve the readability of virtual user interface 5154 as displayed in FIG. 5A43 compared to virtual user interface 5154 as displayed on virtual device 5016 in FIG. 5A41, for example). In some embodiments, the speed of the transition from virtual user interface 5154 being displayed as slightly lifted away from virtual device 5016 to virtual user interface 5154 being displayed at the predefined location and scale in simulated 3D space 5006 is faster than the speed at which user 5002 raises device 100 (e.g., faster than the speed at which the pose of device 100 changes).

FIG. 5A44 illustrates a transition from FIG. 5A43. In particular, FIG. 5A44 illustrates that user 5002 has entered text (e.g., "OK") in the text entry field of virtual user interface 5154, and that, in response to user 5002 entering text, a "Send" affordance is displayed in virtual user interface 5154. FIG. 5A44 also illustrates an input (e.g., a tap gesture) at a location on device 100 in physical space 5004 corresponding to activation of the "Send" affordance, where the input is represented by focus indicator 5156 accordingly displayed on the "Send" affordance in virtual user interface 5154.

FIG. 5A45 illustrates a transition from FIG. 5A44. In particular, FIG. 5A45 illustrates that, in response to the input activating the "Send" affordance, a message (e.g., "OK") is sent to a recipient (e.g., "Bob"), and displayed in virtual user interface 5154 (and, in some embodiments, accordingly in user interface 5152 on device 100).

FIG. 5A46 illustrates a transition from FIG. 5A45. In particular, FIG. 5A46 illustrates that user 5002 has moved device 100 toward the right while maintaining device 100 at the same pose as shown in FIG. 5A45. In response to the movement of device 100 toward the right, display of virtual device 5016 in simulated 3D space 5006 is updated to show movement of virtual device 5016 so that virtual device 5016 continues to be displayed at a location in simulated 3D space 5006 that corresponds to the position of device 100 in physical space 5004.

FIG. 5A47 illustrates a transition from FIG. 5A46. In particular, FIG. 5A47 illustrates that user 5002 has lowered device 100 so as to decrease the pose of device 100 to the same pose as shown in FIG. 5A42 corresponding to the threshold pose. Accordingly, the pose and location of virtual device 5016 in simulated 3D space 5006 are updated so as to correspond to the position of device 100 in physical space 5004. Although device 100 is at the same pose as shown in FIG. 5A42, virtual user interface 5154 is maintained at the predefined zoom scale and at the predefined location in simulated 3D space 5006 away from the location of virtual device 5016, rather than being redisplayed as if slightly lifted away from the surface of virtual device 5016. That is, the pose threshold at which upward movement of device 100 triggers display of virtual user interface 5154 at the predefined zoom scale and predefined location away from virtual device 5016 is distinct from (e.g., higher than) the pose threshold at which downward movement of device 100 triggers redisplay of virtual user interface 5154 on or nearly on the surface of virtual device 5016 (e.g., the upward and downward thresholds include hysteresis).
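The asymmetric pose thresholds amount to hysteresis around the lifted viewing mode. A minimal sketch, assuming hypothetical names and an illustrative 45-degree downward threshold (the description specifies only that it is lower than the 60-degree upward threshold):

```swift
import Foundation

// The pose threshold that lifts the virtual user interface away from the
// virtual device on upward movement is higher than the threshold that
// re-attaches it on downward movement, so small wobbles around a single
// threshold do not toggle the viewing mode.
struct LiftedUIController {
    let liftThreshold: Double = 60.0      // degrees above horizontal (upward crossing)
    let reattachThreshold: Double = 45.0  // lower downward threshold (assumption)
    private(set) var isLifted = false

    mutating func update(pitchDegrees: Double) {
        if !isLifted, pitchDegrees >= liftThreshold {
            isLifted = true    // display UI at predefined zoom scale, away from device
        } else if isLifted, pitchDegrees < reattachThreshold {
            isLifted = false   // redisplay UI on the surface of the virtual device
        }
        // Between the two thresholds the current mode is maintained (hysteresis).
    }
}

var controller = LiftedUIController()
for pitch in [50.0, 65, 55, 40] {
    controller.update(pitchDegrees: pitch)
    print(pitch, controller.isLifted)
}
// 50 false, 65 true, 55 true (still lifted at the same pose), 40 false
```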
FIG. 5A48 illustrates a transition from FIG. 5A47. In particular, FIG. 5A48 illustrates that user 5002 has lowered device 100 so as to decrease the pose of device 100 to the same pose as shown in FIG. 5A41. In the example shown in FIG. 5A48, device 100 has been lowered so as to have a pose that is below a (downward) pose threshold and, accordingly, virtual user interface 5154 is redisplayed on the surface of virtual device 5016 (e.g., the viewing mode in which virtual user interface 5154 is displayed at the predefined zoom scale and predefined location away from virtual device 5016 has been terminated).

FIGS. 6A-6E are flow diagrams illustrating method 600 of displaying and adjusting an appearance of a virtual user interface object in a virtual reality environment based on user inputs in the physical world, in accordance with some embodiments. Method 600 is performed at a computer system (e.g., portable multifunction device 100, FIG. 1A; device 300, FIG. 3A; or a multi-component computer system including headset 5008 and an input device (e.g., device 100 or device 5010), FIG. 5A1) that includes (and/or is in communication with) one or more display generation components (e.g., a display, a projector, a heads-up display, or the like), an input device (e.g., a touch-sensitive surface, such as a touch-sensitive remote control, or a touch-screen display that also serves as a display generation component of the computer system, a mouse, a joystick, a wand controller, and/or cameras tracking the position of one or more features of the user such as the user's hands or eyes), and optionally one or more pose sensors for detecting respective poses of one or more of the input device (e.g., device 100 or device 5010 and/or watch 5012, FIG. 5A1) and display generation components (e.g., the pose sensors include one or more cameras, gyroscopes, inertial measurement units, or other sensors that enable the computer system to detect changes in an orientation and/or position of the computer system or parts thereof relative to a physical environment of the computer system). In some embodiments, the computer system (e.g., the input device of the computer system) includes one or more sensors to detect intensities of contacts with the input device (e.g., a touch-sensitive surface), and optionally one or more tactile output generators. In some embodiments, the computer system includes one or more cameras (e.g., video cameras that continuously provide a live view of at least a portion of the contents that are within the field of view of the cameras and optionally generate video outputs including one or more streams of image frames capturing the contents within the field of view of the cameras). In some embodiments, the input device (e.g., with a touch-sensitive surface) and the display generation component are integrated into a touch-sensitive display.

As described above with respect to FIGS. 3B-3C, in some embodiments, method 600 is performed at a computer system 301 (e.g., computer system 301-a, 301-b, or 301-c) in which respective components, such as a display generation component, one or more cameras, one or more input devices, and optionally one or more pose sensors, are each either included in or in communication with computer system 301. In some embodiments, the display generation component is a touch-screen display and the input device (e.g., with a touch-sensitive surface) is on or integrated with the display generation component. In some embodiments, the display generation component is separate from the input device (e.g., as shown in FIG. 4B and FIG. 5A1).
Some operations in method 600 are, optionally, combined and/or the order of some operations is, optionally, changed. For convenience of explanation, some of the embodiments will be discussed with reference to operations performed on a computer system (e.g., as shown in FIG. 5A1) with a headset 5008 and a separate input device (e.g., device 100 or device 5010) with a touch-sensitive surface in response to detecting contacts on the touch-sensitive surface of the input device while displaying some of the user interfaces shown in the figures on the display of headset 5008, and optionally while displaying some of the user interfaces shown in the figures on a separate display generation component of input device 100. However, analogous operations are, optionally, performed on a computer system with a touch-sensitive display system 112 (e.g., on device 100 with touch screen 112) and optionally one or more integrated cameras. Similarly, analogous operations are, optionally, performed on a computer system having one or more cameras that are implemented separately (e.g., in a headset) from one or more other components (e.g., an input device) of the computer system; and in some such embodiments, "movement of the computer system" corresponds to movement of one or more cameras of the computer system, or movement of one or more cameras in communication with the computer system.

As described below, method 600 relates to displaying and adjusting an appearance of a virtual user interface object (e.g., a representation of a smartphone) in a virtual reality environment (e.g., an immersive three-dimensional environment that is experienced through sensory stimuli such as sights and sounds and that provides additional information and experiences to a user that are not available in the physical world), based on user inputs in the physical world (e.g., on a smartphone in the physical world). Allowing a user to interact with and control a virtual device in the virtual reality environment as if the user were interacting with the corresponding real device in the physical world provides an intuitive and efficient way for the user to access functions of the real device while still immersed in the virtual reality environment (e.g., without requiring the user to remove equipment such as a virtual reality headset and headphones), thereby enhancing the operability of the device and making the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

The computer system (e.g., device 100, FIG. 5A2) displays (602), via at least a first display generation component (e.g., a VR headset, a stereo display, a 3D display, a holographic projector, a volumetric display, etc.) (e.g., headset 5008, FIG. 5A1) of the one or more display generation components, a view of at least a portion of a simulated three-dimensional space (e.g., a portion of a 3D virtual environment that is within the user's field of view) (e.g., simulated 3D space 5006, FIG. 5A2).
In some embodiments, the simulated three-dimensional space is an immersive environment that has boundaries that do not move relative to the user (e.g., as the user moves around, the boundaries are either hidden from the user by being placed outside of a field of view of the user and/or the boundaries move as the user moves or provides inputs that correspond to movement in the immersive environment, to give the illusion that the simulated three-dimensional space extends in all directions around the user).

The computer system also displays, via at least the first display generation component, a view of a user interface object (e.g., virtual device 5016, FIG. 5A2) that is located within the simulated three-dimensional space. The user interface object is a representation of a computing device (e.g., a handheld computing device, such as a smartphone) (e.g., device 100, FIG. 5A2) that has a non-immersive display environment (e.g., a user interface where the boundaries of the user interface are visible to the user and the boundaries of the user interface move relative to the user in accordance with user inputs) (e.g., a touch screen display of a handheld computing device, such as touch screen 112 of device 100, FIG. 5A2) that provides access to a plurality of different applications (e.g., the user interface object is a 3D graphical image that visually resembles and/or represents a handheld device (e.g., a smartphone) that provides access to a plurality of user applications, such as an instant messages application, a maps application, a calendar application, an e-mail application, etc.) (e.g., as shown in FIG. 5A2). In some embodiments, the computer system includes the computing device (e.g., device 100, FIG. 5A2), wherein a touch-screen display of the computing device (e.g., touch screen 112, FIG. 5A2) is the input device of the computer system.

The user interface object (e.g., virtual device 5016, FIG. 5A2) includes a first user interface (e.g., a two-dimensional user interface) (e.g., virtual user interface 5018, FIG. 5A2) that corresponds to the non-immersive display environment (e.g., the 3D graphical image of the handheld device (e.g., virtual device 5016, FIG. 5A2) includes a representation (e.g., an exact image, augmented image, or stylized image) of a user interface of a type that is, in some circumstances, displayed on the touch-screen display that provides access to the plurality of user applications) of the computing device (e.g., user interface 5020, FIG. 5A2) and is responsive to touch inputs from a user on the input device (e.g., in the same manner or a consistent manner that the user interface shown in the non-immersive display environment responds to touch inputs). In addition, a pose of the user interface object in the simulated three-dimensional space corresponds to a pose of the input device in a physical space surrounding the input device (e.g., the orientation of the user interface object is continuously updated to correspond to the orientation of the input device when the input device moves relative to the physical space surrounding the input device) (e.g., as shown in FIGS. 5A28-5A30).

The computer system detects (604) a touch input (e.g., a tap or swipe input) at a location on the input device that corresponds to a respective location in the first user interface (e.g., a button shown in the first user interface, or a slider or scrollable map shown in the first user interface) (e.g., as shown in FIG. 5A2).
In response to detecting the touch input on the input device (606): in accordance with a determination that the touch input is detected at a location on the input device that corresponds to a first location in the first user interface, the computer system updates an appearance of the first user interface that is displayed on the user interface object in a first manner (e.g., launches a first application in the first user interface that is included on the surface of the user interface object, or moves or resizes a user interface object displayed in the first user interface at the first location) (e.g., as shown in FIGS. 5A2-5A3, where a tap input on the icon for the maps application launches the maps application); and in accordance with a determination that the touch input is detected at a location on the input device that corresponds to a second location in the first user interface, the computer system updates the appearance of the first user interface that is displayed on the user interface object in a second manner that is different from the first manner (e.g., launches a second application in the first user interface that is included on the surface of the user interface object, or moves or resizes a user interface object displayed in the first user interface at the second location) (e.g., as shown in FIGS. 5A6-5A7, where a tap input on the icon for the video player application launches the video player application).

In some embodiments, in response to detecting the touch input on the input device (608): in accordance with a determination that the touch input is detected at the location on the input device that corresponds to the first location in the first user interface, the computer system updates an appearance of the three-dimensional space in a third manner in accordance with the update in appearance made to the first user interface (e.g., in accordance with the update in appearance made to the first user interface in the first manner); and in accordance with a determination that the touch input is detected at the location on the input device that corresponds to the second location in the first user interface, the computer system updates the appearance of the three-dimensional space in a fourth manner that is different from the third manner, in accordance with the update in appearance made to the first user interface (e.g., in accordance with the update in appearance made to the first user interface in the second manner).

For example, in some embodiments, in accordance with a determination that the touch input is detected at the location on the input device that corresponds to the first location in the first user interface, a first application (e.g., a map application) is launched in the first user interface and the appearance of the three-dimensional space is updated to match the first application (e.g., the three-dimensional space is updated to include a 3D map corresponding to the launched map application, as shown in FIG. 5A3).
In some embodiments, in accordance with a determination that the touch input is detected at the location on the input device that corresponds to the second location in the first user interface, a second application (e.g., a video player application) is launched in the first user interface and the appearance of the three-dimensional space is updated to match the second application (e.g., the three-dimensional space is updated to include a plurality of selectable representations of videos available to be launched via the video player application, as shown in FIG. 5A7). In some embodiments, the computer system detects a plurality of (distinct) touch inputs at respective locations on the input device that correspond to respective locations in the first user interface (e.g., a series or sequence of touch inputs), and the plurality of touch inputs includes at least one touch input that is detected at a location on the input device that corresponds to the first location in the first user interface, and at least one touch input that is detected at a location on the input device that corresponds to the second location in the first user interface.

Updating the appearance of the three-dimensional space in accordance with the updates made to the user interface of the virtual device (e.g., the first user interface of the user interface object) improves the visual feedback provided to the user (e.g., by making the computer system appear more responsive to user input), provides the user with a more immersive and/or intuitive viewing experience, enhances the operability of the device, and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, the computer system updates (610), or causes the computing device to update, a second user interface (e.g., user interface 5032, FIG. 5A3) displayed through the non-immersive display environment of the computing device (e.g., device 100, FIG. 5A3) in accordance with the touch input directed to the first location in the first user interface (e.g., as shown in FIG. 5A2). In some embodiments, the second user interface (e.g., user interface 5032, FIG. 5A3) corresponds to the first user interface (e.g., virtual user interface 5030, FIG. 5A3). For example, the change made to the second user interface in the non-immersive display environment on the display of the computing device is the same change that is shown on the first user interface in the immersive display environment, and the same change to the second user interface can be shown concurrently with the change shown in the immersive display environment (e.g., as shown in FIG. 5A3). Similarly, in some embodiments, the second user interface displayed through the non-immersive display environment of the computing device is updated in accordance with the touch input directed to the second location in the first user interface (e.g., as shown in FIGS. 5A6-5A7) (e.g., the user interface of the computing device and the user interface of the corresponding virtual device in the immersive display environment are kept synchronized as the user interacts with the virtual device by providing inputs to the computing device while viewing the virtual device in the immersive display environment, as shown in FIGS. 5A2-5A11).
Updating the appearance of the user interface of the real device (e.g., the second user interface displayed through the non-immersive display environment of the computing device) in accordance with updates made to the user interface of the virtual device (e.g., with touch inputs in the first user interface of the user interface object) improves the visual feedback provided on the computing device (e.g., by keeping the user interface of the computing device synchronized with the corresponding virtual device), enhances the operability of the device (e.g., by allowing the user to interact with either the virtual device or the real device), and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, the computing device that has the non-immersive display environment (e.g., a smartphone, such as device 100) has (612) an internal state that is used to determine the appearance of the first user interface that is displayed on the user interface object (e.g., in the simulated three-dimensional immersive display environment) (e.g., the virtual user interface that is displayed on virtual device 5016 in the simulated 3D space 5006, FIGS. 5A2-5A11). Using the internal state of the computing device to determine the appearance of the virtual device (e.g., the appearance of the first user interface that is displayed on the user interface object) keeps the real computing device and the virtual device synchronized, improves the visual feedback provided to the user (e.g., by making the computer system appear more responsive to user input), enhances the operability of the device (e.g., by allowing the user to interact with either the virtual device or the real device), and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, inputs in the first user interface on the user interface object (e.g., touch inputs from the user on the input device that correspond to inputs in the first user interface) (e.g., touch inputs from user 5002 on device 100 that correspond to inputs in the virtual user interface that is displayed on virtual device 5016) cause (614) one or more changes in the internal state of the computing device (e.g., device 100) that has the non-immersive display environment (e.g., sent/read messages, reordered applications, recently used application information, etc.). For example, in some embodiments, messages that are marked as read in the first user interface on the user interface object in the simulated three-dimensional space (e.g., in the virtual user interface on virtual device 5016 in the simulated 3D space 5006) are also marked as read in the internal state of the computing device that has the non-immersive display environment (e.g., in device 100 in the physical space 5004).
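A minimal sketch of this shared internal state, with hypothetical names: one authoritative state object owned by the physical device backs both the non-immersive user interface and the virtual device's user interface, so a change made through either is reflected in both:

```swift
import Foundation

// One authoritative internal state (owned by the physical computing device)
// determines the appearance of both user interfaces.
final class DeviceState {
    private(set) var unreadMessages: Set<String> = ["msg-1", "msg-2"]

    // Marking a message read through either environment updates this state.
    func markRead(_ id: String) {
        unreadMessages.remove(id)
    }
}

struct UserInterfaceView {
    let name: String
    func render(from state: DeviceState) {
        print("\(name): \(state.unreadMessages.count) unread message(s)")
    }
}

let state = DeviceState()
let physicalUI = UserInterfaceView(name: "device 100 (non-immersive)")
let virtualUI = UserInterfaceView(name: "virtual device 5016 (immersive)")

state.markRead("msg-1")        // marked read in the immersive environment...
physicalUI.render(from: state) // ...is also read on the physical device
virtualUI.render(from: state)  // both render 1 unread message
```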
Changing the internal state of the computing device in response to inputs in the virtual device (e.g., in the first user interface on the user interface object) keeps the real computing device and the virtual device synchronized, improves the visual feedback provided to the user (e.g., by making the computer system appear more responsive to user input), enhances the operability of the device (e.g., by allowing the user to interact with either the virtual device or the real device), and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, changes in the internal state of the computing device that has the non-immersive display environment (e.g., device100in the physical space5004) cause (616) changes in the first user interface on the user interface object (e.g., in the simulated three-dimensional immersive display environment) (e.g., in the virtual user interface on virtual device5016in the simulated 3D space5006) (e.g., incoming notifications). For example, in some embodiments, incoming notifications for the computing device that has the non-immersive display environment (e.g., incoming notifications on a smartphone) cause changes in the first user interface on the user interface object (e.g., the incoming notifications also appear in the first user interface on the user interface object in the simulated three-dimensional immersive display environment) (e.g., as shown in FIG.5A39with respect to an incoming text message) (e.g., as shown in FIG.5A34with respect to an incoming phone call). Changing the virtual device (e.g., changing the first user interface on the user interface object) in accordance with changes in the real computing device (e.g., in the internal state of the computing device) keeps the real computing device and the virtual device synchronized, improves the visual feedback provided to the user (e.g., by making the computer system appear more responsive to user input), enhances the operability of the device (e.g., by allowing the user to interact with either the virtual device or the real device), and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the first user interface that is displayed on the user interface object (e.g., in the simulated three-dimensional immersive display environment) (e.g., the virtual user interface that is displayed on virtual device5016in the simulated 3D space5006, FIGS.5A2-5A11) is (618) a simulation of a touch-sensitive user interface on the computing device (e.g., a device with a touch-sensitive display that has a non-immersive display environment) (e.g., the user interface on device100, FIGS.5A2-5A11). 
For example, in some embodiments, the first user interface that is displayed on the user interface object is a simulation of a smartphone user interface (e.g., as shown in FIGS.5A2-5A11). Simulating a touch-sensitive user interface of a real device on the user interface of the virtual device (e.g., the first user interface that is displayed on the user interface object) provides a consistent user interface (and consistency between what is displayed and what a user would expect to see), improves the visual feedback provided to the user (e.g., by making the computer system appear more responsive to user input), enhances the operability of the device (e.g., by allowing the user to interact with either the virtual device or the real device), and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the computer system detects (620) a change in the pose of the input device in the physical space surrounding the input device; and in response to detecting the change in the pose of the input device in the physical space surrounding the input device, changes the pose of the user interface object (and the included first user interface) in the simulated three-dimensional space (e.g., as shown in FIGS.5A28-5A30where user5002changes the pose of device100in the physical space5004and the pose of virtual device5016in the simulated 3D space5006changes accordingly). For example, in some embodiments, the pose of the user interface object in the simulated three-dimensional space (e.g., the pose of virtual device5016in the simulated 3D space5006) is continuously updated to correspond to the pose of the input device (e.g., the pose of device100in the physical space5004) when the input device moves relative to the physical space surrounding the input device. Changing the pose of the virtual device (e.g., the user interface object) in response to changes in pose of the input device improves the visual feedback provided to the user (e.g., by making the computer system appear more responsive to user input and user movement and by providing consistency between what is displayed and what a user would expect to see), enhances the operability of the device, and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the first display generation component is (622) a virtual reality headset (e.g., headset5008-a, FIG.5A1); the input device is a handheld computing device (e.g., a smartphone with a touch-sensitive display) (e.g., device100, FIG.5A1) that is distinct from the virtual reality headset; and the handheld computing device sends the first user interface (e.g., the virtual user interface on virtual device5016in the simulated 3D space5006) to the virtual reality headset for display. 
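The continuous pose updating of operation 620 above can be pictured as a per-frame copy (optionally smoothed) of the sensed physical pose onto the virtual device. The following is a minimal sketch under that assumption; the Pose fields and the smoothing factor are hypothetical:

```python
# Minimal sketch of continuous pose synchronization: each display frame,
# the pose reported by the input device's sensors is moved toward the
# user interface object's pose in the simulated space. Hypothetical names.

from dataclasses import dataclass

@dataclass
class Pose:
    x: float; y: float; z: float            # position in meters
    pitch: float; yaw: float; roll: float   # orientation in degrees

def lerp(a, b, t):
    return a + (b - a) * t

def update_virtual_pose(virtual: Pose, sensed: Pose, smoothing=0.5) -> Pose:
    """Move the virtual device's pose toward the sensed physical pose."""
    return Pose(*(lerp(getattr(virtual, f), getattr(sensed, f), smoothing)
                  for f in ("x", "y", "z", "pitch", "yaw", "roll")))

virtual = Pose(0, 0, 0, 0, 0, 0)
sensed = Pose(0.1, 1.2, -0.3, 45, 10, 0)
for _ in range(10):                          # one iteration per display frame
    virtual = update_virtual_pose(virtual, sensed)
```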
Allowing the user to use a handheld computing device (e.g., a smartphone) as the input device and view the virtual reality environment via a separate virtual reality headset improves the feedback provided to the user (e.g., by allowing the user to use a familiar input device with familiar tactile feedback), enhances the operability of the device, and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the computer system is (624) a handheld computing device (e.g., a smartphone); and the first display generation component is a touch-screen display of the handheld computing device (e.g., touch screen112of device100). For example, in some embodiments, a touch-screen display of a smartphone displays the view of at least a portion of the simulated three-dimensional space and the view of the user interface object that is located within the simulated three-dimensional space, and the smartphone is inserted into a VR headset (e.g., headset5008-b, FIG.5A1) (e.g., a cardboard VR headset) for the user to wear. Displaying the virtual reality environment using the display of a handheld computing device (e.g., a smartphone) allows the user to view the virtual reality environment without requiring a separate display generation component, enhances the operability of the device (e.g., by allowing the user to use the device as a smartphone or as a virtual reality viewer), and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the input device is (626) a touch-sensitive remote control that does not have a display (e.g., device5010, FIG.5A1). Allowing the user to use a touch-sensitive remote control as the input device (while using a handheld computing device such as a smartphone to view the virtual reality environment) provides an efficient way to operate/interact with the computing device when the computing device is not available as an input device and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. 
In some embodiments, while displaying the three-dimensional space in a manner corresponding to a first application (e.g., a video player application, as shown in FIG.5A38) (628), the computer system: detects a second touch input (e.g., a tap or swipe input) at a location on the input device that corresponds to launching a second application, distinct from the first application, in the first user interface that is included on the surface of the user interface object (e.g., in the simulated three-dimensional immersive display environment) (e.g., as shown in FIG.5A40); and in response to detecting the second touch input on the input device: displays the second application in the first user interface, while maintaining display of the three-dimensional space in a manner corresponding to the first application (e.g., as shown in FIG.5A41). For example, in some embodiments, the first user interface of the user interface object in the simulated three-dimensional display environment is used to view and/or interact with a second application (e.g., a camera application, a messaging application, a phone application, etc.) while the simulated three-dimensional display environment continues to correspond to a first application (e.g., a video player application that is playing a movie in the simulated three-dimensional display environment) (e.g., as shown in FIGS.5A33-5A48). For example, notifications (e.g., messages, calls, etc.) received on the virtual phone (e.g., on the first user interface of the user interface object) can be answered in the simulated three-dimensional display environment (e.g., via touch inputs on the virtual phone) (e.g., as described herein with reference to operation646of method600). As another example, in some embodiments, the second application is used to “punch through” the simulated three-dimensional display environment to the real physical world. For example, the virtual phone (e.g., through the first user interface of the user interface object, virtual device5016) becomes a window to the real world (e.g., through a camera application). Allowing the user to view content from a first application in the virtual reality environment (e.g., in the three-dimensional space) and content from a second application on the virtual phone (e.g., on the first user interface of the user interface object) simulates usage of a real phone in the physical world, improves the visual feedback provided to the user (e.g., by making the computer system appear more responsive to user input), enhances the operability of the device (e.g., by allowing the user to interact with either the virtual device or the real device), and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the user interface object is (630) displayed at a location within the simulated three-dimensional space that corresponds to a location of the input device in the user's hand in the physical space (e.g., as shown in FIGS.5A28-5A30). 
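One way to picture the decoupling of operation 628 described above, in which the space keeps rendering the first application while the virtual phone switches to a second application, is a session object holding two independent foreground-application slots. A minimal sketch with hypothetical names:

```python
# Minimal sketch: the immersive space and the virtual phone each track
# their own foreground application, so opening a second app on the
# virtual phone leaves the space rendering the first app (e.g., a movie
# keeps playing while a message is answered). Names are hypothetical.

class ImmersiveSession:
    def __init__(self, environment_app):
        self.environment_app = environment_app  # drives the 3D space
        self.phone_app = environment_app        # drives the virtual phone

    def launch_on_virtual_phone(self, app):
        # Only the virtual phone's foreground app changes.
        self.phone_app = app

session = ImmersiveSession("video player")
session.launch_on_virtual_phone("messages")
assert session.environment_app == "video player"   # movie keeps playing
assert session.phone_app == "messages"
```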
Displaying the user interface object at a location within the simulated three-dimensional space that corresponds to a location of the input device in the user's hand in the physical space improves the visual feedback provided to the user (e.g., by making the computer system appear more responsive to user input and by providing consistency between what is displayed and what a user would expect to see), provides the user with a more immersive and/or intuitive viewing experience, enhances the operability of the device, and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the first user interface that is displayed on the user interface object (e.g., the virtual user interface that is displayed on virtual device5016in the simulated 3D space5006, FIGS.5A2-5A11) when the computing device is in a first mode of operation (e.g., a first application or home screen is designated for display on the computing device, such as device100in FIGS.5A2-5A11) while the user is viewing the simulated three-dimensional environment (e.g., in the simulated three-dimensional immersive display environment) corresponds (632) to a second user interface that is displayed on the computing device (e.g., the user interface that is displayed on device100in the physical space5004, FIGS.5A2-5A11) when the computing device is in the first mode of operation (e.g., displayed on the display of the computing device through the non-immersive display environment of the computing device) (e.g., as shown in FIGS.5A2-5A11). In some embodiments, the first user interface that is displayed on the user interface object (e.g., the user interface of the virtual reality phone in the simulated three-dimensional immersive display environment) (e.g., the virtual user interface that is displayed on virtual device5016in the simulated 3D space5006, FIGS.5A2-5A11) mimics the user interface of a smartphone in the physical environment (e.g., the user interface that is displayed on device100in the physical space5004, FIGS.5A2-5A11). For example, in some embodiments, applications on the virtual reality phone are the same applications as on the smartphone in the physical environment, with the same organization (e.g., as shown in FIG.5A2). In some embodiments, navigational inputs on the virtual reality phone result in the same changes as navigational inputs on the actual smartphone (e.g., as shown in FIGS.5A2-5A11). In some embodiments, the first user interface includes more or less information (e.g., device status information) than the second user interface (e.g., user interface5020displayed on device100in the physical space5004includes device status information that is not displayed in virtual user interface5018that is displayed on virtual device5016in the simulated 3D space5006). 
Displaying a first user interface (e.g., on the virtual reality phone in the simulated three-dimensional environment) that corresponds to a second user interface (e.g., on the real computing device in the physical world) provides a consistent user interface (and consistency between what is displayed and what a user would expect to see), improves the visual feedback provided to the user (e.g., by making the computer system appear more responsive to user input), enhances the operability of the device (e.g., by allowing the user to interact with either the virtual device or the real device), and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the first user interface that is displayed on the user interface object (e.g., virtual user interface5060that is displayed on virtual device5016in the simulated 3D space5006, FIG.5A17) includes (634) additional depth compared to the second user interface displayed through the non-immersive display environment of the computing device (e.g., on a display of the computing device) (e.g., user interface5062that is displayed on device100in the physical space5004, FIG.5A17). In some embodiments, one or more user interface elements from the second user interface (e.g., icons and/or text from user interface5062that is displayed on device100in the physical space5004, FIG.5A17) that are spaced apart by a respective amount in a first direction on the display of the device (e.g., along a simulated z-axis that extends out of the display of the device) are spaced apart by more than the respective amount in the first direction (e.g., along a simulated z-axis that extends out of the simulated display of the user interface object) in the first user interface (e.g., in the simulated three-dimensional environment) (e.g., icons and/or text from virtual user interface5060that is displayed on virtual device5016in the simulated 3D space5006are spaced apart by more than the respective amount in the first direction, FIG.5A17). For example, in some embodiments, the icons and/or text displayed in the first user interface appear to float off the virtual reality phone in the simulated three-dimensional immersive display environment (e.g., as shown in FIG.5A17). Displaying the first user interface (e.g., of the virtual device) with additional depth compared to the second user interface (e.g., of the real device) provides the user with a more enhanced and immersive viewing experience, improves the visual feedback provided to the user (e.g., by making the computer system appear more responsive to user input), enhances the operability of the device, and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. 
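The "additional depth" of operation 634 can be pictured as multiplying the z-offsets of the flat user interface's layers before rendering them on the virtual device. A minimal sketch follows; the expansion factor of 4 and all names are hypothetical:

```python
# Minimal sketch: z-offsets that separate UI layers on the flat display
# are multiplied by an expansion factor when the same elements are
# rendered on the virtual device, so icons and text appear to float off
# its surface. The factor and names are hypothetical.

def expand_depth(elements, factor=4.0):
    """elements: list of (name, z_offset) pairs from the flat UI."""
    return [(name, z * factor) for name, z in elements]

flat_ui = [("background", 0.000), ("icon", 0.002), ("badge", 0.004)]
immersive_ui = expand_depth(flat_ui)
# [('background', 0.0), ('icon', 0.008), ('badge', 0.016)]
```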
In some embodiments, the computer system detects (636) an activation of a button of the input device (e.g., volume up/down, home, power, etc.); and in response to detecting the activation of the button of the input device, updates the appearance of the first user interface that is displayed on the user interface object in accordance with the activation of the button (e.g., as shown in FIGS.5A20-5A21). For example, in some embodiments, where the input device is a smartphone with a home button (e.g., device100, FIG.5A20), in response to detecting activation of the home button, the computer system updates (or causes the computing device and/or one or more display generation components to update) the first user interface to display the home screen (e.g., virtual user interface5019, FIG.5A11). In some embodiments, the button is a display off button and in response to detecting activation of the button, the first user interface is replaced with a representation of the device with a display off (e.g., virtual device5016with the display off). In some embodiments, the button is a display lock button and in response to detecting activation of the button, the first user interface is replaced with a lock screen. In some embodiments, the button is a virtual assistant button (e.g., a home button of device100) and in response to detecting activation of the button (e.g., a long press of the home button of device100, as shown in FIG.5A20), the device displays an indication of a virtual assistant (e.g., as shown in FIG.5A21). Updating the appearance of the first user interface (e.g., of the virtual device) in accordance with activation of a button of the input device (e.g., the real computing device) keeps the real computing device and the virtual device synchronized, improves the visual feedback provided to the user (e.g., by making the computer system appear more responsive to user input and by providing consistency between what is displayed and what a user would expect to see), enhances the operability of the device (e.g., by allowing the user to interact with either the virtual device or the real device), and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the computer system displays (638) a first view of at least a portion of the simulated three-dimensional space (e.g., a portion of a 3D virtual environment associated with a third application), wherein the first view is associated with a third application; and detects a second touch input (e.g., a tap or swipe input) on the input device, wherein the second touch input launches a fourth application, distinct from the third application (e.g., the second touch input is a tap input on an application icon to launch the fourth application) (e.g., the second touch input is a side swipe input to switch between different applications in the application stack). In some embodiments, the first user interface in the simulated three-dimensional environment is a view of the third application when the second touch input is detected (e.g., a view of the maps application, as shown in FIG.5A3, and the second touch input is a side swipe input used to switch between open applications from the maps application to the video player application). 
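Returning to the button handling of operation 636 above, the mapping from button events to virtual user interface updates can be sketched as a simple dispatch. This mirrors only the examples given in the text (home, lock, power, long press for an assistant); the function and its return values are hypothetical:

```python
# Minimal sketch: a button event on the input device is mapped to an
# update of the virtual device's user interface. All names hypothetical.

def handle_button(event, long_press=False):
    if event == "home":
        return "assistant" if long_press else "home_screen"
    if event == "lock":
        return "lock_screen"
    if event == "power":
        return "display_off"
    return "unchanged"

assert handle_button("home") == "home_screen"
assert handle_button("home", long_press=True) == "assistant"
assert handle_button("lock") == "lock_screen"
```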
In some embodiments, the first user interface in the simulated three-dimensional environment is a view of a home screen of the device when the second touch input is detected (e.g., a view of home screen user interface5020, as shown in FIG.5A6, and the second touch input is a tap input on an application icon to launch the video player application, as shown in FIG.5A7). In some embodiments, in response to detecting the second touch input on the input device, the computer system displays a second view of at least a portion of the simulated three-dimensional space (e.g., simulated 3D space5006), distinct from the first view, wherein the second view is associated with the fourth application (e.g., a video player application, as shown in FIG.5A7). Updating the appearance of the simulated three-dimensional space in accordance with a currently selected application improves the visual feedback provided to the user (e.g., by making the computer system appear more responsive to user input and by providing consistency between what is displayed and what a user would expect to see), provides the user with a more immersive and/or intuitive viewing experience, enhances the operability of the device, and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the first user interface that is displayed on the user interface object (e.g., in the simulated three-dimensional immersive display environment) (e.g., virtual user interface5030that is displayed on virtual device5016in the simulated 3D space5006, FIGS.5A22-5A24) extends (640) outside the user interface object (e.g., when scrolling a list on the first user interface, the list appears to scroll beyond the edge of the user interface object before fading out) (e.g., when scrolling through the map of virtual user interface5030, the map is displayed so as to appear to extend beyond the edge of virtual device5016before fading out, as shown in FIGS.5A22-5A24). Displaying the first user interface (e.g., of the virtual device) extending outside the user interface object (e.g., beyond the edge(s) of the virtual device) provides the user with a more enhanced and immersive viewing experience, improves the visual feedback provided to the user (e.g., by making the computer system appear more responsive to user input), enhances the operability of the device, and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the first user interface that is displayed on the user interface object (e.g., in the simulated three-dimensional immersive display environment) (e.g., the virtual user interface that is displayed on virtual device5016in the simulated 3D space5006, FIGS.5A2-5A11) includes (642) additional information about one or more of the user interface objects in the simulated three-dimensional space. 
For example, the device (e.g., virtual device5016, FIG.5A27) displays metadata about a representation of a media item (e.g., audio or video such as a song, movie, television show, or the like) that is displayed in the simulated three-dimensional space (e.g., in FIG.5A27, although not shown in detail here, in some embodiments, virtual user interface5040displays metadata about the listed movie selections). Displaying additional information about one or more of the user interface objects in the simulated three-dimensional space on the first user interface (e.g., of the virtual device) provides the user with a more enhanced and immersive viewing experience, improves the visual feedback provided to the user (e.g., by making the computer system appear more responsive to user input), enhances the operability of the device, and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the first user interface that is displayed on the user interface object corresponds (644) to a respective application, and the method further includes: in accordance with a determination that the respective application includes instructions for displaying one or more three-dimensional objects in the simulated three-dimensional space (e.g., a full virtual reality experience), updating an appearance of the three-dimensional space in accordance with the instructions from the respective application (e.g., as shown in FIG.5A3with respect to a maps application) (e.g., as shown in FIG.5A7with respect to a video player application); and in accordance with a determination that the respective application does not include instructions for displaying a simulated three-dimensional environment (e.g., a full virtual reality experience), displaying the simulated three-dimensional space includes displaying at least a portion of a two-dimensional user interface for the respective application that is adapted for display in the non-immersive display environment of the computing device (e.g., a weather forecast user interface) along with one or more other user interface objects (e.g., clouds, raindrops, lightning, etc.) generated based on information from the respective application (e.g., that includes a slightly enhanced version of the application content (e.g., weather app with VR weather such as clouds or rain)) (e.g., as shown in FIGS.5A16-5A19). In some embodiments, the computer system displays a plurality of (distinct) user interfaces on the user interface object, the plurality of user interfaces corresponding to respective applications, and the plurality of user interfaces includes at least one user interface corresponding to an application that includes instructions for displaying one or more three-dimensional objects in the simulated three-dimensional space, and at least one user interface corresponding to an application that does not include instructions for displaying a simulated three-dimensional environment. 
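A minimal sketch of the capability check of operation 644 follows, assuming an application either supplies its own 3D scene or exposes a 2D interface plus data from which ambient objects are generated; all names (make_scene, generate_ambient, and so on) are hypothetical:

```python
# Minimal sketch: if an application supplies its own 3D scene, the space
# renders it; otherwise the space shows the app's adapted 2D interface
# plus ambient objects generated from its data (e.g., clouds for a
# weather app). All names are hypothetical.

def build_space(app):
    if getattr(app, "make_scene", None):          # full VR experience
        return app.make_scene()
    scene = {"panel": app.render_2d()}            # adapted 2D interface
    scene["ambient"] = generate_ambient(app.data) # e.g., clouds, raindrops
    return scene

def generate_ambient(data):
    return ["cloud"] * data.get("cloud_cover", 0)

class WeatherApp:
    data = {"cloud_cover": 3}
    def render_2d(self):
        return "weather forecast panel"

print(build_space(WeatherApp()))   # panel plus three generated clouds
```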
Displaying the simulated three-dimensional space with at least one or more other user interface objects generated based on information from the respective application (e.g., when the respective application does not include instructions for displaying a full virtual reality experience) or displaying the three-dimensional space in accordance with instructions from the respective application provides the user with a more enhanced and immersive viewing experience, improves the visual feedback provided to the user (e.g., by making the computer system appear more responsive to user input), enhances the operability of the device, and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the computer system receives (646) an indication that an event (e.g., receipt of an electronic communication such as a phone call, email, or other electronic message, or notification that corresponds to an application on the device) has occurred that corresponds to generation of a notification at the computing device (e.g., receiving a phone call, as shown in FIG.5A34) (e.g., receiving a text message, as shown in FIG.5A39); and in response to receiving the indication, displays a representation of the notification on the first user interface that is displayed on the user interface object (e.g., in the simulated three-dimensional immersive display environment) (e.g., as shown in FIG.5A34and FIG.5A39) and, optionally in some embodiments, displays a representation of the notification on the non-immersive display environment of the computing device (e.g., on device100, as shown in FIG.5A34and FIG.5A39). For example, in some embodiments, the notification received on the computing device is an incoming phone call, which the user can answer or decline by tapping on (or sliding) affordances on the first user interface (e.g., in the simulated three-dimensional immersive display environment) (e.g., as shown in FIGS.5A34-5A37). As another example, in some embodiments, the notification received on the computing device is a message or alert, which the user can respond to in the simulated three-dimensional immersive display environment by: 1) ignoring the message/alert, which will cause the computer system to cease to display the message/alert in the first user interface in the simulated three-dimensional space after a respective period of time with no interaction, 2) dismissing the message/alert with a gesture such as a flick on the touch-sensitive surface (of the input device) at a location that corresponds to the message/alert in the first user interface in the simulated three-dimensional space before the respective period of time has elapsed, which will cause the computer system to cease to display the message/alert before the respective period of time has elapsed, or 3) tapping on the touch-sensitive surface (of the input device) (e.g., on device100, as shown in FIG.5A40) at a location that corresponds to the message/alert in the first user interface in the simulated three-dimensional space, which will cause the computer system to display a corresponding application for responding to the message/alert in the first user interface in the simulated three-dimensional space (e.g., as shown in FIGS.5A40-5A41). 
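The three responses to a notification enumerated above (letting it time out, flicking it away, or tapping it to open the corresponding application) can be sketched as follows; the timeout value and all names are hypothetical:

```python
# Minimal sketch of the three notification responses: ignore (the alert
# times out), flick (dismissed early), or tap (the corresponding app
# opens in the first user interface). Hypothetical names and threshold.

TIMEOUT_SECONDS = 5.0

def respond_to_notification(gesture, elapsed):
    if gesture == "tap":
        return "open corresponding app in first user interface"
    if gesture == "flick":
        return "dismiss alert now"
    if gesture is None and elapsed >= TIMEOUT_SECONDS:
        return "alert fades after timeout"
    return "alert still shown"

assert respond_to_notification(None, 2.0) == "alert still shown"
assert respond_to_notification("flick", 2.0) == "dismiss alert now"
assert respond_to_notification(None, 6.0) == "alert fades after timeout"
```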
Displaying a representation of a notification on the first user interface (e.g., of the virtual device) in response to receiving an indication that an event has occurred that corresponds to generation of the notification at the computing device simulates usage of a real phone in the physical world, improves the visual feedback provided to the user (e.g., by making the computer system appear more responsive to user input), enhances the operability of the device (e.g., by allowing the user to interact with either the virtual device or the real device), and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. It should be understood that the particular order in which the operations inFIGS.6A-6Ehave been described is merely an example and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods700,800,900, and1000) are also applicable in an analogous manner to method600described above with respect toFIGS.6A-6E. For example, the contacts, gestures, user interface objects, intensity thresholds, focus indicators, and/or animations described above with reference to method600optionally have one or more of the characteristics of the contacts, gestures, user interface objects, intensity thresholds, focus indicators, and/or animations described herein with reference to other methods described herein (e.g., methods700,800,900, and1000). For brevity, these details are not repeated here. FIGS.7A-7Care flow diagrams illustrating method700of selecting a mode of operation of an input device in accordance with movement of and changes in pose of the input device, in accordance with some embodiments. Method700is performed at a computer system (e.g., portable multifunction device100,FIG.1A, device300,FIG.3A, or a multi-component computer system including headset5008and an input device (e.g., device100or device5010), FIG.5A1) that includes (and/or is in communication with) one or more display generation components (e.g., a display, a projector, a heads-up display, or the like), an input device (e.g., a touch-sensitive surface, such as a touch-sensitive remote control, or a touch-screen display that also serves as a display generation component of the computer system, a mouse, a joystick, a wand controller, and/or cameras tracking the position of one or more features of the user such as the user's hands or eyes), and optionally one or more pose sensors for detecting respective poses of one or more of the input device (e.g., device100or device5010and/or watch5012, FIG.5A1) and display generation components (e.g., the pose sensors include one or more cameras, gyroscopes, inertial measurement units, or other sensors that enable the computer system to detect changes in an orientation and/or position of the computer system or parts thereof relative to a physical environment of the computer system). 
In some embodiments, the computer system (e.g., the input device of the computer system) includes one or more sensors to detect intensities of contacts with the input device (e.g., a touch-sensitive surface), and optionally one or more tactile output generators. In some embodiments, the computer system includes one or more cameras (e.g., video cameras that continuously provide a live view of at least a portion of the contents that are within the field of view of the cameras and optionally generate video outputs including one or more streams of image frames capturing the contents within the field of view of the cameras). In some embodiments, the input device (e.g., with a touch-sensitive surface) and the display generation component are integrated into a touch-sensitive display. As described above with respect toFIGS.3B-3C, in some embodiments, method700is performed at a computer system301(e.g., computer system301-a,301-b, or301-c) in which respective components, such as a display generation component, one or more cameras, one or more input devices, and optionally one or more pose sensors are each either included in or in communication with computer system301. In some embodiments, the display generation component is a touch-screen display and the input device (e.g., with a touch-sensitive surface) is on or integrated with the display generation component. In some embodiments, the display generation component is separate from the input device (e.g., as shown inFIG.4Band FIG.5A1). Some operations in method700are, optionally, combined and/or the order of some operations is, optionally, changed. For convenience of explanation, some of the embodiments will be discussed with reference to operations performed on a computer system (e.g., as shown in FIG.5A1) with a headset5008and a separate input device (e.g., device100or device5010) with a touch-sensitive surface in response to detecting contacts on the touch-sensitive surface of the input device while displaying some of the user interfaces shown in the figures on the display of headset5008, and optionally while displaying some of the user interfaces shown in the figures on a separate display generation component of input device100. However, analogous operations are, optionally, performed on a computer system with a touch-sensitive display system112(e.g., on device100with touch screen112) and optionally one or more integrated cameras. Similarly, analogous operations are, optionally, performed on a computer system having one or more cameras that are implemented separately (e.g., in a headset) from one or more other components (e.g., an input device) of the computer system; and in some such embodiments, “movement of the computer system” corresponds to movement of one or more cameras of the computer system, or movement of one or more cameras in communication with the computer system. As described below, method700relates to determining an input mode of an input device (e.g., a smartphone or other physical controller in the physical world) in a virtual reality environment (e.g., an immersive three-dimensional environment that is experienced through sensory stimuli such as sights and sounds and that provides additional information and experiences to a user that are not available in the physical world), based on a pose of the input device, and specifically based on whether the pose of the input device meets predefined criteria associated with respective input modes. 
Allowing a user to interact with a virtual reality environment through a plurality of available input modes (e.g., a pointer mode, where the input device is held at or near a substantially horizontal pose, in which a corresponding user interface object in the virtual reality environment shifts focus between other user interface objects, or a mode in which the input device in conjunction with the corresponding user interface object simulate usage of a real phone in the physical world) provides an intuitive and efficient way for the user to switch between input modes (e.g., by changing the pose of the input device using straightforward and intuitive motions) without cluttering the display environment with additional control affordances or requiring numerous inputs, thereby enhancing the operability of the device and making the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. The computer system displays (702) via at least a first display generation component (e.g., a VR headset, or a stereo display, a 3D display, a holographic projector, a volumetric display, etc.) of the one or more display generation components: a view of at least a portion of a simulated three-dimensional space (e.g., simulated 3D space5006, FIG.5A27); and a view of a user interface object that is located within the simulated three-dimensional space (e.g., virtual device5016, FIG.5A27), wherein: the user interface object includes a first user interface (e.g., a two-dimensional user interface) that is responsive to touch inputs on the input device (e.g., virtual user interface5040for the video player application displayed on virtual device5016in simulated 3D space5006, responsive to touch inputs on device100in physical space5004, FIG.5A27), and a pose of the user interface object in the simulated three-dimensional space corresponds to a pose of the input device in a physical space surrounding the input device. For example, the user interface object is a representation of a computing device (e.g., device100in physical space5004, FIG.5A27) that has a non-immersive display environment (e.g., a touch-screen display, such as touch screen112, FIG.5A27) that provides access to a plurality of different applications (e.g., the user interface object is a 3D graphical image that visually resembles and/or represents a handheld device (e.g., a smartphone) that provides access to a plurality of user applications, such as an instant messages application, a maps application, a calendar application, an e-mail application, etc.); the first user interface corresponds to the non-immersive display environment (e.g., the 3D graphical image of the handheld device includes an exact image (or augmented image, or stylized image) of the touch-screen display that provides access to the plurality of user applications) and is responsive to touch inputs from the user on the input device in the same manner or a consistent manner that the user interface shown in the non-immersive display environment (e.g., user interface5042shown on device100in physical space5004, FIG.5A27) responds to touch inputs. 
The computer system detects (704) a movement input via the input device that includes movement of the input device relative to a physical environment surrounding the input device (e.g., lowering and/or rotating of device100, as shown and described in greater detail herein with reference to FIGS.5A27-5A30). In response to detecting the movement input via the input device (706): in accordance with a determination that the movement of the input device is detected while the input device meets first input-mode criteria that include a first criterion that is satisfied when a pose of the input device is within a first range of poses (e.g., substantially horizontal or less than 30 degrees tilted up from the horizontal plane, as shown and described in greater detail herein with reference to FIGS.5A29-5A30), the computer system performs a respective operation within the simulated three-dimensional space (e.g., moving a focus indicator within the simulated three-dimensional space, such as moving focus indicator5112within simulated 3D space5006as described with reference to FIGS.5A29-5A30, dragging an object in the three-dimensional space, etc.) in accordance with the movement of the input device (and optionally, ceasing to display the first user interface, such as ceasing to display virtual user interface5040in FIG.5A29), wherein at least a portion of the respective operation occurs outside of the user interface object (e.g., the user interface object acts as a source of the force or action that affects object(s) or space that is located outside of the user interface object); and, in accordance with a determination that the movement of the input device is detected while the input device meets second input-mode criteria that require a pose of the input device to be within a second range of poses distinct from the first range of poses (e.g., greater than 30 degrees from the horizontal direction), the computer system repositions the user interface object in the simulated three-dimensional space in accordance with the movement of the input device without performing the respective operation (e.g., movement of device100while at the pose displayed in FIG.5A28results in repositioning virtual device5016without displaying or moving focus indicator5112). In some embodiments, the first input-mode criteria require a pose of the input device to be within the first range of poses. In some embodiments, the computer system detects a plurality of (distinct) movement inputs that include movement of the input device relative to the physical environment (e.g., a series, or sequence of movement inputs), and the plurality of movement inputs includes at least one movement input for which the movement of the input device is detected while the input device meets the first input-mode criteria that require the pose of the input device to be within the first range of poses, and at least one movement input for which the movement of the input device is detected while the input device meets the second input-mode criteria that require the pose of the input device to be within the second range of poses distinct from the first range of poses. 
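A minimal sketch of the pose-gated mode selection of operation 706 follows; the 30-degree boundary comes from the example in the text, while the function and its return values are hypothetical:

```python
# Minimal sketch: a tilt within roughly 30 degrees of horizontal selects
# the pointer-style first input mode, in which device movement drives an
# operation out in the simulated space; otherwise movement simply
# repositions the virtual device. Names are hypothetical.

POINTER_MAX_TILT = 30.0   # degrees above horizontal (from the example)

def on_device_movement(tilt_degrees, movement):
    if 0.0 <= tilt_degrees < POINTER_MAX_TILT:        # first input mode
        return ("move_focus_in_space", movement)      # acts outside object
    return ("reposition_virtual_device", movement)    # second input mode

assert on_device_movement(10.0, "pan")[0] == "move_focus_in_space"
assert on_device_movement(60.0, "pan")[0] == "reposition_virtual_device"
```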
In some embodiments, the computer system detects (708) a touch input via the input device; in accordance with a determination that the touch input is detected while the input device meets the second input-mode criteria, performs a user interface operation in the first user interface (e.g., performing a user interface operation corresponding to the detected touch input); and in accordance with a determination that the touch input is detected while the input device meets the first input-mode criteria, forgoes performance of the user interface operation in the first user interface. In some embodiments, the first user interface is responsive to (e.g., operations are performed in the first user interface in response to) touch inputs from the user while the input device is in the second input mode (satisfying the second input-mode criteria) but not while the input device is in the first input mode (satisfying the first input-mode criteria) (e.g., in response to a tap input on device100the computer system performs an operation in user interface5040(FIG.5A27) corresponding to the location of the tap input, while device100is at the pose shown in FIG.5A28, but forgoes performance of an operation in user interface5040corresponding to the location of the tap input, while device100is at the pose shown in FIG.5A29). In some embodiments, the computer system detects a plurality of (distinct) touch inputs (e.g., a series, or sequence of touch inputs), and the plurality of touch inputs includes at least one touch input that is detected while the input device meets the second input-mode criteria, and at least one touch input that is detected while the input device meets the first input-mode criteria. Forgoing performance of a user interface operation in the first user interface while in the first input mode (e.g., while using the input device as a pointer), where the user interface operation is or would have been performed in the second input mode (e.g., as if the user were interacting with a corresponding physical device with a non-immersive display environment) improves the feedback provided to the user (e.g., by distinguishing the two modes of operation and providing distinct control options in each mode) and enhances the operability of the device (e.g., by allowing the user to perform intended operations while in the second mode of operation and preventing inadvertent activation of operations of the second mode while in the first mode of operation), and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, in response to detecting the movement input via the input device (710): in accordance with a determination that the movement of the input device meets first pose criteria that require the pose of the input device to enter a third range of poses distinct from the first range of poses as a result of the movement (e.g., the input device is raised so that the pose of the input device is within the third range of poses), the computer system displays the first user interface in the simulated three-dimensional space at a location away from the user interface object (e.g., as shown and described in greater detail herein with reference to FIGS.5A41-5A43and method1000). 
In some embodiments, the third range of poses includes poses greater than the first range of poses (e.g., greater than 30 degrees from the horizontal direction). In some embodiments, the third range of poses includes at least some poses in the second range of poses. In some embodiments, the third range of poses is distinct from both the first and second ranges of poses (e.g., the third range of poses includes poses greater than both the first and second ranges). In some embodiments, the plurality of (distinct) movement inputs includes at least one movement input for which movement of the input device meets the first pose criteria that require the input device to enter the third range of poses as a result of the movement. Displaying the first user interface in the simulated three-dimensional space at a location away from the user interface object in response to the pose of the input device entering a particular range of poses improves the visual feedback provided to the user (e.g., by providing a visual indication that a different mode of operation is being activated (or will be activated if the user continues to move further into the particular range of poses)), provides additional control options without cluttering the display environment with additional displayed controls and reduces the number of inputs needed to perform an operation (e.g., providing a third mode of operation, in addition to the first and second modes, and allowing the user to easily switch between all three modes using straightforward motion(s) (raising or lowering the input device, or the pose of the input device)), enhances the operability of the device, and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. 
In some embodiments, the user interface object is (712) a first user interface object (e.g., virtual device5016, FIGS.5A29-5A30), and the method includes: displaying, via at least the first display generation component, a second user interface object that is located within the simulated three-dimensional space (e.g., video5116, FIGS.5A29-5A30), wherein the second user interface object is distinct from the first user interface object, and performing the respective operation within the simulated three-dimensional space includes displaying an indication that the second user interface object is in focus (e.g., focus indicator5112and/or highlighting5110, FIGS.5A29-5A30) (e.g., the second user interface object is a user interface object that will be affected by inputs such as a selection input); and in accordance with a determination that the movement of the input device is detected while the input device meets the first input-mode criteria: after performing the respective operation, detecting a touch input via the input device (e.g., detecting one or more contacts on a touch-sensitive surface of the input device and liftoff of the one or more contacts, such as touch screen112of device100in physical space5004, FIG.5A30); and in response to detecting the touch input (e.g., in accordance with a determination that the touch input includes one or more contacts on the touch-sensitive surface and subsequent liftoff of the one or more contacts (e.g., the touch input is a tap gesture)), performing a selection operation with respect to the second user interface object (e.g., visually distinguishing, such as highlighting, the second user interface object, or displaying additional content associated with the second user interface object (e.g., launching an application or multimedia file, as shown and described in greater detail herein with reference to FIGS.5A30-5A31)). Using the input device to direct focus to user interface objects in the simulated three-dimensional space, displaying an indication of which object is in focus improves the visual feedback provided to the user (e.g., by indicating which object will be affected by inputs such as a selection input), and, in response to a touch input, performing a selection operation with respect to the object in focus provides additional control options without cluttering the display environment with additional displayed controls and reduces the number of inputs needed to perform an operation (e.g., by allowing the user to use straightforward motions of the input device to focus on objects and a single touch input to select an object), enhances the operability of the device, and makes the user-device interface more efficient (e.g., by providing an intuitive interaction for the selection operation consistent with use of a pointing device, and helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the computer system detects (714) a touch input via the input device that includes detecting one or more contacts (e.g., on a touch-sensitive surface of the input device). 
In response to detecting the touch input via the input device, in accordance with a determination that the touch input is detected while the input device meets the first input-mode criteria (e.g., the input device met the first input-mode criteria at the time of initial detection of the touch input), and while the touch input is maintained on the input device (e.g., on the touch-sensitive surface of the input device), the computer system performs the respective operation without regard to whether the input device meets the first input-mode criteria (e.g., while the touch input is maintained on (the touch-sensitive surface of) the input device, the respective operation continues to be performed even if the input device moves such that the pose of the input device moves outside of the first range of poses). For example, in response to a touch input on device100(FIG.5A30), the computer system moves focus indicator5112and, if focus indicator5112corresponds to a virtual object in simulated 3D space5006, highlights the virtual object using highlighting5110without regard to whether device100remains within the range of poses between horizontal and 30 degrees above horizontal while the touch input is maintained. The computer system also detects liftoff of the one or more contacts; and, after detecting the liftoff of the one or more contacts, detects a second movement input via the input device that includes subsequent movement of the input device relative to the physical environment surrounding the input device. In response to detecting the second movement input via the input device: in accordance with a determination that the subsequent movement of the input device is detected while the input device meets the first input-mode criteria, the computer system performs a second respective operation within the simulated three-dimensional space in accordance with the subsequent movement of the input device, wherein at least a portion of the second respective operation occurs outside of the user interface object; and, in accordance with a determination that the subsequent movement of the input device is detected while the input device meets the second input-mode criteria, the computer system repositions the user interface object in the simulated three-dimensional space in accordance with the subsequent movement of the input device without performing the second respective operation. In other words, after the one or more contacts are lifted off the touch-sensitive surface, performance of the respective operation depends on whether the input device meets the first input-mode criteria (in which case the respective operation is performed), whether the input device meets the second input-mode criteria (in which case the respective operation is not performed, and instead, if the input device is moved, the user interface object is repositioned in accordance with the movement of the input device), or whether the input device meets other criteria (e.g., as described herein with reference to operation720of method700). Maintaining operation in the first mode while a touch input received in the first mode is maintained, regardless of input device pose, allows the user to direct focus to objects that are outside of the pose range required for entering the first mode, and provides an intuitive interaction consistent with use of a pointing device (e.g., by confining operation of the input device to the first mode until the touch input is released). 
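The latch behavior described above, together with the re-evaluation after liftoff discussed in the next paragraph, can be sketched as a small controller that freezes the mode while a contact is maintained; all names are hypothetical:

```python
# Minimal sketch: the input mode chosen when a touch begins is held for
# as long as the contact is maintained, and is only re-evaluated from
# the device's pose after liftoff. Names and threshold are hypothetical.

class InputModeController:
    def __init__(self):
        self.latched_mode = None

    def current_mode(self, tilt_degrees):
        if self.latched_mode is not None:
            return self.latched_mode          # pose is ignored mid-touch
        return "pointer" if tilt_degrees < 30.0 else "reposition"

    def touch_down(self, tilt_degrees):
        self.latched_mode = self.current_mode(tilt_degrees)

    def touch_up(self):
        self.latched_mode = None              # next movement re-evaluates

ctl = InputModeController()
ctl.touch_down(tilt_degrees=10.0)             # pointer mode latched
assert ctl.current_mode(tilt_degrees=60.0) == "pointer"
ctl.touch_up()
assert ctl.current_mode(tilt_degrees=60.0) == "reposition"
```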
Reevaluating whether subsequent movement of the input device is detected while the input device meets the first or second input-mode criteria, to make a new determination of the mode of operation after the touch input is released, allows the user to easily exit the first input mode and select a subsequent input mode by changing the pose of the input device. These features provide additional control options without cluttering the display environment with additional displayed controls and reduce the number of inputs needed to perform an operation, enhance the operability of the device, and make the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, in response to detecting the movement input via the input device (716), in accordance with a determination that the movement of the input device is detected while the input device meets the first input-mode criteria, the computer system ceases to display the first user interface on the user interface object (e.g., the computer system ceases to display virtual user interface5040on virtual device5016, as shown and described in greater detail herein with reference to FIGS.5A28-5A29). Ceasing to display the first user interface on the user interface object in the first input mode improves the visual feedback provided to the user (e.g., by indicating to the user that the input device is operating in the first mode rather than the second mode, and accordingly indicating that operations of the second mode are unavailable), enhances the operability of the device, and makes the user-device interface more efficient (e.g., by reducing user distraction as a result of clutter in the user interface to help the user focus on the respective operation being performed in the simulated three-dimensional space, and by reducing user frustration and mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, in response to detecting the movement input via the input device (718), in accordance with a determination that the movement of the input device is detected while the input device meets the first input-mode criteria, the computer system displays a second user interface (e.g., a second user interface associated with performing the respective operation, such as a pointer user interface; that is, a user interface indicating that the user interface object is operating as a pointer) on the user interface object (e.g., the computer system displays virtual user interface5114on virtual device5016, as shown and described in greater detail herein with reference to FIG.5A29). 
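As a purely illustrative companion to operations 716 and 718, the short Swift sketch below swaps the user interface shown on the user interface object according to the active input mode; the enum cases and function name are stand-ins invented for this sketch, not interfaces described in this disclosure.

    // Illustrative sketch: the virtual user interface shown on the user
    // interface object is chosen by the active input mode.
    enum ActiveInputMode { case pointer, simulation }
    enum VirtualUserInterface { case applicationUI, pointerUI }

    func interfaceToDisplay(for mode: ActiveInputMode) -> VirtualUserInterface {
        switch mode {
        case .pointer:
            // First input mode: cease displaying the first user interface
            // and display a pointer-style user interface instead.
            return .pointerUI
        case .simulation:
            // Second input mode: display the first user interface.
            return .applicationUI
        }
    }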
Replacing display of the first user interface on the user interface object with a second user interface (e.g., associated with use of the input device as a pointing device) improves the visual feedback provided to the user (e.g., by providing a visual indication on the user interface object that the input device is operating in the first input mode, and by reducing user distraction as a result of clutter in the user interface to help the user focus on the respective operation being performed in the simulated three-dimensional space), enhances the operability of the device, and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the first input-mode criteria further include (720): a second criterion that is satisfied when a distance from the input device to the user's eyes is within a predefined range of distances; and a third criterion that is satisfied when a manner in which the user holds the input device satisfies predefined criteria (e.g., the input device is held in a predefined orientation or range of orientations, or the user's hand and fingers are placed on the device so as to contact predefined areas of the device, etc.). In some embodiments, the first input-mode criteria are satisfied when at least one of the first criterion, the second criterion, and the third criterion is satisfied. In some embodiments, the first input-mode criteria require two or more of the criteria be satisfied (e.g., a combination of pose and distance, or a combination of distance and manner of holding the device, etc.). For example, a movement that causes the input device to enter a respective pose range, such as the first pose range, when the input device is within a predefined threshold distance from the user's face may activate the first input mode (e.g., as shown and described in greater detail herein with reference to FIGS.5A28-5A29), while the same movement performed when the input device is beyond a predefined threshold distance from the user's face might not. Using alternative or additional criteria for determining whether the input device satisfies criteria for operating in the first input mode provides additional control options without cluttering the display environment with additional displayed controls (e.g., by providing additional ways that the user can activate the first input mode, and/or by making other control options or interactions available for movements of the input device that do not satisfy enough of the first input-mode criteria), enhances the operability of the device (e.g., by preventing inadvertent or erroneous activation of the first input mode and improving system responsiveness to user input), and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more efficiently and quickly. 
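One way to picture how the pose, distance, and grip criteria of operation 720 might combine is sketched below in Swift; the thresholds, ranges, and the requiredCriteria parameter are assumptions for illustration only, since the disclosure leaves the particular combinations open.

    // Illustrative sketch of combining the first input-mode criteria.
    struct PoseSample {
        var pitchDegrees: Double          // orientation relative to horizontal
        var distanceToEyesMeters: Double  // distance from input device to the user's eyes
        var gripMatchesPointer: Bool      // hand/finger placement matches a predefined grip
    }

    func meetsFirstInputModeCriteria(_ sample: PoseSample,
                                     requiredCriteria: Int = 2) -> Bool {
        var satisfied = 0
        if (0.0..<30.0).contains(sample.pitchDegrees) { satisfied += 1 }        // first criterion: pose range
        if (0.1...0.6).contains(sample.distanceToEyesMeters) { satisfied += 1 } // second criterion: distance range
        if sample.gripMatchesPointer { satisfied += 1 }                         // third criterion: manner of holding
        // Some embodiments accept any one satisfied criterion; others require two or more.
        return satisfied >= requiredCriteria
    }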
In some embodiments, in accordance with a determination that the movement of the input device is detected while the input device meets the first input-mode criteria (722), the user interface object is displayed as a pointing device (e.g., a laser pointer) (e.g., as shown and described in greater detail herein with reference to FIGS.5A29-5A30). In some embodiments, the pointing device includes one or more buttons or other controls for performing selection or adjustment operations. In some embodiments, in accordance with a determination that the movement of the input device is detected while the input device meets the second input-mode criteria, the user interface object is displayed as a simulation of a non-immersive display environment (e.g., a mobile device user interface) (e.g., as shown and described in greater detail herein with reference to FIGS.5A27-5A28). In some embodiments, the computer system displays a virtual pointing device or virtual mobile phone. In some embodiments, the user interface object is displayed as a pointer while the input device is in pointer mode (e.g., while performing the respective operation, and/or while continuing to detect one or more contacts that were detected on a touch-sensitive surface of the input device while the input device met the first input-mode criteria, without regard to whether the input device meets the first input-mode criteria, as discussed herein with reference to operation714of method700). The display of a simulation of a non-immersive display environment within an immersive display environment is described in greater detail herein with reference to method600. Changing the display of the user interface object based on the mode of operation of the input device improves the visual feedback provided to the user (e.g., by providing a visual indication on the user interface object of its current mode of operation), enhances the operability of the device (e.g., by providing an intuitive interaction for the respective mode of operation, including, for the first mode, an interaction consistent with the use of a pointing device for performing a selection operation, and, for the second mode, an interaction consistent with the use of a mobile device), and makes the user-device interface more efficient (e.g., by reducing user distraction and user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more efficiently and quickly. It should be understood that the particular order in which the operations in FIGS.7A-7C have been described is merely an example and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods600,800,900, and1000) are also applicable in an analogous manner to method700 described above with respect to FIGS.7A-7C. 
For example, the contacts, gestures, user interface objects, pose thresholds, focus indicators, and/or animations described above with reference to method700optionally have one or more of the characteristics of the contacts, gestures, user interface objects, pose thresholds, focus indicators, and/or animations described herein with reference to other methods described herein (e.g., methods600,800,900, and1000). For brevity, these details are not repeated here. FIGS.8A-8C are flow diagrams illustrating method800of displaying and performing navigation operations within corresponding two-dimensional and three-dimensional user interfaces, in accordance with some embodiments. Method800is performed at a computer system (e.g., portable multifunction device100, FIG.1A, device300, FIG.3A, or a multi-component computer system including headset5008and an input device (e.g., device100or device5010), FIG.5A1) that includes (and/or is in communication with) one or more display generation components (e.g., a display, a projector, a heads-up display, or the like), an input device (e.g., a touch-sensitive surface, such as a touch-sensitive remote control, or a touch-screen display that also serves as a display generation component of the computer system, a mouse, a joystick, a wand controller, and/or cameras tracking the position of one or more features of the user such as the user's hands or eyes), and optionally one or more pose sensors for detecting respective poses of one or more of the input device (e.g., device100or device5010and/or watch5012, FIG.5A1) and display generation components (e.g., the pose sensors include one or more cameras, gyroscopes, inertial measurement units, or other sensors that enable the computer system to detect changes in an orientation and/or position of the computer system or parts thereof relative to a physical environment of the computer system). In some embodiments, the computer system (e.g., the input device of the computer system) includes one or more sensors to detect intensities of contacts with the input device (e.g., a touch-sensitive surface), and optionally one or more tactile output generators. In some embodiments, the computer system includes one or more cameras (e.g., video cameras that continuously provide a live view of at least a portion of the contents that are within the field of view of the cameras and optionally generate video outputs including one or more streams of image frames capturing the contents within the field of view of the cameras). In some embodiments, the input device (e.g., with a touch-sensitive surface) and the display generation component are integrated into a touch-sensitive display. As described above with respect to FIGS.3B-3C, in some embodiments, method800is performed at a computer system301(e.g., computer system301-a,301-b, or301-c) in which respective components, such as a display generation component, one or more cameras, one or more input devices, and optionally one or more pose sensors are each either included in or in communication with computer system301. In some embodiments, the display generation component is a touch-screen display and the input device (e.g., with a touch-sensitive surface) is on or integrated with the display generation component. In some embodiments, the display generation component is separate from the input device (e.g., as shown in FIG.4B and FIG.5A1). Some operations in method800are, optionally, combined and/or the order of some operations is, optionally, changed. 
For convenience of explanation, some of the embodiments will be discussed with reference to operations performed on a computer system (e.g., as shown in FIG.5A1) with a headset5008and a separate input device (e.g., device100or device5010) with a touch-sensitive surface in response to detecting contacts on the touch-sensitive surface of the input device while displaying some of the user interfaces shown in the figures on the display of headset5008, and optionally while displaying some of the user interfaces shown in the figures on a separate display generation component of input device100. However, analogous operations are, optionally, performed on a computer system with a touch-sensitive display system112(e.g., on device100with touch screen112) and optionally one or more integrated cameras. Similarly, analogous operations are, optionally, performed on a computer system having one or more cameras that are implemented separately (e.g., in a headset) from one or more other components (e.g., an input device) of the computer system; and in some such embodiments, “movement of the computer system” corresponds to movement of one or more cameras of the computer system, or movement of one or more cameras in communication with the computer system. As described below, method800relates to providing a view of a three-dimensional representation of content shown in a corresponding two-dimensional user interface on a virtual user interface object (e.g., a representation of a smartphone) in a virtual reality environment (e.g., an immersive three-dimensional environment that is experienced through sensory stimuli such as sights and sounds and that provides additional information and experiences to a user that are not available in the physical world), and updating the content in response to touch gestures on an input device (e.g., a smartphone or other physical controller in the physical world). Allowing a user to interact with a three-dimensional representation of content using touch gestures to control a corresponding two-dimensional representation of the content provides an immersive, intuitive, and efficient way for the user to adjust the representations and obtain additional information about the content (e.g., information that is available using a three-dimensional representation but that is not readily available from a two-dimensional representation or from the physical world), thereby enhancing the operability of the device and making the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. The computer system displays (802), via at least a first display generation component (e.g., a VR headset, or a stereo display, a 3D display, a holographic projector, a volumetric display, etc.) 
of the one or more display generation components: a view of at least a portion of a simulated three-dimensional space (e.g., simulated 3D space5006, FIG.5A22); a view of a first user interface object that is located within the simulated three-dimensional space (e.g., virtual device5016, FIG.5A22), wherein the first user interface object includes a view of a first user interface (e.g., a maps application interface, such as virtual user interface5030, FIG.5A22); and a view of a second user interface object (e.g., a three-dimensional rendering of an object represented in the first user interface (e.g., a map)) that is a three-dimensional representation of content shown in at least a portion of the first user interface (e.g., virtual 3D model5104, FIG.5A22). The computer system detects (804) a touch gesture (e.g., input5102, FIG.5A22) via the input device (e.g., device100in physical space5004, FIG.5A22), including detecting one or more contacts on the touch-sensitive surface of the input device and detecting movement of the one or more contacts across the touch-sensitive surface of the input device (e.g., movement of the one or more contacts in a first direction, such as a drag gesture or a swipe gesture in the first direction, or movement of two or more contacts toward or away from each other, such as a pinch gesture or a depinch gesture, respectively). In response to detecting the touch gesture via the input device (806): the computer system adjusts (e.g., by shifting and/or zooming, or otherwise dynamically manipulating) a currently displayed portion of the first user interface on the first user interface object in accordance with the touch gesture (e.g., as shown and described in greater detail herein with reference to FIGS.5A22-5A24). In some embodiments, adjusting the currently displayed portion of the first user interface on the first user interface object in accordance with the touch gesture includes displaying a previously undisplayed portion of the first user interface on the first user interface object. In some embodiments, at least a portion of the previously displayed view of the first user interface continues to be displayed in conjunction with the previously undisplayed portion. For example, in response to detecting a drag gesture in a first direction (e.g., upward), the currently displayed portion of the first user interface is shifted in the first direction (e.g., upward), and a previously undisplayed portion of the first user interface (e.g., that is adjacent to the shifted portion of the first user interface with respect to the first direction, for example, below the shifted portion of the first user interface) is displayed (e.g., as shown and described in greater detail herein with reference to virtual user interface5030). Note, in some embodiments, adjusting a currently displayed portion of the first user interface on the first user interface object in accordance with the touch gesture is distinct from shifting a focus selector onto or away from the portion of the first user interface on the first user interface object; instead, the amount of adjusting (e.g., zooming, shifting, rotating, shading, etc.) directly and dynamically corresponds to a magnitude of a property associated with the touch gesture (e.g., movement of the contact, speed, intensity of the contact(s), etc.). 
The computer system also updates the view of the second user interface object in the simulated three-dimensional space in accordance with the adjusting (e.g., shifting and/or zooming) of the first user interface on the first user interface object (e.g., as shown and described in greater detail herein with reference to virtual 3D model5104, FIGS.5A22-5A24). In some embodiments, updating the view of the second user interface object in the simulated three-dimensional space in accordance with the adjusting of the first user interface on the first user interface object includes displaying a previously undisplayed portion of the second user interface object that corresponds to the previously undisplayed portion of the first user interface. Note, in some embodiments, the updating of the second user interface object in accordance with the adjusting of the first user interface on the first user interface object also dynamically corresponds to a magnitude of a property associated with the touch gesture (e.g., movement of the contact, speed, intensity of the contact(s), etc.). In some embodiments, the adjusting and updating are performed in accordance with a determination that the input device meets predefined pose criteria (e.g., a range of poses between 30 degrees above horizontal and, for example, 60 degrees above horizontal) when the touch gesture is detected via the input device, as described in greater detail herein with reference to method700. For example, in some embodiments the adjusting and updating are performed in accordance with a determination that the input device meets the second input-mode criteria of method700. In some embodiments, adjusting the currently displayed portion of the first user interface on the first user interface object in accordance with the touch gesture includes (808) displaying a previously undisplayed portion of the first user interface on the first user interface object in conjunction with continuing to display at least a portion of the previously displayed view of the first user interface (e.g., as shown and described in greater detail herein with reference to virtual user interface5030, FIGS.5A22-5A26); and updating the view of the second user interface object in the simulated three-dimensional space in accordance with the adjusting of the first user interface on the first user interface object includes displaying a previously undisplayed portion of the second user interface object that corresponds to the previously undisplayed portion of the first user interface in conjunction with continuing to display at least a portion of the previously displayed view of the second user interface object (e.g., as shown and described in greater detail herein with reference to virtual 3D model5104, FIGS.5A22-5A26). 
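A simple way to realize this kind of synchronized adjustment is to derive both views from a single shared viewport state, as in the hedged Swift sketch below; the Viewport type and its fields are assumptions for this sketch rather than structures described in this disclosure.

    // Illustrative sketch: one viewport state drives both the 2D user
    // interface and the 3D representation, keeping them synchronized.
    struct Viewport {
        var centerX: Double   // e.g., map coordinates of the displayed region
        var centerY: Double
        var zoom: Double
    }

    func applyDrag(_ viewport: inout Viewport, dx: Double, dy: Double) {
        // The amount of adjustment corresponds directly to the gesture magnitude.
        viewport.centerX -= dx / viewport.zoom
        viewport.centerY -= dy / viewport.zoom
    }

    func render(_ viewport: Viewport) {
        // Both views are regenerated from the same state, so the 3D model stays
        // synchronized with the displayed portion of the 2D user interface.
        print("2D interface centered at (\(viewport.centerX), \(viewport.centerY))")
        print("3D model showing the corresponding region at zoom \(viewport.zoom)")
    }

Under this sketch, each movement sample of a drag gesture would call applyDrag once and then re-render both views from the updated viewport, so the two representations cannot fall out of step.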
Displaying previously undisplayed portions of the first user interface and of the second user interface object in conjunction with continuing to display previously displayed portions improves the visual feedback provided to the user (e.g., by providing the user with a smoother transition when changing content that is displayed and by helping the user to maintain context), enhances the operability of the device (e.g., by providing an intuitive interaction for manipulating two-dimensional and three-dimensional representations of content), and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, adjusting the currently displayed portion of the first user interface on the first user interface object in accordance with the touch gesture includes (810) laterally shifting the first user interface (e.g., as shown and described in greater detail herein with reference to virtual user interface5030, FIGS.5A22-5A24); and updating the view of the second user interface object in the simulated three-dimensional space in accordance with the adjusting of the first user interface on the first user interface object includes traversing a path between a first position and a second position on a spherical surface in the simulated three-dimensional space, wherein the first position corresponds to the first user interface, and the second position corresponds to the laterally shifted first user interface (e.g., as shown and described in greater detail herein with reference to virtual 3D model5104, associated with traversing a path across a geographic area on Earth, FIGS.5A22-5A24). Using the second user interface object to represent traversing a path along a spherical surface in conjunction with shifting the first user interface in response to a touch gesture improves the visual feedback provided to the user (e.g., by making the computer system appear more responsive to user input and providing additional information that is not readily available to the user in the physical world, such as a more realistic or more immersive visualization of displayed content), enhances the operability of the device, and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and by reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the first user interface includes (812) a two-dimensional view of content; adjusting the currently displayed portion of the first user interface on the first user interface object in accordance with the touch gesture includes traversing the two-dimensional view of the content; and updating the view of the second user interface object in the simulated three-dimensional space in accordance with the adjusting of the first user interface on the first user interface object includes traversing a three-dimensional panoramic view of the content in the simulated three-dimensional space. 
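The correspondence between a lateral shift of the flat view and a traversal of a spherical or panoramic view can be reduced to a single angle mapping, sketched below in Swift; the assumed panorama width of 4096 pixels is an arbitrary illustrative value.

    // Illustrative sketch: map a horizontal offset in the 2D view to a yaw
    // angle on a 360-degree panorama, so shifting the flat image corresponds
    // to rotating the panoramic (or spherical) view.
    let panoramaWidthPixels = 4096.0

    func yawDegrees(forHorizontalOffset offset: Double) -> Double {
        var degrees = (offset / panoramaWidthPixels) * 360.0
        degrees = degrees.truncatingRemainder(dividingBy: 360.0)
        // Wrap into [0, 360) so repeated swipes keep traversing the sphere.
        return degrees < 0 ? degrees + 360.0 : degrees
    }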
For example, if the virtual user interface displayed on virtual device5016displays a two-dimensional image captured using panoramic image capture, the virtual 3D model is a three-dimensional simulation of the panoramic view of the displayed image, and traversing the image on virtual device5016corresponds to the viewer traversing a 360-degree panoramic view using the virtual 3D model. Using the second user interface object to represent traversing a three-dimensional panoramic view of content in conjunction with traversing a two-dimensional view of the content in the first user interface in response to a touch gesture improves the visual feedback provided to the user (e.g., by making the computer system appear more responsive to user input and providing additional information that is not readily available to the user in the physical world, such as a more realistic or more immersive visualization of the displayed content), enhances the operability of the device, and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and by reducing user frustration and mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the touch gesture includes (814) a swipe gesture in a first direction; adjusting the currently displayed portion of the first user interface on the first user interface object in accordance with the touch gesture includes continuously shifting the first user interface in the first direction (e.g., including continuously adding previously undisplayed portions of the first user interface to the display) on the first user interface object (e.g., as shown and described in greater detail herein with reference to virtual user interface5030, FIGS.5A22-5A24). In some embodiments, the amount of movement of the first user interface is determined by a magnitude of the touch gesture (e.g., a magnitude of a property associated with the touch gesture, such as speed of the touch gesture). In some embodiments, the direction of movement of the first user interface (e.g., downward movement of virtual user interface5030, FIGS.5A22-5A24) is determined by a direction of the touch gesture (e.g., downward movement of input5102, FIG.5A22). Also, in some embodiments, updating the view of the second user interface object in the simulated three-dimensional space in accordance with the adjusting of the first user interface on the first user interface object includes continuously updating the view of the second user interface object in accordance with the continuous shifting of the first user interface in the first direction (e.g., continuously updating the view of the second user interface object to correspond to the displayed first user interface as the first user interface is shifted). For example, in response to a swipe gesture on an application user interface (e.g., a map user interface, such as a two-dimensional map, in a maps application) displayed on the first user interface object, the displayed portion of the application user interface (e.g., the displayed portion of the map user interface) is continuously shifted in the direction of the swipe gesture (e.g., as shown and described in greater detail herein with reference to virtual user interface5030, FIGS.5A22-5A24). 
In accordance with the continuous shifting of the application user interface, the view of the three-dimensional representation of at least a portion of the application user interface (e.g., a three-dimensional map of at least a portion of the displayed map user interface) is updated (e.g., as the map user interface moves, the view of the three-dimensional map is updated (e.g., shifted) so as to remain synchronized with the displayed portion of the map user interface (e.g., the map user interface and the three-dimensional map both represent the same geographic location)) (e.g., as shown and described in greater detail herein with reference to virtual 3D model5104, FIGS.5A22-5A24). In some embodiments, the amount of movement of the second user interface is determined by a magnitude of the touch gesture (e.g., amount of movement (displacement) or speed of the touch gesture). In some embodiments, the direction of movement of the second user interface is determined by a direction of the touch gesture. Continuously shifting or panning the first user interface and the second user interface object in a direction of the touch gesture improves the visual feedback provided to the user (e.g., by providing the user with a continuous visualization of traversing two-dimensional and three-dimensional representations of content), reduces the number of inputs needed to perform an operation (e.g., using only a single swipe gesture rather than requiring the user to repeatedly make separate inputs to move the user interfaces), enhances the operability of the device, and makes the user-device interface more efficient (e.g., by reducing user frustration and mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the touch gesture includes (816) a depinch gesture; and adjusting the currently displayed portion of the first user interface on the first user interface object in accordance with the touch gesture includes displaying additional details of at least a sub-portion of the currently displayed portion of the first user interface (e.g., zooming in from a first initial zoom level to a sub-portion of the currently displayed portion at a second zoom level, and displaying additional details about the sub-portion that are visible at the second zoom level but not the first zoom level). In some embodiments, the amount of zooming of the first user interface is determined by a magnitude of the touch gesture (e.g., an amount by which a distance between two contacts increases or decreases). In some embodiments, the direction of zooming (e.g., in or out) of the first user interface is determined by a direction of the touch gesture (e.g., whether the gesture includes movement of the contacts toward each other in a pinch gesture or movement of the contacts away from each other in a depinch gesture). 
Also, in some embodiments, updating the view of the second user interface object in the simulated three-dimensional space in accordance with the adjusting of the first user interface on the first user interface object includes displaying additional details of the displayed view of the second user interface object that correspond to the additional details of at least the sub-portion of the currently displayed portion of the first user interface (e.g., zooming in from a first initial zoom level to a sub-portion of the displayed view of the second user interface object at a second zoom level, where the sub-portion of the displayed view of the second user interface corresponds to the sub-portion of the first user interface, and displaying additional details about the sub-portion that are visible at the second zoom level but not the first zoom level). For example, with respect to the previously-discussed map user interface example, in response to a depinch gesture via the input device (e.g., input5106via device100in physical space5004, FIG.5A25), the computer system zooms in to a portion of the two-dimensional map user interface displayed on the first user interface object (e.g., as shown and described in greater detail herein with reference to virtual user interface5030, FIGS.5A25-5A26), including displaying additional information such as additional map elements (e.g., additional building footprints, additional streets, and text and/or image labels for displayed map elements, as shown and described in greater detail herein with respect to landmarks5030aand5030b, FIG.5A26). In conjunction with displaying the additional information for the zoomed-in portion of the two-dimensional map user interface, the computer system zooms in to a portion of the three-dimensional representation of the map displayed in the simulated three-dimensional space (e.g., a three-dimensional representation of the zoomed-in portion of the two-dimensional map) (e.g., as shown and described in greater detail herein with reference to virtual 3D model5104, FIGS.5A25-5A26), including displaying additional information such as additional map elements (e.g., additional three-dimensional objects such as buildings, additional streets, and text and/or image labels, as shown and described in greater detail herein with reference to virtual buildings5104cand5104d, FIG.5A26). In some embodiments, the amount of zooming of the second user interface is determined by a magnitude of the touch gesture (e.g., an amount by which a distance between two contacts increases or decreases). In some embodiments, the direction of zooming (e.g., in or out) of the second user interface is determined by a direction of the touch gesture (e.g., whether the gesture includes movement of the contacts toward each other in a pinch gesture or movement of the contacts away from each other in a depinch gesture). 
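The zoom-dependent detail described for the depinch example can be sketched as a zoom update plus a level-of-detail lookup, as in the Swift sketch below; the zoom thresholds and the listed detail tiers are invented for illustration.

    // Illustrative sketch: a depinch gesture scales the zoom level by the
    // ratio of the distance between the two contacts; both the 2D interface
    // and the 3D model are then redrawn at the shared zoom level.
    func updatedZoom(current: Double,
                     initialPinchDistance: Double,
                     currentPinchDistance: Double) -> Double {
        return current * (currentPinchDistance / initialPinchDistance)
    }

    // Additional map elements become visible only above assumed zoom thresholds.
    func detailElements(atZoom zoom: Double) -> [String] {
        var elements = ["major roads"]
        if zoom >= 2.0 { elements += ["building footprints", "street names"] }
        if zoom >= 4.0 { elements += ["3D buildings", "landmark labels"] }
        return elements
    }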
Zooming into and displaying additional details of the first user interface and the second user interface object in response to a touch gesture (e.g., a depinch gesture, or other gesture for increasing a zoom scale of a displayed user interface) improves the visual feedback provided to the user (e.g., by displaying additional details of the two-dimensional and three-dimensional representations of the content being viewed), provides additional control options without cluttering the display environment with additional displayed controls and reduces the number of inputs needed to perform an operation (e.g., using straightforward gestures rather than requiring display and activation of additional control affordances), enhances the operability of the device, and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the first user interface displayed on the first user interface object includes (818) a map user interface that includes map content, and the second user interface object is a three-dimensional representation of map content shown in at least a portion of the map user interface (e.g., as shown and described in greater detail herein with reference to FIGS.5A22-5A26). Displaying a (two-dimensional) map user interface that includes map content and a three-dimensional representation of map content shown in the map user interface improves the visual feedback provided to the user (e.g., by providing the user with an immersive experience for visualizing and interacting with a map of a particular geographic area, including providing additional information (e.g., a flyover view of the geographic area) that is not readily available to the user in the physical world), enhances the operability of the device, and makes the user-device interface more efficient (e.g., by reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the first user interface displayed on the first user interface object includes (820) a multimedia user interface that includes one or more multimedia items (e.g., photos, audio tracks, videos, messages, etc.), and the second user interface object is a three-dimensional representation of one or more of the multimedia items shown in the multimedia user interface (e.g., as shown and described in greater detail herein with reference to FIG.5A27). Displaying a multimedia user interface that includes multimedia items and a three-dimensional representation of the multimedia items shown in the multimedia user interface improves the visual feedback provided to the user (e.g., by providing the user with an immersive experience for visualizing and interacting with a selection of multimedia items), enhances the operability of the device, and makes the user-device interface more efficient (e.g., by reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. 
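For operations 818 and 820, one could imagine dispatching on the kind of content shown in the first user interface to choose its three-dimensional counterpart; the Swift sketch below is a loose illustration of that idea, with invented case names rather than structures taken from this disclosure.

    // Illustrative sketch: choose a 3D representation from the 2D content.
    enum TwoDimensionalContent {
        case map(region: String)
        case multimedia(items: [String])
    }

    func threeDimensionalRepresentation(for content: TwoDimensionalContent) -> String {
        switch content {
        case .map(let region):
            return "3D terrain and building model of \(region)"
        case .multimedia(let items):
            return "3D arrangement of \(items.count) multimedia items"
        }
    }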
It should be understood that the particular order in which the operations in FIGS.8A-8C have been described is merely an example and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods600,700,900, and1000) are also applicable in an analogous manner to method800 described above with respect to FIGS.8A-8C. For example, the contacts, gestures, user interface objects, pose thresholds, focus indicators, and/or animations described above with reference to method800optionally have one or more of the characteristics of the contacts, gestures, user interface objects, pose thresholds, focus indicators, and/or animations described herein with reference to other methods described herein (e.g., methods600,700,900, and1000). For brevity, these details are not repeated here. FIGS.9A-9B are flow diagrams illustrating method900of displaying and adjusting an appearance of a focus indicator on a virtual user interface object in a virtual reality environment based on user inputs in the physical world, in accordance with some embodiments. Method900is performed at a computer system (e.g., portable multifunction device100, FIG.1A, device300, FIG.3A, or a multi-component computer system including headset5008and an input device (e.g., device100or device5010), FIG.5A1) that includes (and/or is in communication with) one or more display generation components (e.g., a display, a projector, a heads-up display, or the like), an input device with an input element (e.g., a touch-sensitive surface, a button, a mouse, a joystick, a slider, a dial, a wand controller, and/or cameras tracking the position of one or more features of the user such as the user's hands or eyes) that includes first sensors for detecting contact with the input element (e.g., the touch-sensitive surface) and second sensors for detecting proximity of an input object above the input element (e.g., the touch-sensitive surface) (e.g., a touch-sensitive remote control, or a touch-screen display that also serves as a display generation component of the computer system), and optionally one or more pose sensors for detecting respective poses of one or more of the input device (e.g., device100or device5010and/or watch5012, FIG.5A1) and display generation components (e.g., the pose sensors include one or more cameras, gyroscopes, inertial measurement units, or other sensors that enable the computer system to detect changes in an orientation and/or position of the computer system or parts thereof relative to a physical environment of the computer system). In some embodiments, the computer system (e.g., the input device of the computer system) includes one or more sensors to detect intensities of contacts with the input device (e.g., a touch-sensitive surface), and optionally one or more tactile output generators. In some embodiments, the computer system includes one or more cameras (e.g., video cameras that continuously provide a live view of at least a portion of the contents that are within the field of view of the cameras and optionally generate video outputs including one or more streams of image frames capturing the contents within the field of view of the cameras). 
In some embodiments, the input device (e.g., with a touch-sensitive surface) and the display generation component are integrated into a touch-sensitive display. As described above with respect to FIGS.3B-3C, in some embodiments, method900is performed at a computer system301(e.g., computer system301-a,301-b, or301-c) in which respective components, such as a display generation component, one or more cameras, one or more input devices, and optionally one or more pose sensors are each either included in or in communication with computer system301. In some embodiments, the display generation component is a touch-screen display and the input device (e.g., with a touch-sensitive surface) is on or integrated with the display generation component. In some embodiments, the display generation component is separate from the input device (e.g., as shown in FIG.4B and FIG.5A1). Some operations in method900are, optionally, combined and/or the order of some operations is, optionally, changed. For convenience of explanation, some of the embodiments will be discussed with reference to operations performed on a computer system (e.g., as shown in FIG.5A1) with a headset5008and a separate input device (e.g., device100or device5010) with a touch-sensitive surface in response to detecting contacts on the touch-sensitive surface of the input device while displaying some of the user interfaces shown in the figures on the display of headset5008, and optionally while displaying some of the user interfaces shown in the figures on a separate display generation component of input device100. However, analogous operations are, optionally, performed on a computer system with a touch-sensitive display system112(e.g., on device100with touch screen112) and optionally one or more integrated cameras. Similarly, analogous operations are, optionally, performed on a computer system having one or more cameras that are implemented separately (e.g., in a headset) from one or more other components (e.g., an input device) of the computer system; and in some such embodiments, "movement of the computer system" corresponds to movement of one or more cameras of the computer system, or movement of one or more cameras in communication with the computer system. As described below, method900relates to displaying and adjusting an appearance of a focus indicator on a virtual user interface object (e.g., a representation of a smartphone) in a virtual reality environment (e.g., an immersive three-dimensional environment that is experienced through sensory stimuli such as sights and sounds and that provides additional information and experiences to a user that are not available in the physical world), based on user inputs in the physical world (e.g., on a smartphone in the physical world). Specifically, this method relates to displaying the focus indicator with a first appearance (e.g., an outline of a circular indicator) in response to detecting a hover input and displaying the focus indicator with a second appearance (e.g., a solid circular indicator) in response to detecting a contact. 
Allowing a user to see hover inputs (where an input object, such as a finger, is not touching the input element) differently from contacts in the virtual reality environment improves the visual feedback provided to the user (e.g., by displaying an indication of the location of the user's fingers, since the user's fingers are not visible to the user in the immersive virtual reality environment) and allows the user to interact with and control a virtual device in the virtual reality environment as if the user were interacting with the corresponding real device in the physical world, thereby providing an intuitive and efficient way for the user to access functions of the real device while still immersed in the virtual reality environment (e.g., without requiring the user to remove equipment such as a virtual reality headset and headphones), enhancing the operability of the device, and making the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. The computer system (e.g., device100, FIG.5A2) displays (902) via at least a first display generation component (e.g., a VR headset, or a stereo display, a 3D display, a holographic projector, a volumetric display, etc.) (e.g., headset5008, FIG.5A1) of the one or more display generation components: a view of at least a portion of a simulated three-dimensional space (e.g., a portion of a three-dimensional virtual environment that is within the user's field of view) (e.g., simulated 3D space5006, FIGS.5A12-5A15). In some embodiments, the simulated three-dimensional space is part of an immersive display environment (e.g., a virtual reality environment). The computer system also displays, via at least the first display generation component, a view of a first user interface object (e.g., virtual device5016, FIG.5A2) that is located within the simulated three-dimensional space, and that includes a first user interface (e.g., a two-dimensional user interface) (e.g., virtual user interface5019that is displayed on virtual device5016in the simulated 3D space5006, FIGS.5A12-5A15). For example, the first user interface object is a representation of a computing device (e.g., device100, FIG.5A12) that has a non-immersive display environment (e.g., a user interface where the boundaries of the user interface are visible to the user and the boundaries of the user interface move relative to the user in accordance with user inputs) (e.g., a touch screen display of a handheld computing device, such as touch screen112of device100, FIG.5A12) that provides access to a plurality of different applications (e.g., the first user interface object is a 3D graphical image that visually resembles and/or represents a handheld device (e.g., a smartphone) that provides access to a plurality of user applications, such as an instant messages application, a maps application, a calendar application, an e-mail application, etc.) 
(e.g., as shown in FIG.5A12); the first user interface (e.g., virtual user interface5019, FIG.5A12) corresponds to the non-immersive display environment (e.g., user interface5021) (e.g., the 3D graphical image of the handheld device (e.g., virtual device5016, FIG.5A12) includes a representation (e.g., an exact image, or augmented image, or stylized image) of a user interface of a type that is, in some circumstances, displayed on the touch-screen display that provides access to the plurality of user applications) of the computing device (e.g., user interface5021, FIG.5A12) and is responsive to touch inputs from a user on the input device (e.g., in the same manner or a consistent manner that the user interface shown in the non-immersive display environment responds to touch inputs). In addition, a pose of the user interface object in the simulated three-dimensional space corresponds to a pose of the input device in a physical space surrounding the input device (e.g., the orientation of the user interface object is continuously updated to correspond to the orientation of the input device when the input device moves relative to the physical space surrounding the input device) (e.g., as shown in FIGS.5A28-5A30). The computer system detects (904) a hover input via the input device, including detecting an input object (e.g., a fingertip) above the input element of the input device while the input object is not touching the input element, wherein proximity of the input object to the input element meets first proximity criteria (e.g., within a first threshold distance from the input element) (e.g., as shown in FIGS.5A12-5A14). In response to detecting the hover input via the input device, the computer system displays (906) a focus indicator with a first appearance (e.g., an outline of an oval or circular indicator, a crosshair, etc.) at a hover location above the first user interface object (e.g., in the 3D virtual space) that corresponds to a hover location of the input object above the input element of the input device (e.g., in a physical environment) (e.g., as shown in FIGS.5A12-5A14). Although the focus indicators in FIGS.5A12-5A14 are displayed with a particular appearance (e.g., a dotted outline of a circular indicator with shading), in some embodiments, the focus indicators for hover inputs are displayed in another manner with a different appearance (e.g., a partially translucent circular indicator with no outline). While displaying the focus indicator with the first appearance at the hover location above the first user interface object (e.g., in the virtual 3D space), the computer system detects (908) a contact between the input object and the input element (e.g., in the physical environment) (e.g., as shown in FIG.5A15). In response to detecting the contact between the input object and the input element, the computer system displays (910) the focus indicator with a second appearance (e.g., a solid oval or circular indicator, a bolded crosshair, etc.) that is distinct from the first appearance at a location on the first user interface object that corresponds to a contact location of the input object on the input element of the input device (e.g., as shown in FIG.5A15). 
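The hover/contact distinction of operations 904-910 amounts to a small state function from finger proximity and touch state to an indicator appearance. The Swift sketch below is one hedged reading of it; the 2-centimeter proximity threshold and the appearance names are assumptions made for this sketch.

    // Illustrative sketch of choosing the focus indicator appearance.
    enum IndicatorAppearance {
        case hidden
        case hoverOutline   // first appearance: outline of an oval or circular indicator
        case contactSolid   // second appearance: solid oval or circular indicator
    }

    let hoverThresholdMeters = 0.02  // assumed first proximity criterion

    func indicatorAppearance(heightAboveSurface: Double?,
                             isTouching: Bool) -> IndicatorAppearance {
        if isTouching { return .contactSolid }          // contact detected
        if let height = heightAboveSurface, height <= hoverThresholdMeters {
            return .hoverOutline                        // hover input: near but not touching
        }
        return .hidden
    }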
In some embodiments, method900of displaying the focus indicator with a first appearance (e.g., an outline of a circular indicator) in response to detecting a hover input and displaying the focus indicator with a second appearance (e.g., a solid circular indicator) in response to detecting a contact improves the operations of method600, as described above. In some embodiments, in response to detecting the hover input via the input device (e.g., including detecting the input object above the input element of the input device while the input object is not touching the input element) (912), the computer system displays a representation of the input object in the view of the simulated three-dimensional space at a second hover location above the first user interface object (e.g., where the second hover location has an x, y, and z component in the simulated three-dimensional space) that corresponds to the hover location of the input object above the input element of the input device (e.g., in the physical environment). For example, in some embodiments, when a user's fingers are within a threshold distance from the input element (e.g., meeting the first proximity criteria), but are not touching the input element, a representation of (one or more of) the user's fingers is displayed in the simulated three-dimensional space (e.g., as "virtual fingers"). In some embodiments, the representation of the input object (e.g., a user's finger) is displayed in addition to displaying the focus indicator with the first appearance (e.g., an outline of an oval or circular indicator, a crosshair, etc.). For example, in some embodiments, the representation of the input object is displayed above the focus indicator with the first appearance (e.g., the virtual finger is displayed slightly above an outline of an oval or circular indicator). In some embodiments, the focus indicator changes in appearance as the input object moves closer to the touch-sensitive surface of the input device (e.g., the size of the focus indicator grows smaller as the input object moves closer to the input device) (e.g., the size of the focus indicator grows larger as the input object moves closer to the input device, as shown in FIGS.5A12-5A14), but the representation of the input object (e.g., the user's finger) does not change in appearance as the input object moves closer to the touch-sensitive surface of the input device. In some embodiments, the representation of the input object (e.g., a user's finger) is displayed instead of displaying the focus indicator with the first appearance (e.g., the representation of the input object is the focus indicator, as shown in FIGS.5A12-5A14). 
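Where the indicator's size varies with proximity, the behavior can be sketched as a clamped interpolation from hover height to radius, as below; the radii and threshold are illustrative values, and the text above notes that some embodiments shrink rather than grow the indicator as the finger approaches.

    // Illustrative sketch: closer finger, larger hover indicator.
    func hoverIndicatorRadius(heightAboveSurface: Double,
                              threshold: Double = 0.02,
                              minRadius: Double = 4.0,
                              maxRadius: Double = 12.0) -> Double {
        // Clamp the height into [0, threshold], then interpolate linearly.
        let t = max(0.0, min(1.0, heightAboveSurface / threshold))
        return maxRadius - t * (maxRadius - minRadius)
    }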
Displaying a representation of the input object (e.g., the user's finger) in the simulated three-dimensional space improves the visual feedback provided to the user (e.g., by displaying one or more “virtual fingers” of the user since the user's fingers are not visible to the user in the immersive virtual reality environment), allows the user to interact with and control a virtual device in the virtual reality environment as if the user were interacting with the corresponding real device in the physical world, provides an intuitive and efficient way for the user to access functions of the real device while still immersed in the virtual reality environment (e.g., without requiring the user to remove equipment such as a virtual reality headset and headphones), enhances the operability of the device (e.g., by allowing the user to interact with either the virtual device or the real device), and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, displaying the focus indicator with the second appearance (e.g., in response to detecting the contact between the input object and the input element) on the first user interface object includes (914) displaying the focus indicator as a translucent contact point on the first user interface object (e.g., not a virtual finger); and at least a portion of the first user interface that is at a location of the focus indicator is at least partly visible through the focus indicator (e.g., as shown in FIG.5A15). In some embodiments, contact touches (e.g., as shown in FIG.5A15) (as opposed to hover inputs where the input object is not touching the input element, as shown in FIGS.5A12-5A14) are displayed as translucent contact points (and not virtual fingers) so that the user interface of the virtual phone is not obscured (e.g., the first user interface of the first user interface object in the simulated three-dimensional space is not obscured) (e.g., virtual user interface5019is not obscured). 
Displaying the focus indicator as a translucent contact point (in response to detecting a contact) allows the user to still see the first user interface (e.g., the user interface of the virtual phone) while also providing visual feedback to the user about the location of the contact, improves the visual feedback provided to the user (e.g., by displaying an indication of the location of the user's finger, since the user's fingers are not visible to the user in the immersive virtual reality environment), allows the user to interact with and control a virtual device in the virtual reality environment as if the user were interacting with the corresponding real device in the physical world, provides an intuitive and efficient way for the user to access functions of the real device while still immersed in the virtual reality environment (e.g., without requiring the user to remove equipment such as a virtual reality headset and headphones), enhances the operability of the device (e.g., by allowing the user to interact with either the virtual device or the real device), and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the first user interface (e.g., virtual user interface5019that is displayed on virtual device5016in the simulated 3D space5006, FIGS.5A12-5A14) is (916) a representation of a respective user interface of a respective device (e.g., user interface5021that is displayed on device100in the physical space5004, Figures that is represented by the first user interface object (e.g., the first user interface is a simulated user interface of a computing device such as a smartphone and the first user interface object is a simulation of the computing device) (e.g., as shown in FIGS.5A12-5A14); the respective device is capable of detecting hover inputs; and the respective device does not display representations of hover inputs in the respective user interface when hover inputs are detected while the respective device is displaying the respective user interface (e.g., device100does not display representations of hover inputs in user interface5021, FIGS.5A12-5A14) (e.g., the focus indicators are displayed overlaying the first user interface when in the immersive display environment (e.g., simulated 3D space5006) because the user's fingers are not visible in the immersive display environment; however when the respective user interface that corresponds to the first user interface is displayed in the non-immersive display environment (e.g., physical space5004), the focus indicators that correspond to hover inputs are not needed because the user's fingers are visible to the user when the fingers hover over the respective user interface). In some embodiments, the focus indicator changes in appearance as the input object moves closer to the touch-sensitive surface of the input device (e.g., the size of the focus indicator grows smaller as the input object moves closer to the input device or the size of the focus indicator grows larger as the input object moves closer to the input device, as shown in FIGS.5A12-5A14, and/or some other characteristic of the focus indicator changes (e.g., boldness, color, shading, pattern, etc.)). 
While representations of hover inputs are useful in the simulated three-dimensional space (e.g., on the virtual device) because the user's fingers are not visible in the immersive display environment, representations of hover inputs are not needed on the respective device (e.g., on the real device) because the user's fingers are visible to the user in the physical world. Forgoing display of the hover inputs in the respective user interface of the respective device (e.g., the user interface of the real device in the physical world) reduces clutter in the user interface of the respective device, reduces user distraction, enhances the operability of the device, and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, while displaying the focus indicator with the first appearance at the hover location above the first user interface object (e.g., in the virtual 3D space) (e.g., as shown in FIG.5A14) (918), the computer system detects a touch input (e.g., a tap input) at a location on the input device that corresponds to a first location in the first user interface (e.g., as shown in FIG.5A15); and in response to detecting the touch input on the input device, activates an element of the first user interface located at the first location (e.g., launching an application corresponding to the application icon located at the first location, as shown in FIGS.5A15-5A16). Activating an element of the first user interface (e.g., of the virtual device) in response to a touch input at a location on the input device that corresponds to a location of the element allows the user to interact with and control a virtual device in the virtual reality environment as if the user were interacting with the corresponding real device in the physical world, provides an intuitive and efficient way for the user to access functions of the real device while still immersed in the virtual reality environment (e.g., without requiring the user to remove equipment such as a virtual reality headset and headphones), enhances the operability of the device (e.g., by allowing the user to interact with either the virtual device or the real device), and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the computer system detects (920) a change in intensity of the contact between the input object and the input element (e.g., in the physical environment) (e.g., a change in intensity of the contact between the user's finger and touch screen112, FIG.5A15); and in response to detecting the change in intensity of the contact between the input object and the input element, displays the focus indicator with a third appearance that is distinct from the first appearance and distinct from the second appearance.
For example, in some embodiments, if the intensity of the contact between the input object and the input element increases, the focus indicator increases in size (e.g., the solid oval or circular indicator increases in diameter). Similarly, if the intensity of the contact between the input object and the input element decreases, the focus indicator decreases in size (e.g., the solid oval or circular indicator decreases in diameter). Alternatively, in some embodiments, if the intensity of the contact between the input object and the input element increases, the focus indicator decreases in size (e.g., the solid oval or circular indicator decreases in diameter) and if the intensity of the contact between the input object and the input element decreases, the focus indicator increases in size (e.g., the solid oval or circular indicator increases in diameter). As another example, in some embodiments, if the intensity of the contact between the input object and the input element increases, the focus indicator remains the same size, but the focus indicator changes in another manner (e.g., boldness, color, shading, pattern, etc.). Updating an appearance of the focus indicator (e.g., in the simulated three-dimensional space) in accordance with a change in intensity of the contact between the input object and the input element (e.g., in the physical environment) improves the visual feedback provided to the user (e.g., by making the computer system appear more responsive to user input), provides the user with a more immersive and/or intuitive viewing experience, enhances the operability of the device, and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. It should be understood that the particular order in which the operations inFIGS.9A-9Bhave been described is merely an example and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods600,700,800, and1000) are also applicable in an analogous manner to method900described above with respect toFIGS.9A-9B. For example, the contacts, gestures, user interface objects, intensity thresholds, focus indicators, and/or animations described above with reference to method900optionally have one or more of the characteristics of the contacts, gestures, user interface objects, intensity thresholds, focus indicators, and/or animations described herein with reference to other methods described herein (e.g., methods600,700,800, and1000). For brevity, these details are not repeated here. FIGS.10A-10Care flow diagrams illustrating method1000of updating display of virtual user interface objects and associated virtual user interfaces in accordance with movement of and changes in pose of an input device, in accordance with some embodiments. 
Method1000is performed at a computer system (e.g., portable multifunction device100,FIG.1A, device300,FIG.3A, or a multi-component computer system including headset5008and an input device (e.g., device100or device5010), FIG.5A1) that includes (and/or is in communication with) one or more display generation components (e.g., a display, a projector, a heads-up display, or the like), an input device (e.g., a touch-sensitive surface, such as a touch-sensitive remote control, or a touch-screen display that also serves as a display generation component of the computer system, a mouse, a joystick, a wand controller, and/or cameras tracking the position of one or more features of the user such as the user's hands or eyes), and optionally one or more pose sensors for detecting respective poses of one or more of the input device (e.g., device100or device5010and/or watch5012, FIG.5A1) and display generation components (e.g., the pose sensors include one or more cameras, gyroscopes, inertial measurement units, or other sensors that enable the computer system to detect changes in an orientation and/or position of the computer system or parts thereof relative to a physical environment of the computer system). In some embodiments, the computer system (e.g., the input device of the computer system) includes one or more sensors to detect intensities of contacts with the input device (e.g., a touch-sensitive surface), and optionally one or more tactile output generators. In some embodiments, the computer system includes one or more cameras (e.g., video cameras that continuously provide a live view of at least a portion of the contents that are within the field of view of the cameras and optionally generate video outputs including one or more streams of image frames capturing the contents within the field of view of the cameras). In some embodiments, the input device (e.g., with a touch-sensitive surface) and the display generation component are integrated into a touch-sensitive display. As described above with respect toFIGS.3B-3C, in some embodiments, method1000is performed at a computer system301(e.g., computer system301-a,301-b, or301-c) in which respective components, such as a display generation component, one or more cameras, one or more input devices, and optionally one or more pose sensors are each either included in or in communication with computer system301. In some embodiments, the display generation component is a touch-screen display and the input device (e.g., with a touch-sensitive surface) is on or integrated with the display generation component. In some embodiments, the display generation component is separate from the input device (e.g., as shown inFIG.4Band FIG.5A1). Some operations in method1000are, optionally, combined and/or the order of some operations is, optionally, changed. For convenience of explanation, some of the embodiments will be discussed with reference to operations performed on a computer system (e.g., as shown in FIG.5A1) with a headset5008and a separate input device (e.g., device100or device5010) with a touch-sensitive surface in response to detecting contacts on the touch-sensitive surface of the input device while displaying some of the user interfaces shown in the figures on the display of headset5008, and optionally while displaying some of the user interfaces shown in the figures on a separate display generation component of input device100. 
However, analogous operations are, optionally, performed on a computer system with a touch-sensitive display system112(e.g., on device100with touch screen112) and optionally one or more integrated cameras. Similarly, analogous operations are, optionally, performed on a computer system having one or more cameras that are implemented separately (e.g., in a headset) from one or more other components (e.g., an input device) of the computer system; and in some such embodiments, “movement of the computer system” corresponds to movement of one or more cameras of the computer system, or movement of one or more cameras in communication with the computer system. As described below, method1000relates to displaying a user interface at a location away from a virtual user interface object (e.g., a representation of a smartphone) in a virtual reality environment (e.g., an immersive three-dimensional environment that is experienced through sensory stimuli such as sights and sounds and that provides additional information and experiences to a user that are not available in the physical world), and optionally at an increased scale, based on a change in pose of an input device (e.g., a smartphone or other physical controller in the physical world). Displaying a user interface at a position away from the virtual user interface object improves the readability of the user interface and relieves the user from having to track the location of the user interface while displayed on the virtual user interface object, which may move as the input device moves, thereby improving the visual feedback provided to the user, enhancing the operability of the device, and making the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. The computer system displays (1002), via at least a first display generation component (e.g., a VR headset, or a stereo display, a 3D display, a holographic projector, a volumetric display, etc.) of the one or more display generation components: a view of at least a portion of a simulated three-dimensional space (e.g., simulated 3D space5006, FIG.5A41); and a view of a user interface object that is located within the simulated three-dimensional space (e.g., virtual device5016, FIG.5A41), wherein the user interface object includes a view of a first user interface (e.g., virtual user interface5154, FIG.5A41) that is displayed at a pose that corresponds to a pose of the user interface object in the simulated three-dimensional space, and wherein the pose of the user interface object in the simulated three-dimensional space corresponds to a pose of the input device (e.g., device100, FIG.5A41) in a physical space surrounding the input device (e.g., physical space5004, FIG.5A41). The computer system detects (1004) a movement input via the input device (e.g., user5002raising device100, as shown and described in greater detail herein with reference to FIG.5A42). 
In response to detecting the movement input (1006): in accordance with a determination that the movement input corresponds to a movement of the input device relative to the physical environment surrounding the input device, and that the movement of the input device meets first pose criteria that require that a parameter of change in the pose of the input device meet a set of one or more thresholds (e.g., pose criteria that are met when the device has entered a first pose range, a rate of change in orientation of the device has met a pose change threshold that is indicative of movement of the input device toward a face of the user, or acceleration of change in pose of the device has met a pose acceleration threshold that is indicative of movement of the input device toward a face of the user) as a result of the movement (e.g., the input device is raised so that the pose of the input device is within the first pose range), the computer system displays the first user interface in the simulated three-dimensional space at a location away from the user interface object (e.g., the first user interface appears to be lifted away from the surface of the user interface object) (and optionally increasing a scale of the first user interface) (e.g., as shown and described in greater detail herein with reference to virtual user interface5154and virtual device5016, FIGS.5A42-5A47). Also, in accordance with a determination that the movement input corresponds to a movement of the input device relative to the physical environment surrounding the input device, and that the movement of the input device does not meet the first pose criteria, the computer system updates the pose of the user interface object in the simulated three-dimensional space in accordance with the movement of the input device, while maintaining display of the first user interface at a pose that corresponds to the pose of the user interface object (e.g., the first user interface appears to be displayed on the surface of the user interface object) (e.g., virtual user interface5154is not displayed so as to appear to be lifted away from the surface of virtual device5016, if movement of device100in physical space5004does not meet the associated pose criteria). In some embodiments, the computer system detects a plurality of (distinct) movement inputs (e.g., a series, or sequence of movement inputs) via the input device, and the plurality of movement inputs includes at least one movement input that corresponds to a movement of the input device relative to the physical environment and for which the movement of the input device meets the first pose criteria that require the pose of the input device to enter the first pose range as a result of the movement, and at least one movement input that corresponds to a movement of the input device relative to the physical environment and for which the movement of the input device does not meet the first pose criteria. In some embodiments, in accordance with a determination that the movement input corresponds to a movement of the input device relative to the physical environment (1008), and that the movement of the input device meets the first pose criteria, the computer system continues to display the user interface object while displaying the first user interface in the simulated three-dimensional space at a location away from the user interface object.
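A minimal Python sketch of the branching behavior of operation (1006) follows; the thresholds and method names are hypothetical and merely illustrate one way the first pose criteria might be evaluated:

from dataclasses import dataclass

@dataclass
class DevicePose:
    pitch: float       # orientation of the input device, in radians
    pitch_rate: float  # rate of change of orientation, in radians/second

PITCH_ENTER = 0.9  # illustrative lower bound of the first pose range
RATE_ENTER = 1.5   # illustrative rate-of-change threshold

def meets_first_pose_criteria(pose: DevicePose) -> bool:
    # A parameter of change in pose meets a threshold: either the device
    # has entered the first pose range, or its orientation is changing
    # quickly enough to indicate movement toward the user's face.
    return pose.pitch >= PITCH_ENTER or pose.pitch_rate >= RATE_ENTER

def on_movement_input(pose: DevicePose, ui) -> None:
    if meets_first_pose_criteria(pose):
        ui.display_away_from_object()  # UI appears lifted off the object
    else:
        ui.follow_object(pose)         # UI remains on the object's surface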
In some embodiments, the user interface object continues to be displayed while the first user interface moves (e.g., during an animated transition) toward or away from the user interface object (and optionally, the pose of the user interface object is updated in accordance with the movement of the input device). For example, in some embodiments, the user interface object continues to be displayed while the first user interface moves from appearing to be displayed on the surface of the user interface object to appearing lifted away from the surface of the user interface object, or vice versa. As shown and described in greater detail herein with respect to FIGS.5A42-5A47, virtual device5016continues to be displayed in simulated 3D space5006while virtual user interface5154is displayed at locations away from virtual device5016. Maintaining display of the first user interface object while displaying the first user interface away from the user interface object improves the visual feedback provided to the user (e.g., by making the computer system appear more responsive to user input and user movement, and by helping the user to maintain context through consistency between what is displayed and what a user would expect to see and avoiding abrupt changes in what is displayed), enhances the operability of the device, and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and by reducing user frustration and mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, displaying the first user interface in the simulated three-dimensional space at a location away from the user interface object includes (1010) increasing a scale of the first user interface, wherein a rate of change in the scale of the first user interface with respect to time is greater than a rate of change in the pose of the input device with respect to time (e.g., as shown and described in greater detail herein with reference to FIG.5A43). Increasing the scale of the first user interface faster than the speed at which the pose of the input device is changed improves the visual feedback provided to the user (e.g., by enlarging and improving the readability of the user interface without requiring the user to move extensively and by making the computer system appear more responsive to user input and user movement), enhances the operability of the device, and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and by reducing user frustration and mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, displaying the first user interface in the simulated three-dimensional space at a location away from the user interface object includes (1012): increasing a scale of the first user interface; and displaying the first user interface with increased scale at a predefined location in the simulated three-dimensional space (e.g., as shown and described in greater detail herein with reference to FIG.5A43). 
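Operation (1010) described above, in which the scale of the first user interface changes faster than the pose of the input device changes, might be sketched as follows, with an illustrative gain factor greater than one:

def scale_for_pose_progress(pose_progress: float,
                            min_scale: float = 1.0,
                            max_scale: float = 3.0,
                            gain: float = 2.0) -> float:
    # pose_progress in [0, 1] tracks how far the device pose has advanced
    # into the first pose range; multiplying by gain > 1 makes the rate of
    # change of scale with respect to time exceed the rate of change of
    # the pose with respect to time.
    t = min(1.0, gain * max(0.0, pose_progress))
    return min_scale + t * (max_scale - min_scale)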
Displaying the first user interface at a fixed, predefined location and at increased scale improves the visual feedback provided to the user (e.g., by enlarging and improving the readability of the user interface and displaying the user interface at an expected location or position in the simulated three-dimensional space rather than requiring the user to track the location of the user interface while displayed on a virtual object that may move as the input device moves), enhances the operability of the device, and makes the user-device interface more efficient (e.g., by reducing user frustration and mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, displaying the first user interface in the simulated three-dimensional space at a location away from the user interface object includes (1014) increasing a scale of the first user interface, and the method includes, after displaying the first user interface in the simulated three-dimensional space at the location away from the user interface object: detecting a second movement input via the input device; and in response to detecting the second movement input: in accordance with a determination that the second movement input corresponds to a movement of the input device relative to the physical environment surrounding the input device, and that the movement of the input device meets second pose criteria that require the pose of the input device to decrease as a result of the movement, decreasing the scale of the first user interface. In some embodiments, the second pose criteria require the pose of the input device to move outside of the first pose range as a result of the movement (e.g., by lowering the input device so that the pose of the input device falls below the first pose range) (e.g., as shown and described in greater detail herein with reference to FIGS.5A47-5A48). Decreasing the scale of the first user interface in response to movement that decreases the pose of the input device improves the visual feedback provided to the user (e.g., by providing consistency between what is displayed and what a user would expect to see and by making the computer system appear more responsive to user input and user movement), provides additional control options without cluttering the display environment with additional displayed controls and reduces the number of inputs needed to perform an operation (e.g., using a straightforward gesture to dismiss the enlarged user interface rather than requiring display and activation of additional control affordances), enhances the operability of the device, and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and by reducing user frustration and mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. 
In some embodiments, the first pose criteria requiring a parameter of change in the pose of the input device to meet a set of one or more thresholds include (1016) the first pose criteria requiring a pose of the input device to enter a first pose range, and the second pose criteria require the pose of the input device to leave a second pose range that encompasses a greater range of poses in at least one direction (or in all directions) than the first pose range (e.g., although the first user interface zooms in when the device enters the first pose range, once the device has entered the second pose range, the first user interface is maintained in the zoomed state until the input device is lowered so that the pose of the input device falls outside of a second pose range that encompasses a greater range of poses than the first pose range) (e.g., as shown and described in greater detail herein with reference to FIGS.5A47-5A48). In some embodiments, in accordance with the pose of the input device entering the first pose range (e.g., reaching a first threshold value, such as a lower limit, of the first pose range), the first user interface (e.g., virtual user interface5154, FIG.5A42) is displayed in the simulated three-dimensional space at a location away from the user interface object, and gradually displayed further and further from the user interface object and/or gradually increasing in scale (optionally in accordance with the pose of the input device continuing to increase) until the first user interface is displayed at the predefined location (e.g., in accordance with the pose of the input device reaching a second threshold pose in the first pose range) and/or at a predefined zoom scale (e.g., virtual user interface5154, FIG.5A43). In some embodiments, while the input device remains within the second pose range, the first user interface continues to be displayed at the predefined zoom scale and/or at the predefined location (e.g., without displaying gradual changes in scale or position in accordance with changes in pose of the input device) until the input device is lowered so that the pose of the input device falls below the first threshold value (e.g., lower limit) of the second pose range (e.g., as shown and described in greater detail herein with reference to FIGS.5A47-5A48). In some embodiments, in accordance with the input device being lowered so that the pose of the input device falls outside of the second pose range, the first user interface is redisplayed at a pose and/or scale that corresponds to the pose of the user interface object (e.g., the first user interface appears to be redisplayed on the surface of the user interface object) (e.g., as shown and described in greater detail herein with reference to FIG.5A48).
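The relationship between the first pose range and the wider second pose range described above amounts to hysteresis, which can be sketched as a two-state machine (thresholds illustrative):

ENTER_PITCH = 0.9  # illustrative lower limit of the first pose range
EXIT_PITCH = 0.6   # illustrative lower limit of the wider second pose range

class ZoomHysteresis:
    def __init__(self) -> None:
        self.zoomed = False

    def update(self, pitch: float) -> bool:
        if not self.zoomed and pitch >= ENTER_PITCH:
            self.zoomed = True   # entered the first pose range: zoom in
        elif self.zoomed and pitch < EXIT_PITCH:
            self.zoomed = False  # left the second pose range: redisplay on object
        return self.zoomed

Because EXIT_PITCH is below ENTER_PITCH, small unintentional movements of the user's hand near the entry threshold do not toggle the zoomed state back and forth.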
Increasing the scale of the first user interface in accordance with the pose of the input device entering the first pose range, and maintaining the increased scale until the pose of the input device leaves a second pose range that encompasses a greater range of poses in at least one direction than the first pose range improves the visual feedback provided to the user (e.g., by stabilizing and maintaining readability of the first user interface so long as the pose of the input device is within the first pose range rather than continuously updating the position of the first user interface in response to small movements of the user's hand, which would be distracting and frustrating to the user), enhances the operability of the device, and makes the user-device interface more efficient (e.g., by reducing user distraction and mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, after displaying the first user interface in the simulated three-dimensional space at the location away from the user interface object (1018), the computer system: detects a third movement input via the input device; and in response to detecting the third movement input: in accordance with a determination that the third movement input corresponds to a movement of the input device relative to the physical environment surrounding the input device, and that the movement of the input device meets third pose criteria that require the pose of the input device to move outside of the first pose range as a result of the movement (e.g., by lowering the input device so that the pose of the input device falls below the first pose range), redisplays the first user interface at a pose that corresponds to a pose of the user interface object (e.g., the first user interface appears to be displayed, or redisplayed, on the surface of the user interface object) (e.g., as shown and described in greater detail herein with reference to FIGS.5A47-5A48). In some embodiments, the third pose criteria may include hysteresis, by requiring the pose of the input device to move outside of a predefined pose range that extends beyond the first pose range (e.g., the input device must be raised above a first pose threshold to trigger display of the first user interface at a location away from the user interface object, while the input device must be lowered below a second pose threshold, lower than the first pose threshold, to trigger redisplay of the first user interface at a pose that corresponds to the pose of the user interface object) (e.g., as shown and described in greater detail herein with reference to FIGS.5A47-5A48).
Redisplaying the first user interface at a pose that corresponds to the pose of the user interface object (e.g., on the surface of the user interface object) in response to movement of the input device outside of the first pose range improves the visual feedback provided to the user (e.g., by providing consistency between what is displayed and what a user would expect to see and by making the computer system appear more responsive to user input and user movement), provides additional control options without cluttering the display environment with additional displayed controls and reduces the number of inputs needed to perform an operation (e.g., using a straightforward gesture to return to the first user interface being displayed with the user interface object rather than requiring display and activation of additional control affordances), enhances the operability of the device, and makes the user-device interface more efficient (e.g., by returning to a more intuitive interaction where the first user interface is displayed on the user interface object, by helping the user to achieve an intended outcome with the required inputs and by reducing user frustration and mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, while the first user interface is displayed at a pose that corresponds to the pose of the user interface object (e.g., while the first user interface appears to be displayed on the surface of the user interface object) (1020), the first user interface is responsive to touch inputs from a user on the input device (e.g., virtual user interface5154is responsive to touch inputs from user5002on device100in physical space5004, FIG.5A41). Enabling the first user interface to be responsive to touch inputs on the input device while the first user interface is displayed at a pose that corresponds to the pose of the user interface object (e.g., on the surface of the user interface object) improves the visual feedback provided to the user (e.g., by providing consistency between what is displayed and what a user would expect to see and by making the computer system appear more responsive to user input and user movement), enhances the operability of the device, and makes the user-device interface more efficient (e.g., by providing a more intuitive interaction as if the user were interacting with a corresponding physical device with a non-immersive display environment, by helping the user to achieve an intended outcome with the required inputs and by reducing user frustration and mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, while the user interface is displayed at the location away from the user interface object (e.g., while the first user interface appears to be lifted away from the surface of the user interface object) (1022), the first user interface is responsive to touch inputs from a user on the input device (e.g., on a touch-sensitive surface of the input device) (e.g., while virtual user interface5154is displayed away from virtual device5016, virtual user interface5154is responsive to touch inputs from user5002on device100, as shown and described in greater detail herein with reference to FIGS.5A47-5A48). 
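One hypothetical way to keep the first user interface responsive to touch inputs whether it is displayed at a pose corresponding to the user interface object or lifted away from it is to map each touch on the physical touch-sensitive surface to the same relative location in the displayed user interface:

def map_touch_to_virtual_ui(touch_x: float, touch_y: float,
                            surface_w: float, surface_h: float,
                            ui_w: float, ui_h: float) -> tuple:
    # Normalize the touch location on the input device's touch-sensitive
    # surface, then scale it to the virtual user interface, which may be
    # displayed at an increased scale away from the user interface object.
    u = touch_x / surface_w
    v = touch_y / surface_h
    return (u * ui_w, v * ui_h)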
Enabling the first user interface to be responsive to touch inputs on the input device even while the first user interface is displayed away from the user interface object (e.g., lifted away from the surface of the user interface object) improves the visual feedback provided to the user (e.g., by enlarging and improving the readability of the user interface and by making the computer system appear more responsive to user input and user movement), enhances the operability of the device, and makes the user-device interface more efficient (e.g., by facilitating user interaction with the computer system through providing an intuitive and familiar set of controls as if the user were interacting with a corresponding physical device with a non-immersive display environment, by helping the user to achieve an intended outcome with the required inputs and by reducing user frustration and mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the computer system detects (1024) a touch input on the input device that includes a drag gesture (e.g., a contact and movement of the contact by at least a predefined distance along a touch-sensitive surface of the input device), and, in response to detecting the touch input that includes the drag gesture, scrolls at least a portion of the first user interface (e.g., a drag gesture on device100in FIG.5A47or5A48would cause scrolling through messages displayed in virtual user interface5154). Scrolling the first user interface in response to a touch input that includes a drag gesture improves the visual feedback provided to the user (e.g., by making the computer system appear more responsive to user input), provides additional control options without cluttering the display environment with additional controls and reduces the number of inputs needed to perform an operation (e.g., using a straightforward and intuitive gesture rather than requiring display and activation of additional control affordances), enhances the operability of the device, and makes the user-device interface more efficient (e.g., by helping the user to achieve an intended outcome with the required inputs and by reducing user frustration and mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, the computer system detects (1026) a touch input that includes a tap gesture (e.g., a contact and liftoff of the contact from a touch-sensitive surface of the input device, optionally within a predefined time period, and/or prior to movement of the contact by at least a predefined distance along the touch-sensitive surface) corresponding to a respective user interface object in the first user interface (e.g., an application icon, a multimedia file icon, a messaging contact, etc.), and, in response to detecting the touch input that includes the tap gesture, performs an operation associated with the respective user interface object in the first user interface (e.g., selecting the respective user interface object, launching the application associated with the application icon, displaying or playing the multimedia file, etc.)
(e.g., as shown and described in greater detail herein with reference to entry of the text “OK” and activation of the “Send” affordance, FIGS.5A44-5A45). Performing an operation associated with a respective user interface object in the first user interface in response to a touch input that includes a tap gesture corresponding to the respective user interface object improves the visual feedback provided to the user (e.g., by making the computer system appear more responsive to user input), enhances the operability of the device, and makes the user-device interface more efficient (e.g., by facilitating user interaction with the computer system using a straightforward and intuitive gesture, by helping the user to achieve an intended outcome with the required inputs, and by reducing user frustration and mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. It should be understood that the particular order in which the operations inFIGS.10A-10Chave been described is merely an example and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods600,700,800, and900) are also applicable in an analogous manner to method1000described above with respect toFIGS.10A-10C. For example, the contacts, gestures, user interface objects, pose thresholds, focus indicators, and/or animations described above with reference to method1000optionally have one or more of the characteristics of the contacts, gestures, user interface objects, pose thresholds, focus indicators, and/or animations described herein with reference to other methods described herein (e.g., methods600,700,800, and900). For brevity, these details are not repeated here. The operations described above with reference toFIGS.6A-6E,7A-7C,8A-8C,9A-9B, and10A-10Care, optionally, implemented by components depicted inFIGS.1A-1B. For example, display operations602,702,802,902,906,910, and1002; detection operations604,704,804,904,908, and1004; and adjusting and/or updating operations606and806are, optionally, implemented by event sorter170, event recognizer180, and event handler190. Event monitor171in event sorter170detects a contact on touch-sensitive display112, and event dispatcher module174delivers the event information to application136-1. A respective event recognizer180of application136-1compares the event information to respective event definitions186, and determines whether a first contact at a first location on the touch-sensitive surface (or whether rotation of the device) corresponds to a predefined event or sub-event, such as selection of an object on a user interface, or rotation of the device from one orientation to another. When a respective predefined event or sub-event is detected, event recognizer180activates an event handler190associated with the detection of the event or sub-event. Event handler190optionally uses or calls data updater176or object updater177to update the application internal state192. In some embodiments, event handler190accesses a respective GUI updater178to update what is displayed by the application. 
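The event flow described above (event sorter 170, event recognizer 180, event handler 190) can be approximated with the following sketch; the class and function names echo the reference numerals but do not reproduce any particular implementation:

class EventRecognizer:
    # Stand-in for event recognizer 180: compares event information to an
    # event definition and, on a match, activates the associated handler
    # (stand-in for event handler 190).
    def __init__(self, event_type: str, handler):
        self.event_type = event_type
        self.handler = handler

    def recognize(self, event: dict) -> bool:
        if event.get("type") == self.event_type:
            self.handler(event)  # e.g., update application state or the GUI
            return True
        return False

def dispatch(event: dict, recognizers: list) -> None:
    # Stand-in for event sorter 170 / event dispatcher module 174: deliver
    # the event information to registered recognizers until one claims it.
    for recognizer in recognizers:
        if recognizer.recognize(event):
            break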
Similarly, it would be clear to a person having ordinary skill in the art how other processes can be implemented based on the components depicted inFIGS.1A-1B. The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best use the invention and various described embodiments with various modifications as are suited to the particular use contemplated.
The detailed description is set forth with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items. DETAILED DESCRIPTION Certain implementations and embodiments of the disclosure will now be described more fully below with reference to the accompanying figures, in which various aspects are shown. However, the various aspects may be implemented in many different forms and should not be construed as limited to the implementations set forth herein. The disclosure encompasses variations of the embodiments, as described herein. Like numbers refer to like elements throughout. FIG.1schematically illustrates an example system100in which the described techniques may be utilized. Using the techniques in the system100, a user may produce a reenactment using 3D representations of a vehicular incident. Unless the context indicates otherwise, a vehicular incident, as used herein, refers to an event, or occurrence, involving at least one vehicle that inflicts or potentially inflicts damage to the vehicle, another vehicle, passengers of vehicles, pedestrians, and/or property. The common term “automobile accident” is an example of a vehicular incident. Herein, a vehicular incident may be more simply called just an incident. The system100employs virtual reality (VR) and/or mixed reality (MR) and/or augmented reality (AR) tools to enable a user (such as a policyholder) to facilitate vehicular incident reenactment using three-dimensional (3D) representations. In addition, the system100employs VR and/or MR and/or AR tools to assist the user in preparing a report of damage to their covered vehicle after an incident, such as to submit a claim for damage to an insurance company. In system100, a Virtual Reality/Augmented Reality/Mixed Reality (VR/AR/MR) processor102is provided. Virtual reality (VR) replaces a view of an actual environment, an actual reality, with a view of a virtual environment, a virtual reality. Augmented reality (AR) provides annotations onto a view of an actual environment. Mixed reality (MR) provides a view of an actual environment mixed with a virtual environment. Mixed reality can include, for example, overlaying spatially registered virtual objects on top of a user's direct view of an actual environment. While VR, AR and MR are sometimes treated as discrete concepts, a line between them in practice may be blurred. In the context of the described techniques, devices utilizing one, some, or all of these concepts may be employed, alone or in combination with each other. The VR/AR/MR processor102may include, for example, one or more processors programmed or otherwise configured to interoperate with a VR/AR/MR rendering device104. The VR/AR/MR processor102and the VR/AR/MR rendering device104may be configured for two-way communication, which may be across a network in some examples. The VR/AR/MR rendering device104may, for example, include a device such as a flat-screen display device via which a mix of a real environment and a virtual environment may be displayed simultaneously, such as in a superimposed manner.
In some examples, the VR/AR/MR rendering device104is a headset, such as goggles, glasses, or a heads-up display, designed to be worn on or situated relative to the head of a user such that a display of the VR/AR/MR rendering device104is disposed in front of the eyes of the user. A heads-up display is a transparent display that presents data without requiring a user to look away from the actual environment the user is viewing. In the example system100shown inFIG.1, the VR/AR/MR rendering device104includes an output portion106and an input portion108. The output portion106includes an image output portion110and an audio output portion112. The image output portion110may, for example, be a display device such as an LED or LCD display screen. In some examples, the image output portion110is a display device that is configured to display an image that appears to be three-dimensional to a user of the VR/AR/MR rendering device104. The audio output portion112may, for example, be one or more speakers such as contained within a headphone to be worn in, over or around one or both ears of the user. Referring still to the example system100shown inFIG.1, the input portion108includes an image input portion114and an audio input portion116. The image input portion114may include one or more cameras and/or one or more other visual detection devices. The image input portion114, in some examples, uses infrared (IR) detection to generate a point-cloud representation of an actual environment. In another example, emission and detection are utilized to generate a point-cloud representation of an actual environment, such as Light Detection and Ranging (LIDAR), a sensing method that uses light in the form of a pulsed laser to measure ranges. A point cloud is a set of data points in a multi-dimensional space, typically a three-dimensional space. The image input portion114may be configured to generate pupil data based on a position of a user's eyes. The audio input portion116may include, for example, one or more microphones and/or one or more other listening devices. In some examples, the output portion106and input portion108are not configured to be disposed in a single device. Furthermore, the image output portion110and audio output portion112may not be disposed in a single device. Likewise, the image input portion114and audio input portion116may not be disposed in a single device. As just one example, the audio output portion112and the audio input portion116may utilize the functionality of a smart speaker device that the user already has within the actual environment. Referring still toFIG.1, as mentioned above, the VR/AR/MR processor102may include, for example, one or more processors programmed or otherwise configured to communicate with and interoperate with the VR/AR/MR rendering device104. In the example system100shown inFIG.1, the VR/AR/MR processor102includes a virtual assistant renderer118programmed or otherwise configured to render a virtual assistant on the image output portion110. A virtual assistant may assist the user, for example, much as if an actual assistant were physically present with the user. This can help a user to maximize the use of their insurance coverage. The virtual assistant may, for example, be a virtual visual representation having a humanoid appearance. In other examples, other appearances may be used, such as a floating robotic ball. As discussed later, the virtual assistant may guide the user through the gathering of relevant information at or near a scene of the incident. 
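As a hypothetical illustration of the point-cloud generation mentioned above, pulsed-laser range measurements such as those produced by LIDAR can be converted into a set of three-dimensional points relative to the sensor:

import math

def ranges_to_point_cloud(samples):
    # Each sample is (range, azimuth, elevation); convert the spherical
    # measurement into Cartesian (x, y, z) coordinates relative to the
    # sensor origin. A real LIDAR pipeline would also filter noise and
    # register successive scans against one another.
    cloud = []
    for r, azimuth, elevation in samples:
        x = r * math.cos(elevation) * math.cos(azimuth)
        y = r * math.cos(elevation) * math.sin(azimuth)
        z = r * math.sin(elevation)
        cloud.append((x, y, z))
    return cloud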
In addition, the virtual assistant may guide the user to reconstruct the scene and reenact the incident using 3D representations of vehicles and non-vehicular objects of the incident. The VR/AR/MR processor102may also include a gesture interpreter120. The gesture interpreter120may be programmed or otherwise configured to interpret one or more gestures of a user of the VR/AR/MR rendering device104. For example, gestures of the user may include hand or arm movements of the user, eye movements or other non-verbal communication by which the user communicates using visible bodily actions. The VR/AR/MR processor102may also include a pupil tracker122, which is programmed or otherwise configured to determine, based on pupil data, the location in a displayed environment of the user's gaze. The VR/AR/MR processor102may include other functionality not shown inFIG.1. The VR/AR/MR processor102in the example system100is connected to an internal database124via a network126. The internal database124may include, for example, a record of video, images, and audio data received from the output portion106of the VR/AR/MR rendering device104, 3D representations of various vehicles and non-vehicular objects, and specific information about the vehicles covered by each policyholder. Unless the context indicates otherwise, a vehicle, as used herein, refers to a thing used for transporting people or goods, especially across land or a roadway. Examples of a vehicle include wagons, bicycles, automobiles, motorcycles, cars, trucks, sports utility vehicles (SUV), trains, trams, buses, watercraft, amphibious craft, and the like. As the label implies, a non-vehicular object is a thing or feature that is not a vehicle. The non-vehicular objects may be, for example, non-vehicular things or features that may be proximate a vehicular incident. For example, a non-vehicular object may be a road, a sidewalk, a traffic light, traffic sign, building, parking lot, railroad track, person, pole, advertisement sign, lane marker, intersection, vegetation, construction materials, construction equipment, walls, landmarks, and the like. Depending upon the context, an object may be real or virtual. The virtual object represents the real object. For example, a 3D representation of a truck is a virtual object that represents an actual truck. Unless the context indicates otherwise, an object refers to a non-vehicular object herein. The VR/AR/MR processor102in the example system100shown inFIG.1also communicates via the network126with one or more external data sources128. The external data sources128may include, for example, global positioning systems (GPS), roadmaps, and satellite maps. These may be consulted for reference while reenacting the incident at a mapped location. For example, the satellite map of the location of the incident may be superimposed with the 3D representations being placed and moved by the user during the construction of the reenactment of the incident. The example system100shown inFIG.1also includes a virtual assistant control system130, which communicates via the network126with the VR/AR/MR processor102. The virtual assistant control system130may operate automatically and/or responsive to human input. For example, the virtual assistant control system130may communicate with the virtual assistant renderer118of the VR/AR/MR processor102, providing assistant control data to cause a virtual assistant to be output by the image output portion110of the VR/AR/MR rendering device104.
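The pupil tracker122described above determines the location of the user's gaze from pupil data; a highly simplified, hypothetical sketch of one such mapping:

def gaze_location(left_pupil, right_pupil, display_w: int, display_h: int):
    # left_pupil and right_pupil are normalized (u, v) pupil positions in
    # [0, 1]; averaging the two eyes and scaling to the display yields a
    # rough estimate of where in the displayed environment the user is
    # gazing. A production tracker would calibrate per user and model eye
    # geometry rather than assume this linear mapping.
    u = (left_pupil[0] + right_pupil[0]) / 2.0
    v = (left_pupil[1] + right_pupil[1]) / 2.0
    return (u * display_w, v * display_h)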
As discussed in greater detail below, the virtual assistant may be displayed to a user of the VR/AR/MR rendering device104to assist the user while the example system100shown inFIG.1performs certain operations, such as the reconstruction of the scene of the incident and reenactment of the incident using 3D representations of vehicles and non-vehicular objects of the incident. As used herein, a 3D representation of an object is a visual image (or part of such an image) presented by a VR/AR/MR rendering device (such as VR/AR/MR rendering device104) to a user in a manner so that the object appears to be three dimensional. FIG.2schematically illustrates components of an example computing device200. Such components may comprise one or more processors such as the VR/AR/MR processor102and/or one or more processors embedded into the VR/AR/MR rendering device104. The example computing device200may comprise any type of device, such as a mobile phone or other mobile computing device (e.g., a tablet computing device), a personal computer such as a desktop computer or laptop computer, a portable navigation device, gaming device, portable media player, television, set-top box, automated teller machine, and so forth. In some examples, the computing device200is a computing device that also performs functions other than functionality used in processing VR/AR/MR data. For example, the computing device200may be part of a centralized computing system of a home or other premise, or the computing device may be part of an enterprise server system of an insurance company. In some examples, the computing device200is a specialized device configured specifically for processing VR/AR/MR data and, in other examples, the computing device200may perform other functions as well. As shown inFIG.2, an example computing device200may include at least one of a processing unit202, a transceiver204(e.g., radio, modem, etc.), a microphone206, a speaker207, power supply unit208, and a network interface210. The processing unit202may include one or more processors212and memory214. The one or more processors212may comprise microprocessors, central processing units, graphics processing units, or other processors usable to execute program instructions to implement the functionality described herein. Additionally, or alternatively, in some examples, some or all of the functions described may be performed in hardware, such as an application-specific integrated circuit (ASIC), a gate array, or other hardware-based logic device. The transceiver204may comprise one or more hardware and/or software implemented radios to provide two-way RF communication with other devices in a network. The transceiver204may additionally or alternatively include a modem or other interface device to provide wired communication from the computing device200to other devices. The microphone206may comprise physical hardware though, in some cases, an audio input interface may instead be provided to interface to an external microphone or other sound receiving device. Similarly, the speaker207may comprise physical hardware though, in some cases, an audio output interface may instead be provided to interface to an external speaker or other sound emitting device. The power supply unit208may provide power to the computing device200. In some instances, the power supply unit208comprises a power connector that couples to an Alternating Current (AC) or Direct Current (DC) mains power line. 
In other instances, such as when the computing device200is a mobile phone or other portable device, the power supply unit208may comprise a battery. The memory214may include an operating system (OS)216and one or more applications218that are executable by the one or more processors212. The OS216may provide functionality to present a display portion of a visual/tactile user interface on a display of the computing device200. The memory214may also include one or more communication stacks220configured to receive, interpret, and/or otherwise communicate with other devices. For example, the communication stacks may implement one or more of a cellular communication protocol, a Wi-Fi communication protocol, or other wireless or wired communication protocols. The communication stack(s)220describes the functionality and rules governing how the computing device200interacts with each of the specified types of networks. The memory214may also store other information. For example, the memory214may store vehicle information, object information, reenactment information, insurance claim information, etc.222. The object information may include, for example, image data of things or features that may be proximate a vehicular incident. The vehicle information may include, for example, image data of vehicles that may be part of an incident. The reenactment information may include, for example, change and movement data of the non-vehicular objects and vehicles that may be proximate a vehicular incident or directly involved in the incident. The various memories described herein (e.g., the memory214) are examples of computer-readable media. Computer-readable media may take the form of volatile memory, such as random-access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash RAM. Computer-readable media devices include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data for execution by one or more processors of a computing device. Examples of computer-readable media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disk read-only memory (CD-ROM), digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to store information for access by a computing device. As defined herein, computer-readable media does not include transitory media, such as modulated data signals and carrier waves, and/or signals. While detailed examples of certain computing devices (e.g., the example computing device200) are described herein, it should be understood that those computing devices may include other components and/or be arranged differently. As noted above, in some instances, a computing device may include one or more processors and memory storing processor-executable instructions to implement the functionalities they are described as performing. 
Certain computing devices may additionally or alternatively include one or more hardware components (e.g., application-specific integrated circuits, field-programmable gate arrays, systems on a chip, and the like) to implement some or all of the functionalities they are described as performing. FIGS.3-17Bdepict example snapshots of a scenario of what might occur after an incident. In this scenario, a user302was the driver of a car306that was just involved in an automobile accident at an intersection. More particularly, a truck rear-ended the insured car306in the intersection. For this scenario, the car306is insured and the user302is the policyholder and/or a covered driver of the car. A policyholder is a label for the person who owns or holds the insurance policy that covers at least one vehicle involved in a subject vehicular incident. This term is used loosely herein and generally refers to any covered driver under the policy. Unless the context indicates otherwise, the policyholder and the user are the same. The snapshots depicted inFIGS.3-17Billustrate examples of what might occur in this example post-incident scenario in accordance with one or more implementations of the technology described herein. Of course, the details of a different scenario will result in differing examples of what occurred, but they would still be in accordance with one or more implementations of the technology described herein. ForFIGS.3-17B, the depicted snapshots include the items that the user302might see through a head-mounted VR/AR/MR rendering device304, and such items are shown with solid lines. In contrast, the user302and VR/AR/MR rendering device304are illustrated with dashed lines, indicating that the user302and VR/AR/MR rendering device304are not within the view of the user302using the VR/AR/MR rendering device304. Instead, the depiction of the user302and VR/AR/MR rendering device304is provided to show the perspective of the view of the environment300. This dashed line depiction for a user and rendering device is used inFIGS.3-17B. FIG.3illustrates an example view of an actual environment300as displayed to the user302wearing the VR/AR/MR rendering device304. The environment300includes the car306parked in a parking lot near the scene of an automobile accident in an intersection. Indeed, the car damage308to the rear of the car is partially seen in the example view of the environment300. The view of the environment300includes a virtual assistant320that is assisting the user302in gathering post-incident information. The VR/AR/MR rendering device304may be configured, for example, like the VR/AR/MR rendering device104. The image seen by the user302may be generated by the VR/AR/MR processor102and displayed on an image output portion of the VR/AR/MR rendering device304. The head-mounted VR/AR/MR rendering device304displays to the user302the actual environment300, such as a parking lot, and/or just a representation of the actual environment shown in the view of the environment300. The VR/AR/MR rendering device304may display the actual environment300(and/or a representation of the actual environment, such as a virtual representation) to the user302in a virtual-reality, mixed-reality, and/or augmented-reality fashion. That is, in one example, the user302may be in the actual environment300wearing the head-mounted VR/AR/MR rendering device304, and the view that the VR/AR/MR rendering device304displays to the user302is an image of the actual environment300.
That is, in the view of the environment300, the VR/AR/MR rendering device304displays the virtual assistant320to the user302. Still, the virtual assistant is not present in the actual environment. For example, referring back toFIG.1, the virtual assistant control system130may communicate via the network126with the VR/AR/MR processor102. The virtual assistant control system130may operate automatically, responsive to human input, or some combination. For example, the virtual assistant control system130may communicate with the virtual assistant renderer118of the VR/AR/MR processor102to cause the VR/AR/MR rendering device304to display the virtual assistant320on an image output portion of the VR/AR/MR rendering device304. The VR/AR/MR processor102may cause the VR/AR/MR rendering device304to display the virtual assistant320as pointing to or otherwise non-verbally indicating the car306. The VR/AR/MR processor102may cause the VR/AR/MR rendering device304to display the car306in a highlighted manner or otherwise emphasized. This may assist the user302to know that the virtual assistant320is pointing to or otherwise non-verbally indicating the car306. In addition to the virtual assistant320pointing to or otherwise non-verbally indicating the car306, the VR/AR/MR rendering device304may cause the virtual assistant320to verbally or non-verbally request the user302to act within the actual environment, such as around the car306. In the view of the environment300, the VR/AR/MR rendering device304is causing the virtual assistant320to make an utterance322, requesting the user302to walk around the car306and look for any damage. The user302may perform the requested action in the actual environment so that, for example, the image input portion of the VR/AR/MR rendering device304obtains imagery that would not otherwise be included, such as a complete view around the car306. FIG.4illustrates a view400of the same environment300, but from a different angle as may be seen by the user302, who is wearing the head-mounted VR/AR/MR rendering device304. As suggested by the virtual assistant320by the utterance322, the user302is walking around the car306to inspect it for damage. As the user moves in the actual environment, the virtual assistant320is depicted to the user302via the VR/AR/MR rendering device304as moving along with the user. That is, the user302can see the 3D representation of the virtual assistant320accompany the user as the user walks around the car306. As the user302walks around the car306and views the car, the image input portion114of the VR/AR/MR rendering device304captures video images of the car306from various angles. In doing so, one or more images of a license plate410of the car306are captured. The VR/AR/MR rendering device304may spatially capture the actual environment to generate a data set that is representative of the actual environment. For example, the VR/AR/MR rendering device304may include an imaging device such as a three-dimensional scanner, and the VR/AR/MR rendering device304may generate a point cloud or other three-dimensional representation that is representative of an actual environment. The VR/AR/MR processor102or an analogous device may receive the one or more images of the license plate410from the VR/AR/MR rendering device304and process the received image to recognize the feature as being a license plate using image recognition software, artificial intelligence software, and/or other types of software and/or hardware.
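As one rough, hypothetical illustration of the kind of geometric test elaborated in the next paragraph, a candidate patch of the captured point cloud can be checked for the flatness, size, and aspect ratio of a license plate. This sketch assumes a pre-segmented patch of points and uses illustrative thresholds; it is not the patent's actual recognition pipeline.

```python
# Minimal sketch: does a roughly planar point-cloud patch look like a plate?
import numpy as np

PLATE_W, PLATE_H = 0.305, 0.152  # a US plate is about 12 in x 6 in, in meters

def looks_like_license_plate(points: np.ndarray, tol: float = 0.2) -> bool:
    """points: (N, 3) array for one candidate surface patch."""
    centered = points - points.mean(axis=0)
    # Principal axes of the patch; the smallest-variance axis is the normal.
    _, _, axes = np.linalg.svd(centered, full_matrices=False)
    extents = centered @ axes.T  # coordinates in the principal frame
    width, height = np.ptp(extents[:, 0]), np.ptp(extents[:, 1])
    thickness = np.ptp(extents[:, 2])
    if thickness > 0.02:  # not flat enough to be a plate face
        return False
    dims = sorted([width, height], reverse=True)
    return (abs(dims[0] - PLATE_W) < PLATE_W * tol
            and abs(dims[1] - PLATE_H) < PLATE_H * tol)
```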
In more detail, a three-dimensional representation of a feature, like the license plate410, in the format of a three-dimensional point cloud may be processed geometrically to determine that the combination and configuration of flat planes versus curved surfaces, size/scale, and color values are likely to compose a certain class of non-vehicular object (e.g., a flat rectangle with a defined aspect ratio on the front or back of a car is likely to be a license plate) and further a certain make and/or model of that class of non-vehicular object (e.g., comparing the license plate against a database of geometry for known types of license plates resulting in identifying the state of the license plate). In some examples, the VR/AR/MR processor102may communicate with external databases via a network, such as communicating with the external data sources128via the network126, to obtain metadata or other information about recognized features. Furthermore, characteristics like material (metal versus wood) may be identified to provide additional metadata about the features, such as identifying that the particular number of the license plate is associated with this type of car for the known policy. The VR/AR/MR processor102may provide an indication of the metadata or other information about the recognized features to the VR/AR/MR rendering device304. In the example shown inFIG.4, the VR/AR/MR processor102recognizes the car306as a car and also recognizes the license plate410. Furthermore, the VR/AR/MR processor102or analogous device may obtain metadata or other information about the car306or license plate410from local information stored within the VR/AR/MR processor102or analogous device or from the external data sources128(e.g., state motor vehicle registry). The VR/AR/MR processor102indicates the feature identifications, metadata, and/or other information to the VR/AR/MR rendering device304. The VR/AR/MR rendering device304may display the feature identifications, metadata, and/or other information to the user302in association with the actual environment and/or a representation of the actual environment. FIG.5illustrates a view500of the same environment300shown in view400, but from a different angle as may be seen by the user302who is wearing the head-mounted VR/AR/MR rendering device304. In his walk around the car306, the user302arrives at the back of the car and can better see the damage308. An internal database124may store, for example, a point cloud representation of the make and model of an undamaged version of the car306, which is covered by the insurance policy of the user. In addition, other metadata about the car306may also have been collected and/or otherwise determined and stored in the internal database124. A shaded projection508is shown to the user302via the VR/AR/MR rendering device304. The shaded projection508represents the difference or delta between the damage308to that area of the car306and the undamaged version of the car (as derived from the internal database124). In response to a recognition of possible damage to the car306at the damage308, the VR/AR/MR rendering device304triggers the shaded projection508of the damage, causes the virtual assistant320to address the user302with utterance510, and generates annotations502and504. This interaction is intended to confirm that the detected damage was a result of this incident that just occurred and not some past or old damage. The utterances described herein may also confirm the user's input.
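One simple way the delta behind the shaded projection508might be computed is by flagging scanned points that sit far from the undamaged reference model. The sketch below is an assumption for illustration, not the patent's algorithm; it presumes the scan and the reference point cloud have already been aligned (e.g., by a registration step such as ICP, not shown).

```python
# Hedged sketch: flag scan points far from the aligned undamaged reference.
import numpy as np
from scipy.spatial import cKDTree

def damaged_points(scan: np.ndarray, reference: np.ndarray,
                   threshold: float = 0.01) -> np.ndarray:
    """Return the subset of scan points (N, 3) farther than `threshold`
    meters from the undamaged reference model (M, 3)."""
    tree = cKDTree(reference)
    distances, _ = tree.query(scan)  # nearest reference point per scan point
    return scan[distances > threshold]
```

The returned points could then be rendered as a shaded overlay such as the projection508, highlighting the region where the scanned body deviates from the stored model.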
The VR/AR/MR rendering device304may indicate the utterances to the VR/AR/MR processor102, for example. In some examples, the user may make a gesture or other indication in addition to or in place of the utterance, and the VR/AR/MR rendering device304may indicate the gesture or other indication to the VR/AR/MR processor102, for example. The VR/AR/MR processor102may utilize one or more received indications to populate entries in a database, such as to populate and/or modify entries in the internal database124. The virtual assistant320indicates (e.g., by pointing towards) the shaded projection508of the detected damage and asks whether the indicated damage is new. The annotations502and504are options that are projected in front of the user302wearing the head-mounted VR/AR/MR rendering device304. The VR/AR/MR processor102may, for example, generate data for the annotations502and504and provide the data to the VR/AR/MR rendering device304for a display to the user302. The VR/AR/MR rendering device304displays the annotations to assist the user in responding to the virtual assistant's query. If this damage is indeed new, the user302may select the YES option of the annotation502. Otherwise, the user302may select the NO option of the annotation504. The user can make the selection hands-free by speaking the desired option, and voice-recognition techniques will interpret the selection accordingly. The user302can also gaze at their choice or "touch" their choice. Those experienced with VR/AR/MR are familiar with these selection options available in such technology. FIG.6illustrates a view600of a different environment than shown in views400and500. Indeed, this is a view of an actual scene620of the incident. For this example, this scene is an intersection between two roads. Thus, the user302is viewing view600of the actual scene of the incident via the VR/AR/MR rendering device304. As used herein, the scene of the vehicular incident includes the vehicles and non-vehicular objects that are proximate the incident. The bounds of the scene are largely based on the choices that the user makes in reconstructing the scene. The VR/AR/MR rendering device304projects the virtual assistant320into the view600of the actual scene620of the incident, which is presumably an intersection near the parking lot of environment300in this example. The virtual assistant320may indicate (e.g., by pointing towards) scene620and ask the user302, via utterance610, to capture the scene of the incident. In response, the user302may look all around scene620to record the scene from various angles. The VR/AR/MR rendering device304records the images of the capture of scene620. This incident-scene capture is stored in the internal database124and is associated with the report or record of the incident. The user302may be asked to recount the incident while capturing scene620of that incident. If so, the audio of that recounting is captured by the VR/AR/MR rendering device304. In addition to the images and audio, the VR/AR/MR rendering device304may acquire location information (e.g., from a global positioning system (GPS) receiver) to identify the location of the incident. This location information may be stored in the internal database124and be used to acquire roadmap or satellite map data of the location. In addition, this location information may be used later to reconstruct the scene620via 3D representations. FIG.7illustrates a view700of a different environment than shown in the previous views. Presumably, this is the view700within a room720of the home of the user302.
Room720is the setting forFIGS.7-17B, which will illustrate the user302using the technology described herein to facilitate vehicular incident reenactment using 3D representations. However, the incident reenactment may occur at a location convenient to the user302. Indeed, it may occur immediately after the incident once the user is in a safe location (as depicted inFIGS.3-6). The VR/AR/MR rendering device304projects the virtual assistant320into view700of room720. The virtual assistant320may suggest that the user make a recreation of the incident via utterance710. FIG.8illustrates a view800of the same environment shown in view700as may be seen by the user302, who is wearing the VR/AR/MR rendering device304. The VR/AR/MR rendering device304projects the virtual assistant320and annotations802,804, and806into this view800. As depicted, the virtual assistant320indicates the annotations and asks the user302, via utterance810, to select a road piece to configure the relevant sections of the road to start the reconstruction of the incident. Annotations802,804, and806appear as floating 3D representations of pieces of roads in front of the user302. Annotation802is a straight section of the road. Annotation804is a ninety-degree turn road section. Annotation806is a curved road section. Of course, these particular options are provided for illustration purposes. Other implementations may offer more or fewer options and/or utterly different road section options. FIG.9illustrates a view900of the same environment shown in views700and800, but from a different viewing angle, as may be seen by the user302who is wearing the VR/AR/MR rendering device304. The user302is sitting on a sofa in the same room as views700and800. The VR/AR/MR rendering device304projects the annotations802,804, and806in front of the user302. The user302may select annotation802to start the reconstruction of the incident using that road piece. As depicted, the user may "touch" or "grab" annotation802. Since the annotation802does not exist in reality, the touching or grabbing is virtual and based on known VR/AR/MR techniques of tracking the location of the user's hand and the annotation in 3D rendered space. In other instances, the user may use voice commands to select the appropriate annotation. FIG.10Aillustrates a view1000of the same environment shown in view900, as may be seen by the user302. In this view1000, the VR/AR/MR rendering device304projects the annotations802,804, and806, as well as a selected road piece1002, in front of the user302. While sitting on the sofa, the user302is virtually holding the selected road piece1002and moving that piece into a forthcoming recreation of the scene of the incident. As soon as the user302selects and moves the road piece1002away from its origin point at annotation802, the VR/AR/MR rendering device304restores the annotation802in the view1000of the user so that the user can select another road piece of that type again. FIG.10Billustrates a view1020of the same environment shown in view1000, as may be seen by the user302. This view1020illustrates actions that may take place after the events shown in view1000. In particular, this view1020shows the early stage of an image of the 3D incident reenactment1630of the scene of the incident. The 3D incident reenactment1630of the scene of the incident is a projection of 3D representations of the static objects that remain unchanged and unmoved relative to each other over a duration (e.g., timespan) of the 3D reenactment of the vehicular incident.
Typically, non-vehicular objects are static objects. In some instances, non-vehicular objects may change or move relative to others during an incident reenactment. For example, a traffic light may change or a telephone pole may fall when a vehicle collides with it. The scene being recreated may be the actual scene620of the incident. While not shown, in some implementations, information about scene620may be projected for the user's reference while the scene is being reconstructed. Indeed, in some instances, a 3D representation or 2D map of the location may be projected to the user302to aid the user with the reconstruction. The user302may use this projection as a virtual base, foundation, or scaffolding upon which he may base the reconstruction. The 3D representation or 2D map may be derived from the on-scene capture by the user, roadmaps, and/or satellite or aerial images of the location of scene620of the incident. The scene-reconstruction1010assembled thus far includes 3D representations of road pieces1004and1006connected and intersecting at approximately ninety degrees. In view1020, the VR/AR/MR rendering device304projects the selected road piece1002as the user manipulates it towards the scene-reconstruction1010. The manipulation is a virtual movement of the selected road piece. FIG.10Cillustrates a view1030of the same environment shown in views1000and1020, as may be seen by the user302. This view1030illustrates actions that may take place after the events shown in views1000and1020. In particular, this view1030shows the next stage of the image of the 3D scene-reconstruction1010of the scene of the incident. In particular, the view1030shows the user302virtually attaching the 3D representation of the selected road piece1002to the 3D scene-reconstruction1010. In the view1030, the VR/AR/MR rendering device304projects the user virtually attaching the selected road piece1002to the existing stitched together road pieces of the scene-reconstruction1010. In some instances, the VR/AR/MR rendering device304may predict where the user may or could manipulate (e.g., position or attach) the selected annotation to the existing scene-reconstruction1010. If so, the VR/AR/MR rendering device304projects a highlighted annotation at the predicted location for positioning or attachment. This highlighting indicates a suggestion to the user302of where he might choose to place the selected annotation that is virtually in his hand (a rough sketch of such placement prediction follows the menu description below). FIG.11illustrates a view1100of the same environment shown in views700and800, as may be seen by the user302who is wearing the VR/AR/MR rendering device304. The VR/AR/MR rendering device304projects the virtual assistant320and a menu1130of annotations into this view1100. As depicted, the virtual assistant320guides the user302, via utterance1140, to select a landmark or feature from the menu1130to continue the scene-reconstruction of the incident. As depicted, the menu1130includes several annotations that each appear as floating 3D representations of landmarks or other features in front of the user302. Annotation1102represents an option for traffic lights. Annotation1104represents an option for construction equipment or signals. Annotation1106represents an option for road features, such as streetlight posts. Annotation1108represents an option for vegetation. Annotation1110represents an option for buildings, such as a corner drug store. Annotation1112represents an option for other categories or non-categorized landmarks and features.
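Returning to the placement prediction described in connection withFIG.10C, the following is a rough, hypothetical sketch of how such snapping might work: each placed road piece exposes open "connector" points, and while the user drags a new piece, the nearest open connector within a snap radius is highlighted as the suggested attachment location. The radius and data layout are illustrative assumptions.

```python
# Hypothetical snap-to-connector sketch for the scene-reconstruction 1010.
import numpy as np

SNAP_RADIUS = 0.15  # meters in the rendered scene (illustrative value)

def suggest_attachment(drag_pos, open_connectors):
    """drag_pos: (3,) position of the piece being dragged.
    open_connectors: list of (3,) positions of unoccupied connectors.
    Returns the connector to highlight, or None if none is close enough."""
    if not open_connectors:
        return None
    pts = np.asarray(open_connectors)
    dists = np.linalg.norm(pts - np.asarray(drag_pos), axis=1)
    best = int(np.argmin(dists))
    return open_connectors[best] if dists[best] < SNAP_RADIUS else None
```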
Of course, the particular options of menu1130are provided for illustration purposes. Other implementations may offer more or fewer options and/or utterly different landmark options. As depicted, the user302selects the annotation1108, which may trigger a drop-down listing1116of sub-options. Highlight1114around annotation1108indicates that it has been selected for the drop-down listing1116, which includes trees1118, bushes1120, and rocks1122. The user302can select the appropriate sub-object from the drop-down list1116. FIG.12illustrates a view1200of the same environment shown in views900and1000, as may be seen by the user302. In this view1200, the VR/AR/MR rendering device304projects the present stage of the image of the 3D scene reconstruction1010of the scene of the incident. At this stage, the user302has already manipulated the 3D scene reconstruction1010of the scene of the incident. The user302has, for example, added several road pieces, features, and landmarks to the 3D scene reconstruction1010of the scene of the incident. For example, the user302added the 3D representations of a traffic light1212and building1214and placed them in the appropriate relative location of the intersection of the scene of the incident. Indeed, as shown, the user302is manipulating1218(e.g., virtually placing) a selected building1216into the appropriate relative location at the intersection of the scene of the incident. As depicted, the virtual assistant320asks the user302, via utterance1210, to indicate how many vehicles were involved in the incident. Annotations1202,1204, and1206appear as floating 3D representations of numbers in front of the user302. Annotation1202is the number one, annotation1204is the number two, and annotation1206indicates three or more. Other implementations may offer more or fewer options and/or utterly different options. The user302may select the appropriate answer by, for example, touching the correct annotation. For illustration purposes, the user302says aloud the number two. That is, two vehicles were involved in the incident. FIG.13illustrates a view1300of the same environment shown in views700,800, and1100as may be seen by the user302who is wearing the VR/AR/MR rendering device304. The VR/AR/MR rendering device304projects the virtual assistant320and a menu1320of annotations into this view1300. The menu1320is a listing of the policyholder's vehicles, depicted as 3D representations. As depicted, the virtual assistant320guides the user302, via utterance1310, to select from the menu1320which of the policyholder's vehicles was involved in the incident to continue the scene-reconstruction of the incident. As depicted, the menu1320includes several annotations that each appear as floating 3D representations of the policyholder's vehicles in front of the user302. Annotation1302represents an option for "Your Automaker ModelABC," which is presumably the make and model of one of the policyholder's cars. Annotation1304represents an option for "Your Roadtrip Motorcycle," which is presumably the make of a motorcycle owned by the policyholder. Annotation1306represents an option for other vehicles. The information and the 3D representations of the annotations of the menu1320may be found in the internal database124. More particularly, the insurance company knows the details of the vehicles covered by the policy of the policyholder. Thus, it generates the specific information that identifies the policyholder's vehicles and their 3D representations based on the known details.
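A minimal sketch of how the menu1320might be generated from the policy records is shown below. The database query method, field names, and labels are invented for illustration; the patent does not define this interface.

```python
# Hypothetical sketch: build menu 1320 entries from the internal database 124.
def build_vehicle_menu(policy_id, internal_db):
    """Return (label, model_3d_id) entries for each vehicle on the policy."""
    entries = []
    for vehicle in internal_db.covered_vehicles(policy_id):  # assumed query
        label = f"Your {vehicle.make} {vehicle.model}"
        entries.append((label, vehicle.model_3d_id))
    entries.append(("Other...", None))  # catch-all option like annotation1306
    return entries
```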
While it is not shown here, the user302picks annotation1302, which is the option for the car306that he was driving during the incident. FIG.14illustrates a view1400of the same environment shown in views700,800,1100, and1300as may be seen by the user302who is wearing the VR/AR/MR rendering device304. The VR/AR/MR rendering device304projects the virtual assistant320and a menu1420of annotations into this view1400. The menu1420is a listing of categories of vehicles, depicted as 3D representations. As depicted, the virtual assistant320guides the user302, via utterance1410, to select from the menu1420which type of vehicle was the other vehicle involved in the incident. The selected vehicle can be used to continue the scene-reconstruction of the incident. As depicted, the menu1420includes several annotations that each appear as floating 3D representations of various vehicle types in front of the user302. Annotation1402represents an option for a sedan. Annotation1404represents an option for a sports utility vehicle (SUV). Annotation1406represents an option for a motorcycle. Annotation1408represents an option for other vehicle types. Each of these annotations may offer a drop-down listing (like the drop-down listing1116) of sub-options to identify the vehicle with particularity. The information and the 3D representations of the annotations of the menu1420may be found in the internal database124or external data sources128. While it is not shown here, the user302picks a particular sedan from a drop-down listing of annotation1402. FIGS.15,16A,16B,17A, and17Billustrate a view1500of the same environment shown in views1000,1020,1030, and1200, as may be seen by the user302. This view1500illustrates actions that may take place after the events shown in views1000,1020,1030,1100,1200,1300, and1400. In particular, this view1500shows the image of a 3D incident reenactment1630of the scene of the incident after the selection of vehicles as described above with regard toFIGS.13and14. The 3D incident reenactment1630is based on the 3D scene reconstruction1010of the scene of the incident. However, the 3D incident reenactment1630is a depiction of the incident over a duration. Thus, the 3D incident reenactment1630is the projection of 3D representations of the static objects and dynamic objects. Over the duration of the 3D incident reenactment, the dynamic objects change or move relative to the static objects and/or other dynamic objects. Typically, vehicles are dynamic objects. In some instances, a vehicle may remain unchanged or unmoved relative to other objects during an incident reenactment. As depicted inFIG.15, the virtual assistant320asks the user302, via utterance1510, to manipulate (e.g., position) the vehicles where they were at the start of the incident. In response, the user302places selected vehicles into the 3D incident reenactment1630. For example, the user302manipulates1506(e.g., virtually places) the 3D representations of an insured vehicle1502and the other vehicle1504near the building1214. More particularly, the user302virtually places1506the vehicles in the appropriate relative locations as they approach the scene of the incident, which is near the intersection. As depicted inFIG.16A, the virtual assistant320asks the user302, via utterance1610, to drag the vehicles across the 3D incident reenactment1630to show the relative movements and locations of the vehicles during the incident.
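One plausible way to capture these drag movements (and, as described shortly, the user's touches on the traffic light) is as a timestamped event log that can later be replayed. The class below is a hedged sketch under that assumption; the names and event format are illustrative, not the patent's.

```python
# Hypothetical recorder for the reenactment: timestamped pose samples and
# state-change events (e.g., a traffic light turning green).
import time

class ReenactmentRecorder:
    def __init__(self):
        self.events = []  # (timestamp, object_id, kind, payload)
        self.start = time.monotonic()

    def record_pose(self, object_id, position, heading):
        """Called each frame while the user drags a vehicle."""
        t = time.monotonic() - self.start
        self.events.append((t, object_id, "pose", (position, heading)))

    def record_state_change(self, object_id, new_state):
        """Called when the user virtually touches, e.g., traffic light 1212."""
        t = time.monotonic() - self.start
        self.events.append((t, object_id, "state", new_state))

    def replay(self):
        """Yield events in time order for playback of the reenactment."""
        yield from sorted(self.events, key=lambda e: e[0])
```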
In other words, the user302is asked to animate the incident using the 3D representations of the selected vehicles in the 3D representation of the reconstructed scene. In some instances, the virtual assistant320may prompt the user302to move the 3D representation of an insured vehicle1502first. Furthermore, the virtual assistant320tells the user302, via utterance1610, that his movements of the vehicles within the 3D incident reenactment1630will be recorded. The prompting may take the form of the VR/AR/MR rendering device304providing a message (e.g., utterance1610) that requests input from the user. That requested input includes manipulation of the 3D representations of at least one non-vehicular object (e.g., traffic light1212) and at least one vehicle (e.g., vehicle1502), via the VR/AR/MR rendering device304, as a reenactment of the vehicular incident. In response to this prompting, as shown inFIGS.16A and16B, the user302moves the selected vehicles in the 3D incident reenactment1630. For example, as shown inFIG.16A, the user302manipulates (e.g., virtually moves) the 3D representations of the insured vehicle1502and the other vehicle1504on the road in front of the building1214. As shown inFIG.16B, the user302has manipulated the 3D representation of the insured vehicle1502along the road to the crosswalk of the intersection. As depicted inFIG.16B, the virtual assistant320tells the user302, via utterance1620, to move the 3D representation of the other vehicle1504next. As depicted inFIG.17A, the virtual assistant320tells the user302, via utterance1710, to touch traffic lights while he is moving the vehicles to indicate when the traffic lights changed relative to the location of the vehicles. In response to these promptings, as shown inFIG.17A, the user302starts to manipulate1506(e.g., virtually move) the 3D representation of the other vehicle1504on the road in front of the building1214. While manipulating1506the vehicle1504, the user302reaches out with his other hand and virtually touches1702the traffic light1212facing the vehicles to indicate a light change. The particular light (e.g., red, yellow, green, etc.) can be specified by one or more various factors, including the specific location of the virtual touch1702on the traffic light1212, duration of the virtual touch, and/or a spoken command. In other instances, just a spoken command may specify the type and timing of the traffic light. As depicted inFIG.17B, the virtual assistant320tells the user302, via utterance1720, to stop moving or touching virtual items when he is done. In response to this prompting, as shown inFIG.17B, the user302manipulates1506(e.g., virtually moves) the 3D representation of the other vehicle1504to intersect (i.e., rear-end) the insured vehicle1502just when he touches the traffic light1212to turn green. FIG.18illustrates a user1802wearing a head-mounted VR/AR/MR rendering device1804. As shown inFIG.18, methods of user input and interaction that are enabled by the VR/AR/MR technology assist in the process of facilitating incident reenactment using 3D representations. The VR/AR/MR rendering device1804may be similar to the VR/AR/MR rendering device304or to the VR/AR/MR rendering device104. The VR/AR/MR rendering device1804includes functionality to generate pupil data to determine the sight direction1806of the user1802.
The sight direction determination functionality may include, for example, an imaging device within the VR/AR/MR rendering device1804that may capture an image of the eyes of the user1802and a processor to process the captured image to determine a location of the pupils of one or both eyes. From the locations of the pupils, the sight direction1806may be determined. In some examples, the VR/AR/MR rendering device1804determines the sight direction and, in other examples, the VR/AR/MR rendering device1804provides the captured image to the VR/AR/MR processor102to perform the sight direction determination. In the example shown inFIG.18, the user's sight direction1806is toward an item1808. The VR/AR/MR rendering device1804may display the item1808to the user1802in a highlighted manner or otherwise indicate the item1808to the user1802. Herein, an example item may be a vehicle, landmark, or feature that was involved or may have been involved in the incident. The user1802may also provide information relevant to the item1808or ask questions about the item, such as by providing information1810about the item1808. Other means for the user1802to provide the information about the item1808may be provided. For example, the VR/AR/MR rendering device1804may display a virtual keyboard1812to the user1802, and the VR/AR/MR rendering device1804may recognize the movement by the user1802that indicates the user providing the information1810to the VR/AR/MR rendering device1804via the virtual keyboard, such as typing information1814about the item. The VR/AR/MR rendering device1804may provide the information1814about the item1808and/or the information1810to the VR/AR/MR processor102. In some examples, there may be several items in the user's sight direction1806, and the VR/AR/MR rendering device1804may interact with the user1802about each of the items in turn or, for example, allow the user1802to select an item about which to interact. The VR/AR/MR processor102may utilize the information1810to populate entries in a database, such as to populate and/or modify entries in the internal database124. In some instances, this information1810may be used to describe a vehicle or landmark at the scene of the incident, such as scene620. FIG.19AandFIG.19Btogether illustrate a user1902wearing a head-mounted VR/AR/MR rendering device1904. The VR/AR/MR rendering device1904may be similar, for example, to the VR/AR/MR rendering device304or to the VR/AR/MR rendering device104. The VR/AR/MR rendering device1904may include functionality to detect a gesture1906made by the user1902relative to an item1908in the environment of the user1902. Herein, an example item may be a vehicle, landmark, or feature that was involved or may have been involved in the incident. Using the gesture1906and/or other gestures, the user1902may interact with the virtual environment in ways that assist in the process of facilitating incident reenactment using 3D representations. In some examples, one or more images including the gesture are provided to the VR/AR/MR processor102, which has the functionality to detect the gesture1906. In the example shown inFIG.19AandFIG.19B, the user1902makes a gesture1906by framing the item1908in her view with her fingers1910. This is just an example, and other methods of gesturing are possible, such as pointing or waving. The VR/AR/MR rendering device1904may display the item1908to the user1902in a highlighted manner or otherwise show that the user1902has indicated the item1908with the gesture1906.
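Referring back to the sight-direction functionality ofFIG.18, the following is a rough sketch of one way pupil offsets could be mapped to a gaze vector and used to pick the indicated item (whether indicated by gaze, like item1808, or confirmed against a gesture, like item1908). The field-of-view mapping, angle threshold, and item representation are assumptions for illustration only.

```python
# Hypothetical gaze-to-item selection sketch.
import numpy as np

def sight_direction(pupil_offset_x, pupil_offset_y, fov_deg=40.0):
    """Map normalized pupil offsets in [-1, 1] to a unit gaze vector."""
    yaw = np.radians(pupil_offset_x * fov_deg / 2)
    pitch = np.radians(pupil_offset_y * fov_deg / 2)
    return np.array([np.sin(yaw) * np.cos(pitch),
                     np.sin(pitch),
                     np.cos(yaw) * np.cos(pitch)])

def pick_item(eye_pos, gaze_dir, items, max_angle_deg=5.0):
    """Return the id of the item whose direction best matches the gaze ray.
    items: dict mapping item_id -> (x, y, z) position."""
    best, best_angle = None, np.radians(max_angle_deg)
    for item_id, item_pos in items.items():
        to_item = np.asarray(item_pos) - np.asarray(eye_pos)
        to_item /= np.linalg.norm(to_item)
        angle = np.arccos(np.clip(np.dot(gaze_dir, to_item), -1.0, 1.0))
        if angle < best_angle:
            best, best_angle = item_id, angle
    return best
```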
With the item1908being indicated, the user1902may provide information relevant to the item1908, such as by making an utterance1912about the item1908that includes the information, or otherwise providing the information. The VR/AR/MR rendering device1904may provide the information to the VR/AR/MR processor102. The VR/AR/MR processor102may utilize the information to populate entries in a database, such as to populate and/or modify entries in the internal database124. In some instances, this information may be used to describe a vehicle or landmark at the scene of the incident, such as scene620. FIG.20illustrates a view in which a user2002is wearing a VR/AR/MR rendering device2004to facilitate submitting an insurance claim. The VR/AR/MR rendering device2004may be similar to the VR/AR/MR rendering device104. The VR/AR/MR rendering device2004displays to the user2002an image that includes a summary listing2006of the incident for which an insurance claim may be submitted. The VR/AR/MR rendering device2004provides a query2008, verbally or visually, as to whether the user2002would like to submit an insurance claim that includes the incident referenced by the summary2006. The query2008may include additional information, such as the identity of the insured vehicle. The user2002may make an utterance2010or otherwise indicate, such as with a gesture, that the insurance claim should be submitted. For example, the VR/AR/MR rendering device2004may indicate the utterance2010to the VR/AR/MR processor102. The VR/AR/MR processor102may provide the information of the insurance claim, such as the reenactment of the incident, via the network126to the internal database124. FIG.21is a flowchart illustrating an example process2100to provide an immersive environment for a user to facilitate incident reenactment using 3D representations. For ease of illustration, the process2100may be described as being performed by a device described herein, such as one or more processors of a VR/AR/MR rendering device. However, the process2100may be performed by other devices. Moreover, the devices may be used to perform other processes. The process2100(as well as each process described herein) is illustrated as a logical flow graph, each operation of which represents a sequence of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the operations represent computer-readable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-readable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. In the context of hardware, the operations may be implemented (e.g., performed) in whole or in part by hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the process. Further, any number of the described operations may be omitted.
At2102, one or more processors present, on a display of an electronic device, an image including a 3D representation of at least one vehicle involved in a vehicular incident. For example, one or more processors of the AR/VR/MR processor102may present an image on the AR/VR/MR rendering device104. That image includes a 3D representation of at least one vehicle, such as vehicle1502, that was involved in a vehicular incident, such as the one that occurred at the actual scene620of the incident, which is presumably an intersection near the parking lot of environment300. FIGS.13-17Band their associated descriptions illustrate examples of presentations that may be the result of operation2102of this process. They depict and describe the presentation of images that include 3D representations of one or more vehicles. Vehicles1302,1402,1502, and1504are examples of vehicles that may have been involved in a vehicular incident that have their 3D representations presented by operation2102. At2104, one or more processors present, on the display of the electronic device, the image that also includes 3D representations of at least one non-vehicular object proximate the vehicular incident. The image includes a 3D representation of at least one non-vehicular object, such as traffic light1212, that was part of the scene of the vehicular incident, such as the one that occurred at the actual scene620of the incident, which is presumably an intersection near the parking lot of environment300. FIGS.8-12and15-17Band their associated descriptions illustrate examples of presentations that may be the result of operation2104of this process. They depict and describe the presentation of images that include 3D representations of one or more non-vehicular objects. The non-vehicular objects may be, for example, things or features that may be part of the scene of an incident or nearby the scene of such an incident. For example, a non-vehicular object may be a road, a sidewalk, a traffic light, traffic sign, building, parking lot, railroad track, person, pole, advertisement sign, lane marker, intersection, vegetation, construction materials, construction equipment, walls, landmarks, and the like. Road pieces802,804,806,1002,1004,1006, traffic light1102, construction equipment1104, road features1106, vegetation1108, building1110, other1112, tree1118, bush1120, rock1122, traffic light1212, and buildings1214,1216are examples of non-vehicular objects that may have been part of the scene of a vehicular incident that have their 3D representations presented by operation2104. At2106, the one or more processors prompt the user to provide input to manipulate the 3D representations so as to reenact the vehicular incident. This operation may be described as providing a message that requests that the user of the electronic device provide manipulation input. That manipulation input is data representing the user's manipulation of or changes to the 3D representations of the at least one non-vehicular object and/or the at least one vehicle, via the display of the electronic device. These manipulations or changes are done to produce a reenactment of the vehicular incident. For example, one or more processors of the AR/VR/MR processor102may present an image on the AR/VR/MR rendering device104. The presented image may be based, for example, on a point cloud of data.
The image may include a 3D representation of a virtual assistant providing guidance and/or prompting the user to take some action that facilitates incident reenactment using 3D representations. Thus, the virtual assistant is configured to interact with the user to provide the message of operation2106. FIGS.3-20and their associated descriptions illustrate examples of promptings or messages that may be the result of operation2106of this process. The utterances, menus, and annotations are examples of such promptings. More particularly, examples of such include utterances322,510,610,710,810,1140,1210,1310,1410,1510,1610,1620,1710,1720, and2008, annotation menus1130,1320, and1420, and annotations502,504,802,804,806,1002,1004,1006,1102-1112,1118,1120,1122,1202-1206,1212-1216,1302-1306,1402-1408, and1502-1504. In some instances, the presenting operation2104includes at least a visual indication of the at least one vehicle by the virtual assistant. For example,FIG.13and its accompanying description show the virtual assistant320visually indicating1330the vehicles1302and1304. With many of the examples described herein, the prompting (e.g., messages) includes the projection of an image of an utterance from the virtual assistant320that may be combined with the projection of manipulatable and/or selectable annotations. In some instances, the prompting (e.g., messages) may include audio (e.g., verbal instructions or questions) alone or in combination with utterances and/or annotations. At2108, the one or more processors receive an input from the user of the electronic device, based at least in part on the message of operation2106. The input is received via the electronic device, such as the VR/AR/MR rendering device104. As used herein, an input may, for example, include just one input. The inputs may be, for example, based upon the user making a gesture (e.g., virtually touching an annotation) and/or speaking a response to the prompt (e.g., message). The input may include, for example, the user302virtually touching vegetation annotation1108and then virtually touching tree annotation1118to select a tree as a non-vehicular object. As depicted inFIGS.8-10Cand their accompanying descriptions, the user302may provide an input (e.g., a first input) to manipulate one or more non-vehicular objects, such as road pieces (e.g., road pieces802,804,806,1002,1004, and1006). The manipulation includes selecting the road pieces by virtually touching one or more of them, virtually grabbing one or more of them, and virtually placing one or more of the road pieces into the 3D scene reconstruction1010of the scene of the incident. Since the road pieces will not move or change during the 3D incident reenactment1630, they are static objects. As depicted inFIGS.11-13and their accompanying descriptions, the user302may provide another input (i.e., a second input and/or third input) to manipulate one or more non-vehicular objects, such as traffic light1102, construction equipment1104, road features1106, vegetation1108, building1110, other1112, tree1118, bush1120, and rock1122. The manipulation includes selecting the objects by virtually touching one or more of them, virtually grabbing one or more of them, and virtually placing one or more of the objects into the 3D scene reconstruction1010of the scene of the incident. Since these non-vehicular objects will not move or change during the 3D incident reenactment1630, they are static objects.
As depicted inFIGS.14-17Band their accompanying descriptions, the user302may provide another input (e.g., a third and/or fourth input) to manipulate one or more vehicles, such as the vehicles1502and1504. The manipulation includes selecting the vehicles by virtually touching one or more of them, virtually grabbing one or more of them, virtually placing the vehicles1502and1504into the 3D scene reconstruction1010of the scene of the incident, and virtually moving the vehicles1502and1504within the 3D incident reenactment1630. Since these vehicles are moving and/or changing during the 3D incident reenactment1630, they are dynamic objects. As used herein, an input may include a plurality of related inputs. As used herein, inputs may be described as first, second, third, fourth, and so on in order to distinguish one input (or group of inputs) from another. This designation is only intended to distinguish one input from another. The label does not indicate priority, hierarchy, ranking, or differences in quality of the input. At2110, the one or more processors generate a 3D reenactment of the vehicular incident that includes the 3D representations of the at least one vehicle and/or at least one non-vehicular object. The generation of the 3D reenactment is based, at least in part, on the input. The generated 3D reenactment may be presented to the user via the display of the electronic device. For example, one or more processors of the AR/VR/MR processor102may generate and/or present an image of the generated 3D incident reenactment1630on the AR/VR/MR rendering device104. For example, if the user302moves the vehicles1502and1504in the manner depicted inFIGS.16-17Band described in their accompanying descriptions, the generated 3D incident reenactment1630may show the vehicle1504rear-ending the insured vehicle1502on the roadway near the intersection. With the techniques described herein, an inventory of objects in an environment may be more easily and accurately created, such as for use in documenting an insurance claim. Furthermore, changes to objects in an environment may be more accurately determined, which may, for example, assist policyholders in preparing and/or documenting an insurance claim after an incident. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claims.
Like reference numbers and designations in the various drawings indicate like elements. DETAILED DESCRIPTION Example Mobile Device FIG.1Ais a block diagram of an example mobile device100. The mobile device100can be, for example, a handheld computer, a personal digital assistant, a cellular telephone, a network appliance, a camera, a smart phone, an enhanced general packet radio service (EGPRS) mobile phone, a network base station, a media player, a navigation device, an email device, a game console, or a combination of any two or more of these data processing devices or other data processing devices. Mobile Device Overview In some implementations, the mobile device100includes a touch-sensitive display102. The touch-sensitive display102can implement liquid crystal display (LCD) technology, light emitting polymer display (LPD) technology, or some other display technology. The touch-sensitive display102can be sensitive to haptic and/or tactile contact with a user. In some implementations, the touch-sensitive display102can comprise a multi-touch-sensitive display102. A multi-touch-sensitive display102can, for example, process multiple simultaneous touch points, including processing data related to the pressure, degree, and/or position of each touch point. Such processing facilitates gestures and interactions with multiple fingers, chording, and other interactions. Other touch-sensitive display technologies can also be used, e.g., a display in which contact is made using a stylus or other pointing device. Some examples of multi-touch-sensitive display technology are described in U.S. Pat. Nos. 6,323,846, 6,570,557, 6,677,932, and 6,888,536, each of which is incorporated by reference herein in its entirety. In some implementations, the mobile device100can display one or more graphical user interfaces on the touch-sensitive display102for providing the user access to various system objects and for conveying information to the user. In some implementations, the graphical user interface can include one or more display objects104,106. In the example shown, the display objects104,106are graphic representations of system objects. Some examples of system objects include device functions, applications, windows, files, alerts, events, or other identifiable system objects. Example Mobile Device Functionality In some implementations, the mobile device100can implement multiple device functionalities, such as a telephony device, as indicated by a phone object110; an e-mail device, as indicated by the e-mail object112; a network data communication device, as indicated by the Web object114; a Wi-Fi base station device (not shown); and a media processing device, as indicated by the media player object116. In some implementations, particular display objects104, e.g., the phone object110, the e-mail object112, the Web object114, and the media player object116, can be displayed in a menu bar118. In some implementations, device functionalities can be accessed from a top-level graphical user interface, such as the graphical user interface illustrated inFIG.1A. Touching one of the objects110,112,114, or116can, for example, invoke corresponding functionality. In some implementations, the mobile device100can implement network distribution functionality. For example, the functionality can enable the user to take the mobile device100and provide access to its associated network while traveling. In particular, the mobile device100can extend Internet access (e.g., Wi-Fi) to other wireless devices in the vicinity.
For example, mobile device100can be configured as a base station for one or more devices. As such, mobile device100can grant or deny network access to other wireless devices. In some implementations, upon invocation of device functionality, the graphical user interface of the mobile device100changes, or is augmented or replaced with another user interface or user interface elements, to facilitate user access to particular functions associated with the corresponding device functionality. For example, in response to a user touching the phone object110, the graphical user interface of the touch-sensitive display102may present display objects related to various phone functions; likewise, touching of the email object112may cause the graphical user interface to present display objects related to various e-mail functions; touching the Web object114may cause the graphical user interface to present display objects related to various Web-surfing functions; and touching the media player object116may cause the graphical user interface to present display objects related to various media processing functions. In some implementations, the top-level graphical user interface environment or state ofFIG.1Acan be restored by pressing a button120located near the bottom of the mobile device100. In some implementations, each corresponding device functionality may have corresponding “home” display objects displayed on the touch-sensitive display102, and the graphical user interface environment ofFIG.1Acan be restored by pressing the “home” display object. In some implementations, the top-level graphical user interface can include additional display objects106, such as a short messaging service (SMS) object130, a calendar object132, a photos object134, a camera object136, a calculator object138, a stocks object140, a weather object142, a maps object144, a notes object146, a clock object148, an address book object150, and a settings object152. Touching the SMS display object130can, for example, invoke an SMS messaging environment and supporting functionality; likewise, each selection of a display object132,134,136,138,140,142,144,146,148,150, and152can invoke a corresponding object environment and functionality. Additional and/or different display objects can also be displayed in the graphical user interface ofFIG.1A. For example, if the device100is functioning as a base station for other devices, one or more “connection” objects may appear in the graphical user interface to indicate the connection. In some implementations, the display objects106can be configured by a user, e.g., a user may specify which display objects106are displayed, and/or may download additional applications or other software that provides other functionalities and corresponding display objects. In some implementations, the mobile device100can include one or more input/output (I/O) devices and/or sensor devices. For example, a speaker160and a microphone162can be included to facilitate voice-enabled functionalities, such as phone and voice mail functions. In some implementations, an up/down button184for volume control of the speaker160and the microphone162can be included. The mobile device100can also include an on/off button182for a ring indicator of incoming phone calls. In some implementations, a loud speaker164can be included to facilitate hands-free voice functionalities, such as speaker phone functions. An audio jack166can also be included for use of headphones and/or a microphone. 
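Returning to the multi-touch-sensitive display102described above, the following is a hypothetical sketch of the per-touch data such a display might report and of a very simple chord classification over simultaneous touch points. The field names and classification labels are illustrative assumptions, not an actual device API.

```python
# Hypothetical multi-touch event data and chord classification sketch.
from dataclasses import dataclass
from typing import List

@dataclass
class TouchPoint:
    x: float
    y: float
    pressure: float  # normalized 0..1
    degree: float    # e.g., contact-area angle reported by the sensor

def classify_chord(points: List[TouchPoint]) -> str:
    """Very simple chording example: classify by simultaneous finger count."""
    if len(points) == 1:
        return "tap-or-drag"
    if len(points) == 2:
        return "pinch-or-rotate"
    return f"{len(points)}-finger-chord"
```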
In some implementations, a proximity sensor168can be included to facilitate the detection of the user positioning the mobile device100proximate to the user's ear and, in response, to disengage the touch-sensitive display102to prevent accidental function invocations. In some implementations, the touch-sensitive display102can be turned off to conserve additional power when the mobile device100is proximate to the user's ear. Other sensors can also be used. For example, in some implementations, an ambient light sensor170can be utilized to facilitate adjusting the brightness of the touch-sensitive display102. In some implementations, an accelerometer172can be utilized to detect movement of the mobile device100, as indicated by the directional arrow174. Accordingly, display objects and/or media can be presented according to a detected orientation, e.g., portrait or landscape. In some implementations, the mobile device100may include circuitry and sensors for supporting a location determining capability, such as that provided by the global positioning system (GPS) or other positioning systems (e.g., systems using Wi-Fi access points, television signals, cellular grids, Uniform Resource Locators (URLs)). In some implementations, a positioning system (e.g., a GPS receiver) can be integrated into the mobile device100or provided as a separate device that can be coupled to the mobile device100through an interface (e.g., port device190) to provide access to location-based services. In some implementations, a port device190, e.g., a Universal Serial Bus (USB) port, or a docking port, or some other wired port connection, can be included. The port device190can, for example, be utilized to establish a wired connection to other computing devices, such as other communication devices100, network access devices, a personal computer, a printer, a display screen, or other processing devices capable of receiving and/or transmitting data. In some implementations, the port device190allows the mobile device100to synchronize with a host device using one or more protocols, such as, for example, the TCP/IP, HTTP, UDP and any other known protocol. In some implementations, a TCP/IP over USB protocol can be used, as described in U.S. Provisional Patent Application No. 60/945,904, filed Jun. 22, 2007, for “Multiplexed Data Stream Protocol,” which provisional patent application is incorporated by reference herein in its entirety. The mobile device100can also include a camera lens and sensor180. In some implementations, the camera lens and sensor180can be located on the back surface of the mobile device100. The camera can capture still images and/or video. The mobile device100can also include one or more wireless communication subsystems, such as an 802.11b/g communication device186, and/or a Bluetooth™ communication device188. Other communication protocols can also be supported, including other 802.x communication protocols (e.g., WiMax, Wi-Fi, 3G), code division multiple access (CDMA), global system for mobile communications (GSM), Enhanced Data GSM Environment (EDGE), etc. Example Mobile Device FIG.1Bis a block diagram of an example mobile device101. The mobile device101can be, for example, a handheld computer, a laptop computer, a personal digital assistant, a network appliance, a camera, a network base station, a media player, a navigation device, an email device, a game console, or a combination of any two or more of these data processing devices or other data processing devices. 
In some implementations, device101shown inFIG.1Bis an example of how device100can be configured to display a different set of objects. In some implementations, device101has a different set of device functionalities than device100shown inFIG.1A, but otherwise operates in a similar manner to device100. Mobile Device Overview In some implementations, the mobile device101includes a touch-sensitive display102, which can be sensitive to haptic and/or tactile contact with a user. In some implementations, the mobile device101can display one or more graphical user interfaces on the touch-sensitive display102for providing the user access to various system objects and for conveying information to the user. Mobile Device Functionality In some implementations, the mobile device101can implement multiple device functionalities, such as a music processing device, as indicated by the music player object124, a video processing device, as indicated by the video player object125, a digital photo album device, as indicated by the photos object134, and a network data communication device for online shopping, as indicated by the store object126. In some implementations, particular display objects104, e.g., the music player object124, the video player object125, the photos object134, and store object126, can be displayed in a menu bar118. In some implementations, device functionalities can be accessed from a top-level graphical user interface, such as the graphical user interface illustrated inFIG.1B. Touching one of the objects124,125,134, or126can, for example, invoke corresponding functionality. In some implementations, the top-level graphical user interface of mobile device101can include additional display objects106, such as the Web object114, the calendar object132, the address book object150, the clock object148, the calculator object138, and the settings object152described above with reference to mobile device100ofFIG.1A. In some implementations, the top-level graphical user interface can include other display objects, such as a Web video object123that provides functionality for uploading and playing videos on the Web. Each selection of a display object114,123,132,150,148,138, and152can invoke a corresponding object environment and functionality. Additional and/or different display objects can also be displayed in the graphical user interface ofFIG.1B. In some implementations, the display objects106can be configured by a user. In some implementations, upon invocation of device functionality, the graphical user interface of the mobile device101changes, or is augmented or replaced with another user interface or user interface elements, to facilitate user access to particular functions associated with the corresponding device functionality. In some implementations, the mobile device101can include audio jack166, a volume control device184, sensor devices168,170,172, and180, wireless communication subsystems186and188, and a port device190or some other wired port connection described above with reference to mobile device100ofFIG.1A. Network Operating Environment FIG.2is a block diagram of an example network operating environment200. InFIG.2, mobile devices202aand202beach can represent mobile device100or101. Mobile devices202aand202bcan, for example, communicate over one or more wired and/or wireless networks210in data communication. For example, a wireless network212, e.g., a cellular network, can communicate with a wide area network (WAN)214, such as the Internet, by use of a gateway216. 
Likewise, an access device218, such as an 802.11g wireless access device, can provide communication access to the wide area network214. In some implementations, both voice and data communications can be established over the wireless network212and the access device218. For example, the mobile device202acan place and receive phone calls (e.g., using VoIP protocols), send and receive e-mail messages (e.g., using POP3 protocol), and retrieve electronic documents and/or streams, such as web pages, photographs, and videos, over the wireless network212, gateway216, and wide area network214(e.g., using TCP/IP or UDP protocols). Likewise, in some implementations, the mobile device202bcan place and receive phone calls, send and receive e-mail messages, and retrieve electronic documents over the access device218and the wide area network214. In some implementations, the mobile device202aor202bcan be physically connected to the access device218using one or more cables and the access device218can be a personal computer. In this configuration, the mobile device202aor202bcan be referred to as a “tethered” device. The mobile devices202aand202bcan also establish communications by other means. For example, the wireless device202acan communicate with other wireless devices, e.g., other mobile devices202aor202b, cell phones, etc., over the wireless network212. Likewise, the mobile devices202aand202bcan establish peer-to-peer communications220, e.g., a personal area network, by use of one or more communication subsystems, such as the Bluetooth™ communication devices188shown inFIGS.1A-1B. Other communication protocols and topologies can also be implemented. The mobile device202aor202bcan, for example, communicate with one or more services230,240,250,260, and270over the one or more wired and/or wireless networks210. For example, a navigation service230can provide navigation information, e.g., map information, location information, route information, and other information, to the mobile device202aor202b. A user of the mobile device202bcan invoke a map functionality, e.g., by pressing the maps object144on the top-level graphical user interface shown inFIG.1A, and can request and receive a map for a particular location. A messaging service240can, for example, provide e-mail and/or other messaging services. A media service250can, for example, provide access to media files, such as song files, audio books, movie files, video clips, and other media data. In some implementations, separate audio and video services (not shown) can provide access to the respective types of media files. A syncing service260can, for example, perform syncing services (e.g., sync files). An activation service270can, for example, perform an activation process for activating the mobile device202aor202b. Other services can also be provided, including a software update service that automatically determines whether software updates exist for software on the mobile device202aor202b, then downloads the software updates to the mobile device202aor202bwhere the software updates can be manually or automatically unpacked and/or installed. The mobile device202aor202bcan also access other data and content over the one or more wired and/or wireless networks210. For example, content publishers, such as news sites, RSS feeds, web sites, blogs, social networking sites, developer networks, etc., can be accessed by the mobile device202aor202b.
Such access can be provided by invocation of a web browsing function or application (e.g., a browser) in response to a user touching the Web object114. Example Mobile Device Architecture FIG.3is a block diagram300of an example implementation of the mobile devices100and101ofFIGS.1A-1B, respectively. The mobile device100or101can include a memory interface302, one or more data processors, image processors and/or central processing units304, and a peripherals interface306. The memory interface302, the one or more processors304and/or the peripherals interface306can be separate components or can be integrated in one or more integrated circuits. The various components in the mobile device100or101can be coupled by one or more communication buses or signal lines. Sensors, devices, and subsystems can be coupled to the peripherals interface306to facilitate multiple functionalities. For example, a motion sensor310, a light sensor312, and a proximity sensor314can be coupled to the peripherals interface306to facilitate the orientation, lighting, and proximity functions described with respect toFIG.1A. Other sensors316can also be connected to the peripherals interface306, such as a positioning system (e.g., GPS receiver), a temperature sensor, a biometric sensor, or other sensing device, to facilitate related functionalities. A camera subsystem320and an optical sensor322, e.g., a charged coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) optical sensor, can be utilized to facilitate camera functions, such as recording photographs and video clips. Communication functions can be facilitated through one or more wireless communication subsystems324, which can include radio frequency receivers and transmitters and/or optical (e.g., infrared) receivers and transmitters. The specific design and implementation of the communication subsystem324can depend on the communication network(s) over which the mobile device100or101is intended to operate. For example, a mobile device100or101may include communication subsystems324designed to operate over a GSM network, a GPRS network, an EDGE network, a Wi-Fi or WiMax network, and a Bluetooth™ network. In particular, the wireless communication subsystems324may include hosting protocols such that the device100or101may be configured as a base station for other wireless devices. An audio subsystem326can be coupled to a speaker328and a microphone330to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and telephony functions. The I/O subsystem340can include a touch screen controller342and/or other input controller(s)344. The touch-screen controller342can be coupled to a touch screen346. The touch screen346and touch screen controller342can, for example, detect contact and movement or break thereof using any of a plurality of touch sensitivity technologies, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with the touch screen346. The other input controller(s)344can be coupled to other input/control devices348, such as one or more buttons, rocker switches, thumb-wheel, infrared port, USB port, and/or a pointer device such as a stylus. The one or more buttons (not shown) can include an up/down button for volume control of the speaker328and/or the microphone330. 
In one implementation, a pressing of the button for a first duration may disengage a lock of the touch screen346; and a pressing of the button for a second duration that is longer than the first duration may turn power to the mobile device100or101on or off. The user may be able to customize a functionality of one or more of the buttons. The touch screen346can, for example, also be used to implement virtual or soft buttons and/or a keyboard. In some implementations, the mobile device100or101can present recorded audio and/or video files, such as MP3, AAC, and MPEG files. In some implementations, the mobile device100or101can include the functionality of an MP3 player, such as an iPod™. The mobile device100or101may, therefore, include a 36-pin connector that is compatible with the iPod. Other input/output and control devices can also be used. The memory interface302can be coupled to memory350. The memory350can include high-speed random access memory and/or non-volatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices, and/or flash memory (e.g., NAND, NOR). The memory350can store an operating system352, such as Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks. The operating system352may include instructions for handling basic system services and for performing hardware dependent tasks. In some implementations, the operating system352can be a kernel (e.g., UNIX kernel), as described in reference toFIGS.4A and4B. The memory350may also store communication instructions354to facilitate communicating with one or more additional devices, one or more computers and/or one or more servers. The memory350may include graphical user interface instructions356to facilitate graphic user interface processing; sensor processing instructions358to facilitate sensor-related processing and functions; phone instructions360to facilitate phone-related processes and functions; electronic messaging instructions362to facilitate electronic-messaging related processes and functions; web browsing instructions364to facilitate web browsing-related processes and functions; media processing instructions366to facilitate media processing-related processes and functions; GPS/Navigation instructions368to facilitate GPS and navigation-related processes and instructions; camera instructions370to facilitate camera-related processes and functions; and/or other software instructions372to facilitate other processes and functions, e.g., security processes and functions as described in reference toFIGS.4A and4B. The memory350may also store other software instructions (not shown), such as web video instructions to facilitate web video-related processes and functions; and/or web shopping instructions to facilitate web shopping-related processes and functions. In some implementations, the media processing instructions366are divided into audio processing instructions and video processing instructions to facilitate audio processing-related processes and functions and video processing-related processes and functions, respectively. An activation record and International Mobile Equipment Identity (IMEI)374or similar hardware identifier can also be stored in memory350. Each of the above identified instructions and applications can correspond to a set of instructions for performing one or more functions described above. These instructions need not be implemented as separate software programs, procedures, or modules. 
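The duration-based button behavior described at the start of this passage reduces to comparing the press length against two thresholds. The following is a minimal Python sketch; the threshold values and the stand-in device object are illustrative assumptions, not the device's actual parameters:

from types import SimpleNamespace

UNLOCK_THRESHOLD_S = 0.2  # assumed "first duration"
POWER_THRESHOLD_S = 2.0   # assumed "second duration that is longer"

def on_button_release(press_duration_s: float, device) -> None:
    if press_duration_s >= POWER_THRESHOLD_S:
        # The longer press turns power to the device on or off.
        device.powered = not device.powered
    elif press_duration_s >= UNLOCK_THRESHOLD_S:
        # The shorter press disengages the touch-screen lock.
        device.touch_screen_locked = False

# Example usage with a stand-in device object:
device = SimpleNamespace(powered=True, touch_screen_locked=True)
on_button_release(0.5, device)   # disengages the lock
on_button_release(2.5, device)   # toggles power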
The memory350can include additional instructions or fewer instructions. Furthermore, various functions of the mobile device100or101may be implemented in hardware and/or in software, including in one or more signal processing and/or application specific integrated circuits. Software Stack and Security Process FIG.4Aillustrates an example implementation of a software stack400for the mobile devices ofFIGS.1A-1B. In some implementations, the software stack400includes an operating system (OS) kernel402(e.g., a UNIX kernel), a library system404, an application framework406and an applications layer408. The OS kernel402manages the resources of the mobile device100or101and allows other programs to run and use these resources. Some examples of resources include a processor, memory, and I/O. For example, the kernel402determines which running processes should be allocated to a processor, processors, or processor cores; allocates memory to the processes; and handles requests from applications and remote services to perform I/O operations. In some implementations, the kernel402provides methods for synchronization and inter-process communications with other devices. In some implementations, the kernel402can be stored in non-volatile memory of the mobile device100or101. When the mobile device100or101is turned on, a boot loader starts executing the kernel402in supervisor mode. The kernel then initializes itself and starts one or more processes for the mobile device100or101, including a security process410for remote access management, as described in reference toFIG.4B. The library system404provides various services to applications running in the application layer408. Such services can include audio services, video services, database services, image processing services, graphics services, etc. The application framework406provides an object-oriented application environment including classes and Application Programming Interfaces (APIs) that can be used by developers to build applications using well-known programming languages (e.g., Objective-C, Java). The applications layer408is where various applications exist in the software stack400. Developers can use the APIs and environment provided by the application framework406to build applications, such as the applications represented by the display objects104,106, shown inFIGS.1A-1B(e.g., email, media player, Web browser, phone, music player, video player, photos, and store). Secure Communication Channel FIG.4Billustrates an example implementation of a security process410for remote access management over a secure communications channel422. In the example shown, the mobile device412, e.g., mobile device100or101, is running the security process410, which communicates with the OS kernel402. Any remote access requests made to the kernel402are intercepted by the security process410, which is responsible for setting up secure communication sessions between the mobile device412and a mobile services access device218. In some implementations, the process410uses a cryptographic protocol, such as Secure Sockets Layer (SSL) or Transport Layer Security (TLS) to provide secure communications between the mobile device412and the access device218. The access device218can be any device with network connectivity, including but not limited to: a personal computer, a hub, an Ethernet card, another mobile device, a wireless base station, etc.
The secure communications channel can be a Universal Serial Bus (USB), Ethernet, a wireless link (e.g., Wi-Fi, WiMax, 3G), an optical link, infrared link, FireWire™, or any other known communications channel or media. In the example shown, the access device218includes device drivers414, a mobile services daemon416, a mobile services API418and one or more mobile service applications420. The device drivers414are responsible for implementing the transport layer protocol, such as TCP/IP over USB. The mobile services daemon416listens (e.g., continuously) to the communications channel422for activity and manages the transmission of commands and data over the communication channel422. The mobile services API418provides a set of functions, procedures, variables and data structures for supporting requests for services made by the mobile services application420. The mobile services application420can be a client program running on the access device218, which provides one or more user interfaces for allowing a user to interact with a remote service (e.g., activation service270) over a network (e.g., the Internet, wireless network, peer-to-peer network, optical network, Ethernet, intranet). In some implementations, a device activation process can be used, as described in co-pending U.S. patent application Ser. No. 11/767,447, filed Jun. 22, 2007, for “Device Activation and Access,” which patent application is incorporated by reference herein in its entirety. The application420can allow a user to set preferences, download or update files of content or software, search databases, store user data, select services, browse content, perform financial transactions, or engage in any other online service or function. An example of a mobile services application420is the iTunes™ client, which is publicly available from Apple Inc. (Cupertino, Calif.). An example of a mobile device412that uses the iTunes™ client is the iPod™ product developed by Apple Inc. Another example of a mobile device412that uses the iTunes™ client is the iPhone™ product developed by Apple Inc. In an example operational mode, a user connects the mobile device412to the access device218using, for example, a USB cable. In some other implementations, the mobile device412and access device218include wireless transceivers for establishing a wireless link (e.g., Wi-Fi). The drivers414and kernel402detect the connection and alert the security process410and mobile services daemon416of the connection status. Once the connection is established, certain non-sensitive information can be passed from the mobile device412to the access device218(e.g., name, disk size, activation state) to assist in establishing a secure communication session. In some implementations, the security process410establishes a secure communication session (e.g., encrypted SSL session) with the access device218by implementing a secure network protocol. For example, if using the SSL protocol, the mobile device412and access device218will negotiate a cipher suite to be used during data transfer, establish and share a session key, and authenticate the access device218to the mobile device412. In some implementations, if the mobile device412is password protected, the security process410will not establish a session, and optionally alert the user of the reason for failure.
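The handshake sequence just described (cipher-suite negotiation, session-key establishment, and authentication) is what a standard TLS client handshake performs. A minimal sketch using Python's standard ssl module, assuming only a reachable TLS server, is shown below. Note that this stock handshake authenticates the server to the client, whereas the flow described above authenticates the access device218to the mobile device412, which would additionally require client certificates:

import socket
import ssl

def open_secure_session(host: str, port: int = 443) -> ssl.SSLSocket:
    # create_default_context() enables certificate verification; the
    # cipher-suite negotiation and session-key establishment happen
    # inside the handshake performed by wrap_socket().
    context = ssl.create_default_context()
    raw_sock = socket.create_connection((host, port))
    return context.wrap_socket(raw_sock, server_hostname=host)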
Once a secure session is successfully established, the mobile device412and the access device218can exchange sensitive information (e.g., passwords, personal information), and remote access to the mobile device412can be granted to one or more services (e.g., navigation service230, messaging service240, media service250, syncing service260, activation service270). In some implementations, the mobile services daemon416multiplexes commands and data for transmission over the communication channel422. This multiplexing allows several remote services to have access to the mobile device412in a single session without the need to start a new session (or handshaking) for each service requesting access to the mobile device412. Example Mobile Device Processes FIG.5illustrates an example process500for presenting information on a touch-sensitive display of a mobile device. In some implementations, the process500can be used with the mobile device100or101, as described in reference toFIGS.1A-1B. Generally, the process500includes presenting information in response to a user touch input. The process500begins with presenting a first page of user interface elements on the touch-sensitive display of a mobile device (502). In some implementations, a page of user interface elements is a view on the display that is capable of presenting one or more user interface elements on the display, where the user interface elements can be related or unrelated. In some implementations, the first page of user interface elements is displayed on the mobile device upon powering up the mobile device. In some other implementations, a user interaction can trigger presentation of the page of user interface elements. For example, selection of the home button120(as shown inFIG.1A) can present the user with an initial display screen including the first page of user interface elements. The user interface elements, for example, can include the display objects106(as shown inFIG.1A). In some implementations, the first page of user interface elements is a first portion of an application menu. A gesture performed on the touch-sensitive display is detected (504). In some implementations, the gesture includes a touch, tap, or dragging motion across the touch-sensitive display (e.g., using a finger, stylus, etc.). The gesture, in some implementations, is performed within a region where no user interface elements are displayed. For example, the user can perform a horizontal or vertical swipe across a blank region of the touch-sensitive display of the mobile device. In another example, the user can touch or tap a blank section of the display (e.g., to the top, bottom, left, or right of the blank region). In some implementations, a navigational guide may be displayed to the user. The user can, for example, touch or swipe a region of the navigational guide. In response to the gesture, a second page of user interface elements is presented (506). In some implementations, the second page of user interface elements is a second portion of the application menu. In some implementations, the user is provided with an indication that a second page of user interface elements is available for display. For example, when the first page of user interface elements is displayed, a portion of one or more of the second pages of user interface elements can be visible (e.g., a section of a display object at the edge of the display). 
In this example, the user interface elements in the second page can appear to be falling off the edge of the display or can appear smaller, dimmer, less clear, or otherwise secondary to the user interface elements in the first page. In another example, a navigational guide704within the display can indicate to the user that there is additional information to be presented (e.g., second or additional pages of user interface elements). In some implementations, the navigational guide704can be shaped as a grid, a row, a column, an arced or fan-out pattern, etc. For example, the navigational guide illustrated inFIG.7Ashows that there are three vertically scrollable pages of user interface elements. The second page of user interface elements, in some implementations, may include one or more of the elements included within the first page of user interface elements. For example, during user navigation of pages of user interface elements, one or more of the previously displayed elements may remain within the display when the user navigates to the second page of elements. Any number of pages of user interface elements can be available for navigation. In some implementations, the user can be presented with a looping display of pages of user interface elements. For example, upon detection of a first horizontal left-to-right swipe, the first page of user interface elements is replaced with the second page of user interface elements within the touch-sensitive display. Upon detection of a second horizontal left-to-right swipe by the user, the first page of user interface elements can be displayed to the user again. In some other implementations, the user may be presented with no change in information upon the second left-to-right swipe, but a right-to-left swipe can return the user to the first page of user interface elements. In some implementations, once a gesture has been received by the touch-sensitive display, the pages of user interface elements continue to scroll until a stop indication has been received from the user or until reaching the end of the pages of user interface elements. For example, the user can touch the display using a stop gesture (e.g., tap or press), select a user interface element, press a button on the mobile device, etc. The scrolling can be animated and can be accelerated in response to quicker, repeated gestures or decelerated in response to slower, repeated gestures, to give a Rolodex effect. FIG.6illustrates an example process600for indicating a restricted status of a user interface element on a mobile device. In some implementations, the process600can be used with the mobile device100or101, as described in reference toFIGS.1A-1B. Generally, the process600includes presenting information regarding the status of an application available on a mobile device. The process600begins with presenting on the display of a mobile device a first user interface element corresponding to a first application in a manner indicating an unrestricted status of the first application (602). The user interface element, for example, can be one of the display objects106(as shown inFIG.1A). In some implementations, denotation of an unrestricted status of an application can be performed by displaying the user interface element using a brightness level, contrast, highlight, or other visual indicator of an unrestricted status. For example, an unrestricted application can be represented by a bright user interface element (e.g., icon image) with high contrast against the background screen.
The contrast can be accentuated, in some examples, by a highlighted outline or frame. A second user interface element is presented corresponding to a second application, indicating a restricted status of the second application (604). The user interface element, for example, can be one of the display objects106(as shown inFIG.1A). In some implementations, denotation of a restricted status can be performed by displaying the user interface element using a brightness level, contrast, highlight, or other visual indicator of a restricted status. For example, a restricted application can be represented by a dimmed, lower resolution, or partially transparent user interface element (e.g., icon image) with low contrast against the background screen. In some implementations, one or more user interface elements can be framed in a manner (e.g., a dark box, dashed outline, or separate tray area) indicative of the restricted status of the application(s). A symbol can overlay the user interface element, in some implementations, to denote the restricted status of the application. For example, a transparent word or image can be presented on top of the user interface element. The status of the second application is changed from restricted to unrestricted (606). In some implementations, the user selects the interface element associated with the second application and is presented with the option of changing the status of the application. For example, the user can be prompted (e.g., within a new display or within a dialog box overlaying the present display) with the opportunity to purchase the second application. If the user chooses to purchase the application, the status of the application can change from restricted to unrestricted. In another example, the user can be prompted to supply a password or identification number to gain access to the application. In some other implementations, the restricted status can be changed from outside the mobile device. For example, a user can contact the service provider of the mobile device (e.g., call on the telephone, contact through a website on a computer, etc.) and place an order for the application. In this example, the mobile device can then receive a signal (e.g., via the communications devices186or188, through a link from a computer using the port device190, etc.) providing the mobile device with the new status of the application. In some implementations, changing the status of the application can include downloading additional software, files or other data to allow the application to run. In some other implementations, the application can require a key to unlock encrypted code within the mobile device. Once the status of the second application has been changed from restricted to unrestricted, the second user interface element is displayed without the indication of a restricted status (608). For example, the second user interface element can be displayed in the manner described above for indicating the unrestricted status of the application. In some other implementations, the second application can be available on a trial basis. For example, the user can be presented with an option to access a trial version of the application, in some implementations containing a limited version of the capabilities of the application. A means of indicating a trial status of the second application, in some implementations, can be applied to the user interface element. 
For example, a dashed outline or transparent word or symbol overlay of the user interface element can indicate that the second application is unrestricted for a limited amount of time. Example Mobile Device with Vertically Scrolling Menu FIGS.7A-7Ccontain block diagrams of the example mobile device101with a vertically scrolling application menu. Referring toFIG.7A, a second page of display objects702is partially covered by the menu bar118within the touch-sensitive display102. In comparison to the first page of display objects106, in some implementations, the second page of display objects702could be dimmer, transparent, or outlined in a manner indicating that the display objects in the second page are not yet active (e.g., not selectable within the touch-sensitive display102). As shown inFIG.7B, the touch-sensitive display102contains a modified view of the display objects106,702. The mobile device101detects a gesture710in an upward (vertical) direction in relation to the display102. In some implementations, the gesture710is detected within a navigational region712of the touch-sensitive display102. For example, the navigational region712may be left clear of elements such as display objects106,702to provide a section of the display in which a user can input navigational gestures. For example, navigational gestures can include swiping or dragging, with a finger or stylus, in the direction in which the user wishes the display to move. In some other implementations, navigational gestures can include tapping, pressing, swiping, or dragging within a navigational guide704presented in the navigational region712. In response to the gesture710, the first page of display objects106shifts upwards and is partially obscured by an information panel714at the top of the display102. The information panel can include the current time and a battery indicator icon. The display objects in the first page of display objects appear less vivid. In some implementations, the first page of display objects106is rendered in a different manner to indicate that those user interface elements are no longer actively selectable within the touch-sensitive display102. The second page of display objects702is now fully visible within the display102, and the display objects702are rendered in a sharp, vivid presentation. In some implementations, the presentation of the second page of display objects702is indicative of the display objects being actively selectable within the touch-sensitive display102. As shown inFIG.7C, the first page of display objects106is partially visible beneath the information panel714. The visible portions of the clock object148, the calculator object138, and the settings object152appear dimmer, transparent, less vivid, or outlined to indicate that these objects are not active. The second page of display objects702is located directly beneath the first page of display objects106. In some implementations, the second page of display objects702moves from below the navigational region712to above the navigational region712. In some implementations, the additional movement of the display objects106,702occurs due to an additional gesture710by the user. In some other implementations, the display102continues to scroll due to the initial gesture710until the user inputs a stop gesture (e.g., tap, etc.) or makes another input such as selecting one of the display objects104,106,702, pressing the button120, etc.
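The paging behavior described in this section, including the optional looping display of pages, can be modeled with a small amount of state. The following Python sketch is an illustration under assumed gesture names, not the device's implementation; the active index it returns is also what would drive which page indicator is rendered as active:

class PagedMenu:
    def __init__(self, page_count: int, looping: bool = True) -> None:
        self.page_count = page_count
        self.looping = looping
        self.current = 0  # index of the active page

    def swipe(self, direction: str) -> int:
        # A left-to-right swipe advances to the next page; a
        # right-to-left swipe returns to the previous page.
        step = 1 if direction == "left_to_right" else -1
        nxt = self.current + step
        if self.looping:
            self.current = nxt % self.page_count  # wrap around
        else:
            # Without looping, navigation stops at the first/last page.
            self.current = max(0, min(self.page_count - 1, nxt))
        return self.current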
Example Mobile Device with Horizontally Scrolling Menu FIGS.8A-8Ccontain block diagrams of the example mobile device101with a horizontally scrolling application menu. Referring toFIG.8A, the touch-sensitive display102contains a set of three page indicator dots802within the navigational region712. The leftmost dot is open or filled with a bright color (e.g., white), while the middle and rightmost dots are filled with a dark color (e.g., black). In some implementations, the leftmost open dot is indicative of a first page within the display102. For example, the open dot can refer to the page in which the display objects106appear. In some implementations, dragging or swiping in a horizontal manner within the navigational region712causes the display to change to the second and/or third pages as indicated by the page indicator dots802. There can be any number of page indicator dots802displayed within the navigational region712. In some implementations, rather than page indicator dots802, the navigational region can contain a navigational guide. The navigational guide, for example, can provide the opportunity for both horizontal and vertical navigation within the display102. As shown inFIG.8B, upon detecting a gesture810, the display102within the mobile device101is modified to reflect horizontal movement towards the second page of display objects as referenced by the indicator dots802.FIG.8Billustrates an instant in the horizontal movement from the first page to the second page, with reference line812marking the boundary between the first page and the second page. The visible portion of the first page includes a portion of the display objects106(e.g., the calendar object132, the address book object150, and the settings object152), and the visible portion of the second page includes a portion of a set of display objects814. One of the display objects814is only partially visible within the display102. In some implementations, partially displayed objects (e.g., display objects in which a percentage of the object is not contained within the region of the visible display) are rendered in a manner which reflects an inactive status. For example, a partially displayed object can be rendered as a transparent, dim, or low-resolution image to indicate to the user that the object is not currently selectable. The page indicator dots indicate that the first page (leftmost open dot) continues to be active. In some implementations, when the gesture810is detected, the display scrolls horizontally from one page to another. In some other implementations, the display continues to scroll until a stop indication is detected. For example, the display could continue to scroll until selection of a display object104,106,814, selection of the button120, or other user input is detected. In some implementations, the second page of display objects814replaces the first page of display objects106without displaying an intermediate position. In some implementations, no visible reference line812is displayed between pages. For example, the scrolling pages can be rendered in the manner of a seamless rolling display. Referring toFIG.8C, the second page of display objects814is visible within the touch-sensitive display102of the mobile device101. The middle circle of the page indicator dots802is open, reflecting the active page. In some implementations, a swiping or dragging gesture towards the left of the display102returns the user to the display illustrated withinFIG.8A.
Similarly, a swiping or dragging gesture towards the right of the display102, in some implementations, provides the user with access to additional pages of display objects. Example Mobile Device with Ergonomic Display FIG.9Ais a block diagram of the example mobile device101with an ergonomic touch-sensitive menu bar layout. The display objects104are arranged in an arc. For example, the arrangement of the display objects104follows the sweep of the thumb of a user. In some implementations, the user initiates the positioning and radius of the arc through a touch range setup. For example, selecting the settings object152could present the user with the option of initializing the touch-sensitive display102in an ergonomic manner. In some other implementations, the ergonomic presentation of the display objects104can use a default arc arrangement. The arc presentation of the display objects104versus the menu bar presentation118(as shown inFIG.1B), in some implementations, may be a choice available to the user within user-selectable settings (e.g., selecting the settings object152). FIG.9Bis a block diagram of the example mobile device101with an ergonomic touch-sensitive display object layout. The display objects106in addition to the display objects104are arranged in three stacked arcs. The music object124, the video object125, the photos object134, and the store object126are arranged in the bottom-most arc. The clock object148, the calculator object138, and the settings object152are arranged in a middle arc, and the web object114, the web video object123, the calendar object132, and the address book object150are arranged in a top-most arc. In some implementations, an upper region902is left blank as a navigational region (e.g., as described inFIG.7B). In some other implementations, display objects can populate the entire display area102. In some implementations, the user can set a maximum distance for the ergonomic display object layout. For example, within a setup option (e.g., through the settings object152), the user could input a thumb sweep gesture, indicating the range of comfortable motion for an ergonomic display region. Displaying additional objects which may not fit within the range of comfortable motion, for example, can be accomplished by aligning the objects in straight rows from the top of the display102downwards, in a manner similar to the layout of the display objects106as illustrated withinFIG.9A. Example Mobile Device Displaying Elements Indicating a Restricted Status FIG.10Ais a block diagram of an example mobile device1000displaying user interface elements associated with applications which have a restricted status. A set of display objects1002are arranged within a restricted applications tray1004. The display objects1002and the tray1004are cross-hatched to make them appear darker than the remaining display objects106,104within the touch-sensitive display102of the mobile device1000. The display objects1002include the e-mail object112, the web object114, the stocks object140, and the maps object144. Any number of restricted display objects1002, in some implementations, can be arranged within the tray1004. In some implementations, rather than being arranged within a restricted applications tray1004, visual indications within the restricted display objects1002can be used to associate the display objects1002with restricted applications. 
In some examples, the restricted status display objects1002can be outlined with a dashed line, made transparent, overlaid with a transparent indicator of restricted status (e.g., text or image), etc. In some implementations, the applications associated with the display objects1002are not presently available for use within the mobile device1000. For example, to use the e-mail application associated with the e-mail object112, the user can select the e-mail object112and purchase/activate the e-mail application. The applications associated with the restricted display objects1002, in some implementations, are not fully installed within the mobile device1000. For example, upon selecting the e-mail object112and purchasing the e-mail application, some portion of the e-mail application can be downloaded to the mobile device1000(e.g., via the communications devices186or188, through a link from a computer using the port device190, etc.). FIG.10Bis a block diagram of the example mobile device1000displaying an information dialog1010regarding the restricted web object114. The information dialog1010, for example, can open upon selecting the web object114from within the restricted object tray1004(as shown inFIG.10A). A title bar1012alerts the user to the opportunity to upgrade the application. A cancel button1014provides the user with the option to decline the offer. Selection of the cancel button1014, for example, can return the user to the display102as shown withinFIG.10A. A message box1016contains information regarding the capabilities of the restricted application (e.g., a web browsing application associated with the web object114). In some implementations, navigational means can exist within the message box1016. For example, if the description of the application does not fit within the space provided by the message box1016, a scroll bar or other navigational means can be provided to the user to view the remaining information regarding the application. A set of screenshots1022illustrate example display layouts using the application associated with the restricted web object114. For example, the screenshots1022can illustrate the browsing, searching, and bookmarking capabilities of the web browser application. A free trial button1018gives the user the opportunity to try the application for a limited time. In some implementations, a version of the application with limited functionality or other limitations is provided to the user during the free trial. For example, the free trial version of the web browsing application can have some features (e.g., bookmarking, history, customized settings, etc.) disabled. A buy now button1020, in some implementations, can open a further dialog to provide the user with the opportunity to purchase the application. In some implementations, upon selection of the buy now button1020or the free trial button1018, the mobile device1000downloads some or all of the application software. In some other implementations, the mobile device1000can download a security element (e.g., password, cryptographic key, authorization code, etc.) to unlock the application. For example, the mobile device1000can download an encryption key to decrypt the application. FIG.10Cis a block diagram of the example mobile device1000displaying the web object114with an unrestricted status. The web object114is no longer located within the restricted tray1004, and the tray1004has been resized accordingly. The web object114is free of the hatch-marking associated with the restricted status display objects1002. 
In some implementations, the web object114is displayed with unrestricted status because the user selected the buy now button1020or the free trial button1018within the information dialog1010(as shown inFIG.10B). In some other implementations, the user can modify the status of the web application associated with the web object114outside of the device1000. For example, the user could call the service provider of the mobile device1000or access the web site of the service provider to purchase the web application associated with the web display object114. Upon request by the user, the service provider could then upload to the mobile device1000application content and/or a decryption means for making the application available to the user on the mobile device1000. In some implementations, upon selecting the free trial button1018, the web display object114could be rendered in a manner indicating the temporary availability of the web application. For example, the web display object114could be displayed surrounded by a dashed line, overlaid with a transparent symbol, or embellished with another indication representing the temporary status of the availability of the application for use within the device1000. Example Mobile Device with Alternative Display FIG.11is a block diagram of the example mobile device101with interface elements docked to a menu tray floor. A docking tray1102contains the display objects104. The music object124stands above the docking tray1102with a reflection object1124beneath it. Similarly, the video object125, the photo object134, and the store object126are mirrored by the reflection objects1125,1134, and1126. In some implementations, selection of one of the display objects104launches the application associated with the display object104, while selection of the reflection object1124,1125,1134, or1126launches an information dialog regarding the associated display object104. For example, selection of the reflection object1134can open a settings dialog associated with the photos application. In some implementations, the reflection objects1124,1125,1134,1126are not user-selectable. A display object104,106, in some implementations, can be dragged and dropped between the docking tray1102and the region of the display102which contains the display objects106. For example, the user can choose to populate the docking tray1102with any set (e.g., set of four) of display objects104,106for quick access (e.g., favorites). In some implementations, the display objects104,106can be repositioned by a user. For example, a user can initiate an interface reconfiguration mode on the device100or101. While in the interface reconfiguration mode, the display objects104,106are movable by the user. The user can touch one of the display objects and drag it to the desired position. The user can drag the display object to an empty position in the display102, menu bar118, or docking tray1102to drop the display object into that position; drag the display object to a position between two other display objects to insert the dragged display object into that position; or drag the display object over another display object in the display102, menu bar118, or docking tray1102to have the two display objects exchange positions, for example. The repositioning of display objects on a touch-sensitive display is described in further detail in U.S. patent application Ser. No. 11/459,602, titled “Portable Electronic Device with Interface Reconfiguration Mode,” filed Jul.
24, 2006, the disclosure of which is incorporated by reference in its entirety. In implementations where user interface elements (e.g., display objects106) can be displayed in multiple pages and the user can navigate between the pages (e.g., as described above in reference toFIGS.7A-7C and8A-8C), the user can drag a display object from one page to another. For example, a user can drag a display object within a first page toward the edge of the display102. When the display object is dragged to within a predetermined distance from the edge of the display (e.g., 5 pixels), a second page is displayed. The user can then position the dragged display object within the second page. While this specification contains many specifics, these should not be construed as limitations on the scope of what is being claimed or of what may be claimed, but rather as descriptions of features specific to particular embodiments. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Thus, particular embodiments have been described. Other embodiments are within the scope of the following claims.
DETAILED DESCRIPTION As briefly discussed above, modern computers provide for the ability to concurrently execute multiple applications and multiple instances of those applications. As a result, multitasking may be commonplace, where switching between files and applications occurs. This multitasking leads to significant difficulties in tracking the applications and files, which often results in work being left behind or abandoned. Recent surveys suggest that over 75% of information workers need to defer in-progress work before it is finished. In such scenarios, when the user returns to the work, queries are generally executed or multiple navigation hops are completed before the files can be identified and reopened. In other examples, multiple applications are left open and running, which taxes the computing resources of the client device and often prevents shut down or restart cycles of the client device. In either case, substantial computing resources are expended to reidentify files that have been left behind before their completion. The technology disclosed herein relates to computing capabilities that allow for efficient deferral of in-progress work and/or content for later retrieval, while minimizing increases in memory and bandwidth usage. The technology utilizes a content-deferral application (which may also be referred to herein as a “park application”). The content-deferral application enables users to defer, and return to, in-progress work. For instance, the present technology provides an action to defer, or park, and retrieve content (e.g., content items) during the regular flow of work. As used herein, the terms park and defer may be substantially synonymous with one another. The technology also provides a home for the users' heterogenous parked content that is available within other applications such that the parked content may be accessed from a variety of applications. By creating a ubiquitous and contextual mechanism for deferring work, the computing process from beginning to end of a project is more efficient and users can unload the stress of unfinished work, while being able to jump back in when ready, resulting in a more efficient and effective work experience. In examples, the content-deferral application reduces the computing burden required to create, store, preview, and/or access content or items a user defers or seeks to return to at a later time by leveraging a metadata storage layer and/or meta folders. For instance, at any point in time, a content item may be deferred or parked, forming parked content, directly from the native application in which the content is being edited or accessed. As an example, when a word-processing document is open in a word-processing application, the word-processing document may be parked directly from the word-processing document. A different type of content may be open in a different application, and that content may similarly be parked directly from the different application. The content-deferral application utilizes a subset of data (e.g., metadata) associated with the parked content to enable identification, quick access to pertinent information, and context-related attributes, so that parked content may be previewed along with context-related attributes without the need to load the parked content itself. The original content item (e.g., file, message) corresponding to the parked item remains stored in its original location (e.g., remote server) and is not duplicated or replicated by the content-deferral application.
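As an illustration of the metadata-only approach described above, the following Python sketch renders a preview of a parked item from a small metadata subset without ever fetching the underlying content; all field names here are assumptions made for this example, not the application's actual schema:

from dataclasses import dataclass

@dataclass
class ParkedItemMetadata:
    title: str
    content_type: str   # e.g., "word-processing document"
    last_modified: str

def render_preview(meta: ParkedItemMetadata) -> str:
    # Only the lightweight metadata subset is touched here; the original
    # content item is never fetched, duplicated, or replicated.
    return f"{meta.title} ({meta.content_type}, modified {meta.last_modified})"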
FIG.1depicts an example system100for deferring content in accordance with various embodiments of the present disclosure. The example system100includes a plurality of different client devices, such as client device A102and client device B124, in bidirectional communication with one or more cloud servers106. The client device A102and the client device B124both are able to access a park application or content-deferral application104comprising a defer or park feature that enables each user to defer or park content and resume this parked content via the context of the parked content. The content-deferral application104may be a web-based application, native application, and/or hybrid application. For instance, when in the form of a web application, the content-deferral application104may be accessed via a web browser or through a pane of another application. The content-deferral application104may also be a local or native application installed on the client device itself. The content-deferral application104may be accessed as a standalone application (e.g., shown as content-deferral application104) and/or as an integrated user interface (e.g., park pane140) that can be accessed from various other compatible applications (e.g., applications130,132,134, and136). For example, the functionality of the content-deferral application104may be accessed through a sidebar or pane, referred to as a park pane140, of various applications, such as email applications, word-processing applications, collaboration applications, messaging applications, spreadsheet applications, presentation applications, etc. Both user interfaces of the content-deferral application104provide the capabilities to defer content items and to access parked items. Parked content items may be associated with a particular user and accessed by the user across different devices, such as client device A102and client device B124. To provide such functionality, the content-deferral application104may require a user login or similar identity authentication, so that an individual may log in and use the content-deferral application104on any device on which the content-deferral application104is configured to operate. This functionality may be enabled by a park app platform118, which stores parked items on a remote server106and can be accessed by various client devices of the respective user. As such, the client device A102and client device B124may be any of a desktop or laptop computer, a mobile device, such as a mobile phone or tablet, or any similar device on which the content-deferral application104is capable of operating. As an example, the user of client device A102may park an item using a laptop computer, modify the parked item using a tablet, and access the modified parked item on a mobile phone. Similarly, this user may park an item using the standalone user interface of the content-deferral application104, park another item using the integrated park pane140of application130, and access either of these parked items using the park pane140of application132. The user of client device B124may have the same flexibility in creating parked items, modifying parked items, and accessing parked items via various client devices and user interfaces of the content-deferral application104. One or more parked items150,160,180, and190may be generated via the standalone user interface of the content-deferral application104and/or the integrated user interface in a park pane140.
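A minimal Python sketch of a storage platform that, as described above, exposes a file's metadata separately from the file itself; the class and method names are hypothetical stand-ins, not the platform's actual interface:

class StoragePlatform:
    """In-memory stand-in for a storage platform holding files and metadata."""

    def __init__(self) -> None:
        self._files = {}     # file_id -> file contents (bytes)
        self._metadata = {}  # file_id -> metadata (dict)

    def put(self, file_id: str, contents: bytes, metadata: dict) -> None:
        self._files[file_id] = contents
        self._metadata[file_id] = metadata

    def get_file(self, file_id: str) -> bytes:
        # Full-content fetch: used when an application opens the file.
        return self._files[file_id]

    def get_metadata(self, file_id: str) -> dict:
        # Lightweight fetch: used when parking a file, so the parked
        # item never needs the file contents themselves.
        return dict(self._metadata[file_id])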
In the example depicted, upon the content-deferral application104being activated through either the standalone content-deferral application104or the park pane140, the content-deferral application104may create a new parked item container, such as parked item A150. As one example of generating a parked item, such as parked item A150, a first file A108may be opened within a first application130. The file A108may be a word-processing document, and the first application130is a word-processing application that can be used to edit the file A108. The file A108is stored in a storage platform116on the one or more remote servers106. The storage platform116may also store many additional files, such as file B110, file C112, and file D114. When file A108is opened within the first application130, file A108is accessed from the storage platform116. Each of the files stored in the storage platform116may also have associated metadata that is accessible separately from the respective file. For instance, file A108has metadata A170, file B110has metadata B172, file C112has metadata C174, and file D114has metadata D176. While files are shown in example system100, content items other than files, along with corresponding metadata, may be stored in the storage platform116. With the first file A108opened in the first application130, a selection within the application to defer or park the first file A108is received. Such a selection may be of a user interface element, such as a button or graphical element within a ribbon of the first application130. In response to receiving the selection to park the first file A108, the parked item A150is created by the content-deferral application104. To create the parked item A150, a new data structure or container is created for the new parked item. The new data structure may be generated in a content-deferral platform118that is stored on one or more of the remote servers106. The data structure or data container includes defined fields for the metadata of the corresponding file and the additional data, which may be created or edited by a user. For instance, metadata fields that are supported by the content-deferral application104and/or the content-deferral platform118are generated in the data structure. For example, defined fields for the link or pointer to the content item, along with fields for the metadata types discussed herein, may be generated as part of the data container. Similarly, defined fields for the additional data supported by the content-deferral application104and/or the content-deferral platform118may be generated in the container. The defined metadata fields and/or additional data fields may then be filled with received metadata or received additional data. The content-deferral application104may then request metadata A170for file A108from the storage platform116. Alternatively, if the metadata A170is already loaded by the first application130, the first application130may transfer the metadata A170to the content-deferral application104. Once the metadata A170is received by the content-deferral application104, the metadata A170is incorporated into the data container of the new parked item container for parked item A150. The actual contents of file A108, however, are not stored in the parked item A150or the park app platform118. Accordingly, no duplicate files need be created, and the parked item A150may operate with the metadata A170, which requires significantly less storage space than the file A108itself.
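A compact TypeScript sketch of this creation flow is given below; the function names and platform calls are assumptions standing in for the content-deferral platform and the storage platform's metadata layer, not an actual API.

```typescript
// Illustrative sketch of parked-item creation: allocate a container with
// defined fields, request only the file's metadata, and persist the container.
// The file contents are never copied into the parked item.

interface FileMetadata {
  title: string;
  fileType: string;
  author: string;
  lastModified: string;
  link: string; // pointer back to the file in the storage platform
}

interface ParkedItemContainer {
  id: string;
  metadata: FileMetadata | null; // defined field, filled when metadata arrives
  additionalData: { note?: string; category?: string };
}

async function createParkedItem(
  fileId: string,
  requestMetadata: (fileId: string) => Promise<FileMetadata>,   // storage platform call (assumed)
  storeParkedItem: (item: ParkedItemContainer) => Promise<void> // deferral platform call (assumed)
): Promise<ParkedItemContainer> {
  // 1. Create the new data container with its defined, initially empty fields.
  const item: ParkedItemContainer = {
    id: crypto.randomUUID(),
    metadata: null,
    additionalData: {},
  };
  // 2. Request the metadata for the file, keyed by the file's identifier.
  item.metadata = await requestMetadata(fileId);
  // 3. Persist the filled container to the content-deferral platform.
  await storeParkedItem(item);
  return item;
}
```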
In some examples, updates to the metadata of the files stored in the storage platform116may be pushed to the metadata in the corresponding parked items such that the metadata in the parked items remains updated. In other examples, when a parked item is accessed, the metadata may be updated from the storage platform116upon access or retrieval of the parked item. In some examples, additional data A120may also be stored in the new parked item A150. The additional data A120may include data that is specified or edited by the user through the content-deferral application104. For example, the additional data A120may include notes for the parked item A150, a category for the parked item A150, ordering information, and/or other additional data that is not otherwise stored as part of the file A108or the metadata A170. The additional data A120may also include a date stamp as to when the parked item A150was created and/or last edited or accessed. Once the new parked item A150is created, the parked item A150may be accessed from the park app platform118via any of the authenticated client devices. For example, client device A102and/or client device B124may access the parked item A150. The parked item A150may also be accessed via the different interfaces of the content-deferral application104, such as through the standalone application or through a park pane140of another application (e.g., one of applications130,132,134,136). For instance, when accessing the content-deferral application104, the content-deferral application104requests the parked items, from the park app platform118, associated with the current user. Those parked items are then received at the respective client device and displayed. Because the parked items do not include the entire file to which the parked item corresponds, transmission of the parked items also requires significantly less bandwidth than transmission of the actual file. In some examples, when a new parked item is created, the parked item may be initially generated at the client device and then transmitted to the park app platform118for storage. For displaying and/or accessing a parked item, the content-deferral application104receives a request to view a parked item, which may take the form of opening the standalone content-deferral application104or opening a park pane140within another application. This request causes the content-deferral application104to retrieve the stored parked items for that user in the park app platform118of the cloud server(s)106. The cloud server(s)106then sends the parked items to the respective client device(s). The parked items are displayed based on the user interface of the content-deferral application104that received the request to view the parked item. For example, the parked items may be displayed in a park pane140view within another application and/or within the standalone content-deferral application104. Furthermore, upon display of parked items in various user interfaces, the parked items may be opened or modified based on user inputs and interactions with the parked items. For instance, a user interface element within the displayed parked item may cause the file corresponding to the parked item to be opened. Triggering the file associated with the parked item to be reopened causes the associated application to launch and open the file. For example, if the parked item A150is selected to open the word-processing document of file A108, the word-processing application130may be launched and the file A108may be opened by the word-processing application130.
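One plausible shape for this launch step is sketched below in TypeScript; the type-to-application mapping and the host launcher callback are assumptions introduced only for illustration.

```typescript
// Illustrative sketch: reopening from a parked item resolves the stored
// pointer and launches the associated application; nothing was duplicated.

interface ParkedItemRef {
  fileUrl: string;  // pointer held in the parked item's metadata
  fileType: string; // e.g., "word-processing-document"
}

// Assumed mapping from content types to the applications that open them.
const applicationForType: Record<string, string> = {
  "word-processing-document": "word-processor",
  "spreadsheet-document": "spreadsheet-application",
  "email-message": "email-client",
};

function launchFromParkedItem(
  item: ParkedItemRef,
  launchApplication: (appName: string, fileUrl: string) => void // host launcher (assumed)
): void {
  const appName = applicationForType[item.fileType];
  if (!appName) {
    throw new Error(`No application registered for type: ${item.fileType}`);
  }
  launchApplication(appName, item.fileUrl); // opens the original file in place
}
```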
In other examples, the corresponding file (e.g., file A108) may be opened within the park pane140or a secondary pane of a standalone content-deferral application104by launching a web-based version of the application corresponding to the file within the park pane140(e.g., a web-based word-processing application). When multiple parked items are created, the parked items may be stored as different subsets according to categories or tags of the parked items. In the example depicted, parked item A150(corresponding to file A108), parked item B160(corresponding to file B110), parked item C180(corresponding to file C112), and parked item D190(corresponding to file D114) have been created. Parked item A150and parked item B160may have the same categorization or tag and are thus related together as subset A155of the parked items. Similarly, parked item C180(with metadata C174and additional data C126) and parked item D190(with metadata D176and additional data D128) may share a common categorization or tag and are thus related together as subset B185of parked items. The categorization or tag that results in the groupings of parked items may be stored in the respective additional data of the parked items and is editable by a user. Accordingly, the user may add or change the categorization of the parked items through the content-deferral application104, either in the standalone form or as a park pane140. Categories can include current project, reference, personal, or any other general categorization associated with the content of the parked item. The categories may be set manually or automatically via the content-deferral application104. For instance, the parked items may be automatically categorized based on the extracted metadata associated with each parked item. For example, the metadata for a file associated with a parked item may include particular tags (such as a project name) that may be automatically copied into the additional data of a parked item and used for categorization. Other automatic categorization may result from analyzing the content of the file or other content item to extract or generate one or more categories for the parked item. For instance, keywords or other concepts within the file or other content item may be determined and used to automatically tag a category for the parked item. As briefly discussed above, the metadata for the parked items may be extracted from a meta storage layer or store of the storage platform116. The metadata provides identifying data about the corresponding content item or file that can be used to create a display of the parked item that informs the user about the file. For instance, the metadata may include data such as file title, file type, a preview of the file, author, created time, last modified time, tags, status, subject, related people, etc. The parked item may also include a link or pointer to the associated file, and such a link or pointer may be included in the metadata. The content-deferral application104may display the metadata within the respective parked items in various manners as discussed below. When parked items are accessed within a park pane140of an application, the parked items may be organized or sorted based on context of the parked items, which may be based on the metadata and/or additional data of the respective parked item. As an example, the parked items may be sorted or filtered based on the type of application in which the park pane140is activated.
For instance, if the park pane140is activated in a word-processing application, parked items corresponding to word-processing documents may be displayed first. As another example, if the park pane140is activated in an email application, parked items corresponding to emails may be displayed first. The sorting or filtering of the parked items within a park pane140may also be based on tags or metadata of a document that is currently open in the application. For instance, if a word-processing application has a document open for editing, and the document has a "Project A" tag, the parked items that similarly have a "Project A" tag may be displayed first.

FIG.2depicts an example home interface200for the content-deferral application104in accordance with various embodiments of the present disclosure. In this example, the content-deferral application104is a web-based application and accessed via a web browser through a web address entered into a navigation bar281. However, in other embodiments, the content-deferral application104may be a native application or hybrid application. The home interface200serves as a hub for parked items and generates a preview, "card," or representation220for each parked item. Each parked item preview or representation220displays various levels of information, such as the metadata as well as the additional data associated with each parked item. The home interface200may provide general functionalities directed to all or a subset of parked items as well as specific functionalities associated with each parked item representation220. The home interface200of the content-deferral application104may include a list230of web-based application icons and an indicator232of the current app being accessed. A selectable expansion user interface element234may also be provided to display additional web-based applications that are available for selection. The home interface200may provide top-level functionalities for interacting with the parked items. For instance, a selectable archive icon202may be included for archiving a selected parked item. A selectable note icon204may be included for generating and parking a note (e.g., creating a new parked item for a newly created note). A search bar250may also be included to allow for searching for all content items (e.g., files) and parked items. A search icon206may be provided for searching just the parked items within the web-based park application. For instance, upon selection of the search icon206, a search field may be presented where query terms may be entered and executed against the available parked items for the user accessing the content-deferral application104. A filter icon208for filtering parked items may also be displayed. When the filter icon208is selected, a set of filter criteria may be displayed. The filter criteria may be based on fields of the metadata and/or additional information of the available parked items. Category filter options280may also be separately displayed. Each of the category filter options corresponds to one or more categories of the parked items. For example, a first category filter option is "All items," and when the "All items" element is selected, all the parked items are displayed (e.g., the parked items are not filtered by category). A second category filter option, associated with a second category ("Reference"), and a third category filter option, associated with a third category ("Current Project"), may also be displayed.
When one of those category filter options is selected, the displayed parked items are filtered based on the corresponding category. A new category option240may also be displayed for creating a new category. Each of the parked item representations220may also include interactive and/or informative features. For instance, the parked item representation220may include the title228of the file corresponding to the parked item. A content preview214may also be included in the parked item representation. The content preview214provides a preview or an image of content of the file associated with the parked item. The image of content of the file may be stored as part of the metadata for the file. The parked item representation220may include a content type indicator212that indicates a type of file to which the parked item corresponds. The content type indicator212may be an icon of an application to which the file of the parked item corresponds and/or the application that would be used to open the corresponding file. For instance, the content type indicator212may be an icon of a word-processing application where the file of the parked item is a word-processing document. A recent activity indicator226may also be provided that indicates recent activity relating to the parked item and/or the file corresponding to the parked item. For example, the recent activity indicator226may indicate that the user ("you") parked the file as the most recent activity. In other examples, the recent comments and/or share activity of the file may be reflected in the recent activity indicator226. A category label260may also be provided in the parked item representation220. The category label260indicates the category of the parked item, which may be stored as additional data of the parked item. The category label260may be selectable, and selection of the category label260provides an interface for changing the category of the parked item. The parked item representation220also includes action icons for actions that may be performed on the parked item representation220and/or the parked item. For instance, the parked item representation220may include an archive icon222that, when selected, causes the parked item to be archived (e.g., deleted from the content-deferral platform). The parked item representation220may also include a launch icon224that, when selected, causes the file associated with the parked item to be launched in its corresponding application (e.g., a word-processing application where the file is a word-processing document). A note generation icon270may also be included that, when selected, provides an interface for adding or editing a note associated with the parked item. When the note is added, the note may be stored as additional data of the parked item (rather than metadata of the file). Also, once a note is added, or if a note has been previously associated with a parked item, the parked item representation220includes a display of at least a portion of the content of the note in a note indicator218. An additional-functions indicator216may also be provided in the parked item representation220. When the additional-functions indicator216is selected, additional functions for interacting with the parked item may be displayed, such as an option to share the parked item. In some examples, an expansion icon210may also be displayed. When the expansion icon210is selected, the parked item representation220is expanded to show more information about the parked item, as shown inFIG.3and discussed below.
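As a simple illustration of how such a search might run entirely against the parked items' stored metadata and additional data, consider the following TypeScript sketch; the particular fields searched are assumptions, but the point is that no underlying file needs to be opened or fetched.

```typescript
// Illustrative sketch: query terms are matched against metadata and notes
// already held in each parked item, so no original file is loaded.

interface SearchableParkedItem {
  title: string;
  author: string;
  tags: string[];
  note?: string; // additional data is searchable alongside the metadata
}

function searchParkedItems(
  items: SearchableParkedItem[],
  query: string
): SearchableParkedItem[] {
  const q = query.trim().toLowerCase();
  if (q === "") return items; // an empty query leaves the list unfiltered
  return items.filter(
    (item) =>
      item.title.toLowerCase().includes(q) ||
      item.author.toLowerCase().includes(q) ||
      item.tags.some((tag) => tag.toLowerCase().includes(q)) ||
      (item.note?.toLowerCase().includes(q) ?? false)
  );
}
```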
FIG.3depicts another example home interface300for the content-deferral application104after an expansion icon210of a parked item representation220has been selected. More specifically,FIG.3depicts an example interface that may be generated by the content-deferral application104upon a user activating the expansion icon210in the parked item representation220titled "User Research" inFIG.2. In this example, an expanded or detailed parked item preview320is displayed following expansion of the parked item representation220. Expanding the parked item representation220to form the detailed parked item preview320allows for more information to be displayed about the parked item and/or the corresponding file. For instance, expanded preview images308and additional page previews312of the file may be displayed in the detailed parked item preview320. The additional page previews312may be selected for enlargement, and a search option314for searching the file may also be provided. Additional activity data302regarding activity on the file may also be displayed along with timestamps304of the activity. An expanded notes indicator306and a note editing icon316may also be displayed for editing the note. The expansion icon210may also change to a contraction icon310that, when selected, causes the detailed parked item preview320to become the parked item representation220again.

FIGS.4-8illustrate various integrations of the content-deferral application104into other applications, such as file or content-item viewing applications, productivity applications (e.g., word-processing applications, spreadsheet applications, etc.), video applications, communication applications, among others.FIGS.4A-Cdepict an example interface400of a file-accessing application404that allows multiple types of files to be identified and accessed. InFIG.4A, the file-accessing application404lists a plurality of different content item or file representations401. Each of the file representations401may include an additional-functions option402(e.g., an ellipsis icon) that, when selected, provides an action menu406of additional options for interacting with the corresponding file, such as opening the file or sharing the file. The action menu406also includes a defer or park option408. The park option408, when selected, causes a parked item to be created for the file or content item. While the park option408is depicted as being provided in an action menu406that is activated upon selection of an additional-functions option402, in other examples, the park option408may be presented in different manners. For instance, the park option408may be displayed in a context menu that is activated based on a hover, long-press, and/or right-click of the content-item or file representation. In other examples, the park option408may be displayed within the file representation. In still other examples, the park option408may be displayed as a more permanent icon within the interface of the file-accessing application, such as part of the ribbon. In such examples, a particular file representation401may first be selected, and the park option may be subsequently selected to cause the creation of a parked item for the file of the file representation401. The interface400of the file-accessing application404may also include a park-pane activation icon410that, when selected, causes a park pane to be displayed, as discussed further below.
As shown inFIG.4B, once the park option408has been selected, a park notification412may be displayed indicating that the parked item has been created. The notification412may include the title of the file to convey to the user which file has been parked. The notification412may also include an undo option413that causes the created parked item to be deleted. The notification412may also include a view option415that, when selected, causes the created parked item to be displayed. Displaying the parked item may include activating a park pane and/or launching a standalone content-deferral application104.

FIG.4Cdepicts a park sidebar or park pane440. As discussed above, the park pane440may be accessed by activating a park-pane activation icon410in the ribbon of the application404, or the park pane440may be automatically activated upon parking a file. The park pane440displays representations of parked items for the user, and the parked items may be represented similarly to the parked item representations220discussed above. Accordingly, the park pane440may list the parked items for a specific user and include capabilities to add, modify, or categorize the parked items therein. These capabilities are illustrated inFIGS.8A-8Land further described in the corresponding paragraphs below. In the example depicted, representations for the parked item A150and the parked item B160may be displayed within the park pane440. The representations may include information or data based on the metadata A170, the additional data A120, the metadata B172, and/or the additional data B122.

FIGS.5A-Cdepict an example interface500of a collaboration or messaging application504in accordance with various embodiments of the present disclosure. The content-deferral application104is integrated with the messaging application504to allow for messages and/or files to be parked. InFIG.5A, the messaging application includes a plurality of messages503that have been sent between users in a chat-based setting. In one of the messages, a file has also been included or shared, as indicated by the file representation501. The file representation501may include an additional-functions option502(e.g., an ellipsis icon) that, when selected, provides an action menu of additional options for interacting with the corresponding file, such as opening the file or sharing the file. The action menu also includes a defer or park option. The park option, when selected, causes a parked item to be created for the file. Similar to the discussion above, in other examples the park option may be presented in other manners (such as in the representation501or in another context menu or ribbon). The interface500may also include a park-pane activation icon510. Alternatively or additionally, parked items may be created for one or more of the messages503. For instance, rather than interacting with the file representation501, an interaction with the message503may allow for the message to be parked. For example, upon activation of a context menu506for a message503(e.g., via right-click, long press, hover, etc.), a park message option508may be displayed, as shown inFIG.5B. When the park message option508is selected, a parked item is created for the message503. A notification may also be displayed similar to the notification discussed above.

FIG.5Cdepicts an example of the interface500with the park pane540activated, such as after creation of a parked item or activation of the park-pane activation icon510.
The park pane540may be populated similarly to the park pane440discussed above. However, the parked items that are displayed, or the order of their display, in the park pane540may be different from those in park pane440because the messaging application504is a different type of application than the application404. For instance, the parked items displayed in the park pane540may be parked items with files that correspond to the messaging application504. In the example depicted, representations may be displayed for parked item C180and parked item D190. Additional examples of park pane540configurations and parked item representations are provided inFIGS.8A-8L.

FIGS.6A-Cdepict an example interface600of a word-processing application604in accordance with various embodiments of the present disclosure. The content-deferral application104is integrated with the word-processing application604to allow for word-processing documents to be parked. InFIG.6A, the word-processing application604displays an open word-processing document in a content editing portion of the interface600. Within a ribbon of the interface, the word-processing application604includes a selectable park options icon610. In the example depicted, selection of the park options icon610causes a drop-down menu606to be displayed below the park options icon610. The drop-down menu606may include two options: a park option608and a park-pane activation option611. Selection of the park option608causes a new parked item to be created for the document601that is currently open in the word-processing application604. Selection of the park-pane activation option611causes a park pane to be displayed. In other examples, the park-pane activation option611and/or the park option608may be displayed in different positions, such as directly in the ribbon and/or in a context menu that is activated from an interaction with the displayed portion of the document (e.g., right-click). When the park option608is selected, the new parked item is created and a notification612may be displayed, as shown inFIG.6B. The notification612may include the title of the file to convey to the user which file has been parked. The notification612may also include an undo option613that causes the created parked item to be deleted. The notification612may also include a view option615that, when selected, causes the created parked item to be displayed. Displaying the parked item may include activating a park pane and/or launching a standalone content-deferral application104.

FIG.6Cdepicts the interface600of the word-processing application604with the park pane640activated. Similar to the park panes discussed above, the park pane640can be populated with representations of parked items, such as parked item A150and parked item B160. The parked items displayed within the park pane640may be based on context of the word-processing application604. For instance, the parked items displayed in the park pane640may be filtered or sorted such that parked items corresponding to word-processing documents are displayed first. Context of the open document601may also be used to filter or sort the parked items for display in the park pane640. For example, if the open document601has a particular tag or category, parked items may be filtered or sorted based on that particular tag or category. In some examples, the data that is presented in each parked item in the park pane640may be copied-and-pasted into the open document601, which allows for further efficiencies.
For example, instead of having to launch a new application or window to find data about another file that has been parked, the data may be readily available within the park pane640.

FIGS.7A-Bdepict an example interface700of an email application704in accordance with various embodiments of the present disclosure. The content-deferral application104is integrated with the email application704to allow for emails to be parked. InFIG.7A, the email application704displays a listing703of email representations705of a folder, such as an inbox. Email content701from a selected email representation705is also concurrently displayed. A park options icon710is also provided in a ribbon of the email application704. In the example depicted, selection of the park options icon710causes a drop-down menu706to be displayed below the park options icon710. The drop-down menu706may include two options: a park option708and a park-pane activation option711. Selection of the park option708causes a new parked item to be created for the email that is currently open/selected in the email application704(e.g., the email for which content is being displayed). Selection of the park-pane activation option711causes a park pane to be displayed. In other examples, the park-pane activation option711and/or the park option708may be displayed in different positions, such as directly in the ribbon and/or in a context menu that is activated from an interaction with the displayed portion of the document (e.g., right-click). In other examples, a user may cause a context menu to be displayed for a particular email in the email list703(e.g., by right-clicking, long-pressing, etc.). The context menu may include at least the park option708. When the park option708is selected, a new parked item is created for the email for which the context menu was generated (e.g., the email in the list where the right-click was received).

FIG.7Bdepicts the interface700of the email application704with the park pane740activated. Similar to the park panes discussed above, the park pane740can be populated with representations of parked items, such as parked item C180and parked item D190. The parked items displayed within the park pane740may be based on context of the email application704. For instance, the parked items displayed in the park pane740may be filtered or sorted such that parked items corresponding to emails are displayed first. Context of the selected email701may also be used to filter or sort the parked items for display in the park pane740. For example, if the selected email has a particular tag or category, parked items may be filtered or sorted based on that particular tag or category.

FIGS.8A-8Ldepict example interfaces of a sidebar or park pane according to various examples of the present technology.FIG.8Adepicts an example interface800of a file-accessing application804, in a home802state, through which a park pane may be accessed. The file-accessing application804may be a cloud-based service platform that incorporates a portfolio of applications, tools, and/or other services. For instance, the application804may include a park-pane activation icon810that, when selected, causes a park pane840to be displayed, as shown inFIG.8B.

FIG.8Bdepicts an example content-deferral sidebar or park pane840that includes multiple parked item representations820. In addition to the parked item representations820, the park pane840may also include a search icon824and a filter icon826.
The search icon824allows for searching for parked items and may function similarly to the search icon206discussed above. The filter icon826allows for filtering parked items and may operate similarly to the filter icon208discussed above. A category selection menu825may also be provided in the park pane840, which allows for a category of parked items to be selected. The category selection menu825may operate similarly to the category filter options280discussed above. The park pane840may also include a selectable note icon806for creating a new note and a corresponding parked item for the note. The selectable note icon806may operate similarly to the selectable note icon204discussed above. A close option812may also be presented. Selection of the close option812causes the park pane840to close. The parked item representations820may include details and features similar to the parked item representations220discussed above with reference toFIG.2. For example, the parked item representations820may each include a description or title829of the file corresponding to the parked item. A content preview811may also be included in the parked item representation. The content preview811provides a preview or an image of content of the file associated with the parked item. The image of the content of the file may be stored as part of the metadata for the file. In some examples, the content preview811may be an image associated with the file. For instance, when the parked item is for a message, the content preview may be an image of the user(s) who sent the message. The parked item representation820may include a content type indicator813that indicates a type of file to which the parked item corresponds. The content type indicator813may be an icon of an application to which the file of the parked item corresponds and/or the application that would be used to open the corresponding file. For instance, the content type indicator813may be an icon of a word-processing application where the file of the parked item is a word-processing document. A recent activity indicator828may also be provided that indicates recent activity relating to the parked item and/or the file corresponding to the parked item. For example, the recent activity indicator828may indicate the time at which the file was created or last modified. A category label or indicator818may also be provided in the parked item representation820. The category indicator818indicates the category for the parked item, which may be stored as additional data of the parked item and/or as metadata of the corresponding file. As a result, the park pane840provides sufficient identifying and contextual information about each parked item to be meaningful to a user without requiring the user to open the original file associated with the parked item. The parked item representation820also includes action icons for actions that may be performed on the parked item representation820and/or the parked item. The parked item representation820may also include a launch icon814that, when selected, causes the file associated with the parked item to be launched in its corresponding application (e.g., a spreadsheet application where the file is a spreadsheet document). A note icon816may also be included where a note has been previously created for the particular parked item. Selection of the note icon816reveals the content of the note associated with the parked item. A note interface for altering or editing the note may also be displayed.
In some examples, the order of the parked item representations820may be adjusted by the user. For instance, one parked item representation820may be dragged-and-dropped into a new position within the list of parked item representations820. The ordering of the parked item representations may also be stored as additional data within the parked item such that the ordering is stored in the content-deferral platform. Thus, when the parked items are accessed at a later time, the ordering of the parked items may remain the same.

FIG.8Cdepicts an example park pane840after selection of the additional-functions indicator822. Based on a selection of the additional-functions indicator822, an action menu834may be displayed over a portion of the parked item representation820. In other examples, the action menu may be activated by a secondary selection of a parked item representation820, such as a right-click, long press, etc. The action menu834includes an open action element836that indicates an application in which the file corresponding to the parked item may be opened. In the example depicted, the parked item is a word-processing document, and the action element836indicates that the document would be opened in a word-processing application. Selection of the open action element836causes the file associated with the parked item to be opened in the indicated application. Additional action elements838may also be provided in the action menu834. The additional actions may include a share action element835, a category change element837, and an archive element838. Selection of the archive element838causes the corresponding parked item to be archived or deleted. The share action element835provides functionality for sharing the parked item. For instance, selection of the share action element835may present additional fields or options for providing contact or other identifying information of another user with which the parked item is to be shared. Upon confirmation, the parked item is then shared with the identified other users. Sharing the parked item may entail providing the other user(s) with access to the parked item stored remotely in a cloud storage platform (e.g., the content-deferral platform118) and/or sending a copy of the parked item (e.g., the data structure including the metadata and additional data). In some examples, the category change element837changes state based on whether the parked item already has an associated category. In the example depicted, the parked item already has a category of "Current Project" associated with it. Accordingly, because the parked item already has an associated category, the category change element837indicates the current category, and when the category change element837is selected, the categorization of the parked item may be removed or options for selecting a different category may be displayed.

FIG.8Ddepicts an example park pane840and an action menu834for a different parked item that did not previously have a category assigned. Where the parked item does not have a category assigned, the category change element837changes state to allow for a category to be added. When the category change element837is selected, a category options panel844may be displayed that provides categories for selection and/or the ability to define a new category or search for other categories. In the example depicted, the "Reference" category is selected, and a category indicator818is added to the current parked item representation820.
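The category-driven filtering and context-aware ordering described here and in the earlier park pane examples might look roughly like the following TypeScript sketch; the category names and the host-type comparison are illustrative assumptions rather than the described system's actual logic.

```typescript
// Illustrative sketch: the category stored in a parked item's additional data
// drives filtering, and the host application's content type drives ordering.

interface CategorizedParkedItem {
  title: string;
  fileType: string;  // e.g., "word-processing-document" or "email-message"
  category?: string; // e.g., "Current Project" or "Reference"
}

/** Keep only items in the selected category; "All items" keeps everything. */
function filterByCategory(
  items: CategorizedParkedItem[],
  selectedCategory: string
): CategorizedParkedItem[] {
  if (selectedCategory === "All items") return items;
  return items.filter((item) => item.category === selectedCategory);
}

/** Order items so those matching the host application's content type come first. */
function sortByHostContext(
  items: CategorizedParkedItem[],
  hostFileType: string
): CategorizedParkedItem[] {
  return [...items].sort(
    (a, b) =>
      Number(a.fileType !== hostFileType) - Number(b.fileType !== hostFileType)
  );
}
```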
FIG.8Edepicts an example park pane840when the category selection menu825is selected. Selection of the category selection menu825may cause a category selection menu846to be displayed. The category selection menu846may include categories for parked items that are available to the user. Selection of one of the categories causes the displayed parked item representations820to be filtered according to the selected category.

FIG.8Fdepicts an example park pane840and an action menu generated for a parked item that does not have a previously associated note in the additional data of the parked item. For instance, because the parked item does not have a previously associated note, the action menu834provides a note generation option849. Selection of the note generation option849provides an interface for adding a note to the parked item.

FIG.8Gdepicts an example park pane840where a content preview811of a parked item representation820is being selected. When the content preview811is selected, the parked item representation820expands to show more information about the parked item, as shown inFIG.8H.

FIG.8Hdepicts an expanded parked item representation854. The expanded parked item representation854includes an expanded content preview855that may show further images of the corresponding file, such as additional page previews for the file. The expanded content preview855may be scrollable to view more of the preview. An expansion indicator857may also be displayed within the expanded content preview855. Selection of the expansion indicator857causes further expansion of the expanded content preview855.

FIG.8Idepicts an example park pane840where a note icon816is selected within an expanded parked item representation854. Selection of the note icon816causes the associated note860to be displayed within the expanded parked item representation854. The note may then be edited from within the parked item representation.

FIG.8Jdepicts an example park pane840after the expansion element857has been selected. When the expansion element857is selected, the expanded content preview855is further enlarged, and the overall size of the parked item representation further expands to become an overlaid parked item representation858that includes a further enlarged content preview866. The further enlarged content preview866and the overlaid parked item representation may be displayed as an overlay of other parked item representations820within the park pane840. The further enlarged content preview866may be scrolled or paged in some examples. The enlarged content preview866may include a further expansion icon862as well.

FIG.8Kdepicts an example park pane840when the selectable note icon806has been selected. Selection of the selectable note icon806causes a new note element874to be displayed where text can be entered for a new note. Once text for the note has been entered into the new note element874, a confirm or add element876may be selected to cause a new parked item to be generated for the note.FIG.8Ldepicts an example park pane840after a new parked item is created for a newly generated note, such as the note generated inFIG.8K. The new parked item is represented by the new parked item representation878.

FIG.9Adepicts an example method900for deferring content and creating a parked item in accordance with various embodiments of the present disclosure.
The example method900may be performed by a content-deferral application operating on a client device and/or a web-based content-deferral application operating on a server and causing the display of application features through a web browser. At operation902, an input is received from within a running application for creating a parked item for a file or content item. The file or content item may be open, displayed, or selected within the running application. For example, the input may be a user selection of a park option, as discussed above. For instance, when a file or content item is displayed within the running application (e.g., messaging application, email application, video application, productivity application), the park option may be selected from an action menu, context menu, or ribbon of the application. In other examples, the trigger for creating a parked item may be automatically performed by the content-deferral application. For instance, the content-deferral application may monitor or track various processes, activities, or interactions between files and applications during their use, or the content-deferral application may receive signals that indicate similar usage data. For example, upon an open file not receiving any interactions for a threshold period of time and still remaining open within an application, a parked item may automatically be created for the file. In some examples, a prompt may be generated requesting confirmation that the parked item should be created. Based on the input or trigger to create the parked item, a new data structure or container for the parked item is created at operation904. The container for the parked item includes fields for metadata and/or additional data for the file or content item that was displayed or selected from the running application. For example, the data structure may be the data structure described above inFIG.1. At operation906, metadata for the file or content item is requested. The metadata for the file or content item may be requested from the running application, which may have access to the metadata, and/or from a remote server where the metadata is stored. For instance, a metadata storage layer of a storage platform may be queried for the metadata of the file. The query may be in the form of a unique identifier for the file, such as a globally unique identifier (GUID). The metadata may then be received and stored within the data container for the parked item at operation908. Additional data for the parked item may then be received or generated at operation910. The additional data may include a note or annotation that is generated based on input from the user. The additional data may also include a category for the parked item, which may be added via input from the user or automatically generated. A confirmation notification that the parked item has been created may be caused to be displayed at operation912. The confirmation notification may be displayed prior to receiving the additional data in some instances. At operation914, a parked item representation for the newly created parked item is caused to be displayed. For instance, the parked item representation may be displayed in a park pane or within a standalone content-deferral application.

FIG.9Bdepicts an example method920for accessing parked items in accordance with various embodiments of the present disclosure. At operation922, an indication to initialize a content-deferral application on a client device is received.
Initializing the content-deferral application may include launching a standalone content-deferral application or a park pane within another application. The indication to initialize the content-deferral application may be receiving a selection to launch the standalone content-deferral application or a selection of a user interface element to launch the park pane. At operation924, parked items are retrieved from a content-deferral platform on a server remote from the client device. The parked items may be retrieved based on a user that is signed into the client device, the content-deferral application, and/or the application in which the park pane is activated. For instance, each user may be associated with a different set of parked items that the particular user has created (or that has been shared with the user). The parked items may also be retrieved based on context of the content-deferral application. For instance, if the content-deferral application is launched as a standalone application, all parked items may be retrieved. In an example where the content-deferral application is launched as a park pane in a productivity application, the parked items may be retrieved based on context of the productivity application. For instance, where the park pane is activated in a word-processing application, parked items corresponding to word-processing documents may be retrieved. In yet other examples, where the content-deferral application is launched as a park pane within another application, the parked items that are retrieved are based on the context of a file or content item that is open within the application. For instance, if the opened file is associated with a particular category, parked items matching that category may be retrieved. In other examples, if the opened file or message is associated with another user (e.g., shared with another user, received from another user), parked items for files that are also associated with that user may be retrieved. Retrieving the parked items may include transmitting a query to the content-deferral storage platform. The query criteria may include the context of the application, document, and/or content item. For instance, the query may include a value for a document type. The content-deferral platform executes the query against the metadata stored in the parked items and returns parked items that match the query criteria. At operation926, the retrieved parked items are caused to be displayed on the client device, either in the standalone content-deferral application or in the park pane. The parked items may be displayed as parked item representations and have the features of the parked item representations discussed above. At operation928, an edit to additional data of the parked item is received via user input. For instance, a note for a parked item may be added or edited, a category of the parked item may be added or edited, and/or an ordering of the parked item may be adjusted. Such edits may be stored in the content-deferral platform on the remote server such that they can be accessed at a subsequent time or by a different client device. At operation930, a selection to launch a file or content item associated with a parked item is received. For instance, a launch icon within a parked item representation may be selected.
Based on receiving the selection to launch the file or content item, the content-deferral application causes the application corresponding to the type of the file or content item to be launched, and the file or content item to be loaded by the launched application, at operation932.

FIG.9Cdepicts an example method950for accessing parked items from multiple applications in accordance with various embodiments of the present disclosure. At operation952, an indication is received to initialize a park pane within a first application that has a first application type (e.g., messaging application, video application, communication application, word-processing application). Based on receiving the indication to initialize the park pane in the first application, parked item representations for a first set of parked items are caused to be displayed in the park pane, at operation954, based on a context of the first application, such as the application type. For instance, the first set of parked items may be parked items having files or content items corresponding to the application type of the first application. The first set of parked items may be displayed prior to, or above, other parked items. Accordingly, the context may be used for sorting and/or filtering the parked item representations. At operation956, an indication is received to initialize a park pane within a second application having a second application type that is different from the first application type. Based on receiving the indication to initialize the park pane in the second application, parked item representations for a second set of parked items are caused to be displayed in the park pane, at operation958, based on a context of the second application, such as the application type. The second set of parked items may be different than the first set of parked items.

FIG.10is a block diagram illustrating physical components (i.e., hardware) of a computing device1000with which examples of the present disclosure may be practiced. The computing device components described below may be suitable for a client device running the web browser discussed above. In a basic configuration, the computing device1000may include at least one processing unit1002and a system memory1004. The processing unit(s) (e.g., processors) may be referred to as a processing system. Depending on the configuration and type of computing device, the system memory1004may comprise, but is not limited to, volatile storage (e.g., random access memory), non-volatile storage (e.g., read-only memory), flash memory, or any combination of such memories. The system memory1004may include an operating system1005and one or more program modules1006suitable for running software applications1050such as the park application1021and productivity applications1023. The operating system1005, for example, may be suitable for controlling the operation of the computing device1000. Furthermore, aspects of the invention may be practiced in conjunction with a graphics library, other operating systems, or any other application program, and are not limited to any particular application or system. This basic configuration is illustrated inFIG.10by those components within a dashed line1008. The computing device1000may have additional features or functionality. For example, the computing device1000may also include additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape.
Such additional storage is illustrated inFIG.10by a removable storage device1009and a non-removable storage device1010. As stated above, a number of program modules and data files may be stored in the system memory1004. While executing on the processing unit1002, the program modules1006may perform processes including, but not limited to, one or more of the operations of the methods illustrated inFIGS.9A-9C. Other program modules that may be used in accordance with examples of the present invention may include applications such as electronic mail and contacts applications, word processing applications, spreadsheet applications, database applications, slide presentation applications, drawing or computer-aided application programs, etc. Furthermore, examples of the invention may be practiced in an electrical circuit comprising discrete electronic elements, packaged or integrated electronic chips containing logic gates, a circuit utilizing a microprocessor, or on a single chip containing electronic elements or microprocessors. For example, examples of the invention may be practiced via a system-on-a-chip (SOC) where each or many of the components illustrated inFIG.10may be integrated onto a single integrated circuit. Such an SOC device may include one or more processing units, graphics units, communications units, system virtualization units and various application functionality all of which are integrated (or "burned") onto the chip substrate as a single integrated circuit. When operating via an SOC, the functionality, described herein, with respect to deferring and retrieving content, may be operated via application-specific logic integrated with other components of the computing device1000on the single integrated circuit (chip). Examples of the present disclosure may also be practiced using other technologies capable of performing logical operations such as, for example, AND, OR, and NOT, including but not limited to mechanical, optical, fluidic, and quantum technologies. The computing device1000may also have one or more input device(s)1012such as a keyboard, a mouse, a pen, a sound input device, a touch input device, etc. The output device(s)1014such as a display, speakers, a printer, etc. may also be included. The aforementioned devices are examples and others may be used. The computing device1000may include one or more communication connections1016allowing communications with other computing devices1018. Examples of suitable communication connections1016include, but are not limited to, RF transmitter, receiver, and/or transceiver circuitry; universal serial bus (USB), parallel, and/or serial ports. The term computer readable media as used herein may include computer storage media. Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, or program modules. The system memory1004, the removable storage device1009, and the non-removable storage device1010are all computer storage media examples (i.e., memory storage).
Computer storage media may include RAM, ROM, electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other article of manufacture which can be used to store information and which can be accessed by the computing device1000. Any such computer storage media may be part of the computing device1000. Computer storage media does not include a carrier wave or other propagated data signal. Communication media may be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and includes any information delivery media. The term "modulated data signal" may describe a signal that has one or more characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), infrared, and other wireless media.

FIGS.11A and11Billustrate a mobile computing device1100, for example, a mobile telephone, a smart phone, a tablet personal computer, a laptop computer, and the like, with which aspects of the invention may be practiced. With reference toFIG.11A, an example of a mobile computing device1100for implementing at least some aspects of the present technology is illustrated. In a basic configuration, the mobile computing device1100is a handheld computer having both input elements and output elements. The mobile computing device1100typically includes a display1105and one or more input buttons1110that allow the user to enter information into the mobile computing device1100. The display1105of the mobile computing device1100may also function as an input device (e.g., a touch screen display). If included, optional side input elements1115allow further user input. The side input elements1115may include buttons, switches, or any other type of manual input elements. In alternative examples, mobile computing device1100may incorporate more or fewer input elements. Key input may generally be received from a soft keyboard displayed on the display1105, but in other examples, the mobile computing device1100may also include an optional physical keypad. Optional keypad1135may be a physical keypad or a "soft" keypad generated on the touch screen display. One or more audio transducers1125(e.g., speakers) may also be included. In some examples, the mobile computing device1100incorporates a vibration transducer for providing the user with tactile feedback. In yet another example, the mobile computing device1100incorporates input and/or output ports, such as an audio input (e.g., a microphone jack), an audio output (e.g., a headphone jack), and/or a video output for sending signals to or receiving signals from an external device.

FIG.11Bis a block diagram illustrating the architecture of one example of a mobile computing device. That is, the mobile computing device1100can incorporate a system (i.e., an architecture)1102to implement some examples.
In one example, the system1102is implemented as a “smart phone” capable of running one or more applications (e.g., videoconference or virtual meeting application, web browser, e-mail, calendaring, contact managers, messaging clients, games, and media clients/players). In some examples, the system1102is integrated as a computing device, such as an integrated personal digital assistant (PDA) or wireless phone. One or more application programs1150may be loaded into the memory1162and run on or in association with the operating system1164. Examples of the application programs include videoconference or virtual meeting programs, phone dialer programs, e-mail programs, personal information management (PIM) programs, word processing programs, spreadsheet programs, Internet browser programs, messaging programs, and so forth. The system1102also includes a non-volatile storage area1168within the memory1162. The non-volatile storage area1168may be used to store persistent information that should not be lost if the system1102is powered down. The application programs1150may use and store information in the non-volatile storage area1168, such as e-mail or other messages used by an e-mail application, and the like. A synchronization application (not shown) may also reside on the system1102and is programmed to interact with a corresponding synchronization application resident on a host computer to keep the information stored in the non-volatile storage area1168synchronized with corresponding information stored at a remote device or server. As should be appreciated, other applications may be loaded into the memory1162and run on the mobile computing device1100. The system1102has a power supply1170, which may be implemented as one or more batteries. The power supply1170might further include an external power source, such as an AC adapter or a powered docking cradle that supplements or recharges the batteries. The system1102may also include a radio1172that performs the function of transmitting and receiving radio frequency communications. The radio1172facilitates wireless connectivity between the system1102and the “outside world,” via a communications carrier or service provider. Transmissions to and from the radio1172are conducted under control of the operating system1164. In other words, communications received by the radio1172may be disseminated to the application programs1150via the operating system1164, and vice versa. The visual indicator1120may be used to provide visual notifications and/or an audio interface1174may be used for producing audible notifications via the audio transducer1125. In the illustrated example, the visual indicator1120is a light emitting diode (LED) and the audio transducer1125is a speaker. These devices may be directly coupled to the power supply1170so that when activated, they remain on for a duration dictated by the notification mechanism even though the processor1160and other components might shut down for conserving battery power. The LED may be programmed to remain on indefinitely until the user takes action to indicate the powered-on status of the device. The audio interface1174is used to provide audible signals to and receive audible signals from the user. For example, in addition to being coupled to the audio transducer1125, the audio interface1174may also be coupled to a microphone to receive audible input, such as to facilitate a telephone conversation. 
The system1102may further include a video interface1176that enables an operation of an on-board camera1130to record still images, video stream, and the like. A mobile computing device1100implementing the system1102may have additional features or functionality. For example, the mobile computing device1100may also include additional data storage devices (removable and/or non-removable) such as magnetic disks, optical disks, or tape. Such additional storage is illustrated inFIG.11Bby the non-volatile storage area1168. Data/information generated or captured by the mobile computing device1100and stored via the system1102may be stored locally on the mobile computing device1100, as described above, or the data may be stored on any number of storage media that may be accessed by the device via the radio1172or via a wired connection between the mobile computing device1100and a separate computing device associated with the mobile computing device1100, for example, a server computer in a distributed computing network, such as the Internet. As should be appreciated, such data/information may be accessed via the mobile computing device1100via the radio1172or via a distributed computing network. Similarly, such data/information may be readily transferred between computing devices for storage and use according to data/information transfer and storage means, including electronic mail and collaborative data/information sharing systems. As will be understood from the foregoing disclosure, many technical advantages and improvements result from the present technology. For instance, the present technology provides for significant improvement in computing resources associated with accessing pertinent information associated with parked content or parked items. In an aspect, the technology relates to a system for reviewing parked content via a park pane interface of the web-based park app in various other productivity applications. This pane enables effective and efficient review and resumption of previously deferred content. In this way, the resource-intensive aspects associated with reidentifying content at a later time through searches and multiple window-switching actions are avoided. In addition, the parked items' use of metadata rather than the entire file corresponding to the parked item results in bandwidth savings in transmitting and receiving the parked items. In an example, the technology relates to a system for deferring content. The system includes a processor; and memory storing instructions that, when executed by the processor, cause the system to perform operations. The operations include receiving an input from within an application executing on a client device to create a parked item for a content item displayed within the application; creating a data container, in a content-deferral platform, for the parked item; requesting, from a remote storage platform, metadata associated with the content item; storing the metadata associated with the content item in the data container; causing a display of a representation of the parked item; receiving, via interactions with the parked item representation, additional data for the parked item; and storing the additional data in the data container for the parked item at the content-deferral platform. In an example, the application is one of a messaging application, a communication application, a video application, or a productivity application.
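By way of illustration only, the following Python sketch shows one possible realization of the parked-item operations recited above. The names ParkedItem, ContentDeferralPlatform, and request_metadata are assumptions introduced for this example and do not appear in the disclosure; the sketch is not the patented implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ParkedItem:
    """Data container held by the content-deferral platform.

    Only metadata about the content item is stored, not the file
    itself, which is what yields the bandwidth savings noted above.
    """
    item_id: str
    metadata: dict = field(default_factory=dict)        # title, content type, preview, ...
    additional_data: dict = field(default_factory=dict)  # notes, category, ...

class ContentDeferralPlatform:
    """Hypothetical server-side store for parked items."""
    def __init__(self, storage_platform):
        self._items = {}
        self._storage = storage_platform  # remote storage holding the real files

    def create_parked_item(self, item_id, content_ref):
        # Create the data container, then fill its defined metadata
        # fields with metadata requested from the remote storage
        # platform (request_metadata is an assumed helper).
        item = ParkedItem(item_id=item_id)
        item.metadata = self._storage.request_metadata(content_ref)
        self._items[item_id] = item
        return item

    def add_additional_data(self, item_id, key, value):
        # For example, a note typed into the note editing interface element.
        self._items[item_id].additional_data[key] = value
```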
In another example, the operations further include causing a display of a park pane within the application, wherein the display of the representation of the parked item is within the park pane of the application. In still another example, the representation of the parked item includes at least three of a content preview, a content type indicator, a title, a recent activity indicator, a category indicator, a note icon, or a launch icon. In yet another example, the representation of the parked item includes a note generation icon, and the operations further include receiving a selection of the note generation icon; based on receiving the selection of the note generation icon, causing a display of a note editing interface element, within the representation of the parked item, for receiving text for a note of the parked item; receiving text for the note in the note editing interface; and storing the note as the additional data for the parked item. In another example, the application is a first application and the operations further include receiving a launch indication for a park pane within a second application; and causing a display of the representation of the parked item associated with the first application in the park pane. In still another example, the data container includes defined metadata fields for metadata supported by the content-deferral platform, and storing the metadata associated with the content item includes filling the defined metadata fields with the received metadata. In a further example, the representation of the parked item includes a category indicator indicating a category for the parked item. In another aspect, the technology relates to a computer-implemented method for accessing deferred content. The method includes receiving an indication to initialize a park pane in a productivity application on a client device; based on receiving the indication to initialize the park pane, retrieving parked items from a content-deferral platform of a remote server, wherein the data container of each parked item includes metadata for a content item and additional data for the parked item; and causing a display of the park pane including the parked item representations for the retrieved parked items, wherein each of the parked item representations includes at least three of a content preview, a content type indicator, a title, a recent activity indicator, a category indicator, a note icon, or a launch icon. In an example, the additional data includes at least one of a note or a category for the parked item. In another example, the content preview, the content type indicator, and the title are based on the metadata stored in the parked item; and the category indicator and the note icon are based on the additional data stored in the parked item. In still another example, the productivity application is one of a word-processing application, a spreadsheet application, or a presentation application. In yet another example, at least one of the retrieved parked items is for a file stored in a remote storage platform, and retrieving the parked items includes querying the remote storage platform for updates to metadata of the file. In still another example, the parked items are retrieved based on an application type of the productivity application.
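Continuing the sketch above, the park-pane retrieval, metadata refresh, and category-filtering behavior recited here and in the following paragraph could be expressed as follows. The helper names query_items and query_metadata_updates and the dictionary field names are assumptions for this illustration only.

```python
def open_park_pane(platform, storage, app_type):
    """Build parked-item representations for a park pane.

    Parked items are retrieved based on the application type, and
    metadata for file-backed items is refreshed from the remote
    storage platform, as described above.
    """
    representations = []
    for item in platform.query_items(app_type=app_type):
        if item.metadata.get("kind") == "file":
            # Query the remote storage platform for updates to the file's metadata.
            item.metadata.update(storage.query_metadata_updates(item.item_id))
        representations.append({
            "preview": item.metadata.get("preview"),
            "type_indicator": item.metadata.get("content_type"),
            "title": item.metadata.get("title"),
            "category": item.additional_data.get("category"),
            "has_note": "note" in item.additional_data,
        })
    return representations

def filter_by_category(representations, selected_category):
    """Filter the displayed representations after the user picks a
    category from the pane's category selection menu."""
    return [r for r in representations if r["category"] == selected_category]
```

Two panes opened in applications of different types would then see different sets, for example open_park_pane(platform, storage, "messaging") versus open_park_pane(platform, storage, "word-processing").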
In still yet another example, the method further includes receiving an interaction with the representation of the parked item; and based on receiving the interaction, adjusting the representation to be an expanded representation of the parked item. In another aspect, the technology relates to a computer-implemented method for accessing deferred content. The method includes receiving an indication to initialize a first park pane within a first application having a first application type; based on receiving the indication to initialize the first park pane, causing a display of first parked item representations, within the first park pane, for a first set of parked items, wherein the first set of parked items are based on the first application type; receiving an indication to initialize a second park pane within a second application having a second application type different than the first application type; and based on receiving the indication to initialize the second park pane, causing a display of second parked item representations, within the second park pane, for a second set of parked items, wherein the second set of parked items are based on the second application type. In an example, the first application is one of a messaging application or a communication application, and the second application is one of a word-processing application, a presentation application, or a spreadsheet application. In yet another example, the method further includes, based on receiving the indication to initialize the first park pane, querying a content-deferral platform for parked items matching the first application type, wherein results returned from the query include the first set of parked items. In still another example, the first application displays an open file having a category, and the first set of parked items are further based on the category. In still yet another example, the first park pane includes a category selection menu, and the method further includes receiving a selection of a category from the category selection menu; and filtering the displayed first parked item representations based on the selected category. Those skilled in the art will recognize that the methods and systems of the present disclosure may be implemented in many manners and as such are not to be limited by the foregoing aspects and examples. In other words, functional elements may be performed by a single component or by multiple components. In this regard, any number of the features of the different aspects described herein may be combined into single or multiple aspects, and alternate aspects having fewer than or more than all of the features herein described are possible. Functionality may also be, in whole or in part, distributed among multiple components, in manners now known or to become known. Further, as used herein and in the claims, the phrase "at least one of element A, element B, or element C" is intended to convey any of: element A, element B, element C, elements A and B, elements A and C, elements B and C, and elements A, B, and C. In addition, one having skill in the art will understand the degree of meaning that terms such as "about" or "substantially" convey in light of the measurement techniques utilized herein. To the extent such terms may not be clearly defined or understood by one having skill in the art, the term "about" shall mean plus or minus ten percent.
Numerous other changes may be made which will readily suggest themselves to those skilled in the art and which are encompassed in the spirit of the disclosure and as defined in the appended claims. While various aspects have been described for purposes of this disclosure, various changes and modifications may be made which are well within the scope of the disclosure.
DESCRIPTION OF THE PREFERRED EMBODIMENTS Hereinafter, embodiments of the present invention will be described with reference to the drawings. Each embodiment is described for convenience of describing the present invention, and the technical scope of the present invention is not limited to the following embodiments. 1. First Embodiment 1.1 Overall Configuration FIG.1is a diagram illustrating an outline of a system1. The system1includes, for example, a control device10, a storage device20, a display device30, and a voice input/output device40. Herein, the devices constituting the system1may be separate independent devices, may be integrated into one device, or may be a combination of a plurality of devices. Further, the voice input/output device40may use an external service. For example, the voice input/output device40recognizes a word from voice uttered by a user, and transmits the word as an input sentence to a conversation service. In the conversation service, a corresponding response sentence (conversation sentence) is transmitted to the voice input/output device40on the basis of the received input sentence. The voice input/output device40outputs voice on the basis of the response sentence. Herein, the conversation service is a service that receives input of a sentence or voice from a user, recognizes a request of the user from the input content, and outputs an execution result of a process for the request, a response sentence including information to be presented to the user, or response voice, as a response to the request. In addition, the conversation service establishes dialogue by continuously repeating such input from the user and response to the input. The conversation service may be realized by the system1without using any external service. For example, by executing a program that realizes a conversation process in the control device10, the conversation service can be provided to the user by the system1alone. 1.2 Functional Configuration FIG.2is a diagram illustrating a configuration in a case where the system1is applied to a processing device50. The processing device50includes a controller500corresponding to the control device10, a storage550corresponding to the storage device20, a display510corresponding to the display device30, and a voice inputter/outputter530corresponding to the voice input/output device40. The processing device50will be described below. The controller500is a functional section for controlling the whole of the processing device50. The controller500realizes various functions by reading and executing various programs stored in the storage550, and is composed of, for example, one or a plurality of arithmetic devices (such as a central processing unit (CPU)). The controller500functions as a voice recognizer502by executing the program. In a case where voice is input via the voice inputter/outputter530, the voice recognizer502recognizes the input voice. The voice recognizer502may temporarily output information indicating the content of the recognized voice (for example, character information) to the storage550. The display510displays the content of a file, various states of the processing device50, or the state of operation input. For example, the display510is composed of a liquid crystal display (LCD), an organic electroluminescent (EL) panel, electronic paper using electrophoresis, or the like. The inputter520receives operation input from the user.
For example, the inputter520is composed of a capacitance type touch panel or a pressure sensitive type touch panel. The inputter520may be a combination of a touch panel and an operation pen, or may be an input device such as a keyboard and a mouse, as long as the user can input information. The voice inputter/outputter530inputs and outputs voice. For example, the voice inputter/outputter530is composed of a voice input device such as a microphone that inputs voice, and a voice output device such as a speaker that outputs voice. The voice inputter/outputter530may be an interface. For example, an external microphone or speaker, or the like, may be connected to the interface. The voice inputter/outputter530may also be a device using short-range wireless communication (for example, Bluetooth (registered trademark)), such as a smart speaker. The storage550is a functional section for storing various programs and various data required for the operation of the processing device50. The storage550is composed of a storage device such as, for example, a solid-state drive (SSD), which is a semiconductor memory, or a hard disk drive (HDD). In addition, the storage550secures a file storage area552for storing files. The communicator560communicates with other devices. For example, the communicator560is connected to a local area network (LAN) to transmit and receive information related to comments with other devices, or to transmit and receive documents. In addition to the LAN, which is a general Ethernet (registered trademark), communication such as LTE/4G/5G may be used as a communication method. 1.3 Flow of Process The flow of a file selection process executed by the processing device50will be described with reference to a flow diagram ofFIG.3. The file selection process is a process of selecting one file from the files stored in the file storage area552on the basis of input voice. The following description assumes that files are stored in the file storage area552in advance. First, the controller500determines whether or not a command for displaying a list is received (Step S102). The command for displaying a list is, for example, voice indicating a request to display the files stored in the file storage area552in a list on the display510. The command is, for example, voice such as "Display file", "Open folder", and "Open document". The controller500determines whether or not the command for displaying a list is received on the basis of the content of the voice input via the voice inputter/outputter530and recognized by the voice recognizer502. Specifically, the storage550stores in advance information (keywords) indicating, in characters, the content to be uttered by the user when the list of the files is to be displayed. Then, the controller500determines that the command for displaying a list is received in a case where the character information indicating the content of the voice recognized by the voice recognizer502matches a keyword. In a case where the command for displaying a list is received, the controller500extracts the files to be displayed on the display510from the file storage area552(Yes in Step S102, Step S104). The controller500may extract all the files stored in the file storage area552, may extract the files stored in a predetermined folder, or may extract files that satisfy a predetermined condition (for example, files whose date and time correspond to today). Subsequently, the controller500assigns an identification code corresponding to each extracted file (Step S106).
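By way of illustration only, the following Python sketch shows one possible realization of Steps S102 through S106. The keyword set, the exact-match rule, and the helper names are assumptions for this example; the voice recognizer is assumed to have already converted the utterance into character information.

```python
# Keywords stored in advance in the storage, as described for Step S102.
LIST_DISPLAY_KEYWORDS = {"display file", "open folder", "open document"}

def is_list_display_command(recognized_text):
    """Step S102: compare the recognized character information against
    the stored keywords (exact matching is an assumption here)."""
    return recognized_text.strip().lower() in LIST_DISPLAY_KEYWORDS

def assign_identification_codes(extracted_files):
    """Steps S104-S106: assign serial numbers starting at 1 to the
    extracted files; letters would serve equally well as codes."""
    return {code: f for code, f in enumerate(extracted_files, start=1)}
```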
The identification code is a number that can uniquely specify the file, for example, a serial number. The identification code may instead be a letter of the alphabet, for example, as long as the file can be uniquely specified. Subsequently, the controller500displays a list of each file and the identification code on the display510(Step S108). Specifically, the controller500displays, on the display510, first identification display for specifying the file and second identification display for indicating the identification code assigned to the file side by side for each file. The controller500displays each first identification display side by side on the display510, and then displays the second identification display corresponding to each first identification display around the first identification display. Thus, the first identification display and the second identification display are displayed on the basis of a predetermined method, so that the user can grasp the correspondence between each file and the identification code assigned to the file by looking at the display510. The controller500displays, for example, a file name and the attributes of the file (for example, a type of the file, a creator of the file, a creation date of the file, and the like) as the first identification display. The controller500may display the attributes of the file using an icon, a picture, a symbol, or the like. In addition, the controller500displays, for example, a rectangle including an identification code as the second identification display in the vicinity of the corresponding file. Subsequently, the controller500determines whether or not a command including an identification code is received (Step S110). The command including an identification code is at least voice including content indicating an identification code, for example, voice such as "five" and "Number five". The command including an identification code may also include content indicating a process for the file. For example, the command including an identification code may be voice such as "Open number five" and "Open fifth document" including the content of the process of "open" as the process for the file. Specifically, in a case where the content of the voice recognized by the voice recognizer502includes an utterance of a number indicating any of the identification codes displayed as the second identification display, the controller500determines that the command including the identification code is received. In a case where the command including an identification code is received, the controller500selects the file corresponding to the identification code included in the received command (Yes in Step S110, Step S112). In a case where the command including an identification code is not received and the user designates execution of another process, the controller500executes the designated process (No in Step S110; Yes in Step S114). In a case where the command including an identification code is not received and no process is designated by the user, the controller500again transitions the process to Step S110(No in Step S114, Step S110). As described above, according to this embodiment, the processing device assigns the identification code to the file and displays the assigned identification code together with the information for specifying the file. Therefore, the user can easily select the file simply by uttering voice including the identification code assigned to the file.
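A minimal sketch of the selection step (Steps S110 and S112) follows, continuing the example above. Conversion of spoken number words such as "five" into digits is assumed to be handled by the voice recognizer; the regular expression is an illustrative shortcut, not the patented matching rule.

```python
import re

def select_by_code_command(recognized_text, code_table):
    """Steps S110-S112: if the utterance contains a displayed
    identification code (e.g., "5" from "open number five"), return
    the corresponding file from the code table built at Step S106."""
    match = re.search(r"\d+", recognized_text)
    if match is None:
        return None                              # No in Step S110
    return code_table.get(int(match.group()))    # Step S112
```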
The file can be selected simply by uttering the voice including the identification code, and therefore the user can properly select the file by the identification code without considering how to utter a file name or a filename extension included in the file name. 2. Second Embodiment A second embodiment will be described. The second embodiment is an embodiment in which a system1is applied as a conference system.FIG.4is a diagram illustrating a display device60capable of providing a conference system. The display device60includes a controller600corresponding to the control device10, a storage650corresponding to the storage device20, a display610corresponding to the display device30, and a voice inputter/outputter630corresponding to the voice input/output device40. The display device60is, for example, a display device such as an interactive whiteboard (IWB) installed in a conference room. The display device60may be a terminal device used by a user. The controller600is a functional section for controlling the whole of the display device60. The controller600realizes various functions by reading and executing various programs stored in the storage650, and is composed of, for example, one or a plurality of arithmetic devices (such as a CPU). The controller600functions as a voice recognizer602and a conference processor604by executing a program. In a case where voice is input via the voice inputter/outputter630, the voice recognizer602recognizes the input voice. The voice recognizer602may temporarily output information (for example, character information) indicating the content of the recognized voice to the storage650. The conference processor604executes a process (conference process) related to a conference in which a plurality of users can participate, in order to support the progress of the conference. The conference processor604executes, for example, a process of providing a chat function performed by a plurality of users who participate in a conference, as a conference process. In addition, the conference processor604performs a process of transmitting and receiving files between devices (for example, respective terminal devices used by a plurality of users who participate in a conference) connected to the display device60, as the conference process. The display610displays the content of a file, various states of the display device60, the state of operation input, and the like. The display610is composed of, for example, a liquid crystal display, an organic electroluminescent (EL) panel, electronic paper using electrophoresis, or the like. The inputter620receives operation input from the user. For example, the inputter620is composed of a capacitance type touch panel or a pressure sensitive type touch panel. The inputter620may be a combination of a touch panel and an operation pen, or may be an input device such as a keyboard and a mouse, as long as the user can input information. The voice inputter/outputter630inputs and outputs voice. For example, the voice inputter/outputter630is composed of a voice input device such as a microphone that inputs voice, and a voice output device such as a speaker that outputs voice. The voice inputter/outputter630may be an interface. For example, an external microphone or speaker, or the like, may be connected to the interface. The voice inputter/outputter630may also be a device using short-range wireless communication (for example, Bluetooth), such as a smart speaker.
The storage650is a functional section for storing various programs and various data required for the operation of the display device60. The storage650is composed of, for example, an SSD, which is a semiconductor memory, an HDD, or the like. In addition, the storage650secures a file storage area652for storing files. The communicator660communicates with other devices. For example, the communicator660is connected to a LAN to transmit and receive information related to a conference with another device, or to transmit and receive files. In addition to the LAN, which is a general Ethernet, communication such as LTE/4G/5G may be used as a communication method. Next, a process executed by the display device60will be described with reference toFIG.5. First, the controller600displays a menu screen on the display610as an initial state (Step S202). The menu screen displays, for example, information on one or a plurality of conferences and files related to the conference. For example, the controller600acquires conference information (such as a conference name, names of users who participate in the conference, a start time, an end time, and a file name of a file related to the conference) from a device that manages the conference information via the communicator660, and displays the acquired conference information on the display610. The file related to the conference is, for example, a file designated when the conference information is registered by a user, or a file attached to a conference invitation email transmitted in advance to the user who is a participant of the conference. When the menu screen is displayed, the conference is not yet started and the conference process is not yet executed. Therefore, the controller600prevents the user from selecting the file related to the conference displayed on the menu screen. Specifically, the controller600does not assign an identification code to the file, and does not display the second identification display. In addition, even when voice (a command) including an identification code is input from the user, the controller600does not receive the input of the voice. Subsequently, the controller600(conference processor604) starts the conference by starting the conference process on the basis of the operation of starting the conference by the user (Step S204). At this time, for example, the controller600acquires the file related to the started conference and stores the acquired file in the file storage area652. Further, the controller600(conference processor604) may receive a file from a terminal device used by a user who participates in the conference, and store the received file in the file storage area652, in the conference process. Subsequently, the controller600executes a file selection process (Step S206). The file selection process is the same process as the file selection process described in the first embodiment. For example, the controller600recognizes the voice input via the voice inputter/outputter630by the voice recognizer602, and determines whether or not a command for displaying a list is received. In a case where the command for displaying a list is received, the controller600assigns an identification code to each file stored in the file storage area652, and performs the first identification display and the second identification display. Thus, the controller600displays a list of the first identification display and the second identification display on the display610when the conference process is executed.
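The gating behavior of Steps S202 through S206, in which identification codes are shown and code commands accepted only while the conference process runs, might be sketched as follows. The class and method names are illustrative assumptions, not terms from the disclosure.

```python
class ConferenceFilePane:
    """Sketch of the menu/conference behavior described above: the
    second identification display (codes) appears, and voice commands
    containing codes are accepted, only during the conference process."""
    def __init__(self):
        self.conference_running = False

    def start_conference(self):
        self.conference_running = True  # Step S204

    def list_displays(self, file_names):
        if not self.conference_running:
            # Menu screen (Step S202): first identification display only.
            return [(name, None) for name in file_names]
        # Conference started: pair each file with its identification code.
        return [(name, code) for code, name in enumerate(file_names, start=1)]

    def select_by_code(self, code, file_names):
        if not self.conference_running:
            return None  # code commands are not received before the conference
        table = {c: n for c, n in enumerate(file_names, start=1)}
        return table.get(code)
```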
Further, when voice including an identification code is input from the user who is a participant of the conference via the voice inputter/outputter630, the controller600selects the file corresponding to the input identification code. Thus, the controller600displays the first identification display and the second identification display on the display610only during the execution of the conference process, in which the user can select the file. In addition, the controller600selects the file corresponding to the identification code when the voice (command) including the identification code is input from the user only during the execution of the conference process, in which the user can select the file. Subsequently, the controller600displays the file selected by the file selection process on the display610(Step S208). For example, the controller600activates an application capable of displaying the selected file, and displays the file selected by the user by performing a display process via the application. Next, an operation example of this embodiment will be described with reference toFIGS.6and7.FIG.6is an example of a display screen W200illustrating the menu screen. The display screen W200is displayed on the display610or displayed on the terminal device used by the user. The display screen W200includes an area E200for displaying files related to the conference. The area E200includes, for example, display M200and display M202as the first identification display. As the display M200, a file name for specifying one file (for example, "ConferenceUI_v1.3.pptx") and an icon indicating a file type that is an attribute of the one file are displayed. Similarly, as the display M202, a file name for specifying one file (for example, "check_mobileUI_v6.pptx") and an icon indicating a file type that is an attribute of the one file are displayed. At this point, the conference is not yet started, so no file can be selected on the basis of an operation by the user. In addition, the second identification display is not displayed in the area E200. Therefore, the display device60does not receive the input of voice (a command) including an identification code. FIG.7Ais an example of a display screen W210displayed on the display610and the terminal device used by the user after the conference process is started. The display screen W210is a screen on which a file can be selected on the basis of the input by the user. The display screen W210includes an area E210for displaying files related to the conference. In addition, the area E210includes an area for displaying the first identification display and an area for displaying the second identification display for each file, and the first identification display and the second identification display are displayed vertically in a row. Therefore, the user can select a file from the first identification display displayed in the list. In addition, the display device60receives input of voice (a command) including an identification code. In the display device60, the file selection based on the operation by the user may be performed, for example, after the start of a process related to a function for recognizing the voice of the user and controlling the display device60(voice recognition control), in addition to the conference process. For example, a voice switch icon B200illustrated inFIG.6is a button that enables voice recognition control by being selected by the user.
The display device60may start a function related to voice recognition control on the basis of the selection of the voice switch icon B200inFIG.6, and add and display the second identification display as illustrated in the display screen W210ofFIG.7A. As illustrated inFIG.7A, for example, the area E210includes display M210, which is first identification display, and display M212, which is second identification display, and the second identification display M212is displayed to the left of the first identification display M210. The display M210is the same as the display M200inFIG.6. In addition, a rectangle including "1", which is an identification code, is displayed as the display M212. Such display indicates that the file with the file name "ConferenceUI_v1.3.pptx" corresponds to the identification code "1". Similarly, the area E210includes display M214, which is first identification display, and display M216, which is second identification display, and the second identification display M216is displayed to the left of the first identification display M214. The display M214is the same as the display M202inFIG.6. In addition, a rectangle including "2", which is an identification code, is displayed as the display M216. Such display indicates that the file with the file name "check_mobileUI_v6.pptx" corresponds to the identification code "2". The user can select the file corresponding to an identification code by inputting voice including "1" or "2", which are the identification codes displayed on the display screen W210. The screen on which a file can be selected on the basis of the input by the user may be a screen other than the display screen W210illustrated inFIG.7A, for example, a display screen W220illustrated inFIG.7B. The display screen W220includes an area E220in which the first identification display and the second identification display are arranged and displayed in a plurality of rows, unlike the display screen W210, on which the first identification display and the second identification display are vertically arranged and displayed in one row. In the area E220, as illustrated inFIG.7B, the first identification display and the second identification display are displayed for each file. For example, the area E220includes display M220, which is first identification display, and display M222, which is second identification display. As the display M220, a file name for specifying one file (for example, "New notebook.one") and an icon indicating a file type that is an attribute of the one file are displayed. In addition, a rectangle including "1", which is an identification code, is displayed as the display M222. In this case, the file with the file name "New notebook.one" corresponds to the identification code "1". In addition, as illustrated inFIG.7B, the display screen W220displays "1" to "12" as identification codes. Therefore, the user can select the file corresponding to an identification code by inputting voice including any number from "1" to "12", which are the identification codes. In a case where the number of files displayed on the display610and the terminal devices used by the users exceeds a predetermined value (for example, 12 in the case ofFIG.7B), the area including the first identification display and the second identification display may be scrollable.
When the scroll operation by the user is performed, the display device60reassigns the identification codes, starting from 1, to the files to be displayed in the area including the first identification display and the second identification display, and then updates the first identification display and the second identification display. Consequently, even in a case where the number of files is large, a number equal to or less than the predetermined value is displayed as the second identification display. Therefore, even in a case where the number of files stored in the file storage area652is large, the user can select a file by uttering any number up to the predetermined value. For example, in the example illustrated inFIG.7B, the user only needs to utter any number from 1 to 12. In addition, the display device60may group files and assign one identification code to a plurality of files included in the same group. When the files are grouped, the display device60uses attributes such as a date (for example, a creation date or an update date), a creator, a file format, and a frequency of use. Consequently, the user can input a command including one identification code to display a plurality of files corresponding to the input identification code on the display device60. Thus, in the display device60, it is possible to improve the convenience of the user by displaying a plurality of files by one command. According to this embodiment, the user can select and display a file to be displayed on the display by a simple operation of inputting the command including an identification code by voice. 3. Third Embodiment A third embodiment is an embodiment in which command input is received from a device other than a voice device, in addition to the processes described in the first embodiment or the second embodiment. First, a case where this embodiment is applied to the conference system described in the second embodiment will be described. In this case, the controller600receives either a command by voice or a command input by operation of the inputter620as the command for displaying a list, in Step S102of the file selection process. The command by operation of the inputter620is received, for example, in a case where the controller600displays a button for displaying the list on the display610and the user selects that button. In a case where a voice command is input in Step S102, the controller600determines whether or not the command including an identification code is input via the voice inputter/outputter630, in Step S110. In this case, even when a command including an identification code is input via the inputter620, such as an operation of touching the second identification display or an operation of selecting a button having a number corresponding to the second identification display, the controller600ignores the command input via the inputter620. On the other hand, in a case where the command by operation of the inputter620is input in Step S102, the controller600determines whether or not the command including an identification code is input via the inputter620. In this case, even when the command including an identification code is input via the voice inputter/outputter630, the controller600ignores the command input via the voice inputter/outputter630. By such a process, the controller600receives the input of the command including an identification code via the same functional section through which the command for displaying a list was input.
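The input-source locking just described can be summarized in a short sketch. The source labels "voice" and "touch" and the function names are assumptions introduced for illustration.

```python
def lock_input_source(list_command_source):
    """Third embodiment sketch: remember whether the command for
    displaying a list arrived by "voice" or via the inputter ("touch")."""
    return {"locked": list_command_source}

def receive_code_command(state, source, code, code_table):
    """Accept a command including an identification code only from the
    same source that issued the list-display command; commands arriving
    from the other source are ignored."""
    if source != state["locked"]:
        return None  # ignored, preventing selection by unintended operation
    return code_table.get(code)
```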
In a case where this embodiment is applied to the system1of the first embodiment, an input device that receives input of operation from a user by a method other than voice is connected to the control device10in the system1. When the control device10receives the command for displaying a list from the input device, the control device10receives the command including an identification code from the input device and does not receive the command including an identification code from the voice input/output device40. On the other hand, when the control device10receives the command for displaying a list from the voice input/output device40, the control device10receives the command including an identification code from the voice input/output device40and does not receive the command including an identification code from the input device. As described above, according to this embodiment, the user can unify the operation of inputting the command for displaying a list and the operation of inputting the command including an identification code into only voice operation or only input operation on the input device, and can perform the operations continuously. Moreover, it is possible to prevent a file from being selected by an unintended operation. 4. Fourth Embodiment A fourth embodiment will be described. The fourth embodiment is an embodiment in which the system1is applied as a print system. 4.1 Overall Configuration FIG.8is a diagram illustrating an outline of a print system2to which the system1is applied. The print system2includes, for example, an image forming apparatus70, a voice processing device80, a dialogue device85, and a voice input/output device90. In addition, the image forming apparatus70and the dialogue device85are connected to each other, the voice processing device80and the dialogue device85are connected to each other, and the voice processing device80and the voice input/output device90are connected to each other. The image forming apparatus70and the voice input/output device90may be installed at a place where a user is located, and the voice processing device80and the dialogue device85may be installed on the Internet (on the cloud). The devices constituting the print system2may be separate independent devices, may be integrated into one device, or may be a combination of a plurality of devices. In the print system2, the control device10, the storage device20, and the display device30of the system1are constituted by the image forming apparatus70. The voice input/output device40of the system1is constituted by the voice processing device80, the dialogue device85, and the voice input/output device90. Specifically, in the print system2illustrated inFIG.8, the voice input/output device90inputs voice uttered by a user, and transmits a voice stream to the voice processing device80. The voice processing device80recognizes the input voice stream and transmits the recognized content as an input sentence to the dialogue device85. The dialogue device85is a device that provides conversation service and generates a response sentence to the input sentence. The dialogue device85transmits/receives information to/from the image forming apparatus70, acquires a state of the image forming apparatus70, generates, for example, a response sentence indicating the state of the image forming apparatus70in response to the input sentence, and transmits the generated response sentence to the voice processing device80.
The voice processing device80that receives the response sentence generates a voice stream for outputting the response sentence as voice, and transmits the generated voice stream to the voice input/output device90. The voice input/output device90that receives the voice stream outputs voice on the basis of the received voice stream. 4.2 Functional Configuration In the following, a case where the print system2illustrated inFIG.8is configured by the image forming apparatus70will be described. In this case, the image forming apparatus70includes functional sections corresponding to the voice processing device80, the dialogue device85, and the voice input/output device90, which are illustrated inFIG.8. FIG.9is a diagram illustrating the image forming apparatus70capable of providing the print system2. The image forming apparatus70includes a controller700corresponding to the control device10, a storage770corresponding to the storage device20, a display750corresponding to the display device30, and a voice inputter/outputter760corresponding to the voice input/output device40. The controller700is a functional section for controlling the whole of the image forming apparatus70. The controller700realizes various functions by reading and executing various programs stored in the storage770, and is composed of one or a plurality of arithmetic devices (for example, a CPU). The controller700functions as a voice recognizer702and a dialogue processor704by executing a program. In a case where voice is input via the voice inputter/outputter760, the voice recognizer702recognizes the input voice. The voice recognizer702may temporarily output information indicating the content of the recognized voice (for example, character information) to the storage770. The dialogue processor704realizes conversation service. The dialogue processor704outputs a response sentence corresponding to the input voice via the voice inputter/outputter760. An image inputter710is a functional section for acquiring image data to be input to the image forming apparatus70. Also, the image inputter710may acquire the image data from a storage medium such as a universal serial bus (USB) memory or an SD card. Moreover, the image inputter710may acquire the image data from another terminal device via a communicator790which connects the image inputter710to the other terminal device. In addition, the image inputter710stores the acquired image data as a file in a file storage area772. A document reader715is a functional section that reads an image and generates image data. For example, the document reader715is composed of a scanner device that generates digital data by converting an image into an electric signal by an image sensor such as a CCD (Charge Coupled Device) or a CIS (Contact Image Sensor), and quantizing and encoding the electric signal. Further, the document reader715stores the generated image data as a file in the file storage area772. The image processor720is a functional section which performs various image processes on the image data. For example, the image processor720performs a sharpening process on image data or performs a color conversion process. The image former730is a functional section that forms an image based on a file on a recording medium (for example, recording paper). The image former730includes, for example, an electrophotographic laser printer. The inputter740is a functional section for receiving an operation instruction by a user, and is composed of a hardware key (for example, a numeric keypad), a button, and the like.
The display750is a functional section for displaying various information to a user, and is composed of, for example, a display such as an LCD or an organic EL display. The image forming apparatus70may include a touch panel in which the inputter740and the display750are integrally formed. A method of detecting input may be a common detection method such as a resistive film type, an infrared type, an electromagnetic induction type, or a capacitive type. A user authenticator755performs user authentication. For example, authentication is performed on the basis of whether or not a user name and a password input from the inputter740match a user name and a password stored in user information776. In addition, the user authenticator755may acquire bio-information and an image of a user, and perform biometric authentication (for example, fingerprint authentication, palm print authentication, face authentication, voice authentication, iris authentication, or the like). The voice inputter/outputter760performs voice input and voice output. For example, the voice inputter/outputter760is composed of a voice input device such as a microphone that inputs voice, and a voice output device such as a speaker that outputs voice. The storage770is a functional section that stores various programs and various data necessary for the operation of the image forming apparatus70. The storage770is composed of a storage device such as an SSD, which is a semiconductor memory, or an HDD. In addition, the storage770secures the file storage area772for storing files, and stores a print file list774, the user information776, standby screen information778, and job execution screen information780. The print file list774is a list that stores information (for example, a file name) that specifies a file whose image is to be formed by the image former730, among the files stored in the file storage area772. The print file list774may store a print order, priority, print settings, a name of a user who performs print operation, and the like, in addition to the information for specifying a file. The user information776stores information about a user. For example, the user information776stores information about user authentication (for example, a user name, a password, and bio-information about a user). The standby screen information778stores information necessary to display a standby screen waiting for command input on the display750(for example, a character string and an icon to be displayed on the display750, information on positions where the character string and the icon are disposed, and the like). In addition, the job execution screen information780stores information necessary to display a job execution screen to be displayed on the display750when a job executed by the image forming apparatus70is executed. The communicator790communicates with other devices. For example, the communicator790is connected to a LAN to transmit and receive a file. In addition to the LAN, which is a general Ethernet, communication such as LTE/4G/5G may be used as a communication method. 4.3 Flow of Process Next, a process in which the image forming apparatus70reads and executes a program stored in the storage770will be described with reference toFIG.10. A process illustrated inFIG.10is executed by the controller700after a user is authenticated by the user authenticator755. First, the controller700reads the standby screen information778as an initial state and displays the standby screen on the display750(Step S502).
Subsequently, the controller700determines whether or not a screen switching command is received, on the basis of the content of the voice input via the voice inputter/outputter760and recognized by the voice recognizer702(Step S504). The screen switching command is a command indicating that the screen is to be switched to one on which operation input to the image forming apparatus70is performed by voice. The screen switching command is, for example, a command by voice including a specific word (wake word) such as "Operate by voice". In a case where the screen switching command is received, the controller700switches the standby screen displayed on the display750to a voice operation screen and displays the voice operation screen (Yes in Step S504, Step S506). The voice operation screen is a screen that includes the content of commands capable of being input by voice. The controller700determines whether or not a file acquisition command is received, on the basis of the content of the voice input via the voice inputter/outputter760and recognized by the voice recognizer702(Step S508). The file acquisition command is a voice command instructing acquisition of the files stored in the file storage area772, and is, for example, a voice command such as "Display jobs" or "Release my jobs". In a case where the file acquisition command is received, the controller700acquires the files stored in the file storage area772(Yes in Step S508, Step S510). At this time, the controller700may acquire only the files which the authenticated user can print. Subsequently, the controller700displays a list of thumbnail images of the acquired files on the display750(Step S512). Further, the controller700assigns an identification code to each acquired file, superimposes an image including the identification code on the thumbnail image of the file corresponding to the identification code, and displays the superimposed images (Step S514). Thus, the controller700displays the thumbnail image of the file and the image including the identification code when the file acquisition command is received. Therefore, in this embodiment, the file acquisition command corresponds to the command for displaying a list. The image including the identification code is, for example, a rectangular image in which a predetermined color is used as a background color and the identification code is superimposed on the background. The size of the image including an identification code is at least one-third of a short side of the thumbnail image. In addition, the image including the identification code may be a non-transparent image, or may be a semi-transparent image that is transparent to such an extent that the identification code can still be identified by the user. The controller700superimposes the image including the identification code on any corner of the thumbnail image and displays the superimposed images. Subsequently, the controller700determines whether or not a print command is received (Step S516). The print command is a command indicating that printing is performed by forming an image based on a file by the image former730. The print command is, for example, a command by voice including at least an identification code, such as "Five" or "Print number five". The controller700determines whether or not a print command is received, on the basis of the content of the voice input via the voice inputter/outputter760and recognized by the voice recognizer702.
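One way Steps S508 through S514 might be realized is sketched below. The helpers can_print and thumbnail are assumptions standing in for the stored file attributes and do not appear in the disclosure.

```python
def release_my_jobs(stored_files, user):
    """Steps S508-S514 sketch: keep only the files the authenticated
    user can print, assign identification codes starting at 1, and
    pair each thumbnail with the code to be superimposed on it."""
    printable = [f for f in stored_files if f.can_print(user)]
    return [(code, f.thumbnail()) for code, f in enumerate(printable, start=1)]
```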
In a case where the print command is received, the controller700executes a print process by causing the image former730to form an image based on the file corresponding to the identification code included in the print command (Yes in Step S516, Step S518). In a case where printing cannot be performed immediately after the print command is received, the controller700may store information for specifying the file corresponding to the identification code in the print file list774. Further, when the print process is executed, the controller700may read the job execution screen information780and display a screen related to the print job to be executed on the display750. In a case where a command other than the print command is received in Step S516, the controller700determines whether or not the received command is a command indicating that a process other than printing is to be performed (No in Step S516, Step S520). In a case where the command indicating that another process is to be performed is received, the controller700executes the other process on the basis of the received command (Yes in Step S520). In a case where the command indicating that another process is to be performed is not received, the controller700transitions the process to Step S516again (No in Step S520, Step S516). 4.4 Operation Example Subsequently, an operation example of this embodiment will be described with reference toFIGS.11A,11B, andFIGS.12A to12F.FIG.11Ais an example of a display screen W500of the voice operation screen. The display screen W500includes an area E500where the content of voice uttered by a user (for example, "Release my job") is displayed in order to input the file acquisition command. The user confirms the content displayed in the area E500and the like, and inputs a command by voice. The display screen W500may include an area E502where the number of files that can be printed by the authenticated user is displayed. FIG.11Bis an example of a display screen W510, which is a screen in which a thumbnail image is displayed, and in which an image including an identification code is displayed so as to be superimposed on the thumbnail image. For example, as illustrated inFIG.11B, on the display screen W510, an image M512including an identification code is displayed in the left corner of a thumbnail image M510. In addition to the thumbnail image and the identification code, a file name M514may be displayed in the vicinity of the thumbnail image. FIGS.12A to12Fare diagrams illustrating display examples of the image including the identification code. For example, as illustrated inFIG.12A, in a case where the thumbnail image is vertically long, the image including the identification code is displayed such that the length of the long side of the image including the identification code is one-third or more of the length of the horizontal side, which is the short side of the thumbnail image. Further, as illustrated inFIG.12B, in a case where the thumbnail image is horizontally long, the image including the identification code is displayed such that the length of the short side of the image including the identification code is one-third or more of the length of the vertical side, which is the short side of the thumbnail image. Thus, the image including the identification code has a size of one-third or more of at least the short side of the thumbnail image, so the identification code can be easily confirmed by the user.
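The one-third sizing rule just described reduces to a small computation, sketched here under the assumption (for illustration only) that the code image is square; the disclosure permits other aspect ratios and placements.

```python
def badge_size(thumb_w, thumb_h, ratio=1/3):
    """Size the image containing the identification code so that its
    longer dimension is at least one-third of the thumbnail's short
    side, per the rule above (cf. FIGS. 12A and 12B)."""
    short_side = min(thumb_w, thumb_h)
    side = max(1, round(short_side * ratio))
    return side, side

# For example, for a 120 x 180 vertically long thumbnail:
# badge_size(120, 180) returns (40, 40), i.e., one-third of the 120 short side.
```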
In a case where the thumbnail image is vertically long, the length of the long side of the image including the identification code may preferably be at least half of the short side of the thumbnail image, as illustrated in FIG. 12C. The image including the identification code may be displayed in any corner of the thumbnail image. For example, it may be displayed in the lower left as illustrated in FIG. 12D, or may be displayed in the upper right or lower right. Furthermore, the image including the identification code may be displayed in the center of the thumbnail, as illustrated in FIG. 12E. Even in this case, the image including the identification code is an image that is one-third or more of the short side of the thumbnail image. In addition, the image including the identification code may be sized on the basis of the long side even in a case where the thumbnail image is vertically long. For example, as illustrated in FIG. 12F, the image including the identification code may be displayed such that the length of its short side is one-fourth or more of the length of the long side of the thumbnail image. Thus, various display methods can be considered for the image including the identification code; the display method may be set in advance or may be set by a user. As described above, according to this embodiment, the image forming apparatus can display the list of the thumbnail images of the stored files together with the identification codes to be uttered by the user when the file acquisition command is received from the user. In addition, the image forming apparatus can execute printing on the basis of the print command in a case where voice including an identification code is uttered as a print command by the user. Consequently, the user can make the image forming apparatus print a desired file simply by uttering voice including an identification code.

5. Fifth Embodiment

Next, a fifth embodiment will be described. In the fifth embodiment, in addition to the process described in the fourth embodiment, the image forming apparatus executes a process in which one thumbnail image is sequentially selected from the thumbnail images displayed in a list, and the identification code corresponding to the selected thumbnail image is output by voice from the voice inputter/outputter. In this embodiment, FIG. 10 of the fourth embodiment is replaced with FIG. 13. The same functional sections and processes are given the same reference numerals, and descriptions thereof are omitted. A process in which an image forming apparatus 70 reads out and executes a program stored in a storage 770 will be described with reference to FIG. 13. In this embodiment, a controller 700 executes the processes of Step S512 and Step S514 to display the thumbnail images and identification codes on a display 750, and thereafter uses a variable n with 1 as an initial value to sequentially select thumbnail images, starting from the n-th thumbnail image. Then, the controller 700 outputs voice including the identification code assigned to the file corresponding to the selected n-th thumbnail image via the voice inputter/outputter 760 (Step S602). In addition to the identification code, the voice including the identification code may include information such as a file name, the type of the file, and the creation date and time.
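Composing the utterance for Step S602 is essentially string assembly. A small sketch, assuming the optional fields named above (file name, file type, creation date and time) are passed in as plain strings when available; the function name is hypothetical.

```python
def announcement(code: str, file_name: str, file_type: str = "",
                 created: str = "") -> str:
    """Text handed to a text-to-speech engine for the selected thumbnail,
    e.g. "Number one, Ocean.jpg"."""
    parts = [f"Number {code}", file_name]
    if file_type:
        parts.append(file_type)
    if created:
        parts.append(f"created {created}")
    return ", ".join(parts)

print(announcement("one", "Ocean.jpg"))                # Number one, Ocean.jpg
print(announcement("two", "Flower.png", "PNG image"))  # Number two, Flower.png, PNG image
```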
When one thumbnail image is selected, the controller 700 may make the display method for the selected thumbnail image and the image including the identification code superimposed and displayed on it different from that for the other thumbnail images and the images including identification codes superimposed and displayed on them. Thus, the controller 700 links the identification code output by voice with the thumbnail image corresponding to that identification code and the image including the identification code. For example, when one thumbnail image is selected, the controller 700 displays the thumbnail image as follows, in order to display (highlight) the selected thumbnail image by a method different from the display method for the other thumbnail images:

(1) Enlarge the selected thumbnail image
(2) Display a frame around the selected thumbnail image
(3) Blink the selected thumbnail image
(4) Invert the colors of the selected thumbnail image
(5) Superimpose a predetermined color (for example, red) on the selected thumbnail image in a semi-transparent state
(6) Increase the transmittance of the thumbnail images other than the selected thumbnail image to make them less noticeable than the selected thumbnail image

The controller 700 may instead highlight the image including the identification code corresponding to the selected thumbnail image, rather than the selected thumbnail image itself, as follows:

(1) Enlarge the image including the identification code
(2) Display a frame around the image including the identification code
(3) Change the background color of the image including the identification code
(4) Blink the image including the identification code

Thus, when the controller 700 highlights only the image including the identification code, the thumbnail image is not enlarged and the other thumbnail images are not hidden, so visibility can be ensured. The controller 700 may combine two or more of the aforementioned display methods. For example, the selected thumbnail image and the identification code corresponding to it may both be enlarged and displayed. In addition, the controller 700 may highlight a thumbnail image or an image including an identification code by a display method other than those described above. Further, when the n-th voice output in Step S602 is completed, the controller 700 returns the display of the highlighted n-th thumbnail image and the highlighted image including the identification code to the original (pre-highlight) display mode. Subsequently, the controller 700 determines whether or not the voice output of the identification codes for all the files is completed (Step S604). In a case where the voice output of the identification codes for all files is not completed, 1 is added to n and the process returns to Step S602 (No in Step S604, Step S606, Step S602). Thus, the controller 700 can output the identification codes by voice for all the files. The controller 700 need not wait until the voice output of the identification codes for all the files is completed before determining whether or not a print command is received; it may determine whether or not the print command is received during output of the identification codes by voice.
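A control-flow sketch of this interleaving — sequential selection, temporary highlighting, and a check for a print command between announcements — follows. All five callbacks are hypothetical stubs standing in for the display 750 and voice inputter/outputter 760; this illustrates the loop structure, not the apparatus's actual interfaces.

```python
def announce_codes(files, speak, highlight, unhighlight, poll_print_command):
    """Select the n-th thumbnail (n = 1, 2, ...), highlight it while its
    identification code is spoken, restore the display, and stop early if
    a print command arrives during the read-out (Steps S602 to S606)."""
    for n, name in enumerate(files, start=1):
        highlight(n)
        speak(f"Number {n}, {name}")
        unhighlight(n)               # back to the pre-highlight display mode
        code = poll_print_command()  # e.g. 5 after "Print number five"
        if code is not None:
            return code              # caller runs the print process for it
    return None                      # all codes announced, keep listening

files = ["Ocean.jpg", "Flower.png"]
print(announce_codes(files, print, lambda n: None, lambda n: None, lambda: None))
```

The early return corresponds to a print command arriving while the codes are still being read out, which is the situation handled next.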
In this case, when the controller 700 receives the print command, it ends the output of the identification codes by voice and executes the print process for the file corresponding to the identification code included in the print command. Thus, the user can input the print command immediately after confirming the identification code corresponding to the file desired to be printed. An operation example of this embodiment will be described with reference to FIGS. 14A and 14B. FIG. 14A is an example of a display screen W600 when a first thumbnail image is selected and enlarged. An area E600 of the display screen W600 includes the enlarged first thumbnail image, an image including an identification code, and a file name. At this time, voice including the identification code (for example, "Number one, Ocean.jpg") is output via the voice inputter/outputter 760. The file name is output by voice according to a predetermined reading. FIG. 14B is an example of a display screen W610 when a second thumbnail image is selected and enlarged. An area E610 of the display screen W610 includes the enlarged second thumbnail image, an image including an identification code, and a file name. At this time, voice including the identification code (for example, "Number two, Flower.png") is output via the voice inputter/outputter 760. In the same manner, third to sixth thumbnail images are sequentially selected, enlarged, and displayed, and voice including each identification code is output via the voice inputter/outputter 760. According to this embodiment, the user can appropriately select a file desired for printing by checking the voice output from the voice inputter/outputter and the screen displayed on the display.

6. Modification

The present invention is not limited to the aforementioned embodiments, and various modifications can be made. That is, the technical scope of the present invention also includes embodiments obtained by combining technical means appropriately modified without departing from the gist of the present invention. In addition, although the aforementioned embodiments are described separately for convenience of explanation, it is needless to say that the embodiments may be combined and executed within the technically possible range. For example, the second embodiment and the fifth embodiment may be combined. In this case, the display device 60 in the second embodiment displays a list of files and identification codes, and then outputs sound including the identification codes via the voice inputter/outputter 530. In addition, a program that operates in each device in the embodiments is a program that controls a CPU and the like (a program that causes a computer to function) so as to realize the functions of the aforementioned embodiments. The information handled by these devices is temporarily stored in a temporary storage device (for example, a RAM) at the time of processing, then stored in various storage devices such as a ROM (Read Only Memory) and an HDD, and read, modified, and written by the CPU as needed.
Herein, the recording medium for storing the program may be any of a semiconductor medium (such as a ROM or a non-volatile memory card), an optical or magneto-optical recording medium (for example, a DVD (Digital Versatile Disc), an MO (Magneto-Optical disc), an MD (MiniDisc), a CD (Compact Disc), a BD (Blu-ray Disc), etc.), a magnetic recording medium (such as a magnetic tape or a flexible disk), and the like. In addition, not only are the functions of the aforementioned embodiments realized by executing the loaded program, but the functions of the present invention may also be realized by processing in collaboration with an operating system or other application programs on the basis of the instructions of the program. In addition, when the program is distributed to the market, it can be stored in a portable recording medium and distributed, or transferred to a server computer connected via a network such as the Internet. In this case, it goes without saying that the storage device of the server computer is also included in the present invention. | 56,164 |
11861141 | DETAILED DESCRIPTION

Techniques for screenshot capture based on content type are described and are implementable to enable a screenshot of content displayed on a device to be captured based on a content type for the content. Generally, the described implementations automatically determine a content type for content displayed on a device and capture a screenshot based on the content type. This provides for greater automation and user convenience than is provided by traditional techniques for capturing screenshots.

Conventional screenshot techniques typically have difficulty capturing screenshots of dynamic content, e.g., content such as animation and video. Due to complications in activating screenshot functionality in conventional techniques, for example, a user attempting to capture a screenshot of dynamic content will often miss portions of the dynamic content that the user wishes to capture. This causes user frustration and may result in a user abandoning an attempt to capture a screenshot of dynamic content. Alternatively, to capture dynamic content a user may record a video of the dynamic content. Capturing video content, however, is typically memory and storage intensive and results in excess usage of system resources.

Accordingly, in the described techniques, the content type for content is automatically determined and utilized to determine how to capture a screenshot of the content. For instance, consider a scenario where a user is viewing content displayed on a device and wishes to capture a screenshot of the content. Accordingly, the user provides input to the device requesting a screenshot capture, such as via touch input (e.g., a specific gesture), pressing a button or set of buttons on the device, voice input, and so forth. In response to the user request for a screenshot, a content type for the content is determined. For instance, it is determined whether the content represents static content or dynamic content. Generally, static content represents content that includes static (e.g., unchanging) visual features, such as a still digital image. Dynamic content represents content with changing visual features, such as an animation and/or a digital video. Generally, the content type for content is determinable in various ways, such as by querying a functionality of the device (e.g., a display manager, an application, an operating system, etc.). Alternatively or additionally, the content type is determinable by inspecting attributes of the content, such as by determining whether the content includes static or dynamic visual features. Accordingly, utilizing the determined content type, a screenshot of the content is captured. For instance, when content is determined to be static content, a screenshot of the content is captured as a static screenshot. In another example, when the content is determined to be dynamic content, a dynamic screenshot of the content is captured. For instance, for dynamic content, multiple candidate screenshots of the content are captured over time, such as to provide different candidate screenshots that depict changing visual attributes of the dynamic content. When the dynamic content represents an animation or a digital video, for example, the candidate screenshots depict changing visual attributes of the animation/digital video over time, such as over n seconds. Thus, the candidate screenshots are aggregatable into a dynamic screenshot, such as a media file that depicts a portion of dynamic content.
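At its core the technique is a two-way dispatch on the detected content type. A minimal sketch, assuming `get_content_type`, `grab_frame`, and `grab_frames` are hypothetical stubs for the device functionality this description leaves abstract:

```python
def capture_screenshot(get_content_type, grab_frame, grab_frames):
    """One still for static content; a timed series of candidate frames,
    to be aggregated later, for dynamic content."""
    if get_content_type() == "static":
        return {"kind": "static", "frames": [grab_frame()]}
    return {"kind": "dynamic", "frames": grab_frames(duration_s=3.0)}
```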
In at least one implementation, the described techniques provide functionality for enabling a user to provide guidance for generating a screenshot. For instance, where multiple candidate screenshots of dynamic content are captured, a guidance query is provided that queries a user for whether to generate a static screenshot and/or a dynamic screenshot using the candidate screenshots. In an example implementation where a user indicates to capture a static screenshot, candidate screenshots are presented to the user and the user is able to select an instance and/or instances of the candidate screenshots for generating a static screenshot and/or set of static screenshots. In an example implementation where a user indicates to capture a dynamic screenshot, candidate screenshots are presented to the user and the user is able to select a subset or all of the candidate screenshots for generating a dynamic screenshot. The selected candidate screenshots, for example, are aggregated (e.g., concatenated) into a media file that is presentable to depict changing visual attributes exhibited in the source dynamic content. Accordingly, the described techniques enable screenshots of content to be captured based on content type without requiring manual interactions to specify content type for content. While features and concepts of screenshot capture based on content type can be implemented in any number of environments and/or configurations, aspects of the described techniques are described in the context of the following example systems, devices, and methods. Further, the systems, devices, and methods described herein are interchangeable in various ways to provide for a wide variety of implementations and operational scenarios.

FIG. 1 illustrates an example environment 100 in which aspects of screenshot capture based on content type can be implemented. The environment 100 includes a client device 102, which can be implemented in various ways and according to various form factors, such as a smartphone, tablet device, a laptop computer, a desktop computer, a wearable computing device, and so forth. The client device 102 includes various functionalities that enable the client device 102 to perform different aspects of screenshot capture based on content type discussed herein, including an operating system 104, a display device 106, a display manager 108, applications 110, and a screenshot module 112. The operating system 104 represents functionality for managing hardware and software resources of the client device 102, such as for invoking and enabling communication between hardware and software resources of the client device 102. The display device 106 represents functionality for outputting visual content via the client device 102, and the display manager 108 represents functionality for controlling the display device 106. The display manager 108, for instance, provides display functionality for different processes of the client device 102. In at least one implementation, the display manager 108 includes a display driver for the display device 106. The applications 110 represent functionality for performing various tasks via the client device 102, such as productivity tasks, gaming, web browsing, social media, and so forth. The screenshot module 112 represents functionality for capturing screenshots 114 of content displayed on the display device 106.
A screenshot 114, for instance, represents a digital content file that captures a representation of visual content displayed on the display device 106 and is stored as part of media files 116 of the client device 102. In this particular example, the screenshots 114 include static screenshots 118 and dynamic screenshots 120. The static screenshots 118 represent still digital images captured of content displayed on the display device 106, and the dynamic screenshots 120 represent digital content, captured from content displayed on the display device 106, that exhibits changing visual features, e.g., motion of visual objects included in the content. The dynamic screenshots 120, for example, capture portions of animation, video content, and/or other digital content that includes changing visual features. Further to the environment 100, a user implements functionality of the screenshot module 112 to capture a screenshot 114a of display content 122 displayed on the display device 106. Generally, the display content 122 represents various types of visual content output by the display device 106, such as static content (e.g., still images), dynamic content (e.g., video content, animations, etc.), and so forth. In at least one implementation, the display content 122 is generated by a functionality of the computing device (e.g., the applications 110) and/or is received from a remote source, such as a network-based content source. For instance, in conjunction with display of the display content 122 on the display device 106, a user provides input to invoke the screenshot module 112 and capture the screenshot 114a of the display content 122. Generally, the screenshot 114a represents an instance of a static screenshot 118 and/or a dynamic screenshot 120. As further detailed below, for example, the screenshot 114a is captured based on a content type for the display content 122, e.g., whether the display content 122 represents static content or dynamic content. Further, the screenshot module 112 implements and exposes a screenshot graphical user interface (GUI) 124 that is utilized to expose and control various functionality of the screenshot module 112. For instance, the screenshot GUI 124 is presented to enable a user to specify types of screenshots 114 to be captured (e.g., static and dynamic screenshots) as well as to select instances of screenshots 114 for storage as part of the media files 116. Having discussed an example environment in which the disclosed techniques can be performed, consider now some example scenarios and implementation details for implementing the disclosed techniques.

FIG. 2 depicts an example system 200 for implementing aspects of screenshot capture based on content type in accordance with one or more implementations. Generally, the system 200 can be implemented in the environment 100 and incorporates attributes of the environment 100 introduced above. In the system 200, the screenshot module 112 receives an indication of a screenshot action 202 to capture a screenshot of display content 122 displayed on the display device 106. A user, for instance, provides input to the client device 102 to request that a screenshot of the display content 122 be captured. Based on the screenshot action 202, the screenshot module 112 communicates a content query 204 that requests a content type for the display content 122, e.g., whether the display content 122 is static content or dynamic content.
The content query 204 can be communicated to a functionality of the client device 102, such as an application 110 (e.g., an application that generates the display content 122), the operating system 104, and/or the display manager 108. Based on the content query 204, the screenshot module 112 receives a content response 206 that identifies a content type for the display content 122. The content response 206, for example, indicates whether the display content 122 is static content or dynamic content. Generally, the content response 206 can be received from a functionality of the client device 102, such as an application 110 (e.g., an application that generates the display content 122), the operating system 104, and/or the display manager 108. Accordingly, based on a content type identified in the content response 206, the screenshot module 112 performs a screenshot capture 208 of the display content 122. The display manager 108, for example, provides the screenshot module 112 with access to the display content 122 for purposes of the screenshot capture 208. Generally, the screenshot capture 208 includes a static screenshot 118 and/or a dynamic screenshot 120. For instance, in an implementation where the content response 206 identifies the display content 122 as static content, the screenshot capture 208 captures the display content 122 as a static screenshot 118. In an implementation where the content response 206 identifies the display content 122 as dynamic content, the screenshot capture 208 can capture the display content 122 as a dynamic screenshot 120. In at least one implementation, in conjunction with the screenshot capture 208, the screenshot module 112 generates and outputs a guidance query 210 that prompts a user to provide guidance for capturing a screenshot. For instance, in an implementation where the display content 122 represents dynamic content, the screenshot capture 208 captures candidate screenshots 212 of the display content 122. The candidate screenshots 212, for instance, represent screenshots captured over a period of time. Accordingly, the guidance query 210 queries a user for guidance in determining how to generate a screenshot based on the candidate screenshots 212. For instance, and as detailed below, a static screenshot 118 and/or a dynamic screenshot 120 can be generated via selection of a candidate screenshot 212 and/or a set of candidate screenshots 212. In an implementation that utilizes the guidance query 210, the screenshot module 112 receives a query response 214 that indicates parameters for capturing a screenshot, such as whether to capture a static screenshot 118 and/or a dynamic screenshot 120. Further, the query response 214 can identify a candidate screenshot 212 and/or set of candidate screenshots 212 to be used to generate a screenshot. Accordingly, based on the screenshot capture 208 (and optionally the query response 214), the screenshot module 112 provides screenshot output 216 that represents an instance of a static screenshot 118 and/or a dynamic screenshot 120.

FIG. 3a depicts a scenario 300a for presenting a guidance query in conjunction with capturing a screenshot in accordance with one or more implementations. In the scenario 300a, the screenshot GUI 124 is output to include an instance of the guidance query 210, such as introduced above with reference to the system 200. The screenshot GUI 124 with the guidance query 210, for example, is output on the display device 106 in conjunction with (e.g., concurrently and/or after) executing the screenshot capture 208.
The guidance query 210 includes a notification that an active screenshot has been captured and queries a user as to whether the user would like to generate a static screenshot (a "still") or a dynamic ("active") screenshot. Further, the guidance query 210 includes a preview 302 of the captured screenshot. Accordingly, to enable a user to specify a type of screenshot to be captured, the guidance query 210 includes a static selectable control 304 and a dynamic selectable control 306. The static selectable control 304 is selectable to enable a static screenshot 118 to be generated from the captured dynamic screenshot, and the dynamic selectable control 306 is selectable to enable a dynamic screenshot 120 to be generated from the captured dynamic screenshot.

FIG. 3b depicts a scenario 300b for generating a static screenshot in accordance with one or more implementations. The scenario 300b, for instance, represents a continuation of the scenario 300a and is implemented in response to selection of the static selectable control 304. In the scenario 300b, multiple candidate screenshots 308 are output, including a candidate screenshot 308a, a candidate screenshot 308b, a candidate screenshot 308c, and a candidate screenshot 308n. The candidate screenshots 308, for example, are output as part of the screenshot GUI 124. Accordingly, to generate a static screenshot 118, a user performs a still selection 310 by selecting an instance of the candidate screenshots 308. Based on the still selection 310, the screenshot module 112 generates a static screenshot 118. The static screenshot 118, for instance, represents a still digital image generated from a selected candidate screenshot 308.

FIG. 3c depicts a scenario 300c for generating a dynamic screenshot in accordance with one or more implementations. The scenario 300c, for instance, represents a continuation of the scenario 300a and is implemented in response to selection of the dynamic selectable control 306. In the scenario 300c, multiple candidate screenshots 312 are output, including a candidate screenshot 312a, a candidate screenshot 312b, a candidate screenshot 312c, and a candidate screenshot 312n. The candidate screenshots 312, for example, are output as part of the screenshot GUI 124. Accordingly, to generate a dynamic screenshot 120, a user performs an active selection 314 by selecting instances of the candidate screenshots 312. Generally, the user can select all of the candidate screenshots 312 or a subset (less than all) of the candidate screenshots 312. Based on the active selection 314, the screenshot module 112 generates a dynamic screenshot 120. The dynamic screenshot 120, for instance, represents multiple of the candidate screenshots 312 selected by a user. In at least one implementation, the screenshot module 112 generates the dynamic screenshot 120 by aggregating the selected candidate screenshots 312 into a media file 116 that can be output to present the selected candidate screenshots 312 with visual motion, such as a Graphics Interchange Format (GIF) file.

FIG. 4 illustrates a flow chart depicting an example method 400 for capturing a screenshot based on content type in accordance with one or more implementations. At 402, an indication is received to capture a screenshot of visual content displayed on a display device. A user, for instance, provides input to the client device 102 requesting that a screenshot be captured of content displayed on the display device 106.
Generally, different types and forms of input can be utilized to request a screenshot, such as touch input to the display device 106 (e.g., a predetermined screenshot gesture), selecting one or more buttons of the client device 102, voice input, and so forth. At 404, it is determined whether the visual content represents static visual content or dynamic visual content. The screenshot module 112, for example, determines whether content being displayed on the display device 106 represents static visual content (e.g., a still image) or dynamic visual content, such as a video, an animation, and/or other visual content that exhibits changing visual attributes over time. Generally, determining whether visual content is static content or dynamic content can be performed in various ways. For instance, the screenshot module 112 can query a functionality of the client device for a content type, such as the display manager 108, the operating system 104, an application 110, and so forth. Accordingly, the queried functionality can respond with a content type for the visual content. Alternatively, a functionality of the client device 102 can proactively notify the screenshot module 112 of a content type, e.g., independent of a query from the screenshot module 112. In an alternative or additional implementation, the screenshot module 112 captures multiple candidate screenshots of visual content and compares the candidate screenshots to determine whether the candidate screenshots include static visual features and thus represent static visual content, or whether the candidate screenshots exhibit changing visual features and thus represent dynamic visual content. In an event that the visual content represents static visual content, at 406 the visual content is captured as a static screenshot. The screenshot module 112, for example, captures a screenshot of the visual content as a still image. In an event that the visual content represents dynamic visual content, at 408 the visual content is captured as a dynamic screenshot. For instance, the screenshot module 112 captures the visual content as visual content that exhibits changing visual features, e.g., an animation file. At 410, editing of the captured screenshot is enabled. For instance, the screenshot module 112 and/or other functionality of the client device 102 presents an editing experience that enables a user to edit the captured screenshot, such as to edit visual attributes of the captured screenshot. In an implementation where a dynamic screenshot is captured, the editing experience can enable a user to edit a playback length of the dynamic screenshot.

FIG. 5 illustrates a flow chart depicting an example method 500 for capturing a screenshot utilizing candidate screenshots in accordance with one or more implementations. The method 500, for instance, is performed in conjunction with the method 400, such as to determine whether to capture a screenshot as a static screenshot or a dynamic screenshot. At 502, candidate screenshots of visual content are captured. For instance, in response to user input requesting that a screenshot be captured, the screenshot module 112 captures multiple candidate screenshots of visual content displayed on the display device 106. In at least one implementation, the multiple candidate screenshots are captured over a specified period of time, e.g., t seconds. The specified period t can be defined by a system setting of the screenshot module 112 and/or can be user customized, such as based on user configuration of settings of the screenshot module 112.
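The timed capture over period t, optionally seeded from a buffer of frames displayed before the request (as the next paragraph explains), might look like the following sketch. The interval and duration defaults are arbitrary assumptions, and `grab_frame` is a hypothetical stub for reading the current display contents.

```python
import time
from collections import deque

def capture_candidates(grab_frame, interval_s: float = 0.25,
                       duration_s: float = 3.0, prebuffer: deque = None):
    """Capture a candidate screenshot every `interval_s` seconds for
    `duration_s` seconds; frames buffered before the request are prepended
    so the captured window can extend back in time."""
    frames = list(prebuffer) if prebuffer else []
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        frames.append(grab_frame())
        time.sleep(interval_s)
    return frames
```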
For instance, in response to user input requesting a screenshot, the screenshot module 112 captures a candidate screenshot at intervals of i for the period of time t. Generally, i is definable in various ways, such as time divisions of t, as groups of frames based on a frame rate of the display device 106, etc. In at least one implementation, the candidate screenshots include screenshots of visual content that is output prior to receiving a request to capture a screenshot. The display manager 108, for instance, buffers display content 122 such that display content that is output prior to a request to capture a screenshot (e.g., prior to the screenshot action 202) is available in a buffer and can be used to generate candidate screenshots. At 504, visual features of the candidate screenshots are compared. The screenshot module 112, for example, compares visual features of each of the candidate screenshots to one another. Generally, different visual features are comparable, such as visual objects included in the candidate screenshots, color features of the candidate screenshots (e.g., colors, brightness, contrast, etc.), image aspect ratio of visual features included in the candidate screenshots, and so forth. At 506, it is determined, based on comparing the visual features, whether the candidate screenshots include duplicate visual features or varying visual features. The screenshot module 112, for example, determines whether the candidate screenshots include the same visual features or varying visual features. For instance, if each candidate screenshot includes the same (e.g., unchanging) visual features, the candidate screenshots are determined to include duplicate visual features. If some of the candidate screenshots include differing (e.g., changing) visual features, the candidate screenshots are determined to include varying visual features. For example, in an implementation where the candidate screenshots capture motion of a visual object from visual content, the position and/or orientation of the visual object will vary among at least some of the candidate screenshots. In an event that the candidate screenshots include duplicate visual features, at 508 it is determined that the visual content represents static visual content. The screenshot module 112, for example, determines that the candidate screenshots include duplicate visual features and thus that the visual content represents static visual content. In an event that the candidate screenshots include varying visual features, at 510 it is determined that the visual content represents dynamic visual content. The screenshot module 112, for example, determines that the candidate screenshots include varying visual features and thus that the visual content represents dynamic visual content. At 512, a screenshot is captured based on the determined content type. For instance, when the visual content is determined to be static content, a static screenshot is captured, and when the visual content is determined to be dynamic content, a dynamic screenshot is captured.

FIG. 6 illustrates a flow chart depicting an example method 600 for enabling user input for capturing a screenshot utilizing candidate screenshots in accordance with one or more implementations. The method 600, for instance, is performed in conjunction with the method 400 and/or the method 500. At 602, it is determined whether candidate screenshots represent static content or dynamic content.
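One way to implement the duplicate-versus-varying test at 504/506 is a plain pixel difference between frames. A sketch using Pillow; the tolerance value is an assumption, and real content may warrant a perceptual metric rather than a fixed threshold.

```python
from PIL import ImageChops, ImageStat

def is_dynamic(frames, tolerance: float = 2.0) -> bool:
    """Classify a candidate set: dynamic if any later frame differs from
    the first by more than `tolerance` mean absolute gray levels."""
    base = frames[0].convert("L")
    for frame in frames[1:]:
        diff = ImageChops.difference(base, frame.convert("L"))
        if ImageStat.Stat(diff).mean[0] > tolerance:
            return True   # varying visual features -> dynamic content
    return False          # duplicate visual features -> static content
```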
The screenshot module 112, for example, captures candidate screenshots of visual content and determines whether the candidate screenshots include static visual content or dynamic visual content, such as described above. In an event that the candidate screenshots represent static content ("Static"), at 604 a graphical user interface is presented that includes the candidate screenshots. For instance, the screenshot module 112 presents the screenshot GUI 124 including the candidate screenshots. At 606, a selection of an instance of a candidate screenshot is received. A user, for instance, provides input to the screenshot GUI 124 to select a particular candidate screenshot. At 608, a static screenshot is generated using the instance of the candidate screenshot. The screenshot module 112, for example, generates a static screenshot 118 using the selected candidate screenshot. In at least one implementation, a user is able to select multiple candidate screenshots and each selected candidate screenshot is utilized to generate an instance of a static screenshot 118. In an event that the candidate screenshots represent dynamic content ("Dynamic"), at 610 a graphical user interface is presented that includes the candidate screenshots. For instance, the screenshot module 112 presents the screenshot GUI 124 including the candidate screenshots. At 612, a selection of a subset of candidate screenshots is received. A user, for instance, provides input to the screenshot GUI 124 to select a group of candidate screenshots. At 614, a dynamic screenshot is generated using the subset of candidate screenshots. The screenshot module 112, for example, generates a dynamic screenshot 120 using the selected candidate screenshots. In at least one implementation, the screenshot module 112 concatenates the subset of candidate screenshots to generate the dynamic screenshot, such as in the form of a GIF file and/or other media file that exhibits motion and/or other dynamic visual features when output.

FIG. 7 illustrates a flow chart depicting an example method 700 for enabling user input for selecting a screenshot type in accordance with one or more implementations. The method 700, for instance, is performed in conjunction with the methods 400-600. At 702, a guidance query is presented requesting whether to generate a static screenshot or a dynamic screenshot using candidate screenshots. The screenshot module 112, for example, captures candidate screenshots of dynamic visual content and presents the guidance query 210 to enable a user to specify whether to generate a static screenshot or a dynamic screenshot using the candidate screenshots. At 704, user input is received to select to generate a static screenshot or a dynamic screenshot. The screenshot module 112, for example, detects that a user provides input to the guidance query 210 to select either a static screenshot option or a dynamic screenshot option. At 706, a static screenshot or a dynamic screenshot is generated based on a response to the guidance query. For instance, the screenshot module 112 generates a static screenshot or a dynamic screenshot based on whether a static screenshot option or a dynamic screenshot option is selected. As detailed throughout, a static screenshot can be generated via selection of an instance of a candidate screenshot and a dynamic screenshot can be generated using multiple candidate screenshots, such as a subset of candidate screenshots selected by a user.
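Producing the final artifact from the user's selection is straightforward with an imaging library. A sketch using Pillow, where the frame duration and loop settings are assumptions rather than values from the source:

```python
from PIL import Image

def generate_output(choice: str, selected: list, path: str) -> None:
    """Save a still for a "static" choice; concatenate the selected
    candidates into an animated GIF for a "dynamic" choice."""
    if choice == "static":
        selected[0].save(path)                      # e.g. "shot.png"
    else:
        selected[0].save(path, save_all=True,
                         append_images=selected[1:],
                         duration=250, loop=0)      # e.g. "shot.gif", 250 ms/frame
```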
FIG. 8 illustrates a flow chart depicting an example method 800 for enabling automated selection of a screenshot in accordance with one or more implementations. The method 800, for instance, is performed in conjunction with the methods 400-700. At 802, candidate screenshots of visual content are captured. As described above, for example, the screenshot module 112 captures candidate screenshots of visual content, such as static visual content and/or dynamic visual content. At 804, the candidate screenshots are compared to determine a visual quality for each candidate screenshot of the candidate screenshots. For instance, the screenshot module 112 implements an image quality algorithm and/or set of algorithms to determine image quality for each of the candidate screenshots. Generally, determining image quality considers various image quality parameters, such as image focus (e.g., blurriness), color contrast, image luminance, visual object positioning (e.g., centering), and so forth. In example implementations where candidate screenshots include an image of a human, image quality parameters include human appearance attributes such as eye position (e.g., whether human eyes are open or closed), smile detection, eye gaze direction, and so forth. In at least one implementation, for each candidate screenshot, an image quality score is generated by assigning values for a set of image quality parameters for the candidate screenshot. For instance, a machine learning algorithm is trained using a training set of images to process input digital images and generate quality scores for the input digital images. Thus, a set of candidate screenshots can be input to the machine learning algorithm and the machine learning algorithm can output a quality score for each candidate screenshot. At 806, a static screenshot is generated using a candidate screenshot that exhibits a highest visual quality of the candidate screenshots. For instance, in a scenario where a quality score is generated for each candidate screenshot, a candidate screenshot with a highest quality score is selected and used to generate a static screenshot. Alternatively or additionally, a subset of candidate screenshots with the highest quality scores is output and a user selects a preferred candidate screenshot and/or set of candidate screenshots from the subset for generating a static screenshot and/or set of static screenshots. Accordingly, the described implementations provide automated techniques for capturing screenshots based on content type that overcome deficiencies experienced in conventional techniques, such as the inability to accurately capture screenshots of dynamic content.

The example methods described above may be performed in various ways, such as for implementing different aspects of the systems and scenarios described herein. Generally, any services, components, modules, methods, and/or operations described herein can be implemented using software, firmware, hardware (e.g., fixed logic circuitry), manual processing, or any combination thereof. Some operations of the example methods may be described in the general context of executable instructions stored on computer-readable storage memory that is local and/or remote to a computer processing system, and implementations can include software applications, programs, functions, and the like.
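A classic stand-in for the focus/blur parameter mentioned above is the variance of the Laplacian: sharp frames contain more high-frequency detail and therefore score higher. A sketch with Pillow; this is one common heuristic, not the scoring model the source describes, and because the 8-bit filter clips negative responses it is only a relative measure.

```python
from PIL import Image, ImageFilter, ImageStat

LAPLACIAN = ImageFilter.Kernel((3, 3), [0, 1, 0, 1, -4, 1, 0, 1, 0], scale=1)

def quality_score(frame: Image.Image) -> float:
    """Variance of the Laplacian-filtered grayscale frame; blurry
    candidates score low."""
    return ImageStat.Stat(frame.convert("L").filter(LAPLACIAN)).var[0]

def best_candidate(frames):
    """Candidate screenshot with the highest visual-quality score."""
    return max(frames, key=quality_score)
```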
Alternatively or in addition, any of the functionality described herein can be performed, at least in part, by one or more hardware logic components, such as, and without limitation, Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SoCs), Complex Programmable Logic Devices (CPLDs), and the like. The order in which the methods are described is not intended to be construed as a limitation, and any number or combination of the described method operations can be performed in any order to perform a method, or an alternate method.

FIG. 9 illustrates various components of an example device 900 in which aspects of screenshot capture based on content type can be implemented. The example device 900 can be implemented as any of the devices described with reference to the previous FIGS. 1-8, such as any type of mobile device, mobile phone, wearable device, tablet, computing, communication, entertainment, gaming, media playback, and/or other type of electronic device. For example, the client device 102 as shown and described with reference to FIGS. 1-8 may be implemented as the example device 900. The device 900 includes communication transceivers 902 that enable wired and/or wireless communication of device data 904 with other devices. The device data 904 can include any of device identifying data, device location data, wireless connectivity data, and wireless protocol data. Additionally, the device data 904 can include any type of audio, video, and/or image data. Example communication transceivers 902 include wireless personal area network (WPAN) radios compliant with various IEEE 802.15 (Bluetooth™) standards, wireless local area network (WLAN) radios compliant with any of the various IEEE 802.11 (Wi-Fi™) standards, wireless wide area network (WWAN) radios for cellular phone communication, wireless metropolitan area network (WMAN) radios compliant with various IEEE 802.16 (WiMAX™) standards, and wired local area network (LAN) Ethernet transceivers for network data communication. The device 900 may also include one or more data input ports 906 via which any type of data, media content, and/or inputs can be received, such as user-selectable inputs to the device, messages, music, television content, recorded content, and any other type of audio, video, and/or image data received from any content and/or data source. The data input ports may include USB ports, coaxial cable ports, and other serial or parallel connectors (including internal connectors) for flash memory, DVDs, CDs, and the like. These data input ports may be used to couple the device to any type of components, peripherals, or accessories such as microphones and/or cameras. The device 900 includes a processing system 908 of one or more processors (e.g., any of microprocessors, controllers, and the like) and/or a processor and memory system implemented as a system-on-chip (SoC) that processes computer-executable instructions. The processor system may be implemented at least partially in hardware, which can include components of an integrated circuit or on-chip system, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a complex programmable logic device (CPLD), and other implementations in silicon and/or other hardware.
Alternatively or in addition, the device can be implemented with any one or combination of software, hardware, firmware, or fixed logic circuitry that is implemented in connection with processing and control circuits, which are generally identified at 910. The device 900 may further include any type of a system bus or other data and command transfer system that couples the various components within the device. A system bus can include any one or combination of different bus structures and architectures, as well as control and data lines. The device 900 also includes computer-readable storage memory 912 (e.g., memory devices) that enables data storage, such as data storage devices that can be accessed by a computing device, and that provides persistent storage of data and executable instructions (e.g., software applications, programs, functions, and the like). Examples of the computer-readable storage memory 912 include volatile memory and non-volatile memory, fixed and removable media devices, and any suitable memory device or electronic data storage that maintains data for computing device access. The computer-readable storage memory can include various implementations of random access memory (RAM), read-only memory (ROM), flash memory, and other types of storage media in various memory device configurations. The device 900 may also include a mass storage media device. The computer-readable storage memory 912 provides data storage mechanisms to store the device data 904, other types of information and/or data, and various device applications 914 (e.g., software applications). For example, an operating system 916 can be maintained as software instructions with a memory device and executed by the processing system 908. The device applications may also include a device manager, such as any form of a control application, software application, signal-processing and control module, code that is native to a particular device, a hardware abstraction layer for a particular device, and so on. Computer-readable storage memory 912 represents media and/or devices that enable persistent and/or non-transitory storage of information in contrast to mere signal transmission, carrier waves, or signals per se. Computer-readable storage memory 912 does not include signals per se or transitory signals.

In this example, the device 900 includes a screenshot module 918 that implements aspects of screenshot capture based on content type and may be implemented with hardware components and/or in software as one of the device applications 914. For example, the screenshot module 918 can be implemented as the screenshot module 112 described in detail above. In implementations, the screenshot module 918 may include independent processing, memory, and logic components as a computing and/or electronic device integrated with the device 900. The device 900 also includes screenshot data 920 for implementing aspects of screenshot capture based on content type, which may include data from the screenshot module 918, such as instances of captured screenshots. In this example, the example device 900 also includes a camera 922 and motion sensors 924, such as may be implemented in an inertial measurement unit (IMU). The motion sensors 924 can be implemented with various sensors, such as a gyroscope, an accelerometer, and/or other types of motion sensors to sense motion of the device. The various motion sensors 924 may also be implemented as components of an inertial measurement unit in the device.
The device 900 also includes a wireless module 926, which is representative of functionality to perform various wireless communication tasks. For instance, for the client device 102, the wireless module 926 can be leveraged to scan for and detect wireless networks, as well as negotiate wireless connectivity to wireless networks for the client device 102. The device 900 can also include one or more power sources 928, such as when the device is implemented as a mobile device. The power sources 928 may include a charging and/or power system, and can be implemented as a flexible strip battery, a rechargeable battery, a charged super-capacitor, and/or any other type of active or passive power source. The device 900 also includes an audio and/or video processing system 930 that generates audio data for an audio system 932 and/or generates display data for a display system 934. The audio system and/or the display system may include any devices that process, display, and/or otherwise render audio, video, display, and/or image data. Display data and audio signals can be communicated to an audio component and/or to a display component via an RF (radio frequency) link, S-video link, HDMI (high-definition multimedia interface), composite video link, component video link, DVI (digital video interface), analog audio connection, or other similar communication link, such as media data port 936. In implementations, the audio system and/or the display system are integrated components of the example device. Alternatively, the audio system and/or the display system are external, peripheral components to the example device.

Although implementations of screenshot capture based on content type have been described in language specific to features and/or methods, the subject of the appended claims is not necessarily limited to the specific features or methods described. Rather, the features and methods are disclosed as example implementations, and other equivalent features and methods are intended to be within the scope of the appended claims. Further, various different examples are described, and it is to be appreciated that each described example can be implemented independently or in connection with one or more other described examples. Additional aspects of the techniques, features, and/or methods discussed herein relate to one or more of the following:

In some aspects, the techniques described herein relate to a computing device including: one or more display devices; and one or more modules implemented at least in part in hardware of the computing device to: receive an indication to capture a screenshot of visual content displayed on the one or more display devices; determine whether the visual content represents static visual content or dynamic visual content; capture the visual content as a static screenshot in an event that the visual content represents static visual content; and capture the visual content as a dynamic screenshot in an event that the visual content represents dynamic visual content.

In some aspects, the techniques described herein relate to a computing device, wherein to determine whether the visual content represents static visual content or dynamic visual content includes to communicate a query to a functionality of the computing device and receive a query response identifying whether the visual content represents static visual content or dynamic visual content.
In some aspects, the techniques described herein relate to a computing device, wherein the functionality of the computing device includes one or more of a display manager, an operating system, or an application.

In some aspects, the techniques described herein relate to a computing device, wherein to determine whether the visual content represents static visual content or dynamic visual content includes to: capture multiple candidate screenshots of the visual content; compare the multiple candidate screenshots to determine whether the multiple candidate screenshots include duplicate visual content or varying visual content; determine that the visual content represents static visual content in an event that the multiple candidate screenshots include duplicate visual content; and determine that the visual content represents dynamic visual content in an event that the multiple candidate screenshots include varying visual content.

In some aspects, the techniques described herein relate to a computing device, wherein to capture the visual content as a static screenshot includes to: capture multiple candidate screenshots of the visual content; present a graphical user interface that includes the multiple candidate screenshots; receive a selection of an instance of a candidate screenshot from the multiple candidate screenshots; and generate the static screenshot using the instance of the candidate screenshot.

In some aspects, the techniques described herein relate to a computing device, wherein to capture the visual content as a static screenshot includes to: capture multiple candidate screenshots of the visual content; compare the multiple candidate screenshots to determine a visual quality for each candidate screenshot of the multiple candidate screenshots; and generate the static screenshot using a candidate screenshot that exhibits a highest visual quality of the multiple candidate screenshots.

In some aspects, the techniques described herein relate to a computing device, wherein to capture the visual content as a dynamic screenshot includes to capture multiple candidate screenshots of the visual content, including one or more candidate screenshots of one or more portions of the visual content presented on the one or more display devices prior to receipt of the indication to capture the screenshot.

In some aspects, the techniques described herein relate to a computing device, wherein to capture the visual content as a dynamic screenshot includes to: capture multiple candidate screenshots of the visual content; present a graphical user interface that includes the multiple candidate screenshots; receive a selection of a subset of candidate screenshots from the multiple candidate screenshots; and generate the dynamic screenshot using the subset of candidate screenshots.

In some aspects, the techniques described herein relate to a computing device, wherein the one or more modules are further implemented to: capture multiple candidate screenshots of the visual content; present a guidance query requesting whether to generate the static screenshot using an instance of a candidate screenshot of the multiple candidate screenshots or to generate the dynamic screenshot using multiple candidate screenshots; and generate one of the static screenshot or the dynamic screenshot based on a response to the guidance query.
In some aspects, the techniques described herein relate to a method, including: receiving an indication to capture a screenshot of content displayed on one or more display devices of a computing device; determining whether the content represents static visual content or dynamic visual content; capturing a screenshot of the content based on whether the content represents static visual content or dynamic visual content, including: capturing the content as a static screenshot in an event that the content represents static content; or capturing the content as a dynamic screenshot in an event that the content represents dynamic content.

In some aspects, the techniques described herein relate to a method, wherein the determining whether the visual content represents static visual content or dynamic visual content includes communicating a query to a functionality of the computing device and receiving a query response identifying whether the visual content represents static visual content or dynamic visual content.

In some aspects, the techniques described herein relate to a method, wherein the determining whether the visual content represents static visual content or dynamic visual content includes: capturing multiple candidate screenshots of the visual content; comparing the multiple candidate screenshots to determine whether the multiple candidate screenshots include duplicate visual content or varying visual content; determining that the visual content represents static visual content in an event that the multiple candidate screenshots include duplicate visual content; or determining that the visual content represents dynamic visual content in an event that the multiple candidate screenshots include varying visual content.

In some aspects, the techniques described herein relate to a method, wherein the capturing the content as a static screenshot includes: capturing multiple candidate screenshots of the visual content; comparing the multiple candidate screenshots to determine a visual quality for each candidate screenshot of the multiple candidate screenshots; and generating the static screenshot using a candidate screenshot that exhibits a highest visual quality of the multiple candidate screenshots.

In some aspects, the techniques described herein relate to a method, wherein the capturing the visual content as a dynamic screenshot includes capturing multiple candidate screenshots of the visual content, including one or more candidate screenshots of the visual content presented on the one or more display devices prior to receipt of the indication to capture the screenshot.

In some aspects, the techniques described herein relate to a method, further including: capturing multiple candidate screenshots of the visual content; presenting a guidance query requesting whether to generate the static screenshot using an instance of a candidate screenshot of the multiple candidate screenshots or to generate the dynamic screenshot using multiple candidate screenshots; and generating one of the static screenshot or the dynamic screenshot based on a response to the guidance query.
In some aspects, the techniques described herein relate to a system including: one or more processors implemented at least partially in hardware; and one or more computer-readable storage media storing instructions that are executable by the one or more processors to: receive an indication to capture a screenshot of visual content displayed on one or more display devices of a computing device; capture multiple candidate screenshots of the visual content; compare the multiple candidate screenshots to determine whether the multiple candidate screenshots include duplicate visual content or varying visual content; generate a static screenshot of the visual content in an event that the multiple candidate screenshots include duplicate visual content; and generate a dynamic screenshot of the visual content in an event that the multiple candidate screenshots include varying visual content. In some aspects, the techniques described herein relate to a system, wherein to capture the multiple candidate screenshots of the visual content includes to capture one or more of the candidate screenshots using one or more portions of the visual content output on the one or more display devices prior to receipt of the indication to capture the screenshot of the visual content. In some aspects, the techniques described herein relate to a system, wherein to capture the multiple candidate screenshots of the visual content includes to capture one or more of the candidate screenshots using one or more portions of the visual content output on the one or more display devices during a predetermined time duration prior to receipt of the indication to capture the screenshot of the visual content. In some aspects, the techniques described herein relate to a system, wherein the instructions are further executable by the one or more processors to: capture multiple candidate screenshots of the visual content; present a guidance query requesting whether to generate the static screenshot using an instance of a candidate screenshot of the multiple candidate screenshots or to generate the dynamic screenshot using multiple candidate screenshots; and generate one of the static screenshot or the dynamic screenshot based on a response to the guidance query. In some aspects, the techniques described herein relate to a system, wherein to generate the dynamic screenshot includes to: present a graphical user interface that includes the multiple candidate screenshots; receive a selection of a subset of candidate screenshots from the multiple candidate screenshots; and generate the dynamic screenshot using the subset of candidate screenshots. | 50,134 |
11861142 | DETAILED DESCRIPTION The following detailed description refers to the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the following description to refer to the same or similar parts. While several illustrative embodiments are described herein, modifications, adaptations and other implementations are possible. For example, substitutions, additions, or modifications may be made to the components and steps illustrated in the drawings, and the illustrative methods described herein may be modified by substituting, reordering, removing, or adding steps to the disclosed methods. Accordingly, the following detailed description is not limited to the disclosed embodiments and examples. Instead, the proper scope of the invention is defined by the appended claims. Embodiments of the present disclosure are directed to computer-implemented systems and methods configured for providing information through a web-browser plugin. The disclosed embodiments provide innovative technical features that enable an employer or an employee associated with an online commercial web page to acquire secured product data for a product displayed on that page through a web-browser plugin, according to the user's data accessibility. For example, the disclosed embodiments receive a set of attributes associated with a user device and a request to provide data associated with a target object presented on a web page, determine a data accessibility of the user device based on the received set of attributes, retrieve an object identifier associated with the target object based on the determined data accessibility, transmit the retrieved object identifier to a plurality of systems, wherein the plurality of systems are configured to provide data corresponding to the received object identifier, and provide the received data to the user device via a web-browser plugin implemented in the web page. Moreover, the disclosed embodiments may select data based on the received set of attributes before providing the data to the user device. Referring to FIG. 1A, a schematic block diagram 100 illustrating an exemplary embodiment of a system comprising computerized systems for communications enabling shipping, transportation, and logistics operations is shown. As illustrated in FIG. 1A, system 100 may include a variety of systems, each of which may be connected to one another via one or more networks. The systems may also be connected to one another via a direct connection, for example, using a cable. The depicted systems include a shipment authority technology (SAT) system 101, an external front end system 103, an internal front end system 105, a transportation system 107, mobile devices 107A, 107B, and 107C, seller portal 109, shipment and order tracking (SOT) system 111, fulfillment optimization (FO) system 113, fulfillment messaging gateway (FMG) 115, supply chain management (SCM) system 117, warehouse management system 119, mobile devices 119A, 119B, and 119C (depicted as being inside of fulfillment center (FC) 200), 3rd party fulfillment systems 121A, 121B, and 121C, fulfillment center authorization system (FC Auth) 123, and labor management system (LMS) 125. SAT system 101, in some embodiments, may be implemented as a computer system that monitors order status and delivery status.
For example, SAT system 101 may determine whether an order is past its Promised Delivery Date (PDD) and may take appropriate action, including initiating a new order, reshipping the items in the non-delivered order, canceling the non-delivered order, initiating contact with the ordering customer, or the like. SAT system 101 may also monitor other data, including output (such as a number of packages shipped during a particular time period) and input (such as the number of empty cardboard boxes received for use in shipping). SAT system 101 may also act as a gateway between different devices in system 100, enabling communication (e.g., using store-and-forward or other techniques) between devices such as external front end system 103 and FO system 113. External front end system 103, in some embodiments, may be implemented as a computer system that enables external users to interact with one or more systems in system 100. For example, in embodiments where system 100 enables the presentation of systems to enable users to place an order for an item, external front end system 103 may be implemented as a web server that receives search requests, presents item pages, and solicits payment information. For example, external front end system 103 may be implemented as a computer or computers running software such as the Apache HTTP Server, Microsoft Internet Information Services (IIS), NGINX, or the like. In other embodiments, external front end system 103 may run custom web server software designed to receive and process requests from external devices (e.g., mobile device 102A or computer 102B), acquire information from databases and other data stores based on those requests, and provide responses to the received requests based on acquired information. In some embodiments, external front end system 103 may include one or more of a web caching system, a database, a search system, or a payment system. In one aspect, external front end system 103 may comprise one or more of these systems, while in another aspect, external front end system 103 may comprise interfaces (e.g., server-to-server, database-to-database, or other network connections) connected to one or more of these systems. An illustrative set of steps, illustrated by FIGS. 1B, 1C, 1D, and 1E, will help to describe some operations of external front end system 103. External front end system 103 may receive information from systems or devices in system 100 for presentation and/or display. For example, external front end system 103 may host or provide one or more web pages, including a Search Result Page (SRP) (e.g., FIG. 1B), a Single Detail Page (SDP) (e.g., FIG. 1C), a Cart page (e.g., FIG. 1D), or an Order page (e.g., FIG. 1E). A user device (e.g., using mobile device 102A or computer 102B) may navigate to external front end system 103 and request a search by entering information into a search box. External front end system 103 may request information from one or more systems in system 100. For example, external front end system 103 may request information from FO System 113 that satisfies the search request. External front end system 103 may also request and receive (from FO System 113) a Promised Delivery Date or "PDD" for each product included in the search results. The PDD, in some embodiments, may represent an estimate of when a package containing the product will arrive at the user's desired location or a date by which the product is promised to be delivered at the user's desired location if ordered within a particular period of time, for example, by the end of the day (11:59 PM).
(PDD is discussed further below with respect to FO System 113.) External front end system 103 may prepare an SRP (e.g., FIG. 1B) based on the information. The SRP may include information that satisfies the search request. For example, this may include pictures of products that satisfy the search request. The SRP may also include respective prices for each product, or information relating to enhanced delivery options for each product, PDD, weight, size, offers, discounts, or the like. External front end system 103 may send the SRP to the requesting user device (e.g., via a network). A user device may then select a product from the SRP, e.g., by clicking or tapping a user interface, or using another input device, to select a product represented on the SRP. The user device may formulate a request for information on the selected product and send it to external front end system 103. In response, external front end system 103 may request information related to the selected product. For example, the information may include additional information beyond that presented for a product on the respective SRP. This could include, for example, shelf life, country of origin, weight, size, number of items in package, handling instructions, or other information about the product. The information could also include recommendations for similar products (based on, for example, big data and/or machine learning analysis of customers who bought this product and at least one other product), answers to frequently asked questions, reviews from customers, manufacturer information, pictures, or the like. External front end system 103 may prepare an SDP (Single Detail Page) (e.g., FIG. 1C) based on the received product information. The SDP may also include other interactive elements such as a "Buy Now" button, an "Add to Cart" button, a quantity field, a picture of the item, or the like. The SDP may further include a list of sellers that offer the product. The list may be ordered based on the price each seller offers such that the seller that offers to sell the product at the lowest price may be listed at the top. The list may also be ordered based on the seller ranking such that the highest ranked seller may be listed at the top. The seller ranking may be formulated based on multiple factors, including, for example, the seller's past track record of meeting a promised PDD. External front end system 103 may deliver the SDP to the requesting user device (e.g., via a network). The requesting user device may receive the SDP, which lists the product information. Upon receiving the SDP, the user device may then interact with the SDP. For example, a user of the requesting user device may click or otherwise interact with a "Place in Cart" button on the SDP. This adds the product to a shopping cart associated with the user. The user device may transmit this request to add the product to the shopping cart to external front end system 103. External front end system 103 may generate a Cart page (e.g., FIG. 1D). The Cart page, in some embodiments, lists the products that the user has added to a virtual "shopping cart." A user device may request the Cart page by clicking on or otherwise interacting with an icon on the SRP, SDP, or other pages.
The Cart page may, in some embodiments, list all products that the user has added to the shopping cart, as well as information about the products in the cart such as a quantity of each product, a price for each product per item, a price for each product based on an associated quantity, information regarding PDD, a delivery method, a shipping cost, user interface elements for modifying the products in the shopping cart (e.g., deletion or modification of a quantity), options for ordering other products or setting up periodic delivery of products, options for setting up interest payments, user interface elements for proceeding to purchase, or the like. A user at a user device may click on or otherwise interact with a user interface element (e.g., a button that reads "Buy Now") to initiate the purchase of the product in the shopping cart. Upon doing so, the user device may transmit this request to initiate the purchase to external front end system 103. External front end system 103 may generate an Order page (e.g., FIG. 1E) in response to receiving the request to initiate a purchase. The Order page, in some embodiments, re-lists the items from the shopping cart and requests input of payment and shipping information. For example, the Order page may include a section requesting information about the purchaser of the items in the shopping cart (e.g., name, address, e-mail address, phone number), information about the recipient (e.g., name, address, phone number, delivery information), shipping information (e.g., speed/method of delivery and/or pickup), payment information (e.g., credit card, bank transfer, check, stored credit), user interface elements to request a cash receipt (e.g., for tax purposes), or the like. External front end system 103 may send the Order page to the user device. The user device may enter information on the Order page and click or otherwise interact with a user interface element that sends the information to external front end system 103. From there, external front end system 103 may send the information to different systems in system 100 to enable the creation and processing of a new order with the products in the shopping cart. In some embodiments, external front end system 103 may be further configured to enable sellers to transmit and receive information relating to orders. Internal front end system 105, in some embodiments, may be implemented as a computer system that enables internal users (e.g., employees of an organization that owns, operates, or leases system 100) to interact with one or more systems in system 100. For example, in embodiments where system 100 enables the presentation of systems to enable users to place an order for an item, internal front end system 105 may be implemented as a web server that enables internal users to view diagnostic and statistical information about orders, modify item information, or review statistics relating to orders. For example, internal front end system 105 may be implemented as a computer or computers running software such as the Apache HTTP Server, Microsoft Internet Information Services (IIS), NGINX, or the like. In other embodiments, internal front end system 105 may run custom web server software designed to receive and process requests from systems or devices depicted in system 100 (as well as other devices not depicted), acquire information from databases and other data stores based on those requests, and provide responses to the received requests based on acquired information.
In some embodiments, internal front end system 105 may include one or more of a web caching system, a database, a search system, a payment system, an analytics system, an order monitoring system, or the like. In one aspect, internal front end system 105 may comprise one or more of these systems, while in another aspect, internal front end system 105 may comprise interfaces (e.g., server-to-server, database-to-database, or other network connections) connected to one or more of these systems. Transportation system 107, in some embodiments, may be implemented as a computer system that enables communication between systems or devices in system 100 and mobile devices 107A-107C. Transportation system 107, in some embodiments, may receive information from one or more mobile devices 107A-107C (e.g., mobile phones, smart phones, PDAs, or the like). For example, in some embodiments, mobile devices 107A-107C may comprise devices operated by delivery workers. The delivery workers, who may be permanent, temporary, or shift employees, may utilize mobile devices 107A-107C to effect delivery of packages containing the products ordered by users. For example, to deliver a package, the delivery worker may receive a notification on a mobile device indicating which package to deliver and where to deliver it. Upon arriving at the delivery location, the delivery worker may locate the package (e.g., in the back of a truck or in a crate of packages), scan or otherwise capture data associated with an identifier on the package (e.g., a barcode, an image, a text string, an RFID tag, or the like) using the mobile device, and deliver the package (e.g., by leaving it at a front door, leaving it with a security guard, handing it to the recipient, or the like). In some embodiments, the delivery worker may capture photo(s) of the package and/or may obtain a signature using the mobile device. The mobile device may send information to transportation system 107 including information about the delivery, including, for example, time, date, GPS location, photo(s), an identifier associated with the delivery worker, an identifier associated with the mobile device, or the like. Transportation system 107 may store this information in a database (not pictured) for access by other systems in system 100. Transportation system 107 may, in some embodiments, use this information to prepare and send tracking data to other systems indicating the location of a particular package. In some embodiments, certain users may use one kind of mobile device (e.g., permanent workers may use a specialized PDA with custom hardware such as a barcode scanner, stylus, and other devices) while other users may use other kinds of mobile devices (e.g., temporary or shift workers may utilize off-the-shelf mobile phones and/or smartphones). In some embodiments, transportation system 107 may associate a user with each device. For example, transportation system 107 may store an association between a user (represented by, e.g., a user identifier, an employee identifier, or a phone number) and a mobile device (represented by, e.g., an International Mobile Equipment Identity (IMEI), an International Mobile Subscriber Identity (IMSI), a phone number, a Universal Unique Identifier (UUID), or a Globally Unique Identifier (GUID)). Transportation system 107 may use this association in conjunction with data received on deliveries to analyze data stored in the database in order to determine, among other things, a location of the worker, an efficiency of the worker, or a speed of the worker.
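The association between workers and the devices reporting deliveries, described above, amounts to a lookup keyed by a device identifier. A minimal sketch follows; the identifier format and the field names are assumptions for illustration only.

def register_device(associations, user_id, device_id):
    # Record the user <-> mobile-device association maintained by the
    # transportation system (an IMEI, UUID, or phone number could all
    # serve as the device identifier).
    associations[device_id] = user_id

def worker_for_delivery(associations, delivery_event):
    # Resolve which worker made a delivery from the device that
    # reported it, enabling per-worker location or efficiency analysis.
    return associations.get(delivery_event["device_id"], "unknown")

assoc = {}
register_device(assoc, user_id="EMP-7", device_id="356938035643809")
print(worker_for_delivery(assoc, {"device_id": "356938035643809"}))  # EMP-7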
Seller portal 109, in some embodiments, may be implemented as a computer system that enables sellers or other external entities to electronically communicate with one or more systems in system 100. For example, a seller may utilize a computer system (not pictured) to upload or provide product information, order information, contact information, or the like, for products that the seller wishes to sell through system 100 using seller portal 109. Shipment and order tracking system 111, in some embodiments, may be implemented as a computer system that receives, stores, and forwards information regarding the location of packages containing products ordered by customers (e.g., by a user using devices 102A-102B). In some embodiments, shipment and order tracking system 111 may request or store information from web servers (not pictured) operated by shipping companies that deliver packages containing products ordered by customers. In some embodiments, shipment and order tracking system 111 may request and store information from systems depicted in system 100. For example, shipment and order tracking system 111 may request information from transportation system 107. As discussed above, transportation system 107 may receive information from one or more mobile devices 107A-107C (e.g., mobile phones, smart phones, PDAs, or the like) that are associated with one or more of a user (e.g., a delivery worker) or a vehicle (e.g., a delivery truck). In some embodiments, shipment and order tracking system 111 may also request information from warehouse management system (WMS) 119 to determine the location of individual products inside of a fulfillment center (e.g., fulfillment center 200). Shipment and order tracking system 111 may request data from one or more of transportation system 107 or WMS 119, process it, and present it to a device (e.g., user devices 102A and 102B) upon request. Fulfillment optimization (FO) system 113, in some embodiments, may be implemented as a computer system that stores information for customer orders from other systems (e.g., external front end system 103 and/or shipment and order tracking system 111). FO system 113 may also store information describing where particular items are held or stored. For example, certain items may be stored only in one fulfillment center, while certain other items may be stored in multiple fulfillment centers. In still other embodiments, certain fulfillment centers may be designed to store only a particular set of items (e.g., fresh produce or frozen products). FO system 113 stores this information as well as associated information (e.g., quantity, size, date of receipt, expiration date, etc.). FO system 113 may also calculate a corresponding PDD (promised delivery date) for each product. The PDD, in some embodiments, may be based on one or more factors. For example, FO system 113 may calculate a PDD for a product based on a past demand for a product (e.g., how many times that product was ordered during a period of time), an expected demand for a product (e.g., how many customers are forecast to order the product during an upcoming period of time), a network-wide past demand indicating how many products were ordered during a period of time, a network-wide expected demand indicating how many products are expected to be ordered during an upcoming period of time, one or more counts of the product stored in each fulfillment center 200, which fulfillment center stores each product, expected or current orders for that product, or the like.
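The passage above lists the inputs to a PDD calculation without fixing a formula. The sketch below is one hypothetical way such inputs could combine, padding the promise when forecast demand threatens to exhaust local stock; every constant and name here is an illustrative assumption, not a detail from the disclosure.

from datetime import date, timedelta

def estimate_pdd(today, on_hand_units, expected_demand_units,
                 base_transit_days=2, restock_days=5):
    # Start from a baseline transit time and push the promised date
    # out when expected demand exceeds the units on hand locally.
    days = base_transit_days
    if expected_demand_units > on_hand_units:
        days += restock_days  # stock-out risk delays the promise
    return today + timedelta(days=days)

print(estimate_pdd(date(2024, 1, 8), on_hand_units=40, expected_demand_units=25))  # 2024-01-10
print(estimate_pdd(date(2024, 1, 8), on_hand_units=10, expected_demand_units=25))  # 2024-01-15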
In some embodiments, FO system 113 may determine a PDD for each product on a periodic basis (e.g., hourly) and store it in a database for retrieval or sending to other systems (e.g., external front end system 103, SAT system 101, shipment and order tracking system 111). In other embodiments, FO system 113 may receive electronic requests from one or more systems (e.g., external front end system 103, SAT system 101, shipment and order tracking system 111) and calculate the PDD on demand. Fulfillment messaging gateway (FMG) 115, in some embodiments, may be implemented as a computer system that receives a request or response in one format or protocol from one or more systems in system 100, such as FO system 113, converts it to another format or protocol, and forwards it in the converted format or protocol to other systems, such as WMS 119 or 3rd party fulfillment systems 121A, 121B, or 121C, and vice versa. Supply chain management (SCM) system 117, in some embodiments, may be implemented as a computer system that performs forecasting functions. For example, SCM system 117 may forecast a level of demand for a particular product based on, for example, a past demand for products, an expected demand for a product, a network-wide past demand, a network-wide expected demand, a count of products stored in each fulfillment center 200, expected or current orders for each product, or the like. In response to this forecasted level and the amount of each product across all fulfillment centers, SCM system 117 may generate one or more purchase orders to purchase and stock a sufficient quantity to satisfy the forecasted demand for a particular product. Warehouse management system (WMS) 119, in some embodiments, may be implemented as a computer system that monitors workflow. For example, WMS 119 may receive event data from individual devices (e.g., devices 107A-107C or 119A-119C) indicating discrete events. For example, WMS 119 may receive event data indicating the use of one of these devices to scan a package. As discussed below with respect to fulfillment center 200 and FIG. 2, during the fulfillment process, a package identifier (e.g., a barcode or RFID tag data) may be scanned or read by machines at particular stages (e.g., automated or handheld barcode scanners, RFID readers, high-speed cameras, devices such as tablet 119A, mobile device/PDA 119B, computer 119C, or the like). WMS 119 may store each event indicating a scan or a read of a package identifier in a corresponding database (not pictured) along with the package identifier, a time, date, location, user identifier, or other information, and may provide this information to other systems (e.g., shipment and order tracking system 111). WMS 119, in some embodiments, may store information associating one or more devices (e.g., devices 107A-107C or 119A-119C) with one or more users associated with system 100. For example, in some situations, a user (such as a part- or full-time employee) may be associated with a mobile device in that the user owns the mobile device (e.g., the mobile device is a smartphone). In other situations, a user may be associated with a mobile device in that the user is temporarily in custody of the mobile device (e.g., the user checked the mobile device out at the start of the day, will use it during the day, and will return it at the end of the day). WMS 119, in some embodiments, may maintain a work log for each user associated with system 100.
For example, WMS 119 may store information associated with each employee, including any assigned processes (e.g., unloading trucks, picking items from a pick zone, rebin wall work, packing items), a user identifier, a location (e.g., a floor or zone in a fulfillment center 200), a number of units moved through the system by the employee (e.g., number of items picked, number of items packed), an identifier associated with a device (e.g., devices 119A-119C), or the like. In some embodiments, WMS 119 may receive check-in and check-out information from a timekeeping system, such as a timekeeping system operated on a device 119A-119C. 3rd party fulfillment (3PL) systems 121A-121C, in some embodiments, represent computer systems associated with third-party providers of logistics and products. For example, while some products are stored in fulfillment center 200 (as discussed below with respect to FIG. 2), other products may be stored off-site, may be produced on demand, or may be otherwise unavailable for storage in fulfillment center 200. 3PL systems 121A-121C may be configured to receive orders from FO system 113 (e.g., through FMG 115) and may provide products and/or services (e.g., delivery or installation) to customers directly. In some embodiments, one or more of 3PL systems 121A-121C may be part of system 100, while in other embodiments, one or more of 3PL systems 121A-121C may be outside of system 100 (e.g., owned or operated by a third-party provider). Fulfillment Center Auth system (FC Auth) 123, in some embodiments, may be implemented as a computer system with a variety of functions. For example, in some embodiments, FC Auth 123 may act as a single sign-on (SSO) service for one or more other systems in system 100. For example, FC Auth 123 may enable a user to log in via internal front end system 105, determine that the user has similar privileges to access resources at shipment and order tracking system 111, and enable the user to access those privileges without requiring a second log in process. FC Auth 123, in other embodiments, may enable users (e.g., employees) to associate themselves with a particular task. For example, some employees may not have an electronic device (such as devices 119A-119C) and may instead move from task to task, and zone to zone, within a fulfillment center 200, during the course of a day. FC Auth 123 may be configured to enable those employees to indicate what task they are performing and what zone they are in at different times of day. Labor management system (LMS) 125, in some embodiments, may be implemented as a computer system that stores attendance and overtime information for employees (including full-time and part-time employees). For example, LMS 125 may receive information from FC Auth 123, WMS 119, devices 119A-119C, transportation system 107, and/or devices 107A-107C. The particular configuration depicted in FIG. 1A is an example only. For example, while FIG. 1A depicts FC Auth system 123 connected to FO system 113, not all embodiments require this particular configuration. Indeed, in some embodiments, the systems in system 100 may be connected to one another through one or more public or private networks, including the Internet, an Intranet, a WAN (Wide-Area Network), a MAN (Metropolitan-Area Network), a wireless network compliant with the IEEE 802.11a/b/g/n Standards, a leased line, or the like. In some embodiments, one or more of the systems in system 100 may be implemented as one or more virtual servers implemented at a data center, server farm, or the like. FIG. 2 depicts a fulfillment center 200.
Fulfillment center 200 is an example of a physical location that stores items for shipping to customers when ordered. Fulfillment center (FC) 200 may be divided into multiple zones, each of which is depicted in FIG. 2. These "zones," in some embodiments, may be thought of as virtual divisions between different stages of a process of receiving items, storing the items, retrieving the items, and shipping the items. While the "zones" are depicted in FIG. 2, other divisions of zones are possible, and the zones in FIG. 2 may be omitted, duplicated, or modified in some embodiments. Inbound zone 203 represents an area of FC 200 where items are received from sellers who wish to sell products using system 100 from FIG. 1A. For example, a seller may deliver items 202A and 202B using truck 201. Item 202A may represent a single item large enough to occupy its own shipping pallet, while item 202B may represent a set of items that are stacked together on the same pallet to save space. A worker will receive the items in inbound zone 203 and may optionally check the items for damage and correctness using a computer system (not pictured). For example, the worker may use a computer system to compare the quantity of items 202A and 202B to an ordered quantity of items. If the quantity does not match, that worker may refuse one or more of items 202A or 202B. If the quantity does match, the worker may move those items (using, e.g., a dolly, a handtruck, a forklift, or manually) to buffer zone 205. Buffer zone 205 may be a temporary storage area for items that are not currently needed in the picking zone, for example, because there is a high enough quantity of that item in the picking zone to satisfy forecasted demand. In some embodiments, forklifts 206 operate to move items around buffer zone 205 and between inbound zone 203 and drop zone 207. If there is a need for items 202A or 202B in the picking zone (e.g., because of forecasted demand), a forklift may move items 202A or 202B to drop zone 207. Drop zone 207 may be an area of FC 200 that stores items before they are moved to picking zone 209. A worker assigned to the picking task (a "picker") may approach items 202A and 202B in the picking zone, scan a barcode for the picking zone, and scan barcodes associated with items 202A and 202B using a mobile device (e.g., device 119B). The picker may then take the item to picking zone 209 (e.g., by placing it on a cart or carrying it). Picking zone 209 may be an area of FC 200 where items 208 are stored on storage units 210. In some embodiments, storage units 210 may comprise one or more of physical shelving, bookshelves, boxes, totes, refrigerators, freezers, cold stores, or the like. In some embodiments, picking zone 209 may be organized into multiple floors. In some embodiments, workers or machines may move items into picking zone 209 in multiple ways, including, for example, a forklift, an elevator, a conveyor belt, a cart, a handtruck, a dolly, an automated robot or device, or manually. For example, a picker may place items 202A and 202B on a handtruck or cart in drop zone 207 and walk items 202A and 202B to picking zone 209. A picker may receive an instruction to place (or "stow") the items in particular spots in picking zone 209, such as a particular space on a storage unit 210. For example, a picker may scan item 202A using a mobile device (e.g., device 119B). The device may indicate where the picker should stow item 202A, for example, using a system that indicates an aisle, shelf, and location.
The device may then prompt the picker to scan a barcode at that location before stowing item 202A in that location. The device may send (e.g., via a wireless network) data to a computer system such as WMS 119 in FIG. 1A indicating that item 202A has been stowed at the location by the user using device 119B. Once a user places an order, a picker may receive an instruction on device 119B to retrieve one or more items 208 from storage unit 210. The picker may retrieve item 208, scan a barcode on item 208, and place it on transport mechanism 214. While transport mechanism 214 is represented as a slide, in some embodiments, the transport mechanism may be implemented as one or more of a conveyor belt, an elevator, a cart, a forklift, a handtruck, a dolly, or the like. Item 208 may then arrive at packing zone 211. Packing zone 211 may be an area of FC 200 where items are received from picking zone 209 and packed into boxes or bags for eventual shipping to customers. In packing zone 211, a worker assigned to receiving items (a "rebin worker") will receive item 208 from picking zone 209 and determine what order it corresponds to. For example, the rebin worker may use a device, such as computer 119C, to scan a barcode on item 208. Computer 119C may indicate visually which order item 208 is associated with. This may include, for example, a space or "cell" on a wall 216 that corresponds to an order. Once the order is complete (e.g., because the cell contains all items for the order), the rebin worker may indicate to a packing worker (or "packer") that the order is complete. The packer may retrieve the items from the cell and place them in a box or bag for shipping. The packer may then send the box or bag to a hub zone 213, e.g., via forklift, cart, dolly, handtruck, conveyor belt, manually, or otherwise. Hub zone 213 may be an area of FC 200 that receives all boxes or bags ("packages") from packing zone 211. Workers and/or machines in hub zone 213 may retrieve package 218 and determine which portion of a delivery area each package is intended to go to, and route the package to an appropriate camp zone 215. For example, if the delivery area has two smaller sub-areas, packages will go to one of two camp zones 215. In some embodiments, a worker or machine may scan a package (e.g., using one of devices 119A-119C) to determine its eventual destination. Routing the package to camp zone 215 may comprise, for example, determining a portion of a geographical area that the package is destined for (e.g., based on a postal code) and determining a camp zone 215 associated with the portion of the geographical area. Camp zone 215, in some embodiments, may comprise one or more buildings, one or more physical spaces, or one or more areas, where packages are received from hub zone 213 for sorting into routes and/or sub-routes. In some embodiments, camp zone 215 is physically separate from FC 200 while in other embodiments camp zone 215 may form a part of FC 200. Workers and/or machines in camp zone 215 may determine which route and/or sub-route a package 220 should be associated with, for example, based on a comparison of the destination to an existing route and/or sub-route, a calculation of workload for each route and/or sub-route, the time of day, a shipping method, the cost to ship the package 220, a PDD associated with the items in package 220, or the like. In some embodiments, a worker or machine may scan a package (e.g., using one of devices 119A-119C) to determine its eventual destination.
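Routing a package from the hub zone to a camp zone, as described above, reduces to mapping a destination to a served delivery area. The sketch below keys on a postal-code prefix; the prefix length and the zone table are invented for illustration and are not drawn from the disclosure.

# Hypothetical postal-prefix-to-camp-zone table; a real deployment
# would derive this from the delivery areas each camp zone serves.
CAMP_ZONE_BY_PREFIX = {
    "100": "camp-north",
    "101": "camp-north",
    "200": "camp-south",
}

def route_package(postal_code, default_zone="camp-overflow"):
    # Determine the portion of the geographical area the package is
    # destined for (here, a postal-code prefix) and look up the
    # associated camp zone.
    return CAMP_ZONE_BY_PREFIX.get(postal_code[:3], default_zone)

print(route_package("10023"))  # -> camp-north
print(route_package("99901"))  # -> camp-overflow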
Once package 220 is assigned to a particular route and/or sub-route, a worker and/or machine may move package 220 to be shipped. In exemplary FIG. 2, camp zone 215 includes a truck 222, a car 226, and delivery workers 224A and 224B. In some embodiments, truck 222 may be driven by delivery worker 224A, where delivery worker 224A is a full-time employee that delivers packages for FC 200 and truck 222 is owned, leased, or operated by the same company that owns, leases, or operates FC 200. In some embodiments, car 226 may be driven by delivery worker 224B, where delivery worker 224B is a "flex" or occasional worker that is delivering on an as-needed basis (e.g., seasonally). Car 226 may be owned, leased, or operated by delivery worker 224B. According to an aspect of the present disclosure, a web-browser plugin for providing information may be implemented as part of a tangible storage medium readable by a processing unit and storing instructions for execution by the processing unit for performing a method. The web-browser plugin may comprise a software component that adds a specific feature to an existing computer program such as a web browser displaying a web page. The web browser may provide services which the web-browser plugin can use, including a way for the web-browser plugin to register itself with the web browser and a protocol for the exchange of data with the web-browser plugin. In some embodiments, the web-browser plugin may operate one or more of external front end system 103 or internal front end system 105. The preferred embodiment comprises implementing the web-browser plugin to operate internal front end system 105 for providing information, but one of ordinary skill will understand that other implementations are possible. FIG. 3A is an exemplary flow chart of process 300 for providing information through a web-browser plugin, consistent with the disclosed embodiments. While FIG. 3A is described with respect to a web-browser plugin operating internal front end system 105, one of ordinary skill in the art will recognize that other configurations are possible. For example, a web-browser plugin may operate external front end system 103 or any other computer-implemented system capable of performing the processes described below. In step 301, a web-browser plugin may receive, from a user device, a request to provide data associated with a target object presented on a web page. The user device may transmit the request when a user associated with the user device configures a web browser displayed on the user device. For example, a web browser may provide a web page (such as the SDP in FIG. 1C or SDP 400 in FIG. 4) presenting a target object and a web-browser plugin (such as interactive user interface element 401 in FIG. 4), and a user device may configure the web-browser plugin to request information associated with the presented target object. The web-browser plugin may be presented in the form of a button, and the user device may receive a press or a click on the button to request the information. In step 302, the web-browser plugin may access a set of attributes associated with a user of the user device. The user associated with the user device may be an employer or an employee of an enterprise owning or operating system 100 of FIG. 1A. The set of attributes may vary based on a user's role, status, or responsibilities in the enterprise. For example, if a user requesting object information manages an inventory, a set of attributes may provide inventory indicators and comprise permissions limited to inventory.
By way of further example, if a user requesting object information manages sales, a set of attributes may provide sales indicators and comprise permissions limited to sales. In some embodiments, a user device may incorporate a set of attributes associated with a user. For example, a web-browser plugin may automatically access a set of attributes from a user device when a user configures a web page to request target object data. In another embodiment, a web-browser plugin may access a set of attributes associated with a user of the user device based on login credentials entered by the user. The login credentials, when properly entered, may provide a set of attributes associated with the login credentials. For example, a user may use any computing device and enter his/her login credentials on a web browser. When the user is properly logged in, a web-browser plugin may access a set of attributes associated with the login credentials provided by the web browser. In step 303, the web-browser plugin, by transmitting the set of attributes to internal front end system 105, may determine data accessibility of the user based on the received set of attributes. Internal front end system 105, in response to the received set of attributes, may determine data accessibility of the user. Data accessibility can be determined by parsing a permission incorporated in the received set of attributes. The received set of attributes may comprise a permission to access data stored in a variety of systems included in system 100 of FIG. 1A. In some embodiments, a permission can be limited to data stored in a specific system. For example, if a user manages inventory, the user's permission to access data associated with a target object may be limited to target object inventory data. While it is described that a permission can be limited to a specific system, it is appreciated that the permission may grant access to one or more systems. In step 304, the web-browser plugin may retrieve an object identifier associated with the target object, based on the data accessibility determined in step 303, via the web browser. The object identifier may be incorporated in the web page (SDP). For example, external front end system 103 may receive information from systems or devices in system 100 (FIG. 1) to display and present a Single Detail Page (SDP) (e.g., FIG. 1C and SDP 400 in FIG. 4), and external front end system 103 may implement an object identifier in the SDP which can be accessed by a web browser displaying the SDP. If data accessibility was determined to be granted for accessing data in one or more systems, the web-browser plugin may retrieve an object identifier incorporated in the web page (SDP). The object identifier may be in the form of a Stock-keeping unit (SKU) ID. In step 305, the web-browser plugin, via internal front end system 105, may transmit the object identifier retrieved in step 304 to a plurality of systems storing data associated with the target object. The plurality of systems can be configured to provide data corresponding to the received object identifier in response to the received object identifier and to update data daily. For example, internal front end system 105 may retrieve inventory data associated with a target object by transmitting an object identifier associated with the target object to FO system 113. As described above with respect to FIG. 1A, FO system 113 may store information describing where particular items are held or stored.
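Stepping back to steps 302 and 303, the accessibility determination hinges on parsing permissions out of the attribute set to decide which systems' data the user may reach. A minimal sketch follows; the attribute schema, role names, and system labels are assumptions for illustration, not details drawn from the disclosure.

def accessible_systems(attributes):
    # Map each permission carried in the user's attribute set to the
    # backend systems it unlocks, and union the grants together.
    role_grants = {
        "inventory": {"FO"},
        "sales": {"FO", "SAT"},
        "admin": {"FO", "SAT", "SCM"},
    }
    granted = set()
    for permission in attributes.get("permissions", []):
        granted |= role_grants.get(permission, set())
    return granted

attrs = {"user": "j.doe", "permissions": ["inventory"]}
print(accessible_systems(attrs))  # -> {'FO'}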
In another example, internal front end system 105 may retrieve sales data associated with a target object by transmitting an object identifier associated with the target object to FO system 113. As described above with respect to FIG. 1A, FO system 113 may store information for customer orders from other systems (e.g., external front end system 103 and/or shipment and order tracking system 111). It is appreciated that internal front end system 105 may transmit the retrieved object identifier to other systems such as SAT system 101 storing order status and delivery status, SCM system 117 storing forecasted demand data, etc. In some embodiments, internal front end system 105 may transmit the retrieved object identifier to one or more target systems storing data associated with the target object. The target systems can be determined by parsing the received set of attributes comprising one or more indicators and permissions. For example, if a permission or an indicator only grants access to data stored in FO system 113, internal front end system 105 may only retrieve stored data associated with a target object from FO system 113. When internal front end system 105 receives data from the target systems, internal front end system 105 may provide the received data to the web-browser plugin, wherein the web-browser plugin formats the received data and provides the formatted data on the user device. (Exemplary data presented by the web-browser plugin are discussed further below with respect to FIGS. 4 and 5A-C.) The provided data may comprise a selectable element, wherein the selectable element generates another user interface providing detailed data when the user device configures the selectable element. The provided data may also comprise sales and inventory data presented in the form of a list. The provided data may be displayed in any way that statistics can be visually displayed, such as in graph(s), chart(s), line(s), text, number(s), etc. In another embodiment, the web-browser plugin, instead of determining target systems, may collect data from a plurality of systems and filter accessible data from the collected data based on a set of attributes via internal front end system 105. The web-browser plugin may begin executing the filtering process and the processes for providing information in FIG. 3B. FIG. 3B depicts an exemplary process 310 for providing data received from a plurality of systems to a user device by the web-browser plugin, consistent with disclosed embodiments. As discussed above with respect to step 305 in FIG. 3A, internal front end system 105 may receive data from a plurality of systems by transmitting an object identifier to the plurality of systems. Process 310 may operate various processes to provide the received data to the user device. Referring to FIG. 3B, exemplary process 310 may begin at block 311. In step 311, the web-browser plugin may parse the received set of attributes to select accessible data from the received data. The received set of attributes may comprise one or more indicators and permissions for selecting the accessible data. For example, if a permission or an indicator only grants access to data stored in FO system 113, a web-browser plugin may only select received data from FO system 113 and remove other data. Systems may incorporate an origin indicator into data to enable the web-browser plugin to filter data based on its origin.
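The fan-out of the object identifier to multiple systems, and the origin indicators just mentioned, can be sketched as follows. The client interface and record shapes are illustrative assumptions; any callable keyed by a system name stands in for a real backend.

def fetch_object_data(sku_id, target_systems, clients):
    # Send the retrieved object identifier (e.g., an SKU ID) to each
    # accessible system and tag every record with an origin indicator
    # so the plugin can later filter data by its origin.
    results = []
    for name in target_systems:
        client = clients.get(name)
        if client is None:
            continue
        record = client(sku_id)
        record["origin"] = name
        results.append(record)
    return results

clients = {
    "FO": lambda sku: {"sku": sku, "on_hand": 42},
    "SAT": lambda sku: {"sku": sku, "open_orders": 3},
}
print(fetch_object_data("SKU-123", {"FO"}, clients))
# -> [{'sku': 'SKU-123', 'on_hand': 42, 'origin': 'FO'}]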
When the web-browser plugin selects accessible data from the received data, in some embodiments, the web-browser plugin may format the selected accessible data and provide the formatted accessible data on the user device. (Exemplary data presented by the web-browser plugin are discussed further below with respect to FIGS. 4 and 5A-C.) The provided data may comprise a selectable element, wherein the selectable element generates another user interface providing detailed data when the user device configures the selectable element. The provided data may also comprise sales and inventory data presented in the form of a list. The web-browser plugin may provide data in any way that statistics can be visually displayed, such as in graph(s), chart(s), line(s), text, number(s), etc. In another embodiment, the web-browser plugin may operate processes 312-314 before providing the selected accessible data to the user device. In step 312, the web-browser plugin may use internal front end system 105 to determine an amount of interaction with each of the selected accessible data over a predetermined period of time. For example, a received set of attributes may comprise interaction indicators providing an amount of interaction that a user device, or a user associated with the received set of attributes, has had with each of the selected accessible data over a predetermined time period. In step 313, the web-browser plugin may generate a list comprising the selected accessible data by automatically moving the most interacted-with selected accessible data to a top of the generated list based on the determined amount of interaction. For example, when a user device or a user associated with the received set of attributes interacted most with sales data associated with a target object among the selected accessible data, a web-browser plugin may generate a list and automatically move the sales data to the top of the generated list. In step 314, the web-browser plugin may provide the list generated in step 313 to the user device. The provided data may comprise a selectable element, wherein the selectable element generates another user interface providing detailed data when the user device configures the selectable element. The provided data may also comprise sales and inventory data. The web-browser plugin may provide data in any way that statistics can be visually displayed, such as in graph(s), chart(s), line(s), text, number(s), etc. FIG. 4 depicts an exemplary web browser incorporating web-browser plugin 401 and the Single Detail Page of FIG. 1C, consistent with disclosed embodiments. Single Detail Page (SDP) 400, similar to the SDP depicted in FIG. 1C, may include a product and information about the product along with interactive user interface elements. The exemplary web browser may enable the web-browser plugin to interact with SDP 400. Web-browser plugin 401 (represented in FIG. 4 as an interactive user interface element) may be configured to receive a click, a tap, or any other interaction from a user. Upon receiving the interaction, web-browser plugin 401 may display data associated with a target object (the product included in SDP 400) by operating processes 300 and 310 described in FIGS. 3A and 3B. If a user or a user device has permissions to access one or more data associated with the target object, web-browser plugin 401 may provide an interface comprising data, such as the interface depicted in FIG. 5A. FIG. 5A depicts an exemplary interface 500 providing requested information in the form of a list by a web-browser plugin, consistent with disclosed embodiments.
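The interaction-based ordering of steps 312-314, which surfaces the most-used data at the top of a list such as the one shown in interface 500, can be sketched in a few lines; the section names and counts below are invented for illustration.

def order_by_interaction(sections, interaction_counts):
    # Move the most-interacted-with data sections to the top of the
    # generated list; counts would come from interaction indicators
    # carried in the received set of attributes.
    return sorted(sections,
                  key=lambda s: interaction_counts.get(s, 0),
                  reverse=True)

sections = ["identifiers", "inventory", "sales"]
counts = {"sales": 17, "inventory": 4}
print(order_by_interaction(sections, counts))
# -> ['sales', 'inventory', 'identifiers']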
In the exemplary interface 500, the web-browser plugin provides various information associated with a product displayed on the SDP, such as identifiers (SKU ID, Product ID, Vendor Item ID, and barcode), status, sales, and a supplier of the product. The information provided by the web-browser plugin is not limited to that in FIG. 5A. Interface 500 may also include button 501, which provides detailed data associated with the product when button 501 receives a tap or click. The web-browser plugin may provide an interface comprising the detailed data, such as the interface depicted in FIG. 5B. FIG. 5B depicts an exemplary interface 510 providing detailed information associated with a target object (product), consistent with disclosed embodiments. Exemplary interface 510 may comprise product information 511, instock detail 512, inventory status 513, primary supplier information 514, and order history 515. The detailed information provided is not limited to that in FIG. 5B. FIG. 5C depicts another exemplary interface 520 providing requested information in the form of a graph by a web-browser plugin, consistent with disclosed embodiments. In the exemplary interface 520, the web-browser plugin provides inventory data 521 and sales data 522 associated with a product displayed in the SDP in the form of graphs. The graphical representation of requested data enables a user associated with a user device to conveniently perceive the information of interest. While the present disclosure has been shown and described with reference to particular embodiments thereof, it will be understood that the present disclosure can be practiced, without modification, in other environments. The foregoing description has been presented for purposes of illustration. It is not exhaustive and is not limited to the precise forms or embodiments disclosed. Modifications and adaptations will be apparent to those skilled in the art from consideration of the specification and practice of the disclosed embodiments. Additionally, although aspects of the disclosed embodiments are described as being stored in memory, one skilled in the art will appreciate that these aspects can also be stored on other types of computer-readable media, such as secondary storage devices, for example, hard disks or CD-ROM, or other forms of RAM or ROM, USB media, DVD, Blu-ray, or other optical drive media. Computer programs based on the written description and disclosed methods are within the skill of an experienced developer. Various programs or program modules can be created using any of the techniques known to one skilled in the art or can be designed in connection with existing software. For example, program sections or program modules can be designed in or by means of .Net Framework, .Net Compact Framework (and related languages, such as Visual Basic, C, etc.), Java, C++, Objective-C, HTML, HTML/AJAX combinations, XML, or HTML with included Java applets. One or more memory devices may store data and instructions used to perform one or more features of the disclosed embodiments. For example, memory may represent a tangible and non-transitory computer-readable medium having stored therein computer programs, sets of instructions, code, or data to be executed by a processor. Memory may include, for example, a removable memory chip (e.g., EPROM, RAM, ROM, DRAM, EEPROM, flash memory devices, or other volatile or non-volatile memory devices) or other removable storage units that allow instructions and data to be accessed by a processor.
One or more memory devices may also include instructions that, when executed by a processor, perform operations consistent with the functionalities disclosed herein. Devices consistent with disclosed embodiments are not limited to separate programs or computers configured to perform dedicated tasks. For example, memory may include one or more programs to perform one or more functions of the disclosed embodiments. One or more processors may include one or more known processing devices, such as a microprocessor from the Pentium™ or Xeon™ family manufactured by Intel™, the Turion™ family manufactured by AMD™, the "Ax" or "Sx" family manufactured by Apple™, or any of various processors manufactured by Sun Microsystems. The disclosed embodiments are not limited to any type of processor(s). Moreover, while illustrative embodiments have been described herein, the scope includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations, and/or alterations as would be appreciated by those skilled in the art based on the present disclosure. The limitations in the claims are to be interpreted broadly based on the language employed in the claims and not limited to examples described in the present specification or during the prosecution of the application. The examples are to be construed as non-exclusive. Furthermore, the steps of the disclosed methods may be modified in any manner, including by reordering steps and/or inserting or deleting steps. It is intended, therefore, that the specification and examples be considered as illustrative only, with a true scope and spirit being indicated by the following claims and their full scope of equivalents. | 53,702 |
11861143 | DETAILED DESCRIPTION Aspects of the present disclosure involve systems and methods for generating a user interface for an image document based on a generated list of categories and subcategories from an original electronic document. The categories and subcategories are generated through natural language processing ("NLP") based on machine learning. The categories and subcategories are locationally linked to text from the image document and identify the ontological content of the electronic document. Categories and subcategories are provided to a user through the interface, allowing rapid and intuitive navigation of the electronic document through the displayed categories. As more documents are processed, the system may become more accurate at identifying categories and subcategories. Further, similarity groupings across multiple documents may be achieved by the system by applying clustering across a multitude of received documents. As more documents are received, the clustering may become more effective at determining which documents are similar to each other. The present disclosure illustrates aspects of the technology in relationship to an example of an oil and gas legal document. However, it is to be appreciated that the systems and methods disclosed are not limited to oil and gas legal documents but rather can be realized for a multitude of document types and for a multitude of industries. An oil and gas legal document generally refers to a legal document memorializing a transaction and defining the rights and obligations of the parties to the transaction, with such transactions including sales, leases, assignments, and divestitures relating to the oil and gas industry. Often, only document images, such as a PDF, JPG, PNG, and the like, are available, and the document images may not contain directly searchable text. For any given type of transaction, the legal documents may vary in numerous ways, and such variations may be from numerous sources, even across documents for the same type of transaction. For example, the terminology and organization of the documents may vary, and such variations may be based on the origin of the document (e.g., one party may use a different form of document than another), and terms and organization may change over time due to changes in the law or those drafting the documents, and other changes. Since terminology may vary in the document and related portions may be in different portions of the document, merely making text searchable does not substantially increase speed of processing and/or decrease error rates. Oilfield exploration and drilling typically involve large parcels of land and aggregation of rights from a plurality of land and/or titleholders, and the aggregation of rights can involve numerous legal documents and transactions, including those contemporaneous with the aggregation as well as historical documents needed to ensure that rights over time have been properly conveyed. The systems and methods disclosed can identify the conceptual content (e.g., particular types of clauses or provisions of a legal document) of a document regardless of the exact terminology, the arrangement and organization of the document, and other differences across documents. A more detailed discussion of various systems and methods of the present disclosure follows. Generally, the system receives a document as an image file (e.g., PDF, JPG, PNG, etc.), and the system extracts text from the image file.
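As a rough, assumption-laden illustration of the pipeline just outlined — extracted text classified into categories while staying locationally linked to the document image — the sketch below trains a toy classifier and tags OCR-extracted sections that carry their source locations. The model choice (TF-IDF features with logistic regression), the training snippets, and the record fields are assumptions; the disclosure does not prescribe a particular model.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled clauses; a production model would be trained on many
# previously processed, labeled oil and gas documents.
train_text = [
    "Lessor hereby leases to Lessee for a primary term of three years",
    "Lessee shall pay a royalty of one-eighth of all oil produced",
    "Lessee may pool the leased premises with other lands",
]
train_labels = ["term", "royalties", "pooling"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_text, train_labels)

# Each OCR-extracted section keeps the location it came from, so the
# predicted category stays locationally linked to the document image.
sections = [{"page": 1, "offset": 240,
             "text": "royalty of 1/8 of all gas sold from the premises"}]
for section in sections:
    section["category"] = model.predict([section["text"]])[0]
print(sections)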
In some embodiments, the system may receive one or more images of, for example, oil and gas documents. In some cases, the received image document may have been pre-processed to extract the text and thus includes the text-level information. Text extraction can be done by various tools available on the market today falling within the broad family of Optical Character Recognition (“OCR”) software. The extracted text may be associated or otherwise linked with the particular location in the document from which it was found or extracted. In some embodiments, the system may generate one or more graphical user interfaces to facilitate receiving and storing the images of, for example, oil and gas documents. For example, a graphical user interface may include a field that allows users to provide file locations for uploading, an area that allows users to drag and drop files, and/or any other appropriate file transfer technology, etc. In some embodiments, the file transfer may be secured using one or more security protocols. Extracted text may then be fed into a trained machine learning model. The trained machine learning model may be trained on sample data and/or previously received documents so that it can identify categories and subcategories, which it associates with particular sections of text. Thus, even if a document does not include particular section titles, spacing, key words, or other identifiers, extracted text may still be associated with an appropriate category. Having identified categories, which may further include subcategories, associated with particular sections of the text, the particular locations associated with the particular sections of text can then also be associated with the identified categories and subcategories as well. A graphical user interface may then be generated and presented to a user to allow the user to navigate through the processed image of the image document. The graphical user interface can include a first portion, including an image of the document, and a second portion including information related to one or more categories present and/or not present in the document. Navigation may be based on category and subcategory, where the user interface automatically scrolls to the location in the document where the text associated with the selected category is located, or the text may be selected to cause the user interface to identify the category and/or subcategory to which the text pertains. The ordering of the extracted text in the second portion may not be the same as the ordering of the source text (e.g., the portion of the image from which the text is extracted) in the image of the document. Categories and subcategories generally refer to provisions and components of provisions of a transactional document. A category may encompass one or more subcategories, and a subcategory may be related to more than one category. In the particular context of legal documents for the oil and gas industry, some example categories include, without limitation, “parties,” “term,” “lease location and description,” “shut-in” (e.g., shut-in provisions), “royalties on oil and gas,” “pooling and units,” “title,” “assignment,” “surrender,” “retained acreage,” “surface,” “subsurface,” “payments and audit,” “legal,” “lease header,” “unused” (e.g., portions unassigned to categories), etc.
Subcategories for the categories “parties” and “term” may include, but are not limited to, “lessor name,” “lessee name,” “lessor address,” “lessee address,” “lease date,” “primary term,” and “secondary term.” Subcategories for the category “lease location and description” may include, but are not limited to, “gross acreage,” “state,” and “county.” Subcategories for the category “royalties” may include, but are not limited to, “royalty on oil provision,” “royalty percentage on oil,” “royalty on gas provision,” “royalty percentage on gas,” and “cost fee royalty.” Subcategories for the category “shut-in” may include, but are not limited to, “shut-in royalty provisions,” “shut-in period,” and “shut-in price.” Subcategories for the category “pooling” may include, but are not limited to, “acreage limitations,” “recordation requirements,” “special field rules,” and “antientireties.” Subcategories for the category “title, assignment, and surrender” may include, but are not limited to, “lessor warranty,” “no warranty provisions,” “lessee furnished notice provisions,” “surrender obligations,” and “surrender notice requirements.” Subcategories for the category “retained acreage” may include, but are not limited to, “continuous drilling and operation provisions,” “force majeure,” “surface Pugh clauses,” and “depth clauses.” Subcategories for the category “surface provisions” may include, but are not limited to, “no surface usage provisions,” “geophysical and seismic provisions,” and “setback provisions.” Subcategories for the category “payments and audits” may include, but are not limited to, “payments and lessor interest clause provisions,” etc. The generated graphical user interface includes a display of a list of the categories and subcategories identified by the machine learning models. Each listed category and subcategory may be selected by the user and, when selected, will navigate a view of the document to the text associated with that category or subcategory. The text may be highlighted. For example, a graphical user interface can be provided listing categories on a first portion of the screen while the image of the document is displayed on another portion of the screen. For example, the user interface may display a “parties” category, and upon receiving a selection of the “parties” category, the user interface will display the portion of the image document containing the respective text (e.g., the text determined by the machine learning model to be pertaining to the “parties” category), and in some cases the respective text may also be highlighted. So, if the lessor and the lessee (parties to the lease) were in the first paragraph of a 10-page image document, upon receiving a selection of the category “parties,” the user interface would automatically display the first paragraph of the first page. An excerpt from an exemplary oil and gas document is presented below, to which various operations discussed below will refer in order to more clearly explain the disclosed embodiments.

TABLE 1: Exemplary Oil and Gas Lease Agreement Excerpt (“Example Document Text”)

3. Royalty.
(a) Delivery and Payment.
As royalty, lessee covenants and agrees:
(i) to deliver or cause to be delivered to the credit of Lessor, into the pipe line or other receptacle to which Lessee may connect its wells, 50% of all oil, condensate and liquid hydrocarbons produced and saved by Lessee from the Leased Premises, or from time to time, at the option of Lessor, Lessee shall sell Lessor's share of such oil, condensate or liquid hydrocarbons with Lessee's share and shall pay Lessor 50% of the Gross Proceeds (as hereafter defined) received by Lessee or any Affiliate of Lessee (as hereafter defined) from the sale of all oil, condensate and liquid hydrocarbons produced and saved from the Leased Premises;
(ii) to pay Lessor on gas and casinghead gas produced from the Leased Premises, payable on a well by well basis:
(1) when sold by Lessee in an arms-length sale to an unaffiliated third party, 75% of the Gross Proceeds received by Lessee from the sale of such gas and casinghead gas, or
(2) when sold to an Affiliate of Lessee, 25% of the Gross Proceeds, computed at the point of sale, from the sale of such gas by such Affiliate of Lessee; and
(3) when used by Lessee (other than for Operations on the Leased Premises as hereafter provided), 20% of the market value at the point of use.

FIG. 1 depicts one example of a system for processing an unorganized raw image document into an interactive list and document image 122 accessible on a user device 114. FIG. 2 depicts a method for generating a list of categories and subcategories related to and navigably linked to an image of an electronic document (such as an oil and gas contract). Referring to FIGS. 1 and 2, the system 100 receives a provided electronic document image 102 (operation 202) or, more generally, obtains electronic access to such a document through a file system, a database, and the like. In a typical implementation, the document will be one among many documents (104). In one possible example, the electronic document image 102 is that of an oil and gas legal document. For example, the system may receive an image of an oil and gas contract 104 including the Example Document Text of Table 1. In the example illustrated, the electronic document image 122 is stored in a system database or other memory provided by a machine learning services platform 106 as a remote device 110 (operation 204). The database can be a relational or non-relational database, and it will be apparent to a person having ordinary skill in the art which type of database to use or whether to use a mix of the two. In some other embodiments, the document image may be stored in a short-term memory rather than a database or be otherwise stored in some other form of memory structure. Documents stored in the system database may be used later for training new machine learning models and/or continued training of existing machine learning models through utilities provided by the machine learning services platform 106. The machine learning services platform 106 can be a cloud platform or locally hosted. In some embodiments, the machine learning services platform includes third-party commercial services (e.g., Amazon Machine Learning, Azure Machine Learning, Stanford NLP, etc.) which provide model generation and training. The system 100 then extracts text from the document image via, e.g., Optical Character Recognition (“OCR”) software of a storage and machine learning support 108 subsystem, and the text is associated with the location in the document image from where it was extracted (operation 206).
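The association of extracted text with its location (operation 206) can be made concrete with a short Python sketch. It stores OCR word records in the dictionary form the disclosure suggests, with field names in the style of the Table 2 JSON object shown below; the record values and the index itself are illustrative assumptions, not the patented implementation.

# A minimal sketch, assuming OCR word records in the style of Table 2 below.
words = [
    {"Top": 5.53, "Bottom": 6.10, "Left": 3.99, "Right": 7.07,
     "RangeStart": 0, "RangeEnd": 6, "Text": "Texas"},
    {"Top": 5.53, "Bottom": 6.10, "Left": 7.52, "Right": 13.02,
     "RangeStart": 6, "RangeEnd": 16, "Text": "Producers"},
]

# Location serves as the key and the extracted text as the value, per the
# Python dictionary implementation mentioned in the text.
location_index = {
    (w["Top"], w["Bottom"], w["Left"], w["Right"]): w["Text"] for w in words
}

print(location_index[(5.53, 6.10, 3.99, 7.07)])  # -> Texas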
The locations of the extracted text can be saved to the remote device 110 as image location data 123 specifically tied to the relevant document image 122. For example, text extracted from the beginning of the document image is associated with a respective location of the navigable document image portion of a user interface 113 rendered at a computing device. In some embodiments, the location-based association between the image document and the text may be implemented as a JSON object (as in Table 2 below) or a relational or non-relational data structure having a string variable and a collection of longs describing the distance of the area associated with the text from the edge of the document. The association may also be implemented in Python as a dictionary data structure, the location information serving as a key and the text as the value linked to that key. The above implementations are intended to be descriptive examples only, and other implementations will be apparent to a person having ordinary skill in the art.

TABLE 2: Exemplary JSON Object (“Example Data Structure”)

[{"Top":5.53,"Bottom":6.10,"Left":3.99,"Right":7.07,"RangeStart":0,"RangeEnd":6,"Text":"Texas"},
{"Top":5.53,"Bottom":6.10,"Left":7.52,"Right":13.02,"RangeStart":6,"RangeEnd":16,"Text":"Producers"},
{"Top":5.53,"Bottom":6.10,"Left":13.37,"Right":14.64,"RangeStart":16,"RangeEnd":19,"Text":"88"},
{"Top":5.53,"Bottom":6.10,"Left":15.05,"Right":17.42,"RangeStart":19,"RangeEnd":24,"Text":"Paid"},
{"Top":5.53,"Bottom":6.24,"Left":17.88,"Right":22.22,"RangeStart":24,"RangeEnd":32,"Text":"Up -Arc "}]

Revisiting the Example Document Text, here the text can be extracted from the stored PDF of the associated document that includes the Example Document Text by running OCR software on the PDF and outputting, from the OCR software, a data object containing each word of text of the document, including the Example Document Text, and a relative positioning of each word (e.g., “Royalty” along with an OCR character string offset ranging from 3 to 12 and Top, Bottom, Left, and Right image positions, in terms of percentage offset with respect to the corresponding image of the document, of 5.53, 6.10, 7.01, and 12.50, respectively). Machine learning models are then applied to the text to identify categories and subcategories for the text (operation 208). In one example, machine learning services 106 utilizes storage and machine learning support 108 to retrieve trained models 108 from the remote device 110, which may include a database or other data storage facility such as a cloud storage service. The machine learning models may identify categories and subcategories based on learned ontologies, which are taught to the models through training on batches of text from previous documents received by the system and from training data, which may be acquired during the initial deployment of the system or otherwise. A learned ontology can allow a machine learning model to identify a category or subcategory based on relationships between words, key words, and other factors determined by the machine learning algorithm employed, and will identify concepts and information embedded in the syntax and semantics of text. Thus, where a simple key word search of extracted text may not be capable alone of identifying a “lot description,” machine learning can be used to analyze the extracted text and identify the “lot description” based on a previously identified location of the lot (e.g., via the lot state, lessor state, applicable laws state, etc.)
to identify probable formats for the lot description and/or other qualities of the text (e.g., proximate categories, such as lessor name, or related categories, such as state, and the like). In another example, a “shut-in” provision may not include a header titling it “shut-in” and may not use the words “shut-in” in the provision. Thus, the machine learning models may process the extracted words to identify whether a portion of the extracted text is a “shut-in” provision based on the use of similar words (e.g., “gas not being sold”) or the use of sets of similar words in proximate locations (e.g., “gas not being sold,” “capable of producing,” and “will pay”) to identify a category. The machine learning algorithm employed may be a neural network, a deep neural network, a support vector machine, a Bayesian network, a combination of multiple algorithms, or any other implementation that will be apparent to a person having ordinary skill in the art. Referring to the Example Document Text, the machine learning model may assign the category “royalties on oil and gas” to the Example Document Text. In such a case, the machine learning model has learned an ontology mapping the text to the category “royalties on oil and gas” but not to “other royalties” or various other categories. Subcategories may be mapped by the machine learning models having learned other ontologies as well. For example, the text of paragraph 3(a)(i) of the Example Document Text may cause the machine learning models to identify a “royalty on oil provision” subcategory. The method depicted in FIG. 2 may then associate identified categories and subcategories with location data related to the document image (operation 210). The location data may be the earlier discussed location data associated with the particular text which caused the machine learning model to output the particular category. In other words, one or more anchors may be identified in the extracted text through visible and/or hidden tags such as a category or subcategory. This location data maps the locations within the document image to the one or more categories or subcategories. For example, the identified category “royalties on oil and gas” may be associated with the page offset values for each of the words in the Example Document Text. The word “Royalty” may be associated with the 5.53, 6.1, 7.01, and 12.5 page edge offsets discussed above; the next word, “(a),” may be associated with Top, Bottom, Left, and Right image positions of 7.10, 7.67, 6.85, and 7.15, respectively; etc. The aggregated values of the adjacent words can then be mapped to “royalties on oil and gas,” resulting in a set of location values of 5.53, 18.9, 6.10, 93.42 being associated with the category. The subcategory, “royalty on oil provision,” can be similarly mapped. Once categories and subcategories have been mapped, a navigable display of categories and subcategories in operative linkage can be displayed alongside a navigable version of the document image (operation 212). Operatively linked categories and subcategories allow a user to navigate the document image by selecting the categories and subcategories rather than directly manipulating the document image. In other words, operation 212 can produce a graphical user interface such as that depicted in FIG. 5 and FIG. 6 and discussed below.
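One plausible reading of the aggregation step just described is a min/max over the per-word page offsets; the sketch below reproduces the 5.53, 18.9, 6.10, 93.42 example. The values for the remaining words are assumed so that the arithmetic lands on the figures quoted above.

# Hypothetical sketch: aggregate per-word page offsets into one bounding
# region for a category, as in the "royalties on oil and gas" example.
def aggregate_region(word_boxes):
    """word_boxes: list of (top, bottom, left, right) percentage offsets."""
    tops, bottoms, lefts, rights = zip(*word_boxes)
    # The smallest top/left and largest bottom/right enclose every word.
    return (min(tops), max(bottoms), min(lefts), max(rights))

boxes = [
    (5.53, 6.10, 7.01, 12.50),   # "Royalty"
    (7.10, 7.67, 6.85, 7.15),    # "(a)"
    (7.10, 18.90, 6.10, 93.42),  # remaining words (assumed values)
]
print(aggregate_region(boxes))  # -> (5.53, 18.9, 6.1, 93.42)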
The categories and subcategories, along with their associated location data, can be stored and transmitted by various mechanisms, including a JavaScript Object Notation (“JSON”) object having a field containing the category, a field containing the document page or pages covered by the category, and another field containing the locational information described above and tied to the respective page of the document. The described JSON object is just one embodiment and is to be taken as a non-limiting example. Other embodiments will be apparent to a person of skill. FIG. 3A and FIG. 3B show a paragraph orchestration 302 and a sentence orchestration 312, respectively, for processing paragraphs of text and sentences of text. In one aspect, the paragraph and sentence orchestrations may be run independent of each other. In another, outputs of the paragraph orchestration of FIG. 3A may be included as inputs to the sentence models of FIG. 3B. As depicted by FIG. 3A, a system implementing the method 302 first receives extracted text (operation 304). In the depicted embodiment, the text may be received following a preceding operation 206 extracting text data from a document image. In one aspect, the extracted text may constitute a quantity and/or organization of text making up a paragraph. Generally, the paragraph orchestration 302 receives the paragraph text from a parser which has processed a larger text into distinct paragraphs, such as that described below in regard to FIG. 4. The extracted text may then be fed into machine learning models, each model trained to identify particular categories from text inputs (operation 306). The models may be support vector machines (“SVMs”), Long Short-Term Memory networks (“LSTMs”), convolutional neural networks (“CNNs”), recurrent neural networks (“RNNs”), Naïve Bayes, other machine learning models which will be apparent to a person of ordinary skill in the art, or a mixture of multiple and different models. In one aspect, the models output a score corresponding to a degree of confidence the model has that the ingested text reflects the category it is trained to identify (further discussed below in reference to FIG. 4). Those models producing a score above a particular threshold will be considered to have identified a category from the text. In other embodiments, the model or models may produce a Boolean (i.e., true or false) associated with achieving a threshold, or another value, as will be apparent to a person of ordinary skill in the art. Turning to FIG. 3B, a sentence orchestration 312 is depicted. A sentence orchestration 312 operates on similar principles to the paragraph orchestration 302 discussed above. The sentence orchestration 312 first receives text extracted from a sentence (operation 314). Generally, the sentence orchestration 312 receives extracted text from a parser specialized in processing text into constituent sentences, such as that discussed below in reference to FIG. 4. The text is fed into a plurality of models which are each trained to identify particular subcategories (operation 316), and then those models may output identified subcategories according to which models produce a score from the ingested text above a certain threshold (operation 318). As with operation 306, the models of operation 316 may be SVMs, LSTMs, CNNs, RNNs, Naïve Bayes, various other machine learning models, or a mixture of models as will be apparent to a person of ordinary skill in the art.
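A minimal sketch of the scoring-and-threshold orchestration follows. Each “model” here is a stand-in callable returning a confidence score; real implementations might be the SVMs, LSTMs, CNNs, or RNNs named above, and the 0.80 cutoff is an assumed value.

THRESHOLD = 0.80  # assumed confidence cutoff

def run_orchestration(text, models, threshold=THRESHOLD):
    """models: dict mapping a category name to a scoring function."""
    identified = []
    for category, score_fn in models.items():
        if score_fn(text) >= threshold:  # only confident categories are kept
            identified.append(category)
    return identified

# Toy scoring functions standing in for trained classifiers.
models = {
    "royalties on oil and gas": lambda t: 0.93 if "royalty" in t.lower() else 0.1,
    "shut-in": lambda t: 0.05,
}
print(run_orchestration("3. Royalty. (a) Delivery and Payment...", models))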
In some embodiments, the sentence models of the sentence orchestration 312 can also receive the output categories from the paragraph orchestration 302 performed earlier on the paragraph text from which the sentence text was extracted. In some embodiments, the paragraph orchestration 302 can receive a paragraph of text (i.e., a contiguous block of text potentially containing multiple sentences) and the sentence orchestration 312 may then receive the individual sentences of that text and the output of the paragraph orchestration 302 at operation 314. A sentence may be read, or tokenized, based on a variety of rules, such as creating a discrete sentence from all text before a period, all text on a single line, all text before a semicolon, or many other rules as will be apparent to a person of ordinary skill in the art. Once paragraph categories are received, for example, the sentence models used may be limited to particular models specialized at identifying subcategories of the received categories. Such models may be specialized by training them only on data related to a particular category such as, e.g., “royalties on oil and gas.” Processing the Example Document Text from Table 1 above, the paragraph orchestration 302 receives the entirety of the text contained within the Example Document Text (i.e., all of “3. Royalty.” and the text following within section “3”) at operation 304. In some embodiments, each line break may be processed as a new paragraph. In other embodiments, various other particular characters or groupings of characters may cause the system to process the respective text as a new paragraph. In these other embodiments, the respective components of the Example Document Text from Table 1 above, such as, e.g., “3. Royalty,” may each be provided to the paragraph orchestration 302 one at a time and as individual paragraphs. The text is then fed into the machine learning models of operation 306, a model trained to identify the “royalties on oil and gas” category among them. Of the models, only “royalties on oil and gas” achieves the confidence threshold, and so operation 308 outputs “royalties on oil and gas” as a category for the paragraph. As the Example Document Text continues into the sentence orchestration 312, seven sentences may be tokenized from the paragraph based on a rule tokenizing sentences as words before a period, colon, or semicolon. Tokenized sentences of the Example Document Text may include, among other sentences, “(i) to deliver or cause to be delivered to the credit of Lessor, into the pipe line or other receptacle to which Lessee may connect its wells, 50% of all oil, condensate and liquid hydrocarbons produced and saved by Lessee from the Leased Premises, or from time to time, at the option of Lessor, Lessee shall sell Lessor's share of such oil, condensate or liquid hydrocarbons with Lessee's share and shall pay Lessor 50% of the Gross Proceeds (as hereafter defined) received by Lessee or any Affiliate of Lessee (as hereafter defined) from the sale of all oil, condensate and liquid hydrocarbons produced and saved from the Leased Premises;”. In one embodiment, the sentences may be fed by themselves into the sentence models of operation 316. In any case, operation 318 may produce a “royalty on oil provision” subcategory after ingesting the sentence extracted from the Example Document Text. FIG. 4 depicts a system architecture and method for natural language processing of documents in one specific implementation geared toward legal documents, such as those for the oil and gas industry.
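The tokenization rule described above (a sentence is the text before a period, colon, or semicolon) can be sketched in a few lines of Python; the regex below is one assumed reading of that rule, not the disclosed parser.

import re

def tokenize_sentences(paragraph):
    # Split on '.', ':', or ';' and drop empty fragments.
    parts = re.split(r"[.:;]", paragraph)
    return [p.strip() for p in parts if p.strip()]

text = "As royalty, lessee covenants and agrees: (i) to deliver ...; (ii) to pay ..."
for sentence in tokenize_sentences(text):
    print(sentence)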
To begin, an OCR module 404A accesses a document 402 and generates a text file 404B. A paragraph parser 406A then processes the text file 404B into paragraph text strings 406B, which are passed downstream. The paragraph parser 406A extracts discrete paragraphs of text from the text file 404B. The paragraph parser 406A may utilize a variety of parsing rules. In one aspect, all text between empty lines may be extracted as a discrete paragraph. In another, text following a numeral, and before a sequential numeral, may be extracted as a discrete paragraph. In yet another, machine learning models may be trained to identify paragraphs in a text, which are then each extracted as discrete paragraphs. Other embodiments will be apparent to a person of ordinary skill in the art. The OCR module 404A can also output a text file 404C to a profiler 428A, which can provide a comparative analysis of the received oil and gas legal document 402 against documents previously received by the system 400 to group documents by statistical factors unlikely or impossible to be considered by a human analyst (further discussed below). The paragraph parser 406A can, after receiving or otherwise accessing the text file 404B, generate paragraph text strings 406B for subsequent processing by a paragraph classifier 408A. The paragraph parser 406A can also generate and output paragraph text strings 406C to a paragraph text database 416A for storage and later training, and for access by the paragraph classifier. The paragraph classifier 408A applies paragraph models 420B to the paragraph text strings 406B. The paragraph models 420B can be first trained, and regularly trained thereafter, by a paragraph model trainer 418A, as will be further discussed below. Having fed the paragraph text strings 406C to the paragraph models 420B, the paragraph classifier 408A produces a data object 408B containing a paragraph text string and labels of paragraph categories identified by the paragraph models 420B. A sentence parser 410A receives the data object 408B from the paragraph classifier 408A. Applying tokenization rules to the paragraph text string of the data object 408B, the sentence parser generates sentence text strings 410C from the paragraph text, which may be stored in a sentence text database 422A. The sentence text strings may then be processed by a sentence classifier or used by a sentence model trainer 424A, further discussed below. The sentence parser 410A may also generate and provide to the sentence classifier 412A a data object 410B containing the category labels identified by the paragraph classifier 408A and the sentence text strings 410C. The sentence classifier 412A applies sentence models 426B, which may be retrieved from a sentence model database 426A and may be limited to models which are specialized in the categories identified upstream by the paragraph classifier 408A. The sentence classifier 412A can perform operations 316 and 318 to output a data object 412B. The data object 412B contains the category labels produced by the paragraph classifier 408A, the sentence text strings 410C produced by the sentence parser 410A, and the subcategory labels identified by the sentence classifier 412A. A document mark-up module 414A may receive the data object 412B. The document mark-up module 414A associates the categories and subcategories, held by the data object 412B, with locations within an image of the received oil and gas document 402 from which the text causing the production of the categories and subcategories was extracted.
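The first parsing rule named above (all text between empty lines is a discrete paragraph) is easy to illustrate; the sketch below is an assumed rendering of that one rule, and the numbered-section and learned-model rules would substitute a different split.

def parse_paragraphs(text_file_contents):
    # An empty line marks a paragraph boundary under this rule.
    blocks = text_file_contents.split("\n\n")
    return [b.strip() for b in blocks if b.strip()]

sample = "3. Royalty.\n(a) Delivery and Payment.\n\n4. Shut-in.\n..."
print(parse_paragraphs(sample))  # -> two paragraph text strings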
Categories and subcategories may be consolidated into a list of unique values (i.e., no category or subcategory is listed twice) and the above mappings associated with those unique values. For example, where a category is identified twice and associated with different location information between each identification, the document mark-up module 414A produces a list containing the category only once but associated with both of the locations just described. The document mark-up module 414A may output a data object 414B containing the sentence text strings, a list of unique entries of categories and subcategories, and the above-described location mappings associated with the unique entries of categories and subcategories. A display 450 may receive the data object 414B and render an interface as depicted in FIGS. 5 and 6 and detailed below. The system 400 depicted in FIG. 4 can include data flows for training both the paragraph models 420B and the sentence models 426B. The paragraph text database 416A may receive paragraph text strings 406C from the paragraph parser 406A and store the paragraph text strings 406C for use by a paragraph model trainer 418A. The paragraph model trainer 418A can train the paragraph models 420B which are used to identify categories by the paragraph classifier 408A. This training may be supervised or unsupervised, as will be apparent to a person of ordinary skill in the art. Supervised training generally includes human subjects reviewing the paragraph text strings 416B and providing category labels to them based on their individual experience and knowledge. Training may also include human subjects reviewing categories generated by the paragraph models 420B and assigning a success or failure value to the model based upon the categories identified. For example, were the system to categorize a term duration, e.g., “the term of the lease shall be 10 years from this date,” as “party,” the human subject would assign a failure value to the model; alternatively, were the system to categorize “the term of the lease shall be 10 years from this date” as a “term,” the human subject may assign a success value. The paragraph model trainer 418A may then learn from these human-assigned values by stochastic gradient descent (“SGD”), coordinate descent, or other algorithms which will be apparent to a person of ordinary skill in the art. Once training is complete, updated paragraph models 418B are provided to a paragraph model database 420A. The models stored in the paragraph model database 420A can then be provided as paragraph models 420B to the paragraph classifier 408A. The paragraph model trainer 418A may be run on a variety of schedules, including only once to first train the models, nightly, weekly, monthly, or at any other frequency that will be apparent to a person of ordinary skill in the art. A sentence text database 422A may receive sentence text strings 410C from the sentence parser 410A. The sentence text strings 410C may then be stored for later use by a sentence model trainer 424A. The sentence model trainer 424A may be run according to a range of schedules which will be apparent to a person of ordinary skill in the art. When run, the sentence model trainer 424A receives sentence text strings 422B from the sentence text database 422A. The sentence model trainer 424A may include the same training mechanisms described above in regard to the paragraph model trainer 418A.
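The consolidation step performed by the mark-up module 414A, where a category appears once but carries every location at which it was identified, admits a very small sketch; the data shapes below are assumptions for illustration.

from collections import defaultdict

def consolidate(labelled_regions):
    """labelled_regions: iterable of (category, location) pairs."""
    merged = defaultdict(list)
    for category, location in labelled_regions:
        merged[category].append(location)  # same category, every location kept
    return dict(merged)

regions = [
    ("retained acreage", (10.2, 14.8, 5.0, 92.1)),
    ("retained acreage", (40.7, 45.3, 5.0, 91.8)),  # second identification
    ("shut-in", (22.0, 26.4, 5.0, 90.0)),
]
print(consolidate(regions))  # "retained acreage" listed once, two locations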
The sentence model trainer 424A may then produce updated sentence models 424B which are then provided to the sentence model database 426A for later use by the sentence classifier 412A. The system 400 may also include a profiler 428A for identifying documents, across the history of documents received by the system, which are similar to the received document 402. The OCR module 404A may transmit a text file 404C containing the text data of the document 402. The profiler 428A may then identify a document profile 428B matching the document 402 to other documents previously received and profiled by the system 400 by applying profile models 434B to the received text file 404C. The profiler 428A may also send a text file 428C to a document database 430A for storage and later use in generating and training profile models 434B. The text file 428C may be a duplicate of the text file 404C received by the profiler 428A from the OCR module 404A. The document database 430A may send a collection of all documents 430B to a clustering service 432A for generating updated profile models 432B. The updated profile models 432B may be generated on a nightly, weekly, monthly, or any other basis as will be apparent to a person of ordinary skill in the art. The clustering service 432A may apply clustering algorithms to identify similar documents along a variety of factors. Various cluster analysis algorithms may be used, such as K-means, though other algorithms will be apparent to a person of ordinary skill in the art. Generally, the clustering service 432A utilizes unsupervised machine learning to create groups of documents and so may determine similarity based on factors unlikely or even impossible to be considered by a human analyst, such as, e.g., cosine similarity of vectorized documents. The clustering service 432A may then send updated profile models 432B to a profile model database 434A. The profile model database 434A may then provide the profile models 434B to the profiler 428A for use in generating a document profile 428B by applying the profile models 434B to the text file 404C. The document profile 428B may then be provided to the display 450 for presentation to the user. Returning again to the Example Document Text, an example execution of the system is described below. The Example Document Text can be received by the OCR module 404A, producing a text file 404B containing the text of the Example Document Text (along with the remainder of the text of the document not depicted). The paragraph parser 406A may then receive the text file 404B of the Example Document Text and produce paragraph text strings 406B and 406C containing, e.g., the text provided in the Example Document Text as a single paragraph. The paragraph text strings 406C will be preserved in the paragraph text database 416A for use in training the paragraph models 420B. The paragraph classifier 408A receives, e.g., the text provided in the Example Document Text and applies the paragraph models 420B to the text, producing the data object 408B containing the paragraph text string 406B of the Example Document Text and, e.g., the category “royalties on oil and gas.” The sentence parser 410A receives the data object and produces sentence text strings 410C which are stored in the sentence text database 422A for training the sentence models 426B. The data object 410B is also produced and sent to the sentence classifier 412A.
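Returning to the clustering service 432A described above, one plausible reading is vectorizing document text and grouping with K-means. The sketch below uses scikit-learn's TF-IDF features, which the disclosure does not name; the library choice, feature type, and k=2 are all assumptions.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = [
    "oil and gas lease with royalty and shut-in provisions",
    "lease agreement covering royalties on oil and casinghead gas",
    "assignment of surface rights and seismic permissions",
]

# Vectorize the documents, then cluster; similar leases should share a label.
vectors = TfidfVectorizer().fit_transform(documents)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)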
As the Example Document Text continues to be processed by the system 400, the sentence classifier 412A produces the data object 412B by applying particular sentence models, the sentence models determined by the categories produced by the paragraph classifier 408A above, to the sentence text strings contained in the data object 410B. The sentence text strings here can include, e.g., the text of 3(a)(i) of the Example Document Text. Having applied the sentence models 426B to the text of 3(a)(i) of the Example Document Text, the sentence classifier 412A can include, e.g., the subcategory “royalty on oil provision” in the data object 412B, among other subcategories associated with other sentence strings produced from the Example Document Text. The document mark-up module 414A then receives the data object 412B having, e.g., the category “royalties on oil and gas,” the subcategory “royalty on oil provision,” and the sentence text strings of the Example Document Text. The document mark-up module 414A generates a data object to be sent to the display 450 that contains, e.g., the above-recited items and mappings of the sentence text strings to an image of the Example Document Text so that an interface may be provided to a user in the vein of FIG. 6, described below. Detailed descriptions of FIG. 5 and FIG. 6 follow, wherein the Example Document Text is again used to convey a greater understanding of the methods and system disclosed herein. FIG. 5 depicts an exemplary user interface 500 generated according to the various systems and methods discussed herein, and in this example, as displayed on a computing device 502 before any category has been selected. The user interface 500 includes a list of categories 510. As depicted in FIG. 5, three unique categories 504A (royalties on oil and gas), 506A (shut-in provisions), and 508A (retained acreage) are shown; however, by using a scroll bar 514, a user may move into and out of view of the entirety of the list of categories 510. The section of the user interface displaying the selectable categories is independently scrollable from other parts of the user interface. An image of a received document 512, e.g., oil and gas legal document 402, is displayed in a section of the user interface adjacent the section displaying the list of categories 510. Returning to the list of categories 510, the category 504A (royalties on oil and gas) includes a text snippet 504B. The text snippet 504B provides a partial image of the text from which the paragraph classifier 408A identified the associated category 504A. In particular, the text snippet 504B is only partially displayed because it is not actively selected. In one embodiment, the text snippet 504B may be selected by a user, e.g., with a mouse click, and the text snippet 504B expands to display the entirety of the paragraph text associated with the category (not depicted). Further, the text snippet 504B may be operably linked to the document image 512 so that selecting the text snippet 504B causes the document image 512 to scroll to a position where the text location 504C matching the text snippet 504B is at the top of the visible document portion (not depicted). In some embodiments, the list of categories 510 may be operably linked to the document image 512 so that selecting, e.g., the category 504A (royalties on oil and gas) causes the document image 512 to scroll to a position where the text location 504C of the matching text snippet 504B associated with the selected category 504A is at the top of the visible document.
As can be seen, the list of categories 510 may include categories not currently visible in the document image 512, such as the category 506A (shut-in provisions) and its associated text snippet 506B. As described above, upon selection of the text snippet 506B by a user, the document image 512 will shift to a view having the text of 506B at the top of the visible document. The list of categories 510 may further include multiple text snippets 508B, 508C, 508D which each caused, e.g., the paragraph classifier 408A to identify the respective category. As depicted in FIG. 5, the category 508A (retained acreage) was identified from three or more paragraph text strings 406B, causing the category 508A to include the text snippets 508B, 508C, 508D displaying the same text content as that contained in the respective paragraph text strings 406B, for example. FIG. 6 depicts another embodiment of an exemplary interface 600 generated according to the various systems and methods discussed herein, and in this example, as displayed on a computing device 602. The user interface 600 includes a list of categories 610. As depicted in FIG. 6, two unique categories 606A (Parties) and 608A (Term) are shown. An image of a received document 612, e.g., oil and gas legal document 402, is displayed in a section of the user interface adjacent the section displaying the list of categories 610. The interface 600 may allow navigation of the list of categories 610 and/or the document image 612 individually or jointly. For example, scrolling through the list of categories 610 may not scroll through pages of the document image 612. The identified categories 606A (Parties) and 608A (Term) also contain lists of subcategories 609A, 609B respectively nested within each. As depicted in FIG. 6, the lists of subcategories are among those identified by the sentence classifier 412A of the system 400. The nested subcategory list 609A contains all identified subcategories 606B(i) (lessor name), 606C(i) (lessee name), 606D(i) (lessor address), and 606E(i) (lessee address) associated with the category 606A (Parties). The nested subcategory list 609B contains the identified subcategories 608B(i) (lease date) and 608C(i) (primary term) within the current view of the list of categories 610. A user may scroll down the list to reveal more subcategories using a scroll bar 620. Associated with each subcategory is a text snippet 606B(ii), 606C(ii), 606D(ii), 606E(ii), 608B(ii), 608C(ii) that displays the text from which the sentence classifier 412A generated the respective subcategory. Categories 606A (Parties) and 608A (Term) are interactable, and a user may click on them to collapse the respective nested subcategory lists 609A, 609B. Upon being collapsed, the collapsed category label is visible, e.g., “Parties,” but the respective subcategories and associated text snippets are not, e.g., “lessor name,” “Alfred Landowner,” “lessee name,” “Oil Company, Inc.,” etc. Each text snippet 606B(ii), 606C(ii), 606D(ii), 606E(ii), 608B(ii), 608C(ii) may be operably linked to the document image 612 and, upon being selected by a user by clicking on it, cause a portion of the document image 612 to scroll to a position where the text location of the respective snippet is in view. Further, selecting the text snippets may cause the associated text in the document image 612 to be highlighted or otherwise denoted by a marking overlay. Here, the text snippet 608B(ii) (lease date) has been selected, and so a portion of the document containing the text 618B (20th day of December), matching the contents of the relevant text snippet 608B(ii), has been highlighted.
Similarly, a portion of the document 616E (910 Location Ave, Metroburg, DE, 11121) may be highlighted by the user clicking on the mapped text snippet 606E(ii) associated with the subcategory 606E(i) (lessee address) containing the same text content. FIG. 7 depicts one embodiment of an architecture 700 implementing the system 400 over a cloud resource provider 708. As depicted, a terminal 702 uploads a document 704 to the cloud resource provider 708, where a version 710 is sent to a server 712 running OCR 714. The OCR 714 may extract text from the document 710 and send it back to the cloud service provider 708. Once the cloud service provider 708 receives the text 716, a text copy 720 may be sent to a server 722 running a paragraph classifier 724. The paragraph classifier 724 can then provide a list of identified categories back to the cloud resource provider 708. The cloud resource provider 708 may also provide a text copy 730 to a server 732 running a sentence classifier 734. The sentence classifier 734 may then provide a list of identified subcategories 736 back to the cloud resource provider 708. The paragraph classifier 724 and the sentence classifier 734 may be run in sequence or in parallel because they are run by separate servers 722, 732. In another embodiment (not depicted), multiple servers may each run an instance of a sentence classifier 734 or a paragraph classifier 724 so that each instance may receive, e.g., a sentence text string 410C or a paragraph text string 406B, respectively, and distribute the computing task of the methods 302, 312 across multiple devices in order to speed up completion of the method 200. The cloud resource provider 708 may provide an interface 706 to the terminal 702. The interface 706 may be, e.g., a data object 414B containing text, lists of categories and subcategories, and mappings for an image of the document. Once received, the terminal 702 may perform the operation 212 and render, e.g., the interface 600 for the user. In various embodiments, the terminal 702 may be a personal computer, thin computer, laptop, mobile device, or other computing device as will be apparent to a person of skill in the art. FIG. 8 illustrates an embodiment of a user interface 800 to facilitate queries and report generation based on a plurality of documents. Allowing searching across a plurality of documents based on the extracted text and/or associated categories may facilitate review (e.g., by users) of a plurality of documents in a quick and efficient manner. For example, unlike conventional systems that rely on OCR and key word searches, a user may not need to search each possible term that might identify a provision but rather select the category for a provision through the user interface 800. As depicted, categories and/or properties of categories (e.g., key words, values, subcategories, etc.) can be provided by a user in the form of one or more filters 812 and 840. In response, a query may be generated which will retrieve all documents matching an aggregation of the filters 812 and 840. The user can specify the aggregation method (e.g., Boolean AND, Boolean OR, etc.) via a Boolean tab 830. For example, where a first filter 812 and a second filter 840 are applied with respective Boolean tabs 830 set to “and,” only those documents satisfying both the first filter 812 and the second filter 840 will be retrieved and provided to the user. In comparison, where the Boolean tab 830 is set to “or,” so long as a document satisfies either of the applied filters 812 and 840, the document will be retrieved.
A directory selection field 802 informs the system where to look for the specified documents. Here, the directory selection field 802 is set to a directory named “2017 Leases.” In some embodiments, a user may select a directory by typing an associated address into the field, such as “C:\Users\admin\Documents\” and the like. In some embodiments, a user may open a browser window by selecting the directory selection field 802 and navigate to the correct folder by mouse clicks (not depicted). A user can enter exact keyword matches through a document text field 810. Only documents containing text identical to that entered into the document text field 810 may be returned. Where no content is provided to the document text field 810, the generated query may ignore exact text matches. Where text is provided along with filters, the document text field 810 can operate as another filter included in the resultant query, returning only documents that satisfy the filters 812, 840 and contain text matching that entered into the document text field 810. In some embodiments, the document text field 810 can allow for Boolean search arguments. The filters 812, 840 may each include a field label 806, an operator label 808, and a value label 820. The field label 806 denotes a search field, such as a document type 826, category, or subcategory, on which to query. Where multiple types of documents are available for searching, the document type 826 selection may be used to limit the search to only a single type of document as defined by the value label 820. The value label 820 may be responsive to the field label 806 selection. For example, where document type 826 is selected, only values associated with the type of documents may be selected in the value label 820, such as “Oil and Gas Lease” 832. In some embodiments, aesthetic text 814 may be included in a filter 812, 840 to increase the intuitiveness of the interface. Here, “Where” is provided as aesthetic text 814 so that users may be aware that the filter 812 applies “where” a document type 826 is an oil and gas lease 832. The operator label 808 may be selectable responsive to the field label 806 selection. For example, where document type 826 is the selected field label 806, no operator label 808 is available for selection and instead a dummy value, such as “−,” is selected. In the case of a different field label 806 being selected, such as “Surface Pugh Clause” 818, the operator label 808 may provide a selection including the “Exists” selection. Further restriction can be applied to the filters 812, 840 using the field label 806. A “Surface Pugh Clause” may be selected as an additional field label 806. In some embodiments, multiple field labels 806 can be selected. Here, a selection 832 includes “Surface Pugh,” “Depth,” and “Continuous Operations.” Responsive to the field label 806 selection, an “Exists” operator 836 can be selected. The value label 820 can determine a treatment of the operator label 808. For example, the “Exists” operator 836 may cause the value label 820 to allow selection of either “True” 834 or “False” (not depicted). The selection of the “True” 834 value label 820 may cause results of a search including just the filter 840 to include only oil and gas lease documents containing text categorized and subcategorized as Surface Pugh, Depth, or Continuous Operations. In some embodiments, an inverse treatment can be selected by selecting “False” for the value label 820, in which case only documents not containing the selected categories and/or subcategories may be returned.
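A hypothetical sketch of aggregating such filters under a Boolean tab follows; documents are modeled as dictionaries and each filter as a predicate, which is an illustrative simplification of the interface described above.

def matches(document, filters, boolean_tab="and"):
    # "and" requires every filter to pass; "or" requires at least one.
    results = (f(document) for f in filters)
    return all(results) if boolean_tab == "and" else any(results)

docs = [
    {"type": "Oil and Gas Lease", "categories": {"surface pugh", "depth"}},
    {"type": "Oil and Gas Lease", "categories": {"royalties on oil and gas"}},
]
filters = [
    lambda d: d["type"] == "Oil and Gas Lease",   # document type filter
    lambda d: "surface pugh" in d["categories"],  # "Exists" set to True
]
print([d for d in docs if matches(d, filters, "and")])  # first document only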
Further, each filter 812, 840 may be removed by selecting a delete icon 816, 828. In some embodiments, the delete icon 816, 828 can remove the entire filter, irrespective of the number of, for example, field label 806 selections included (e.g., “Surface Pugh Clause” 818 or “Surface Pugh, Depth, Continuous Operations” 832). In some other embodiments, the delete icon 816, 828 can remove the most recently added selection every time the delete icon is selected. In some embodiments, documents may include conflicting information associated with a category in different portions of the document. FIG. 9 depicts a method 900 for performing a conflict check on a list of categories and subcategories and can be included as part of the machine learning services platform 106. FIG. 10 depicts a system 1000 for the ordering and processing of a plurality of documents including an original document and one or more documents that amend or modify the original document. The list of categories and subcategories may be generated from an original document and one or more amending documents. The system 1000 includes the system 400 as a subsystem. The method 900 can be performed in addition to, and concurrently with, the method 200, and the operations of the method 900 may be interleaved with the operations of the method 200, as will be apparent to a person having ordinary skill in the art. Thus, the system 1000 may identify portions of an amendment as the same category and/or subcategory as portions of the main body of a respective document. Referring now to FIGS. 9 and 10, a conflict check module 1050 may receive an original document and one or more amending documents along with a list of categories and subcategories identified in the received documents (operation 902). An ordering module 1010 may chronologically sort the documents beforehand, as further discussed below in reference to FIG. 11, to provide ordered documents 1005 to the document processing system 400 to identify categories and subcategories contained within the documents. In some embodiments, the identified categories and subcategories may already be associated with the text locations of the respective sections of text upon which they were identified (operation 210). In some other embodiments, where the identified categories of the received documents have yet to be associated with locations of text within the documents, the method 900 may be performed before operation 210 so that locations in the document images can then be associated with the identified categories and subcategories after the conflict check method 900 has provided an updated and accurate list of categories and subcategories. The conflict check module 1050 may then identify categories and subcategories which appear multiple times across the set of received documents (operation 904). In some embodiments, this identification can be accomplished by incrementing a value associated with a category or subcategory every time that category or subcategory is seen for the first time in a document. For example, “royalties on oil and gas” may be identified in the original contract document and repeated in a third amending document, which causes a value associated with “royalties on oil and gas” to increment. Categories and subcategories that are repeatedly identified across multiple documents may represent amendments and/or modifications.
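The counting mechanism of operation 904 (increment a category's value on its first sighting in each document) can be sketched directly; the document representation as per-document category sets is an assumption.

from collections import Counter

def count_repeats(documents_categories):
    counts = Counter()
    for categories in documents_categories:  # one set of categories per document
        counts.update(set(categories))       # counted once per document
    return counts

ordered_docs = [
    {"royalties on oil and gas", "term", "parties"},  # original contract
    {"surface"},                                      # first amendment
    {"royalties on oil and gas"},                     # third amendment
]
counts = count_repeats(ordered_docs)
print([c for c, n in counts.items() if n > 1])  # -> ['royalties on oil and gas']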
Categories and subcategories that are not identified as repeating across multiple documents (i.e., the associated value discussed above is “1”) may be provided to the document mark-up 414A (operation 908), where associated text locations may be identified and rendered to a user through the display 450 as discussed above in reference to FIG. 4. In contrast, categories and subcategories that are identified as repeating across multiple documents (i.e., the associated value discussed above is “2” or higher) may be further processed by the conflict check module 1050 to identify categories and subcategories, among the repeated categories and subcategories, which are associated with conflicting language in the respective text which caused the system 400 to identify the category or subcategory each time in the first place (operation 906). For example, the category “royalties on oil and gas” may be associated with text in the original document and also be associated with text in the third amending document, as discussed above. The associated text of the original document and of the third amending document may explicitly conflict (i.e., language expressly describes the text of the third amending document as replacing the text of the original document), or the associated text may implicitly conflict due to contradictory language (e.g., each document describes identical royalties but at different percentages). Implicitly conflicting language may be identified by a trained machine learning model which has been trained to identify certain ontologies and which may identify the associated text as ontologically overlapping or, in other words, containing conflicting semantic content. In some embodiments, this overlap may be identified by vectorized words, sentences, and/or paragraphs occupying largely similar coordinate space in an SVM. It is to be understood that various tools and utilities for recognizing conflicting semantic content may be utilized, including, but not limited to, machine learning models, traditional rules-based techniques, or a mixture of the two. For example, a rule may exist that any provisions identified in an amendment that includes the words “The provisions found herein replace entirely and absolutely all prior versions” necessarily identify a conflict for all categories and subcategories identified for that document. In another embodiment, machine learning can be applied to detect statements having largely similar semantic content to “The provisions found herein replace entirely and absolutely all prior versions,” such as “If there is conflict between this amendment and the Agreement or any earlier amendment, the terms of this amendment will prevail,” and thus can apply the rule even when the language is not identical. In the case that identified repetitive categories and/or subcategories are not associated with conflicting language in the respective source text, those categories and subcategories, along with the associated text, are provided to the document mark-up 414A (operation 908). For example, categories and subcategories can repeat and not be associated with conflicting text where a later document provides additional parties to an agreement or where other material has been added in addition to the original document rather than replacing it.
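The literal rule quoted above can be rendered as a simple pattern check; this sketch covers only the rule-based branch, and a learned model would extend it to semantically similar statements as the disclosure notes.

import re

# Hypothetical pattern for the example replacement language quoted above.
REPLACEMENT_PATTERN = re.compile(
    r"replace[sd]?\s+entirely\s+and\s+absolutely\s+all\s+prior\s+versions",
    re.IGNORECASE,
)

def explicitly_conflicts(amendment_text):
    return bool(REPLACEMENT_PATTERN.search(amendment_text))

print(explicitly_conflicts(
    "The provisions found herein replace entirely and absolutely "
    "all prior versions"
))  # -> True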
Where the repeated categories and subcategories are associated with conflicting language, the text from the less recent document (e.g., the original document) is disassociated from the category or subcategory, and the association of the text of the most recent document with the category or subcategory is maintained (operation 910). The updated categories and/or subcategories are then provided to the document mark-up 414A (operation 908). In some embodiments, the previous versions of the identified categories and subcategories may also be provided to the document mark-up 414A in order to provide a version history of a provision to the user. With respect to FIG. 10, a collection 1002 of contract documents 1004A-D is first received by an ordering module 1010. The documents 1004A-D may be received in any order, and the ordering module 1010 will sort and order them into a chronological order. In some embodiments, a rule-based sorting may be employed whereby the ordering module 1010 recognizes key words or characters associated with timing, such as, for example, “10/23/2017” or “October 23, 2017,” and organizes the documents according to the recognized key words or characters. In some embodiments, the ordering may be based on machine learning models trained to recognize a time component embedded, semantically or otherwise, into the text of the document. Other embodiments may include a mixture of rule-based and machine learning model approaches. For example, an amendment may make reference to an original contract as being the most recent (prior to the amendment) source of terms to the agreement, in which case the amendment may be identified as immediately following the original contract, though no mention of a calendar date may be included in the amendment. The ordering module 1010 may output a chronologically ordered set of documents 1005. The ordered documents 1005 may be organized differently than they are first received. For example, the original contract 1004B may be sorted to the front of the received documents (thus denoting an earlier date), even though it was received after addendum 1004A. As can be seen, the received documents are organized such that original contract 1004B precedes addendum 1004C, which precedes addendum 1004A, which precedes addendum 1004D. The document processing system 400, discussed above, can then receive the ordered documents in their correct sequence. However, where in some embodiments the document processing system 400 may provide output directly to the document mark-up 414A and display 450, the conflict check module 1050 here can receive the data object 412B from the document processing system 400 and provide a modified data object (containing categories and subcategories associated with only the most recent text) to the document mark-up 414A. In this way, the ordering module 1010 and the conflict check module 1050 may be inserted into the architecture depicted in FIG. 4 to further enhance the value and utility of the system to users. The conflict check module 1050 can perform the method 900 to identify conflicts and provide accurate categories and mappings to the document mark-up 414A. In some embodiments, the conflict check module 1050 may receive exemplar documents as depicted in FIG. 11. The conflict check module 1050 may review an exemplary oil and gas document 1004B which is identified as a leading document 1102. Here, a paragraph 1104 has been identified as part of a provision describing royalty. A related paragraph 1106, which may be associated with the category “royalties on oil and gas,” is also provided to the conflict check module 1050.
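Before continuing with the FIG. 11 exemplar, the ordering module's rule-based date recognition and operation 910's “most recent text wins” resolution can be combined into one illustrative sketch. The date format, the document structure, and the unconditional latest-wins rule (the conflict test itself is elided) are all assumptions.

import re
from datetime import datetime

DATE_RE = re.compile(r"\b(\d{1,2})/(\d{1,2})/(\d{4})\b")

def document_date(doc):
    # A real system would fall back to ML-based ordering when no date parses.
    m = DATE_RE.search(doc["text"])
    return datetime(int(m.group(3)), int(m.group(1)), int(m.group(2)))

def resolve(docs):
    ordered = sorted(docs, key=document_date)  # ordering module 1010 analog
    current = {}
    for doc in ordered:                        # operation 910 analog:
        for category, text in doc["provisions"].items():
            current[category] = text           # later text replaces earlier
    return current

docs = [
    {"text": "Amendment dated 11/02/2017",
     "provisions": {"royalties on oil and gas": "replacement provision"}},
    {"text": "Lease made this 10/23/2017",
     "provisions": {"royalties on oil and gas": "original provision",
                    "term": "10 years"}},
]
print(resolve(docs))  # the royalties category keeps the amendment's text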
Another paragraph 1108 is also identified and associated with a category that, as depicted here, does not cause a conflict for its respective category. Amendment 1004C may be received as a sequential document 1112. Here, document 1112 includes a header element 1114 describing the respective document as an amendment and, using rule-based logic, the conflict check module 1050 identifies the document 1112 as an amendment. In some embodiments, the conflict check module 1050 may apply machine learning techniques or a combination of machine learning and rule-based logic to identify documents as amendments. The amendment 1004C includes paragraphs 1116 and 1118. The conflict check module 1050 may identify paragraph 1118 as being in conflict with the previously processed paragraph 1106 in response to processing the language “Provision 3(a)(i) is hereby replaced with the following language:” of paragraph 1116. As a result, the category “royalties on oil and gas” is associated with the text of paragraph 1118, and this association replaces the association of the text of paragraph 1106 with the same category (i.e., “royalties on oil and gas”). The category may still be associated with the text of other documents; however, its associations have now been updated to conform with the text processed in the amendment 1004C. The conflict check module 1050 may receive amendment 1004A as a next sequential document 1122. The document 1122 may lack an apparent identifier to inform the conflict check module 1050 that it is an amendment. In cases where there is no explicit identification that the document being processed is an amending document, the conflict check module 1050 can use machine learning models, rule-based logic, or a mix of the two to determine whether the document is an amendment. Here, the conflict check module 1050 identifies the document 1122 as an amendment, and the paragraph 1126 is categorized as “royalties on oil and gas.” Applying rule-based logic to the language “Royalty addendum” of the paragraph 1124, which immediately precedes paragraph 1126, the conflict check module 1050 may identify paragraph 1126 as causing a conflict for its identified categories. As a result, the text of paragraph 1126 may be associated with the category “royalties on oil and gas” along with the text of paragraph 1118 and may be presented to a user in a provided list of categories under “royalties on oil and gas” (for example, within the list 510 depicted in FIG. 5). FIG. 12 illustrates an example computing system 1200 that may implement various systems and methods discussed herein. The computer system 1200 includes one or more computing components in communication via a bus 1202. In one implementation, the computing system 1200 includes one or more processors 1204. The processor 1204 can include one or more internal levels of cache (not depicted) and a bus controller or bus interface unit to direct interaction with the bus 1202. The processor 1204 can include the OCR 404A, paragraph parser 406A, paragraph classifier 408A, sentence parser 410A, sentence classifier 412A, document mark-up 414A, paragraph model trainer 418A, sentence model trainer 424A, profiler 428A, and/or clustering service 432A, and specifically implements the various methods discussed herein. Main memory 1206 may include one or more memory cards and a control circuit (not depicted), or other forms of removable memory, and may store various software applications including computer-executable instructions that, when run on the processor 1204, implement the methods and systems set out herein.
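The following sketch illustrates, under assumed names and data structures, the re-association behavior described above: when an amendment's text conflicts with a prior association, the newer text supersedes the older one, and the superseded association is retained so a version history can be shown to the user; non-conflicting repetitions keep both associations. This is one possible modeling, not the disclosed implementation.

```python
class CategoryAssociations:
    """Hypothetical store mapping a category to its associated text spans,
    with superseded versions retained for a provision version history."""

    def __init__(self):
        self.current = {}   # category -> list of (document_id, text)
        self.history = {}   # category -> list of superseded (document_id, text)

    def associate(self, category, document_id, text, conflicts=False):
        if conflicts:
            # Conflicting language: newer text supersedes all prior associations.
            self.history.setdefault(category, []).extend(
                self.current.get(category, []))
            self.current[category] = [(document_id, text)]
        else:
            # Non-conflicting repetition (e.g., added parties): keep both.
            self.current.setdefault(category, []).append((document_id, text))

assoc = CategoryAssociations()
assoc.associate("royalties on oil and gas", "original", "Provision 3(a)(i): 12.5%")
assoc.associate("royalties on oil and gas", "amendment_1004C",
                "Provision 3(a)(i): 15%", conflicts=True)
```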
Other forms of memory, such as a storage device 1208 and a mass storage device 1212, may also be included and accessible by the processor (or processors) 1204 via the bus 1202. The storage device 1208 and mass storage device 1212 can each contain any or all of the paragraph text database 416A, paragraph model database 420A, sentence text database 422A, sentence model database 426A, document database 430A, and profile model database 434A. The computer system 1200 can further include a communications interface 1218 by way of which the computer system 1200 can connect to networks and receive data useful in executing the methods and systems set out herein, as well as transmit information to other devices. The computer system 1200 can include an output device 1216 by which information is displayed, such as the display 450. The computer system 1200 can also include an input device 1220 by which information, such as the oil and gas legal document 402, is input. The input device 1220 can be a scanner, keyboard, and/or other input device as will be apparent to a person of ordinary skill in the art. The system set forth in FIG. 12 is but one possible example of a computer system that may employ or be configured in accordance with aspects of the present disclosure. It will be appreciated that other non-transitory tangible computer-readable storage media storing computer-executable instructions for implementing the presently disclosed technology on a computing system may be utilized. In some embodiments, the system can identify amendments, addenda, and other later-added or rewritten material relative to some original document or set of original documents. For example, many transactions in the oil and gas industry involve an original contract and multiple addenda, amendments, and other modifications to the agreement occurring after signing of the original contract, generally referred to simply as amendments. In such cases, particularly with older agreements having a long history between many and varying parties, it can be difficult and time-consuming to ascertain the current status of provisions. Amendments may alter or eliminate original provisions, and entirely new provisions may first appear in an amendment. Amendments may be written by different attorneys and according to different practices. The ordering of the amendments may also be unclear: some amendments may be individually dated and others may be part of a batch of documents dated by a cover sheet since lost or misplaced. In the situations described above, a module or other additional component may be utilized to chronologically order an original contract and later amendments, as well as to correctly update identified categories and subcategories so as to both avoid conflicting provisions and ensure that the list of categories and subcategories provided to a user is not out of date and is linked to the correct text location within the document image. The module or component can be run alone or as part of the system depicted by FIG. 4. In the present disclosure, the methods disclosed may be implemented as sets of instructions or software readable by a device. Further, it is understood that the specific order or hierarchy of steps in the methods disclosed are instances of example approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the methods can be rearranged while remaining within the disclosed subject matter.
The accompanying method claims present elements of the various steps in a sample order, and are not necessarily meant to be limited to the specific order or hierarchy presented. The described disclosure may be provided as a computer program product, or software, that may include a computer-readable storage medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A computer-readable storage medium includes any mechanism for storing information in a form (e.g., software, processing application) readable by a computer. The computer-readable storage medium may include, but is not limited to, optical storage media (e.g., CD-ROM), magneto-optical storage media, read only memory (ROM), random access memory (RAM), erasable programmable memory (e.g., EPROM and EEPROM), flash memory, or other types of media suitable for storing electronic instructions. The description above includes example systems, methods, techniques, instruction sequences, and/or computer program products that embody techniques of the present disclosure. However, it is understood that the described disclosure may be practiced without these specific details. While the present disclosure has been described with reference to various implementations, it will be understood that these implementations are illustrative and that the scope of the disclosure is not limited to them. Many variations, modifications, additions, and improvements are possible. More generally, implementations in accordance with the present disclosure have been described in the context of particular implementations. Functionality may be separated or combined in blocks differently in various embodiments of the disclosure or described with different terminology. These and other variations, modifications, additions, and improvements may fall within the scope of the disclosure as defined in the claims that follow.
11861144
BEST MODE
The above-mentioned objects, features, and advantages of the present disclosure are described in detail with reference to the accompanying drawings so that the skilled person in the art to which the present disclosure pertains may easily embody the technical idea of the present disclosure. In the description of the present disclosure, a detailed description of a well-known technology relating to the present disclosure may be omitted if it unnecessarily obscures the gist of the present disclosure. Hereinafter, preferred embodiments of the present disclosure are described in detail with reference to the accompanying drawings. In the drawings, the same reference numerals are used to refer to the same or similar components. Terms such as first, second, and the like may be used herein to describe various elements of the present disclosure. These elements are not limited by these terms. These terms are intended to distinguish one element from another element. A first element may be a second element unless otherwise stated. Further, the terms “connected,” “coupled,” or the like are used such that, where a first component is connected or coupled to a second component, the first component may be directly connected or able to be connected to the second component, or one or more additional components may be disposed between the first and second components, or the first and second components may be connected or coupled through one or more additional components. Singular expressions used in the present disclosure include plural expressions unless the context clearly indicates otherwise. In the present disclosure, terms such as “including” or “comprising” should not be construed as necessarily including all of the various components or various steps described in the present disclosure, and should be construed as possibly not including some elements or steps, or as possibly further including additional elements or steps. Unless otherwise stated, each component may be singular or plural throughout the disclosure. Hereinafter, a method for controlling an electronic shelf label according to some embodiments of the present disclosure is described. Hereinafter, a user refers to any person using the electronic shelf label. In addition, a manager refers to a person, among users, using the electronic shelf label to manage products sold in a store, and a consumer refers to a person, among users, using the electronic shelf label to obtain product information or purchase products in a store. FIG. 2 shows an electronic shelf label that may use a method for controlling the electronic shelf label according to an embodiment of the present disclosure. Referring to FIG. 2, an electronic shelf label 100 that may use the method for controlling the electronic shelf label according to an embodiment of the present disclosure includes a display 110, a controller 120, and a barcode reader 130. In addition, according to an embodiment of the present disclosure, the electronic shelf label 100 may further include a camera 140 or a microphone 150. According to an embodiment of the present disclosure, the electronic shelf label 100 may have a bar shape in which a horizontal length thereof is longer than a vertical length thereof when the electronic shelf label 100 is coupled to a display stand.
With this bar-shaped structure, the electronic shelf label 100 may display product information for a plurality of products. The display 110 receives a touch input from a user and displays information to the user on a screen. That is, the display 110 receives the touch input from the user, transmits the touch input to the controller 120, and displays the information to the user according to a command received from the controller 120. The display 110 may display information to the user in one or more virtual cells. In an embodiment of the present disclosure, the display 110 may be a touch screen capable of displaying information to the user on the screen while receiving the touch input from the user. The controller 120 controls the screen displayed by the display 110 based on the input received through the display 110. The controller 120 may include a memory to store instructions for controlling the screen displayed by the display 110 based on the input received through the display 110, a processor to process the instructions stored in the memory, and a communication module to communicate with a management server 200 described below. The controller 120 may control the display 110 to be operated in a consumer mode or a manager mode. The consumer mode is a mode in which product information is displayed to the consumer. When the electronic shelf label 100 is operated in the consumer mode, the controller 120 may control the display 110 to display a different screen according to the input received through the display 110. In addition, when the electronic shelf label 100 is operated in the consumer mode, the controller 120 may transmit data to the management server 200 described below based on the input received through the display 110. The manager mode is a mode in which the user may set the screen displayed in the consumer mode. When the electronic shelf label is operated in the manager mode, the user applies the input to the display 110 to divide the screen displayed on the display 110 into virtual cells and match the divided virtual cells to products. A method for controlling the electronic shelf label according to an embodiment of the present disclosure described below relates to a method of setting the product information displayed on the electronic shelf label using various methods in the manager mode. The barcode reader 130 recognizes a barcode displayed on a product. The controller 120 may obtain a product identification code based on barcode information received at the barcode reader 130 and control the display 110 to display the product information. The camera 140 receives product picture information from the user. In this case, the product picture information includes an actual appearance of the product. The controller 120 may obtain the product identification code based on the product picture information received at the camera 140 and control the display 110 to display the product information. The microphone 150 receives a voice input from the user. The controller 120 may obtain a product name or a product identification code based on the voice input received at the microphone 150 and control the display 110 to display the product information. The management server 200 stores product information on products displayed on a display stand of a store. That is, the management server 200 stores a product identification code for each of the products displayed in the store and the product information corresponding thereto.
In addition, the management server 200 may store various pieces of information necessary for the operation of the electronic shelf label 100. The management server 200 communicates with the controller 120 of the electronic shelf label 100. The management server 200 may transmit product information to be displayed on the screen of the display 110 of the electronic shelf label 100 and receive product information currently displayed on the electronic shelf label 100. In addition, the management server 200 may process various calculations based on data received from the electronic shelf label 100. A more detailed method for controlling the electronic shelf label 100 as described above may be described with reference to FIGS. 3 to 10. FIG. 3 is a flowchart of a method for controlling an electronic shelf label according to an embodiment of the present disclosure. Referring to FIG. 3, the controller 120 provides an input request window including a barcode recognition request phrase on a virtual cell selected by a user (S310). In more detail, the user may apply a virtual cell selection input to one or more virtual cells displayed on the display 110. In this case, the virtual cell selection input may be an input of selecting any one of a plurality of virtual cells displayed on the display 110 and may be set as an input of touching any one of the virtual cells. When the virtual cell selection input is received through the display 110, the controller 120 may provide an input request window including a barcode recognition request phrase to the virtual cell selected based on the virtual cell selection input. Providing, by the controller 120, the input request window including the barcode recognition request phrase to the virtual cell selected by the user (S310) may be described in more detail with reference to FIG. 4. FIG. 4 shows an input request window including a barcode recognition request phrase displayed on a virtual cell using a method for controlling an electronic shelf label according to an embodiment of the present disclosure. Referring to FIG. 4, the display 110 displays a plurality of virtual cells 111, 112, 113, 114, and 115. The user may apply a virtual cell selection input to the electronic shelf label 100 by touching any one of the plurality of virtual cells 111, 112, 113, 114, and 115. Hereinafter, it is assumed that the user applies a virtual cell selection input selecting a third virtual cell 113 among the plurality of virtual cells 111, 112, 113, 114, and 115. As the controller 120 receives the virtual cell selection input selecting the third virtual cell 113, the controller 120 provides an input request window including a barcode recognition request phrase to the third virtual cell 113. The user may check the input request window including the barcode recognition request phrase and input barcode information of a product to be displayed using the barcode reader 130. In this case, based on reception of the barcode information of the product, the controller 120 displays the product name corresponding to the barcode information on an input confirmation window located below the barcode recognition request phrase, allowing the user to determine whether the applied input is correct. Referring back to FIG. 3, after the controller 120 provides the input request window including the barcode recognition request phrase as a default value, the controller 120 determines reception or non-reception of an input method selection input from the user (S320).
The input method selection input is an input applied when a user wants to apply a product identification input using a method other than barcode recognition. Based on the non-reception of the input method selection input, the controller 120 continues to provide the input request window including the barcode recognition request phrase to the virtual cell. The user may check the input request window including the barcode recognition request phrase and input barcode information of the product to be displayed using the barcode reader 130. Subsequently, the controller 120 receives the barcode information of the product obtained by the barcode reader 130 as a product identification input (S350). In this case, as shown in FIG. 4, the controller 120 displays the product name corresponding to the barcode information on the input confirmation window located below the barcode recognition request phrase, allowing the user to determine whether the applied input is correct. After the controller 120 provides the input request window including the barcode recognition request phrase, based on the reception of the input method selection input from the user, the controller 120 provides an input method list including a plurality of input methods to the selected virtual cell (S330). The input method refers to a method by which the user inputs a product identification code, and may include a barcode input method, a product identification code input method, a product list selection method, a product picture input method, and a voice input method. In this case, when the barcode input method is set as a default value, the input method list may be a list in which the methods other than the barcode input method are listed among the above-described input methods. This configuration may be described in more detail with reference to FIG. 5. FIG. 5 shows an input method list displayed in a virtual cell using a method for controlling an electronic shelf label according to an embodiment. Referring to FIG. 5, it may be seen that the display 110 displays the input method list on the third virtual cell 113. In an embodiment of the present disclosure, the input method list includes a product identification code input method, a product list selection method, a product picture input method, and a voice input method. The user may select a desired input method by touching the area in which the desired input method is displayed among the input methods included in the input method list. Referring back to FIG. 3, after providing the input method list, the controller 120 provides an input request window based on the input method selected by the user among the plurality of input methods on the display 110 (S340). Subsequently, the controller 120 receives the product identification input based on the input method selected by the user through the display 110 (S350). That is, in an embodiment of the present disclosure, the controller 120 provides an input request window based on the input method selected by the user among the product identification code input method, the product list selection method, the product picture input method, and the voice input method included in the input method list, and receives a product identification input corresponding to the input request window through the display 110. The input request window based on each input method and the corresponding product identification input may be described in more detail with reference to FIGS. 6 to 9.
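As a sketch of this dispatch logic, the snippet below models the default-to-barcode behavior and the per-method input request windows. The enum members, prompt strings, and function names are hypothetical; a real controller would drive the display 110 and read the relevant peripheral.

```python
from enum import Enum, auto

class InputMethod(Enum):
    BARCODE = auto()          # default method
    ID_CODE = auto()          # virtual keypad entry
    PRODUCT_LIST = auto()     # pick from a list
    PICTURE = auto()          # camera capture
    VOICE = auto()            # microphone capture

def request_window(method: InputMethod) -> str:
    """Return the request phrase shown in the selected virtual cell."""
    prompts = {
        InputMethod.BARCODE: "Scan the product barcode",
        InputMethod.ID_CODE: "Enter the product identification code",
        InputMethod.PRODUCT_LIST: "Select a product from the list",
        InputMethod.PICTURE: "Photograph the product",
        InputMethod.VOICE: "Speak the product name or code",
    }
    return prompts[method]

def select_input_method(method_selection_received: bool,
                        chosen: InputMethod | None) -> InputMethod:
    """Default to barcode recognition unless the user picked another method."""
    if not method_selection_received or chosen is None:
        return InputMethod.BARCODE
    return chosen
```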
FIG. 6 shows an input request window including a virtual keypad displayed on a virtual cell using a method for controlling an electronic shelf label according to an embodiment of the present disclosure. Referring to FIG. 6, when the user selects the product identification code input method, an embodiment of the input request window displayed on the third virtual cell 113 of the display 110 may be confirmed. If the selected input method is the product identification code input method, the controller 120 provides an input request window including a virtual keypad. The virtual keypad is a tool that helps the user to input letters, numbers, and the like, and the user may input characters by touching a specific area of the virtual keypad displayed on the display 110. The controller 120 provides the virtual keypad on the display 110 in order for the user to input a product identification code. The product identification code distinguishes a specific product from other products and may consist of a combination of letters, numbers, and the like. In one embodiment of the present disclosure, if the product identification code consists of a combination of numbers, the virtual keypad may be a keypad capable of inputting numbers. When the controller 120 provides the input request window including the virtual keypad, the controller 120 receives, as a product identification input from the user, a product identification code input using the virtual keypad. In this case, the controller 120 may display the product identification code input by the user on an input confirmation window located above the virtual keypad, allowing the user to determine whether the applied input is correct. FIG. 7 shows an input request window including a product list displayed on a virtual cell using a method for controlling an electronic shelf label according to an embodiment of the present disclosure. Referring to FIG. 7, when the user selects the product list selection method, an embodiment of the input request window displayed on the third virtual cell 113 of the display 110 may be confirmed. If the selected input method is the product list selection method, the controller 120 provides an input request window including a product list. The product list includes all products that may be displayed on a display stand of the store. After the user finds a desired product in the product list displayed on the display 110, the user may apply an input selecting one of the products included in the product list by touching the area where the corresponding product is displayed. In this case, if the selected input method is the product list selection method, the input of selecting any one of the products included in the product list is the product identification input. In other words, the controller 120 provides the product list on the display 110 in order for the user to apply the product identification input. In addition, when the controller 120 provides the input request window including the product list, the controller 120 receives an input of selecting any one of the products included in the product list from the user. FIG. 8 shows an input request window including a product recognition request phrase displayed on a virtual cell using a method for controlling an electronic shelf label according to an embodiment of the present disclosure. Referring to FIG. 8, when the user selects the product picture input method, an embodiment of the input request window displayed on the third virtual cell 113 of the display 110 may be confirmed.
If the selected input method is the product picture input method, the controller 120 provides an input request window including a product recognition request phrase. The user may check the input request window including the product recognition request phrase and capture an actual appearance of the product to be displayed using the camera 140. In addition, the controller 120 receives the product picture information obtained by the camera 140. In this case, the controller 120 may display the product picture on the input confirmation window located below the product recognition request phrase, allowing the user to determine whether the applied input is correct. FIG. 9 shows an input request window including a voice input request phrase displayed on a virtual cell using a method for controlling an electronic shelf label according to an embodiment of the present disclosure. Referring to FIG. 9, when the user selects the voice input method, an embodiment of the input request window displayed on the third virtual cell 113 of the display 110 may be confirmed. If the selected input method is the voice input method, the controller 120 provides the input request window including the voice input request phrase. The user may check the input request window including the voice input request phrase and speak an identification code or the name of the product to be displayed into the microphone 150. In addition, the controller 120 receives the voice input including the product identification code or the product name obtained by the microphone 150. In this case, the controller 120 may display the product identification code or the product name included in the voice input on the input confirmation window located below the voice input request phrase, allowing the user to determine whether the applied input is correct. As described with reference to FIGS. 5 to 9, an administrator may set the displayed product information using various methods for controlling the electronic shelf label according to the present disclosure, even without an additional terminal device. Referring back to FIG. 3, after receiving the product identification input, the controller 120 obtains product information based on the received product identification input (S360). The product information may include the product name, the product price, the manufacturing date of the product, and products related to the product. In an embodiment of the present disclosure, the product information based on the product identification input may be stored in the management server 200. In this case, the controller 120 transmits the product identification input to the management server 200 and receives the product information based on the product identification input from the management server 200. That is, the controller 120 may obtain the product information from the management server 200. In another embodiment of the present disclosure, the product information based on the product identification code may be stored in a memory of the controller 120. In this case, the controller 120 may obtain the product information by searching the memory without communicating with the management server 200. After obtaining the product information, the controller 120 displays the obtained product information on the selected virtual cell 113 (S370). An embodiment in which the product information is displayed on the selected virtual cell 113 may be confirmed in FIG. 10.
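The two storage embodiments above (server-side and controller-memory) could be arranged in several ways; the sketch below shows one possible arrangement in which the controller consults its local memory first and falls back to the management server. The cache contents, product identifier, and stub server call are all invented for the example.

```python
# Hypothetical local store held in the controller's memory.
LOCAL_CACHE = {
    "8801234567890": {"name": "Orange juice by company B", "price": 4000},
}

def fetch_from_server(product_id: str) -> dict | None:
    """Stand-in for a request to the management server; a real controller
    would use its communication module here."""
    return None  # assume a cache miss for this sketch

def get_product_info(product_id: str) -> dict | None:
    """Return product info from local memory, falling back to the server."""
    info = LOCAL_CACHE.get(product_id)
    if info is not None:
        return info
    return fetch_from_server(product_id)

print(get_product_info("8801234567890"))
```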
FIG. 10 shows product information displayed on a virtual cell using a method for controlling an electronic shelf label according to an embodiment of the present disclosure. Referring to FIG. 10, it may be seen that a picture of a PET bottle-shaped product is displayed on the left side of the selected virtual cell 113 and that product information, such as the product name “orange juice by company B” and the price of 4,000, is displayed in the table on the right. The product picture, the product name, the product volume, the price, and the product calories are displayed in the selected virtual cell 113 as the product information in the embodiment of FIG. 10, but the present disclosure is not limited thereto, and other pieces of information may be displayed as the product information. In addition, it may be seen that menus that consumers may select, such as additional information, related products, purchase, and events, are additionally displayed in the table on the right side of the selected virtual cell 113 of FIG. 10. When the electronic shelf label 100 is operated in the consumer mode, the electronic shelf label 100 may perform other operations based on reception of an input from the user touching additional information, related products, purchase, or events. As described above, an administrator may set the displayed product information by various methods, even without an additional terminal device, using the method for controlling the electronic shelf label according to the present disclosure. The present disclosure has been described above with reference to exemplary drawings, but the present disclosure is not limited to the embodiments and drawings disclosed herein, and various modifications can be made by those skilled in the art within the scope of the technical idea of the present disclosure. Further, even if working effects obtained based on configurations of the present disclosure are not explicitly described in the description of embodiments of the present disclosure, effects predictable from the corresponding configurations should also be recognized.
11861145
DETAILED DESCRIPTION
Embodiments described herein provide technical solutions to various technical problems via improvements to existing technologies and the creation of wholly new technologies. Among the technical problems addressed by embodiments discussed herein are inefficiencies of conventional user interfaces and difficulties in integrating disparate portions of a process workflow. Improvements to user interfaces discussed herein provide practical applications of technical solutions to problems in conventional user interfaces related to user inefficiency, accuracy, repeatability, and computing inefficiency. The technical solutions provided herein improve each of these aspects through the use of inventive user interface methods and techniques. In particular, technical solutions provided by user interfaces disclosed herein provide users with more efficient means of navigating through menu systems for complex processes. User interfaces for electronic devices, implemented for human-computer interactions or communications, often include a series of menus or like choice options, which a user selects (e.g., choosing a series of options in a hierarchical manner) in order to have a computer or like device perform a desired function. In some embodiments, depending on the type of application, the amount of information or the number of menu choices presented to the user can become overwhelming. A wide range of available menu options can cause the user to try different choices or navigate to various menu selection hierarchies before finding a correct or desired series of choices. In some instances, out of 100% of the user interface choice and functionality options available to the user, only about 10% are used. However, presented with 100% of the options, the user may have difficulty in deciding where to navigate in order to find the 10% which is relevant to the user. Also, because a selected menu choice affects the next choice to be made down a path of menu choices, a user switching between choices must also navigate a number of different paths leading from each choice. Such trial and error in scrolling and paging through many different options during user interface navigation is time-consuming, costly, and inefficient. Systems, methods, and techniques in the present disclosure may provide a user interface that guides a user through choice options to be selected via a user interface display or another presentation device, requiring less time to find a correct selection. In this way, fewer attempts are made at incorrect selections, and less time is spent in user navigation to complete a desired computing function or goal. In aspects, a user interface in the present disclosure may present the user with a selective, limited number of options out of all available options in a specific manner, and guide the user through those options, streamlining operations and allowing the user to focus on reaching a desired computing functionality more efficiently. In another aspect, a user interface in the present disclosure can more directly connect the user to an application. The embodiments and technical solutions provide practical applications of specific visual principles to aid users in navigating the menus and systems described herein. Such visual principles include the minimization of visible content and maximization of background or void space so as to reduce visual clutter and emphasize the area of interest.
By providing a dark or otherwise uniform background and increasing contrast between the content and the background, the user's attention can be drawn to the appropriate areas. The embodiments and technical solutions provide practical applications of specific design principles to aid users in navigating the menus and systems described herein. Design principles embodied herein include, for example, minimizing the number of menus and/or selections a user must navigate at any one time. Further design principles include presenting a user with a single new choice at any given time while providing optionality for revisiting previously made choices with ease. This principle may be implemented via a two-portion display system. An active portion may be configured to display a current user choice, while a historical portion is configured to display information related to previous choices. Together, the active portion and the historical portion may provide a “direct workflow mode.” The active portion presenting the current user choice may have a hard limit on the number of menu items displayed, e.g., seven, five, three (or any other number), while other potential items from the same menu are displayed elsewhere. Previously selected choices (and the menus from which those selections were made) may be displayed to a user in a nested fashion or a stacked fashion. In a nested fashion, a series of previously navigated menus may be presented in the manner of Russian nesting dolls (matryoshka), with each previously selected menu item being expanded upon in a displayed submenu. The nested or stacked previously selected menu items may also provide a breadcrumb trail illustrating to a user the pathway taken to arrive at the current menu. Embodiments herein maintain a consistent look throughout the use of an interface, regardless of the task or process to be completed, for example by maintaining consistent screen locations for menus so a user does not have to search different locations for a menu. In other words, relevant menus are moved to active portions of the screen to bring them to the user's attention as they are needed. In embodiments, the active portion of the screen remains centered top to bottom and left to right. In further embodiments, the size and shape of the menuing interface are altered according to the device or screen on which it is viewed. Menus may be spread horizontally on wider screens and/or spread vertically on taller/narrower screens. Embodiments discussed herein improve user productivity by providing efficiency and accuracy improvements through enhancement of several aspects of the user experience. User interfaces described herein focus the user on the most likely use cases while minimizing distractions caused by lesser-utilized options. Such a focus permits the user interface to minimize visual distractions and keep the user focused on the most relevant menu choices. User interfaces described herein seek to lead the user through the user interface from one step to the next while eliminating sticking points where a user may wonder what to do next. In embodiments herein, the navigational path of the user through the interface system remains transparent to the user to facilitate selecting alternative options or backing out of a current menu. Throughout the process of using the user interface, a user may have the option of viewing, in a non-distracting way, alternative pathways through the process.
Accordingly, a core function of the user interface software as provided herein is to reduce the total amount of information presented to the user at any one time while increasing the total amount of relevant information presented to the user at any one time. Additional information and options, for low-use cases, remain available in a non-distracting presentation style. Such decisions regarding what information to present through the user interface at any given time may be guided in advance through predetermined menu workflows and/or may be influenced and updated through analysis of prior user actions and choices. Computer functionality may also be improved via embodiments provided herein. For instance, by focusing on a limited number of options, resource usage of devices (e.g., user devices and/or server devices) which may be involved in running the user interface can be reduced. For instance, memory usage, processor resource usage such as central processing unit (CPU) usage, hard drive or like persistent storage usage, and bandwidth needed for communications between devices (e.g., device to device, device to server, server to server) may be reduced. An ability to directly navigate to or reach correct selections or a path of selections, for example without many trial-and-error navigations, can also increase communications efficiency between devices and servers, for instance by decreasing internet communications and the cost associated with such communications. Further embodiments discussed herein relate to the integration of various process workflow aspects. As discussed herein, “process workflow” may relate to instrumentation (including bioinstrumentation) testing workflows, manufacturing workflows, analysis workflows, and/or any workflow that may involve one or more pieces of equipment controlled, at least partially, by one or more computing systems. In additional embodiments, process workflows consistent with embodiments discussed herein may include the use of one or more consumables. Computing systems consistent with the user interfaces and process workflow management systems discussed herein may include various architectures, including but not limited to single computing device systems, desktop computing systems, laptop computing systems, tablet computing systems, mobile device computing systems, thin client computing systems, cloud-based computing systems, server computing systems, multiple device computing systems, device/printer systems, device/server computing systems, systems including multiple devices and server(s), or any other suitable computing system. The process interface systems described herein serve to increase user accuracy, efficiency, and satisfaction by providing a user interface that is faster to use, reduces the time to find correct menu items, reduces selection of incorrect menu items, and decreases overall workflow time. As compared to traditional systems that may provide immediate access to 100% of options, of which only 10% are frequently used, systems described herein may provide immediate access to only those functions that are frequently used (e.g., in 50, 55, 60, 65, 70, 75, 80, 85, 90, 95, 95+%, 70-95+%, or 80-95+% of use cases). In turn, the solutions provided herein serve to increase computing efficiency and decrease memory usage and utilization of CPU, hard drive, power, and communications resources.
User interface systems discussed herein may be provided in the form of graphical user interfaces (GUIs), text-based user interface systems, virtual, augmented, or mixed reality (VAMR) interface systems, projection-based systems, gesture-controlled systems, and/or any other type of visual user interface. Collectively, user interface systems consistent with embodiments hereof may be referred to as “methodical user interfaces” (MUIs). MUIs may include graphical user interfaces (GUIs), text-based user interface systems, virtual, augmented, or mixed reality (VAMR) interface systems, projection-based systems, gesture-controlled systems, and/or any other type of visual user interface. Although some of the principles discussed herein are discussed specifically with respect to, for example, a GUI, no limitation is intended, and the principles discussed herein may equally be applied to other interface systems. MUIs described herein refer to “displays,” “interfaces,” and “user interfaces.” As used herein, unless stated otherwise, the terms “display,” “interface,” and “user interface” refer to the text, images, visual components, interactive elements, and any other visual aspects that are shown or displayed on a screen, projection, or other visual display hardware. It is thus understood that “displays” and “interfaces,” as used herein, may be provided via any type of visual display hardware, screen(s), and/or projector. For convenience, menus, interfaces, and other visual items are referred to herein as being viewed on a MUI or displayed by a MUI. It is understood that such references indicate that the MUI is visually presented via hardware devices as discussed herein. As described in greater detail below, user interface systems described herein may use various visual components for presenting menu items. For example, visual components may include vertical “wheels” or horizontal wheels that rotate through various menu items. The use of a “wheel” as a visual component, as described herein, refers to the way in which prominent (emphasized) and receded (deemphasized) options are presented to the user. Wheel-type visual components can be understood as a virtual wheel with the rim facing the user and with multiple menu items disposed on the rim of the virtual wheel. Wheel-type visual components may or may not include any visual indicators of the presence of a wheel. Wheel-type visual components may present a prominent option to the user in a way that draws attention (i.e., on the portion of the wheel “closest” to the user) while other, receded options are presented in a way that does not draw attention. Prominent menu items may be highlighted in a different color, presented in a different font, presented in a larger font, or otherwise visually marked to draw attention. As the virtual wheel is rotated, the currently prominent menu item rotates away from the user (either clockwise or counterclockwise) and a currently receded menu item becomes the new prominent option. In embodiments, the receded menu items closest to the prominent menu item may be displayed to draw more attention than receded menu items further from the prominent menu item. For example, menu items may decrease in size or brightness based on their distance from the currently prominent menu item. As the “wheel” is “rotated,” receded menu items may fade from view. In this fashion, the virtual wheel provides the user with the sense and feel that the menu items are all disposed on an actual wheel.
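One way to realize the distance-based size and brightness falloff described above is sketched below; the falloff factors, visible span, and text rendering are illustrative assumptions rather than specified behavior.

```python
def render_wheel(items: list[str], prominent_index: int, visible_span: int = 2):
    """Emphasize the prominent item; shrink and fade receded neighbors until
    they drop from view beyond the visible span."""
    rendered = []
    for offset in range(-visible_span, visible_span + 1):
        index = prominent_index + offset
        if 0 <= index < len(items):            # this sketch does not wrap
            distance = abs(offset)
            scale = 1.0 - 0.3 * distance       # hypothetical size falloff
            brightness = 1.0 - 0.4 * distance  # hypothetical fade falloff
            rendered.append((items[index], round(scale, 2), round(brightness, 2)))
    return rendered

menu = ["Assays", "Plates", "Reagents", "Reports", "Settings"]
for label, scale, brightness in render_wheel(menu, prominent_index=2):
    print(f"{label:10s} scale={scale} brightness={brightness}")
```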
Visual components may further include horizontal or vertical sliders that slide through various menu items. Similarly to wheels, as discussed above, sliders may be used to provide a prominent menu item and receded, or less prominent, menu items. In embodiments, sliders may differ from wheels in that receded menu items do not appear to fade from view as the options in the slider are slid through. Further embodiments of wheels and sliders are discussed herein with respect to specific embodiments. As discussed herein, menu items may variously be “selected,” “highlighted,” and/or “clicked.” As used herein, “highlighting” a menu item means that the “highlighted” option is prominently displayed to the user, for example, as a prominent menu item in the center of a wheel. “Highlighting” may include changing the color, size, font, etc., of a menu item to visually emphasize the menu item to the user. “Dehighlighting” a menu item may include changing the color, size, font, etc., of a menu item to visually deemphasize the menu item to the user. A menu item may be highlighted or dehighlighted in response to user action (e.g., via clicking a mouse, touching a touch screen, spinning a wheel, etc.) and/or may be highlighted or dehighlighted based on an action of the interface (e.g., by presenting a highlighted default option). As used herein, “selecting” a menu item means that a menu item has been chosen by the user and that the user interface has proceeded with one or more menu steps in accordance with the selection. “Selecting” a menu item causes the computer system to execute computer instructions to advance the menu beyond simply “highlighting” the menu item. For example, “selecting” a menu item may cause a new menu to be displayed based on the selection. Selected menu items may be highlighted after selection, but highlighting a menu item does not necessarily include selecting the menu item. In some embodiments, a menu item may be selected or highlighted via clicking on the menu item. As used herein, “clicking” refers to the user action of clicking, tapping, or otherwise using an interface device (e.g., mouse, touchscreen, etc.) to indicate or choose a menu item. “Clicking” a menu item, as used herein, differs from “selecting” a menu item. Clicking refers to the user action of indicating a menu item, while selecting refers to the computer functionality associated with the selection of the menu item. In some embodiments of a system in accordance herewith, a menu item may be selected through clicking. Clicking on a menu item may cause the system to advance to the next series of menu items. In other aspects of the disclosed system, clicking a menu item serves to highlight the menu item, but does not select it to advance the system to the next menu item. Menu items may be described herein as “selectable.” A “selectable” menu item refers to a menu item that a user can interact with, either through selecting it or highlighting it. Selectable menu items may be displayed in a fashion that indicates that they are selectable, through changes in coloring, highlighting, fonts, etc. Menu items may be described herein as “unselectable.” “Unselectable” menu items refer to menu items that a user cannot currently interact with through selection or highlighting. Unselectable menu items may be displayed in a fashion that indicates that they are unselectable, through changes in coloring, highlighting, fonts, etc.
Menu items may also be described as “past selected” and “past unselected.” A “past selected” menu item refers to a menu item that was selected to arrive at the current menu interface display. It is not required that a “past selected” menu item have been actively selected by a user. If the system, by programmed default, brings a user to a menu level below a top level, a menu item or choice in the current pathway may be indicated as “past selected,” even if a user has not actively selected it during the current session. A “past unselected” menu item refers to a menu item that was not selected to arrive at the current menu interface display. For example, where a user has selected a first menu item and has not selected a second menu item, the system may proceed to display a subsequent menu or submenu responsive to the selection of the first menu item in an active portion of the MUI. In a historical portion of the MUI, the system may display the first menu item as a past selected menu item and the second menu item as a past unselected menu item. The past unselected menu item may be displayed as selectable. For example, a user may scroll a slider or spin a wheel through various menu items. A user may settle the wheel or slider such that a specific menu item has been highlighted. In embodiments, the specific menu item may require further user interaction (e.g., a single or double click) to be “selected,” which causes the MUI to present a new set of menu items or submenu items responsive to the selection. In such an embodiment, a user would spin a wheel or scroll a slider to move a desired menu item to be the highlighted prominent menu item. Then, the user would click, double click, or otherwise indicate approval of the highlighted menu item as a selection to cause the presentation of the next menu or submenu. In embodiments, the specific menu item may be “selected” at the same time that it is highlighted. In such an embodiment, spinning the wheel or scrolling the slider to move the desired menu item to the highlighted prominent menu position would cause the associated submenu to be presented as soon as the desired menu item is highlighted. Selection or highlighting of a menu item, as discussed herein, may be caused by directly choosing (i.e., clicking, touching, etc.) the menu item, wherever it may be on a wheel, slider, and/or list of items, regardless of whether it is a prominent or receded menu item. Selection or highlighting of a menu item may also occur responsive to user manipulation of various visual components to cause the menu item to move to a position where it is to be highlighted or selected. For example, a user may spin a wheel or move a slider until a particular menu item is prominent and highlighted. Manipulation of visual components and/or direct choosing may be implemented through the use of any suitable user input device, including touchscreens, mice, keyboards, arrow keys, gaze detection systems, motion detection systems, gesture detection systems, etc. Features of embodiments of the interface may be referred to as a “first portion” and a “second portion.” These terms refer to specific portions of the displayed user interface at various times and are not required to be fixed to specific places on the screen. As used herein, a “first portion” may also be referred to as an “active portion.” The “first portion” or “active portion” represents the portion of the MUI displaying the most current or newest set of menu items.
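The click-to-highlight versus select distinction described above might be modeled as in the following sketch, assuming the variant in which a single click highlights and a double click selects; the class and return values are hypothetical.

```python
class MenuController:
    """Minimal model of the highlight/select distinction."""

    def __init__(self, items: list[str]):
        self.items = items
        self.highlighted: str | None = None

    def click(self, item: str) -> None:
        """Clicking only emphasizes the item; the menu does not advance."""
        if item in self.items:
            self.highlighted = item

    def double_click(self, item: str) -> str:
        """Selecting advances the interface to the submenu for the item."""
        self.click(item)
        return f"advancing to submenu of {item}"  # stand-in for new menu display
```

In the alternative variant described above, the body of click would itself trigger the advance, so that highlighting and selecting coincide.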
“First portion” and “active portion” may be used interchangeably herein. The “second portion” may also be referred to as a “historical portion.” The “second portion” or “historical portion” represents the portion of the interface displaying previously viewed menus and previously selected and unselected menu items. “Second portion” and “historical portion” may be used interchangeably herein. FIG. 1 illustrates a method of interactively navigating a user through a path of menu choices on a user interface in one embodiment. The method may be performed automatically by at least one hardware processor. The method facilitates moving a user through a system by asking questions, showing the past choice or choices the user has made along with other options that were not chosen, while drilling down through additional choices based on the initial choice. As used herein, “asking questions” refers to presenting a user with one or more menu choices to select from. The method allows the user to continue down a path or jump to a different path, going back in time to a choice made in one or more earlier steps or going back to the latest point at which the user has made a choice. The user interface in one embodiment presents and allows the user to see the past or prior choices that have been made and not made, for example at every step of the path, regardless of where the user is on the path, all on the same screen. The user interface, for example, presents an outline of the user's menu choice path that also includes menu items not chosen. The user interface methodology allows for more efficient navigation, leading the user along a path, allowing the user to see the path the user is going through, and allowing the user to deviate from a path that has been set for the user to a different path. The user interface methodology allows the user to see backward and forward breadcrumbs, showing where the user is going and where the user could go. As discussed herein, menus are presented as a series of hierarchical menu trees. Each level of the menu tree includes multiple menus leading to other menus. Accordingly, a first level of the menu tree includes a plurality of first menus, a second level of the menu tree includes a plurality of second menus, a third level of the menu tree includes a plurality of third menus, etc. This structure continues to an execution menu level. In some discussions herein, a first menu is referred to simply as a menu, while subsequent menu layers in the tree are referred to as submenus, sub-submenus, and so on. At times, multiple layers of menus below a current menu may be collectively referred to as submenus. Thus, the submenus of a first menu may include a plurality of second menus, a plurality of third menus, a plurality of fourth menus, a plurality of execution menus, and so on. An example of a hierarchical menu tree structure is illustrated in FIG. 2K. As used herein, with reference to the hierarchical menu tree, each level is referred to as a “menu” even where it does not present a literal menu to the user. For example, a “menu” may present only an “execute” button to implement a process designed throughout other portions of the menu. Another “menu” may present a tutorial, for example. Each of the numbered menus includes multiple menu items or choices, with each item or choice pointing to a new menu at a lower level. Thus, the items in a first menu may each point to one of the plurality of second menus. In some embodiments, a menu layer may be skipped.
For example, an option in a first menu may point to one of the plurality of third menus. In embodiments, each menu may also include, for display in the MUI, additional information. Additional menu information may provide a user with information about items in the menu and/or general context regarding the menu. For example, where a menu presents a user with save file options, additional information may be provided that indicates remaining disk space. In another example, where a menu presents a user with options pertaining to assays to be run, additional information may be provided on available consumables related to the displayed assays. At the execution menu level, i.e., the last level in a series of menus, a user may select execution menu choices or items. These choices or items do not lead to further menus, but instead represent selections of parameters for the process the menu tree is intended to facilitate. Selection of execution menu choices or items causes the system to perform a function related to the selected menu choices or items. For example, when using an assay design menu tree, execution menu choices may include options such as file name, assay parameters, reagent choices, etc. In embodiments, execution menus may facilitate the interface between the MUI software and the physical world. Execution menus may provide, for example, execute commands that are output by the methodical user interface control system 1102 to connected systems or instruments to implement processes that were designed through use of the MUI. In examples, such execute commands may cause manufacturing systems to begin manufacturing parts, may cause assay instruments to begin conducting assays, may cause design systems to transmit design specifications, etc. In embodiments, execution menus may provide user walkthroughs or tutorials. For example, after designing a workflow or process, an execution menu may provide a walkthrough or tutorial coinciding with the workflow, offering text-based, audio-based, video-based, and image-based tutorial steps to walk the user through each step of the designed workflow or process. In embodiments, execution menus may provide walkthroughs and/or tutorials in conjunction with execution commands issued to physical-world instruments and machines. For example, in a modular laboratory system, such a combination may provide instructions to a user to load a machine (e.g., with assay plates and reagents) and then provide execution commands to the machine to run the process. As new steps in the process require physical intervention by the user (moving assay plates, etc.), the MUI, at the execution level, may provide the user with additional instructions (text-based, video-based, image-based, audio-based, etc.) to advance the process. In embodiments, user instructions and notifications to implement a user intervention portion of a process may be provided via various communication means, including, for example, text (SMS, MMS), e-mail, phone call, instant message, Slack message, and any other type of messaging protocol. Such various communication means may be useful, for example, when portions of the machine processing take some time to complete and a user may not wish to remain at the process location during processing. Accordingly, where a user has initiated a process that takes several hours, they may receive a text message indicating that their intervention is required to advance the process.
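A hierarchical menu tree of the kind described above could be represented as in the sketch below, in which ordinary menus map item labels to child menus and execution menus carry an action instead of children; all names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Menu:
    title: str
    items: dict = field(default_factory=dict)  # item label -> child Menu
    action: str | None = None                  # set only on execution menus

    def is_execution_menu(self) -> bool:
        return self.action is not None

run_assay = Menu("Run", action="send execute command to instrument")
assay_menu = Menu("Assay Setup", items={"Run": run_assay})
root = Menu("Main", items={"Assay Setup": assay_menu})

# Walking one path down the tree: Main -> Assay Setup -> Run (execution level).
current = root
for choice in ["Assay Setup", "Run"]:
    current = current.items[choice]
print(current.is_execution_menu(), current.action)
```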
These types of “cobot” interactions, wherein the MUI integrates the physical world actions of both human operators and automated machines, may be applied to various processes or workflows, including laboratory workflows, manufacturing workflows, food production workflows (e.g., beer production, bread production, etc.), and shipping and logistics workflows (e.g., box filling and picking, packaging, etc.). As used herein, “display” of a menu includes display, within the MUI, of one or more items in the menu. Display of a menu does not require display of all items or options in the menu. The menu items or choices that make up the first menu remain the same, regardless of whether each menu item is displayed. As discussed in greater detail below, certain menu items may be excluded or limited for various reasons. As discussed herein, a specified “first menu” or “second menu” may be relocated to various portions of the screen. When relocated, the first menu may continue to display the same set of first menu items and/or may display a different set of first menu items. As discussed herein, menus may also be referred to based on their temporal status. A “current menu” refers to a menu that is currently active in the active portion of the MUI from which a user is prompted to select an option. A “past menu” refers to a menu from which a user has previously selected options. Past menus may be displayed in the historical portion of the MUI. A “subsequent menu” refers to a next menu that becomes active after the current menu becomes a past menu. For example, a first menu may be displayed as a current menu. After a selection has been made from the first menu, the first menu may then be relocated to become a past menu. A subsequent menu, a second menu indicated by the selection made from the first menu, may then be displayed as a current menu. Current menus may be displayed in the first or active portion of a user interface while past menus may be displayed in the second or historical portion of a user interface. In the historical portion, the menu items of each past menu may be displayed in the MUI in a linear fashion. All of the menu items from that menu level are displayed in a single line (horizontal or vertical). Each set of past menu items may be displayed in such a linear fashion while the menus as a whole may be displayed in a stacked or nested fashion. This feature is shown, e.g., in FIG. 2C, which shows MENU ITEMS displayed in a linear fashion and SUBMENU ITEMS displayed in a linear fashion. The relationship between the MENU ITEMS and the SUBMENU ITEMS is a stacked or nested relation. Accordingly, among a single menu level, the menu items are adapted to be displayed in a linear fashion while the previously navigated and subsequent menu levels are adapted to be displayed in a nested fashion. A menu of choices may be displayed in a graphical wheel that rotates the choices in a direction, for example, horizontal or vertical (for example, left and right, up and down) or another direction. In another aspect, a menu of choices may be displayed as a graphical slider that slides the choices in a direction, for example, horizontal or vertical (for example, left and right, up and down) or another direction. For instance, an initial menu level (first level) may be displayed horizontally and slide left and right, and the next menu level (second level) may be displayed vertically and rotate up and down.
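As one hedged illustration of the current/past menu relationship described above, the Java sketch below models the active portion as a current item list and the historical portion as a stack of past menus, each remembering which of its items was selected; the PastMenu and MenuState names are assumptions for illustration only.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Hypothetical sketch of the current/past menu state described above:
// the active portion holds the current menu; the historical portion is
// a stack of past menus, each remembering which item was selected.
final class PastMenu {
    final List<String> items;   // all items, selected and unselected, shown in one line
    final int selectedIndex;    // the past-selected item; the others are past-unselected

    PastMenu(List<String> items, int selectedIndex) {
        this.items = items;
        this.selectedIndex = selectedIndex;
    }
}

final class MenuState {
    private List<String> currentItems;                           // active (first) portion
    private final Deque<PastMenu> history = new ArrayDeque<>();  // historical (second) portion

    MenuState(List<String> firstMenu) {
        this.currentItems = firstMenu;
    }

    // Selecting an item relocates the current menu to the history stack
    // and installs the submenu indicated by the selection as the current menu.
    void select(int index, List<String> nextMenu) {
        history.push(new PastMenu(currentItems, index));
        currentItems = nextMenu;
    }
}
```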
In yet another aspect, menus of choices may be displayed as a series of concentric circles, each menu level displayed as a circle with menu choices (also referred to as options or menu items). For instance, an initial menu level (first level) may be displayed in the center circle, the next menu level (second level) may be displayed in the next circle (second circle) that surrounds the center circle, a further menu level (third level) may be displayed in yet another circle that surrounds the second circle, and so forth. Still yet, menus of choices may be displayed or visualized as a graphical decision tree with nodes and edges. Each level of the graphical decision tree may represent a menu level with choices. In one embodiment, the wheel and/or the slider need not rotate fully, for example, may not make a full revolution or circle around. For instance, the wheel and/or the slider rotates or slides from a beginning menu item to an ending menu item, and back from the ending menu item to the beginning menu item. In this way, for example, the beginning and end of the menu are always apparent because the two do not merge or come together. This technique decreases processing time because the wheel and/or the slider is able to convey (and a user is able to immediately understand) the full menu of choices with a clear indication as to where or which is the first menu item and where or which is the last menu item in the choices presented by the wheel and/or the slider. In further embodiments, the wheel and/or slider may rotate fully to permit a user to easily access the beginning of a menu after reviewing the entire menu. In such embodiments, a visual indicator may be provided to indicate that the menu has been rotated through a full rotation and back to the beginning. In various embodiments, the terms “software protocol” and “computer instructions” are used to describe software instructions or computer code configured to carry out various tasks and operations. As used herein, the term “manager” refers broadly to a collection of software instructions or code configured to cause one or more processors to perform one or more functional tasks. For convenience, the various managers, computer instructions, and software protocols will be described as performing various operations or tasks, when, in fact, the managers, computer instructions, and software protocols program hardware processors to perform the operations and tasks. Although described in various places as “software,” it is understood that “managers,” “software protocols,” and “computer instructions,” as used herein, may equally be implemented as firmware, software, hardware, or any combination thereof for instructing a computer or other electronic device to perform and/or carry out a series of steps and/or instructions. Furthermore, embodiments herein are described in terms of method steps, functional steps, and other types of occurrences, such as the display of menus, the selection of options, etc. Although not explicitly stated in every instance, it will be understood that these actions occur according to computer instructions or software protocols executed by one or more computer processors. Functionality of the managers and software protocols discussed herein may be provided by the issuance of one or more commands. As discussed herein, “commands” issued by managers and software protocols refer to the signals and instructions provided to various aspects of the computing system to cause various actions to occur.
Commands may be issued from one manager to another manager and/or may be issued to other components of the system. For example, a manager may provide a command to cause display of certain visual components within a menuing interface. Such a command may be directed towards a physical display screen and may include the required signals and instructions to generate the visual components. As used herein, when a manager is described as performing an action or carrying out certain functionality, it is to be understood that the manager has issued a command to cause such action or functionality to occur. In various embodiments, the term “module” is used herein to refer to a specific suite of software protocols and computer instructions to generate, maintain, and operate the multiple components of a MUI as described herein. The one or more processors described herein may be configured to execute multiple software protocols so as to provide a methodical user interface module. As used herein, “methodical user interface module” refers to any of a subset of modules providing specific user interfaces. For example, an admin console module, audit trail module, and reader module are provided as specific methodical user interface modules to carry out tasks related to system administration, auditing, and plate reading, respectively. Each MUI module may be understood to include at least a hierarchical menu tree including multiple layered menus. Each module may further include preferred default visual components, preferred default exclusion and limitation lists, and other features specific to the module. Other modules are discussed in greater detail below and throughout the present disclosure. Throughout the present disclosure, multiple aspects of various MUI modules are discussed. The discussed aspects of any specific MUI module are non-exclusive and non-limiting and may equally be applied to any other MUI module. Accordingly, any MUI feature discussed herein, either broadly or with respect to a specific module, may also be applied broadly to the MUI in general and/or to any other specific MUI module discussed herein. Referring now to FIG. 56, a methodical user interface control system 1102 consistent with embodiments hereof is illustrated. The methodical user interface control system 1102 includes one or more processors 1110 (also interchangeably referred to herein as processors 1110, processor(s) 1110, or processor 1110 for convenience), one or more storage device(s) 1120, and/or other components. The CPU 2 (see FIG. 19) and the hardware processor 1804 (see FIG. 18) may be examples of a processor 1110 configured as described herein. In other embodiments, the functionality of the processor may be performed by hardware (e.g., through the use of an application specific integrated circuit (“ASIC”), a programmable gate array (“PGA”), a field programmable gate array (“FPGA”), etc.), or any combination of hardware and software. The storage device 1120 includes any type of non-transitory computer readable storage medium (or media) and/or non-transitory computer readable storage device. Such computer readable storage media or devices may store computer readable program instructions for causing a processor to carry out one or more methodologies described here. The memory 4 (see FIG. 19) and the memory device 1802 (see FIG. 18) may be examples of a storage device 1120.
Examples of the computer readable storage medium or device may include, but are not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination thereof, for example, such as a computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), and a memory stick. In embodiments, the storage device 1120 may include multiple storage devices 1120. Multiple storage devices 1120 consistent with embodiments hereof may be collocated and/or non-collocated. For example, one physical system may contain a first memory storage device 1120 and a second physical system may contain a second memory storage device 1120. In embodiments, the processor 1110 and the storage device 1120 may be implemented via a cloud computing platform or other form of distributed computing. In such implementations, the processor and the storage device may each include a plurality of processors and storage devices for carrying out the tasks and functions described herein. The processor 1110 is programmed by one or more computer program instructions and/or software protocols, referred to as “managers,” stored on the storage device 1120. For example, the processor 1110 is programmed by a display manager 1050, an input manager 1052, a menu manager 1054, a user manager 1056, an exclusion manager 1058, a network manager 1060, and a data storage manager 1064. It will be understood that the functionality of the various managers as discussed herein is representative and not limiting. Furthermore, the functionality of the various managers may be combined into one or more modules, applications, programs, services, tasks, scripts, libraries, or executable code, as may be required. The managers as discussed herein may be implemented to manage a MUI in various embodiments to complete various tasks that require process workflows. Although various software implementations of the MUI are described herein with respect to one or more specific embodiments, the methods and functionalities provided by the aforementioned managers may be implemented to provide MUIs for any process workflow. The aforementioned managers may be functionally implemented through software libraries. The various components of the methodical user interface control system 1102 work in concert to provide a user with a methodical user interface display via any type of display hardware, including screens, projections, touchscreens, headsets, etc. In embodiments, the methodical user interface control system 1102 implements one or more software protocols for interactively navigating a user through path(s) of menu items, options, or choices in a MUI. The software managers described above may include sets of computer instructions, software libraries, dynamic link libraries, application program interfaces, function libraries, and other compilations of executable code. The methodical user interface control system 1102 may further include appropriate graphics libraries containing the graphics required to implement and instantiate the various visual components described herein.
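A minimal structural sketch, assuming hypothetical Java interfaces, of how a processor might be programmed by several of the managers named above is shown below; the remaining managers are omitted for brevity, and none of the names reflect an actual API of the described system.

```java
// Hypothetical structural sketch: the control system as a holder of the
// managers named above. Each manager is modeled as an interface whose
// implementation issues commands as described herein.
interface DisplayManager { void display(Object displayCommand); }
interface InputManager   { void onInput(Object inputSignal); }
interface MenuManager    { void navigate(Object selection); }

final class MethodicalUiControlSystem {
    private final DisplayManager displayManager; // cf. display manager 1050
    private final InputManager inputManager;     // cf. input manager 1052
    private final MenuManager menuManager;       // cf. menu manager 1054
    // user, exclusion, network, and data storage managers omitted for brevity

    MethodicalUiControlSystem(DisplayManager d, InputManager i, MenuManager m) {
        this.displayManager = d;
        this.inputManager = i;
        this.menuManager = m;
    }
}
```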
The managers may be customized for use in a specific implementation through the use of various data structures representative of module information, including tables, linked lists, databases, b-trees, binary trees, heaps, stacks, queues, hash tables, red-black trees, binomial heaps, Fibonacci heaps, and any other suitable data structure. Accordingly, managers of the MUI may be provided as customizable code libraries configured to interface, interact, and otherwise integrate with additional computer instructions and data structures for the purpose of providing a MUI module capable of performing specific tasks. The display manager 1050 is a software protocol in operation on the methodical user interface control system 1102. The display manager 1050 is configured to manage the methodical user interface display, including all visual components thereof. The display manager 1050 may be configured to issue commands to cause the display of various menu items as required. The input manager 1052 is a software protocol in operation on the methodical user interface control system 1102. The input manager 1052 is configured to manage all inputs received by the methodical user interface control system 1102, including, but not limited to, user inputs and inputs from other systems. The input manager 1052 may be configured to issue commands to other managers of the methodical user interface control system 1102 according to inputs received. User actions, such as clicking and other screen interactions, cause the input manager 1052 to receive a signal indicative of the user interaction. Receipt of such a signal causes the appropriate manager of the methodical user interface control system 1102 to provide a command in response, thereby causing one or more actions, including MUI navigation, menu display, etc., as discussed herein. For ease of explanation, such interactions and user inputs may be referred to as causing a specific response, when in fact the specific response is caused by the methodical user interface control system 1102 responsive to the interaction or user input. The menu manager 1054 is a software protocol in operation on the methodical user interface control system 1102. The menu manager 1054 is configured to manage the hierarchical menu trees and all menu items associated with the menu trees. The menu manager 1054 is configured to select appropriate menu items for display, to determine a next menu to display, and otherwise manage all aspects of navigation through a menu tree. The menu manager 1054 may be configured to issue commands to other managers of the methodical user interface control system 1102 according to menu navigational requirements. The user manager 1056 is a software protocol in operation on the methodical user interface control system 1102. The user manager 1056 is configured to manage user access to the methodical user interface control system 1102. The user manager 1056, for example, manages user authorization, including the maintenance of user authorization records, the validation of user credentials, and other required user authentication tasks. The exclusion manager 1058 is a software protocol in operation on the methodical user interface control system 1102. The exclusion manager 1058 is configured to manage menu item exclusions and limitations. As discussed herein, menu items may be excluded or limited based on various factors. The exclusion manager 1058 may be configured to issue commands to implement such exclusions and limitations.
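As a hedged illustration of the exclusion behavior just described, the following Java sketch filters a menu's items against a per-user exclusion list before display; the ExclusionFilter name and the use of simple string items are assumptions for illustration only.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical sketch of the exclusion behavior described above: menu
// items found on a per-user exclusion list are removed before display.
final class ExclusionFilter {
    private final Set<String> excludedItems;

    ExclusionFilter(Set<String> excludedItems) {
        this.excludedItems = excludedItems;
    }

    // Returns only the menu items this user is permitted to see.
    List<String> filter(List<String> menuItems) {
        return menuItems.stream()
                .filter(item -> !excludedItems.contains(item))
                .collect(Collectors.toList());
    }
}
```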
The network manager 1060 is a software protocol in operation on the methodical user interface control system 1102. The network manager 1060 is configured to establish, maintain, and manage all network communications between the methodical user interface control system 1102 and various other system components discussed herein. The established communications pathway may utilize any appropriate network transfer protocol and provide for one-way or two-way data transfer. The network manager 1060 may establish as many network communications as required to communicate with all system components as necessary. The data storage manager 1064 is a software protocol in operation on the methodical user interface control system 1102. The data storage manager 1064 is configured to store, retrieve, archive, manipulate, and manage all data structures and data storage devices that the methodical user interface control system 1102 may interface with. The data storage manager 1064 is configured to issue commands to any of the various data storage devices discussed herein to manage the storage and retrieval of data. The above descriptions of the display manager 1050, input manager 1052, menu manager 1054, user manager 1056, exclusion manager 1058, network manager 1060, and data storage manager 1064 provide an overview of the capabilities and tasks of these managers. The managers are not limited by the above description, and, in various embodiments as discussed below, may have additional, different, and/or more capabilities. The described structure of the methodical user interface control system 1102 is by way of example only, and it is to be understood that the various functionalities and capabilities of the computer-instruction-programmed processors described herein may be carried out, implemented, or effected by a software system of alternative structure. The methodical user interface control system 1102 may present menu choices among one or more hierarchical menu levels, wherein each menu level can include one or more menu items or choices. Hierarchical menu levels, as described herein, refer to the multiple levels in a menuing system. A selection in a first, highest, menu level causes navigation to a lower hierarchical level, i.e., a second menu, submenu, or sub-level. Selection within the second menu or submenu causes navigation to a still lower hierarchical level, i.e., a third menu, sub-submenu, or sub-sub-level. Hierarchical menu structures may include any suitable number of levels. In some embodiments, selection at one level may cause navigation to a level one, two, three, or more levels below the current level. Each menu may present options in an active portion of the interface. The menu choices may be selectable, representing options for the user to select. Selection of a menu choice or option may trigger the display or presentation of a subsequent, following, or submenu, which may include several menu choices or submenu choices of its own. As the user selects menu options that lead to new menus, the menu items of the old menu may be moved from the active portion to a historical portion of the interface, allowing the user to easily move to new menu choices while retaining knowledge of previous menu choices. These features are described in greater detail below with respect to FIGS. 2A-2O, FIG. 3, and FIG. 57. FIG. 57 is a flow chart showing a process 5200 of navigating a path of hierarchical menu levels adapted for output to a user interface, such as a GUI, MUI, and/or any other type of UI discussed herein.
The process 5200 is performed on a computer system having one or more physical processors programmed with computer program instructions that, when executed by the one or more physical processors, cause the computer system to perform the method. The one or more physical processors are referred to below as simply the processor. In embodiments, the process 5200 is carried out via the methodical user interface control system 1102 as described herein. The methodical user interface control system 1102 represents an example of a hardware and software combination configured to carry out the process 5200, but implementations of the process 5200 are not limited to the hardware and software combination of the methodical user interface control system 1102. The process 5200 may also be carried out and/or implemented by any other suitable computer system as discussed herein. Description of the process 5200 is not limiting, and the various operations may be altered or revised in accordance with embodiments described herein. In an operation 5202, the process 5200 includes providing a first display command. The display manager 1050 provides the first display command for the display of a first menu having one or more user-selectable items to be displayed on a first portion of a UI display. The first menu may be displayed in the first portion according to any of the visual components disclosed herein, for example, a wheel-type visual component. The selectable items of the first menu may be determined, for example, by the menu manager 1054 as discussed herein. In an operation 5204, the process 5200 includes receiving a selection. The input manager 1052 receives a selection of a menu item from the first menu according to an input provided to the system. The input may be a user selection and/or may be an automated selection as discussed herein. A user selection may be received, for example, from a user clicking on a highlighted or emphasized menu item. Upon selection, the menu item becomes a past-selected menu item. In an operation 5206, the process 5200 includes providing a relocation command. The menu manager 1054 provides a relocation command for the first menu to be relocated from the first portion of the UI display to the second portion of the UI display. The relocation command may be provided responsive to the selection received. Upon relocation, the menu items of the first menu include the one or more past-selected menu item(s) and the past-unselected menu item(s) that were not selected to cause the relocation. Display of the first menu in the second portion may be provided according to any of the visual components disclosed herein, for example, a slider-type visual component. The relocation command of the menu manager 1054 may be sufficient to cause an update to the UI display. In other embodiments, the relocation command may be combined with and/or include a display command provided by the display manager 1050. In an operation 5208, the process 5200 includes providing a second display command. The second display command is provided by the display manager 1050 responsive to the selection of the menu item. The second display command causes a second menu of one or more user-selectable items to be displayed on the first portion of the UI display, i.e., after the first menu has been relocated. The second menu may be displayed according to any of the visual components disclosed herein, for example, a wheel-type visual component.
In embodiments, the second display command may incorporate information received from the menu manager 1054 related to hierarchical menu tree navigation. After relocation of the first menu and display of the second menu, the first menu, containing one or more past-selected and past-unselected menu items of the hierarchical menu tree, may be viewed in the second portion concurrently with the second menu being viewed in the first portion. The process 5200 may further include additional or different operational steps as described throughout the present disclosure. Referring to FIG. 1, at an operation 102, a current menu of choices (e.g., a list of menu items) may be displayed on a first portion of the user interface display. At an operation 104, the user interface allows the user to select a menu item from the current menu of choices displayed on the first portion of the user interface display and to drill down through level(s) of menu choices based on selecting a menu item from a prior level of menu choices. At an operation 106, past selected and past unselected menu item(s) of the drilled-down levels are displayed on a second portion of the user interface display. The past unselected menu items are displayed as selectable options. The past selected menu item (or choice) may be also displayed as a selectable option. At an operation 108, the user interface allows the user to jump to a different path of menu choices by allowing the user to select a past unselected menu item from a previously navigated menu level displayed on the second portion of the user interface display. The user interface displays both the first portion and the second portion so that they are both viewable on the same screen of the user interface, for example, viewable concurrently. In one embodiment, the first portion and the second portion are shifted to substantially center the first portion displaying the current menu of choices on the user interface display while fitting both the first portion and the second portion on the user interface display. Thus, for example, the first portion and the second portion need not remain in a fixed location of the user interface display during the navigating or drilling down (or up) through different levels of menu choices. In one embodiment, the user interface, responsive to detecting a selection of a menu item from the current menu of choices, relocates the current menu of choices to the second portion of the user interface display, and displays on the first portion of the user interface display a next level of menu choices based on the selection of the menu item. The relocated current menu of choices is shown on the second portion of the user interface display and becomes the past selected and past unselected menu items of a past menu level. The next level of menu choices is shown on the first portion as the current menu of choices. As described above, a menu of choices may be displayed as a rotatable graphical wheel showing menu items (choices or options) in which the menu items on the wheel can be shown as the wheel is rotated. Likewise, a menu of choices may be displayed as a graphical slider in which the menu items on the slider can be shown as the slider is slid. The action of rotating or sliding may be performed responsive to a motion of a finger on a touch screen or an input from a pointing device or another input device. In another aspect, the action of rotating or sliding may be performed automatically by the user interface (or hardware executing the user interface) in a timed manner.
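As a hedged illustration of the non-wrapping rotation described earlier (the wheel or slider moving from a beginning menu item to an ending menu item and back without the two merging), the Java sketch below clamps the highlighted index rather than wrapping it; the MenuWheel name and its fields are hypothetical.

```java
// Hypothetical sketch of the non-wrapping wheel rotation described
// earlier: the highlighted index moves between the first and last menu
// items but never wraps, so the beginning and end of the menu stay apparent.
final class MenuWheel {
    private final int itemCount;
    private int highlighted;   // index of the item in the decision-making zone

    MenuWheel(int itemCount) {
        this.itemCount = itemCount;
        this.highlighted = 0;
    }

    // steps > 0 rotates forward; steps < 0 rotates backward.
    void rotate(int steps) {
        highlighted = Math.max(0, Math.min(itemCount - 1, highlighted + steps));
    }

    int highlightedIndex() {
        return highlighted;
    }
}
```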
In one embodiment, the rotating or sliding direction may switch to a different orientation as the menu of choices is relocated from the first portion to the second portion. The current menu of choices may be displayed in a first visual orientation on the first portion of the user interface display and the drilled-down levels of menu choices that include the past selected and past unselected menu items may be displayed on the second portion of the user interface display in a second visual orientation. In one embodiment, the current menu of choices is displayed as a graphical rotating wheel or a slider that rotates or slides the choices in a direction of the first visual orientation. In one embodiment, a drilled-down level in the drilled-down levels of menu choices is displayed as a graphical rotating wheel or a slider that rotates or slides choices of the drilled-down level in a direction of the second visual orientation. In one embodiment, the second visual orientation is substantially orthogonal to the first visual orientation. In one embodiment, the first visual orientation is a vertical orientation and the second visual orientation is a horizontal orientation. In another embodiment, the first visual orientation is a horizontal orientation and the second visual orientation is a vertical orientation. In one embodiment, the drilled-down levels of menu choices relocated to the second portion are displayed as a stack of menu levels. In another embodiment, the first portion and the second portion may be displayed as a series of concentric circles. For instance, the first portion may be displayed as the center circle of the series of concentric circles, and the past menu levels as the circles outside or surrounding the center circle. Each circle representing a menu level may include menu items (choices or options) that are rotatable, for instance, in order for the user to be able to view all options present on that menu level. Upon selecting a menu item from the current menu of choices, that current menu of choices is relocated to an outer circle and the center circle displays the next menu of choices based on the menu item that is selected. For instance, a circle (e.g., dial) may include window(s) that show the active option, and turning the circle (e.g., dial) shows other options in the window(s). While the dial options may seem finite, the dial options may be infinite. For example, the dial keeps spinning until the last option (or the beginning option, if turning backward) is shown. In another aspect, the window may be opened up to show the selected option as lit up, with one (or more) option to the left and another (or more) option to the right. In yet another embodiment, the first portion and the second portion may be displayed as a graphical decision tree. In one embodiment, the past selected menu items in the drilled-down levels displayed on the second portion of the user interface display are displayed highlighted relative to the past unselected menu items of the drilled-down levels displayed on the second portion of the user interface display. In an embodiment, upon encountering the last level in a chosen path of menu levels, and, for example, upon performing a function related to the chosen item in the last menu level, the user interface may return the current menu view to another item in an upper level, for example, the first menu list. For instance, the current menu of choices may again be the first initial menu level and may be displayed in the first portion.
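The jump to a different path described above (selecting a past-unselected menu item from a previously navigated level, per operation 108) can be sketched as truncating all levels below the chosen one; the following minimal Java example assumes a hypothetical PathNavigator name and a simple list-of-levels representation.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of jumping to a different path: selecting a
// past-unselected item at a given level discards all deeper levels and
// makes that level's items available as a new current menu again.
final class PathNavigator {
    // Each entry records the items of one past menu level, in drill-down order.
    private final List<List<String>> pastLevels = new ArrayList<>();

    void push(List<String> menuItems) {
        pastLevels.add(menuItems);
    }

    // Jump back: truncate every level below the chosen one and return the
    // items of the chosen level so a new current menu can be displayed.
    List<String> jumpTo(int level) {
        if (level < 0 || level >= pastLevels.size()) {
            throw new IllegalArgumentException("no such menu level: " + level);
        }
        pastLevels.subList(level + 1, pastLevels.size()).clear();
        return pastLevels.get(level);
    }
}
```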
In an embodiment, the first and second portions are not independent, but linked to each other to make navigation more efficient, leading the user along a path, allowing the user to see the path the user is going through, and allowing deviation from the path that has been set for the user to a different path, for example, being able to see backward and forward breadcrumbs to see where the user has been and where the user may go in a path of menu choices. The user interface in one embodiment is able to guide the user through efficient path choices such that the user need not wander about the user interface trying to find the next appropriate path or action. Such efficient path guidance allows for saving computer resources, for instance, in central processing unit (CPU) cycles and memory usage spent in swapping in and out of processor threads and memory elements in a computer running the user interface. Referring now to FIGS. 18-19, additional example systems for carrying out the methods described with respect to FIG. 1 are provided. As discussed above, aspects of the systems presented in FIGS. 18 and 19 may be embodiments and/or implementations of the methodical user interface control system 1102 shown in FIG. 56. FIG. 18 illustrates components of a graphical user interface (GUI) system in one embodiment. One or more hardware processors 1804 may execute a graphical user interface module and perform the graphical user interface functions described above, displaying the graphical elements as described above on a user interface display device 1806 coupled to the one or more hardware processors 1804. A memory device 1802 may store a list of menus and a list of menu items or choices available for each of the list of menus, which the graphical user interface module may access to display on the display device 1806. The display device 1806 may include a screen device and/or a touchscreen device. One or more pointing devices 1808 may be coupled to the one or more hardware processors 1804 for allowing input via the display device 1806. The memory device 1802 may be any type of computer readable storage media as described herein. Although FIG. 18 specifically refers to a GUI system, this is by way of example only. It is understood that the methods and techniques described herein may also be carried out via other MUIs, including text based, virtual reality based, augmented reality based, mixed reality based, and others. For instance, a hardware processor 1804 coupled to the memory device 1802 and the display device 1806 may display a current menu of choices on a first portion of a user interface display, and allow a user to select a menu item from the current menu of choices displayed on the first portion of the user interface display and to drill down through levels of menu choices based on selecting a menu item from a prior level of menu choices. The hardware processor 1804 may also display, on a second portion of the user interface display, past selected and past unselected menu items of the drilled-down levels, wherein the past unselected menu items are displayed as selectable options. The hardware processor 1804 may also allow the user to jump to a different path of menu choices by allowing the user to select a past unselected menu item from a previously navigated menu level displayed on the second portion of the user interface display. The hardware processor 1804, for instance, may perform the method described with respect to FIGS. 1 and 3.
The GUI techniques described above may be implemented using computer languages such as Java and JavaScript, but are not limited to those languages. In an embodiment, the functionalities and modules of the system and methods of the present disclosure may be implemented or carried out in a distributed manner on different processing systems or on any single platform, for instance, accessing data stored locally or in a distributed manner on a computer network. Similarly, software protocols and managers of the present disclosure may be implemented or carried out in a distributed manner on different processing systems or on any single platform, for instance, accessing data stored locally or in a distributed manner on a computer network. The GUI techniques may be carried out on any type of computing device, e.g., a desktop computer, laptop computer, mobile device (e.g., Android or Apple iOS), or tablet, and use any type of interface, e.g., mouse, touchscreen, etc. The GUI techniques may also be carried out on an instrument, e.g., an assay instrument for performing biological assays such as immunological or nucleic acid assays. In some embodiments, the instrument performs electrochemiluminescence assays. In some embodiments, the instrument is an automated assay system, for example, comprising (a) a single robotic controlled 8-channel pipettor, (b) a single robotic controlled assay plate gripper arm, (c) a single 96-channel assay plate washer, (d) a single plate reader, (e) one or more plate shakers with a total capacity of at least 5 plate shaking locations, and (f) a processor adapted to execute an assay process for analyzing a plurality of samples in 96-well plates. Various embodiments may be implemented as a program, software, or computer instructions embodied or stored in a computer or machine usable, readable, or executable medium, which causes the computer or machine to perform the steps of the method when executed on the computer, processor, and/or machine. For instance, a program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine to perform various functionalities and methods described in the present disclosure, may be provided. The system and method of the present disclosure may be implemented and run on a general-purpose computer or special-purpose computer system (or device). The computer system may be any type of known or to-be-known system and may include a hardware processor, memory device, a storage device, input/output devices, internal buses, and/or a communications interface for communicating with other computer systems in conjunction with communication hardware and software, etc. The GUI techniques of the present disclosure may also be implemented on a mobile device or the like. Implementing the various computer instructions, software protocols, and modules as described herein on a general-purpose computer may serve to transform a general-purpose computer into a special-purpose computer system configured to carry out the specific methods, tasks, operations, and actions described herein. FIG. 19 illustrates an example computer system 100 that may implement the system and/or method of the present disclosure. One or more central processing units (CPUs) 2 may include one or more arithmetic/logic units (ALUs), fast cache memory, and registers and/or a register file. Registers are small storage devices; a register file may be a set of multiple registers. Caches are fast storage memory devices, for example, comprising static random access memory (SRAM) chips.
Caches serve as temporary staging areas to hold data that the CPU 2 uses. Shown is a simplified hardware configuration. The CPU 2 may include other combinational circuits and storage devices. One or more central processing units (CPUs) 2 execute instructions stored in memory 4, for example, transferred to registers in the CPU 2. Buses 6, for example, are electrical wires that carry bits of data between the components. Memory 4 may include an array of dynamic random access memory (DRAM) chips, and store programs and data that the CPU 2 uses in execution. The system components may also include input/output (I/O) controllers and adapters connected to the CPU 2 and memory 4 via a bus, e.g., an I/O bus, and connect to I/O devices. For example, a display/graphics adapter 8 connects a monitor 28 or another display device/terminal; a disk controller 10 connects hard disks 24, for example, for permanent storage; a serial controller 12 such as a universal serial bus (USB) controller may connect input devices such as a keyboard 22 and a mouse 20, and output devices such as printers 26; and a network adapter 14 connects the system to another network, for example, to other machines. The system may also include expansion slots to accommodate other devices to connect to the system. For example, a hard disk 24 may store the program of instructions and data that implement the above-described methods and systems, which may be loaded into the memory 4, then into the CPU's storage (e.g., caches and registers) for execution by the CPU (e.g., ALU and/or other combinational circuit or logic). In another aspect, all or some of the program of instructions and data implementing the above-described methods and systems may be accessed and/or executed over the network 18 at another computer system or device. FIG. 19 is only one example of a computer system. The computer system that may implement the methodologies or system of the present disclosure is not limited to the configuration shown in FIG. 19. Rather, another computer system may implement the methodologies of the present disclosure, for example, including but not limited to special processors such as field programmable gate arrays (FPGAs) and accelerators. In one embodiment, the present invention may be embodied as a computer program product that may include a computer readable storage medium (or media) and/or a computer readable storage device. Such computer readable storage medium or device may store computer readable program instructions for causing a processor to carry out one or more methodologies described here. In one embodiment, the computer readable storage medium or device includes a tangible device that can retain and store instructions for use by an instruction execution device. Examples of the computer readable storage medium or device may include, but are not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination thereof, for example, such as a computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), and a memory stick. The computer readable medium can comprise both computer readable storage media (as described above) and computer readable transmission media, which can include, for example, coaxial cables, copper wire, and fiber optics.
Computer readable transmission media may also take the form of acoustic or light waves, such as those generated during radio frequency, infrared, or wireless transmission, or other media including electric, magnetic, or electromagnetic waves. The term “computer system” as used in the present application may include a variety of combinations of fixed and/or portable computer hardware, software, peripherals, mobile, and storage devices. The computer system may include a plurality of individual components that are networked or otherwise linked to perform collaboratively or may include one or more stand-alone components. The hardware and software components of the computer system of the present application may include and may be included within fixed and portable devices such as a desktop, laptop, and/or server. A module may be a component of a device, software, program, or system that implements some “functionality,” which can be embodied as software, hardware, firmware, electronic circuitry, etc. The storage device 1120, of which the memory 4 and the memory device 1802 represent examples, may be implemented as one or more computer readable storage media as described herein and may be employed to store various data and information with respect to the computer system 100. In an embodiment, the storage device 1120 may store registration information such as a user identifier and a user account number. Registration information may be stored via data storage commands issued by the data storage manager 1064. In an embodiment, the registration information is stored in the storage device 1120. The registration information may be stored as one or more data structures. These data structures can include linked lists, b-trees, binary trees, heaps, stacks, queues, hash tables, red-black trees, binomial heaps, Fibonacci heaps, etc. In one example, the registration information may be stored in a registration table. The registration information includes at least a user identifier associated with the user and an account number. Since multiple users may be assigned to the same account number, the system may track this using a shared account flag, such as a semaphore, bit, or the like. When multiple users are assigned to the same account number, the shared account flag may be set to a first specific value. Otherwise, the shared account flag may be set to a different specific value. Using a shared account flag is one way of tracking a shared account, and this disclosure is not limited to this example. Other methods may be used. The shared account flag may be a column of the registration table. For each user identifier having the same account number, the shared account flag is set to the specific value and associated with the user identifier. In other aspects, multiple account numbers may be linked together. In embodiments, the user manager 1056 may issue commands for managing user account numbers. In an embodiment in accordance therewith, the multiple account numbers may represent a team such as a research, project, corporate, university, or experiment team. The system may track the multiple account numbers and team using a multiple account flag. When different account numbers are linked, the multiple account flag may be set to a first specific value; otherwise, the multiple account flag may be set to a different specific value. Using a multiple account flag is one way of tracking the linking of the different account numbers, and this disclosure is not limited to this example. Other methods may be used.
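As a hedged sketch of the registration table just described, the following Java class models one row with the shared account flag as a field; the multiple account flag, elaborated in the next paragraph, is shown as a further field. All field names are illustrative assumptions, not the schema of any described embodiment.

```java
// Hypothetical sketch of a registration table row with the account flags
// described above; field names are illustrative only.
final class RegistrationRecord {
    final String userIdentifier;
    final String accountNumber;
    final boolean sharedAccountFlag;   // set when several users share accountNumber
    final boolean multipleAccountFlag; // set when this account is linked to others

    RegistrationRecord(String userIdentifier, String accountNumber,
                       boolean sharedAccountFlag, boolean multipleAccountFlag) {
        this.userIdentifier = userIdentifier;
        this.accountNumber = accountNumber;
        this.sharedAccountFlag = sharedAccountFlag;
        this.multipleAccountFlag = multipleAccountFlag;
    }
}
```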
In one embodiment, the multiple account flag may be a column of the registration table. For each linked account number, the multiple account flag is set to the specific value and associated with the account numbers. In other embodiments, the storage device 1120 may also store login historical data. The login historical data may be received via the input manager 1052, organized via the user manager 1056, and stored via the data storage manager 1064. The login historical data may include the user identifier/account number and time/date information for each time a user (or different users) logs into the system. The login historical data may be maintained in the storage device 1120 for a predetermined or indeterminate period of time. The predetermined period of time may be based on a specific application being executed or to be executed. In other embodiments, the storage device 1120 may also store user selection history. The user selection history may be received via the input manager 1052, organized via the user manager 1056, and stored via the data storage manager 1064. The user selection history may include a selected menu item, the user identifier/user account associated with the selection, and the time/date of the selection. The user selection history may also be stored in the storage device 1120 for a predetermined or indeterminate period of time. The predetermined period of time may be selected according to the MUI module from which the user selection was initially made. The predetermined period of time for stored user selection history and the login historical data may be the same. In other embodiments, the storage device 1120 may include exclusion information. The exclusion information may include menu items and/or choices that are to be excluded from display in hierarchical menu levels on the MUI for one or more users, devices, or interfaces. The exclusion information may be managed by commands issued via the exclusion manager 1058 and stored by commands issued via the data storage manager 1064. The commands that are issued or provided by the menu manager 1054 of the methodical user interface control system 1102 allow a user to move bi-directionally between hierarchical menu levels (backward and forward), backward being to a higher hierarchical menu level and forward being to a lower hierarchical menu level, including being able to view past or prior menu items that have been selected or not selected. For example, various menu levels and/or choices from one or more levels of a given path of hierarchical menus can be viewed concurrently on the MUI. In an embodiment, a display command may be provided by the display manager 1050 for a specific set of hierarchical menu level(s) to be displayed on a specific portion of the MUI. The display command is configured to cause display of one or more menus in one or more portions of the MUI. The specific hierarchical menu level may include one or more menu items (or choices). The display command may include the one or more menu items, a specific display order, a display orientation, a display size (and format), and the manner in which the choices are displayed, such as a scrolling method, although other manners in which to arrange and/or display the choices are contemplated as well. In an embodiment, the scrolling method may define the display orientation and, thus, the display command does not necessarily include a separate display orientation and scrolling method. In an embodiment, each menu item in a specific hierarchical menu level may be displayed in the same size.
In other embodiments, one or more specific menu items may be displayed larger or smaller than other menu items. The display command may specify the scrolling method. For example, the display command may specify that the menu items are to be displayed in a graphical wheel that rotates the items in a direction, for example, horizontal or vertical (e.g., left and right or up and down) or another direction. In another embodiment, the display command may specify that the menu items are to be displayed as a graphical slider that slides the items in a direction, for example, horizontal or vertical (e.g., left and right, up and down) or another direction. Different display commands may specify different scrolling methods or orientations, or different commands can employ the same or similar scrolling methods or orientations. In an embodiment, the orientations in the different commands (such as the first command and the second command) may specify that the orientations are substantially orthogonal to each other. In other embodiments, orientations can be horizontal, substantially horizontal, vertical, substantially vertical, concentric, and substantially concentric vis-à-vis one another. As used herein, “substantially” may be ±5°. In other aspects, “substantially” may be ±10°. In other aspects, “substantially” may be ±15°. In other aspects, “substantially” may be determined by percentage, such as 80% or 90%. FIGS. 2A-2O show examples of user interface displays in different embodiments, details of which are further described below. FIG. 3 is a flow diagram illustrating a method of interactively displaying interactive items on a user interface display for computer-user interaction in another aspect, for example, details of the method wherein a vertical and horizontal switching of menu levels may take place. The method may be performed automatically by at least one hardware processor. At operation 302, a list of menu items may be displayed on a first portion of the user interface display. The list of menu items is displayed in a first visual orientation on the first portion. For instance, the first visual orientation may be a vertical orientation. The list of menu items may include one or more menu items from a first menu and may be displayed in response to a first display command provided by the display manager 1050. FIG. 2A shows an example of a user interface display in one embodiment. As shown, the menu items 202 are displayed in one orientation, for example, vertically, in a first portion 204 of the display 206. The menu items are interactive, for example, in that the items are selectable, and a selection (e.g., a user selecting a menu item by clicking on a user interface menu item) causes a computer to execute programmed functions. As illustrated in FIG. 2A, the menu items 202 of a first menu are provided in a first portion 204 of the interface in a wheel oriented vertically, i.e., in a first orientation. The MUI includes a display 206. The first portion 204 may display the menu items 202 in response to a first display command for a first menu of user-selectable choices to be displayed on the first portion 204 of the MUI. As discussed above, the first display command may be provided by the display manager 1050. The first display command includes the menu items for the first menu (which, in one embodiment, are stored in the storage device 1120), the scrolling method/orientation, and the size (and format). For example, the orientation for the menu items for the first menu (to be displayed in the first portion) may be vertical.
The first display command may also include a location of display, e.g., a location of the first portion. The first portion may be in a central location on the MUI. Each menu item may be selectable by the user. In an embodiment, the first portion may include a decision-making zone. The decision-making zone may be located at a central location within the first portion. The decision-making zone may be a location in the first or active portion wherein a prominent or highlighted menu item is displayed for immediate selection. For example, in FIG. 2A, MENU ITEM 4 is shown in a decision-making zone and is shown in a larger size font than the remaining menu items so as to be a prominent or highlighted menu item displayed for immediate selection. The first display command for causing provision of the first menu may specify that menu items displayed within the decision-making zone be emphasized or highlighted, such as being displayed in a larger size font than other menu items not in the decision-making zone. In other aspects, the menu item(s) displayed within the decision-making zone may be bolded, italicized, highlighted using a different color than the background, or underlined. In other embodiments, the first display command may specify that menu items displayed outside the decision-making zone be deemphasized, such as making the menu items smaller or faded with respect to the other menu items in the decision-making zone. The first display command is executed by the hardware processor and causes the first menu to be displayed on the first portion of the MUI. The MUI allows the user to select one or more menu items from the displayed menu items on the first portion 204 and to drill down through hierarchical menu level(s) of menu items based on selecting a menu item from prior and/or subsequent hierarchical menu level(s) of menu items. When a menu item(s) is selected from the first menu displayed on the first portion 204 of the MUI, the input manager 1052 receives and interprets the selection. As shown in FIG. 2A, all first menu items 202 displayed in the first portion 204 are selectable. MENU ITEM 4 is shown as a prominent menu item and is highlighted as being immediately selectable. As used herein, “immediately selectable” means that a single action, such as clicking by a user, causes the selection of the menu item. MENU ITEM 4 is selectable and highlighted as a prominent menu item while the other MENU ITEMS (1, 2, 3, 5, and N) are unhighlighted as receded menu items. The receded menu items are non-immediately selectable, meaning that they require more than one user action for selection. Clicking on the highlighted immediately selectable menu item by the user causes it to be selected. The other menu items may be highlighted for immediate selection through rotation of the wheel or by clicking on them. Receipt of a signal, by the input manager 1052, indicative of clicking on the immediately selectable menu item causes the input manager 1052 executing on the processor 1110 to detect the selection of the prominent immediately selectable menu item. Responsive to the selection, the input manager 1052 issues a command to the menu manager 1054 indicative of the selection. The menu manager 1054 then determines the new menu arrangement to be displayed according to the selection and provides a relocation command to the display manager 1050 to cause a change in the MUI.
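The selection-to-relocation flow just described may be sketched, under assumed hypothetical names, as follows: a click handled by the input manager leads the menu manager to compute and issue a relocation command to the display manager. The string-based command is a simplification for illustration; it stands in for the richer command structure described herein.

```java
// Hypothetical sketch of the selection flow described above: the input
// manager receives the click signal, informs the menu manager, and the
// menu manager determines the new arrangement and issues a relocation
// command to the display manager.
final class SelectionFlow {
    interface Display { void apply(String relocationCommand); }

    private final Display displayManager;

    SelectionFlow(Display displayManager) {
        this.displayManager = displayManager;
    }

    // Called (conceptually, by the input manager) when the prominent,
    // immediately selectable item is clicked.
    void onItemSelected(String menuId, String selectedItem) {
        // Menu manager logic: compute the relocation for the selected item...
        String relocation = "relocate menu " + menuId
                + " to historical portion; selected=" + selectedItem;
        // ...and command the display manager to update the MUI.
        displayManager.apply(relocation);
    }
}
```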
Referring back to FIG. 3, at operation 304, responsive to detecting a selection of a menu item from the list of menu items, the list of menu items is relocated to a second portion of the user interface display. The list of menu items is displayed in a second visual orientation on the second portion, the second visual orientation being substantially orthogonal (e.g., perpendicular) to the first visual orientation. For instance, the second visual orientation may be a horizontal orientation. A relocation command causes the first menu of menu choices 202 to be relocated from the first portion 204 to the second portion 208 of the MUI display 206. FIG. 2B illustrates the results of the relocation command. The relocation command may include the menu choices of the first menu to be displayed in the second portion 208, the size and orientation of display, the necessary visual components for display, an indication as to which menu item was selected to cause relocation, and any other information discussed herein with respect to the display command. The relocated first menu, displayed now in the historical or second portion 208 as a past menu, may include one or more or all of the menu items 202 and choices previously made available to the user. The menu item 202 selected by the user to cause the relocation becomes a past-selected menu item while the unselected menu items from the menu items 202 become past-unselected menu items. The past-unselected menu items are representative of previously navigated hierarchical menu levels. After relocation of the menu items 202 of the first menu, the display manager 1050 causes the MUI to display, in the active or first portion 204, submenu items 210 of the second menu responsive to the first menu selection as a new current or subsequent level of menu choices for the user to interact with. As illustrated in FIG. 2B, the subsequent or second level of menu choices includes second submenu items 210 displayed in the active or first portion 204 of the MUI display 206. In a method in accordance with an embodiment, upon receiving a signal from the input manager 1052 indicating that a menu item 202 has been selected from the first portion 204, the relocation command is issued. For example, the menu manager 1054 provides the relocation command to the display manager 1050. The relocation command instructs the display manager 1050 to move the first menu from the first portion 204 of the MUI display 206 to the second portion 208 of the MUI display 206 in a second menu. The second portion 208 is at a different location on the MUI display 206 than the first portion 204. Since a menu item was selected from the first menu of menu items 202, the relocated first menu of menu items 202, as displayed in the second portion 208, will now have both past-selected menu item(s) and past-unselected menu item(s) (e.g., one or more menu items that the user could have selected, but did not). The relocation command may include the first menu items, the scroll method and/or orientation, the display size (and format), and the location of the second portion. In an embodiment, the second portion 208 is located farther from a central location of the MUI display 206 than the first portion 204. In an embodiment, the orientation of displaying the menu items 202 of the first menu in the second portion 208 is different from the orientation of the display of the submenu items 210 of the second menu in the first portion 204.
For example, orientation of the menu items 202 in the second portion 208 may be substantially orthogonal to the orientation of the submenu items 210 in the first portion 204. The relocation command may specify that the orientation of the menu items 202 is horizontal (whereas the first display command specified that the orientation of the menu items 202 was vertical). In other embodiments, the orientation may be reversed, where menu items 210 in the first portion 204 are horizontal and the menu items 202 in the second portion 208 are vertical. In embodiments, the first portion 204 is located in a central lower portion of the MUI display 206 and the second portion 208 is located in an upper portion of the MUI display 206. The relocation command may also specify different sizes for menu items. For example, the selected menu item(s) from the first menu (which triggered the relocation) may be specified to be displayed in an emphasized manner, such as being displayed in a larger font size than unselected menu items. In other aspects, the selected menu item(s) may be bolded, italicized, highlighted using a different color than the background, or underlined. In another aspect, the relocation command may also specify a relative position of the menu items 202 within the second portion 208. For example, a selected menu item(s) may be positioned in a central location within the second portion relative to other menu items (non-selected menu items). In other aspects, non-selected menu items from the hierarchical menu level may be displayed in a deemphasized manner. For example, the relocation command may specify the non-selected menu items to be a smaller font size or faded relative to the selected menu items. In other aspects, the relocation command may specify the non-selected menu items to be displayed further from the central portion of the second portion than a selected menu item(s) for the same hierarchical menu level. The first portion and the second portion may be displayed on the user interface display such that they are non-overlapping.

The menu items relocated to the second portion are selectable from that position or location, and the selected menu item may be graphically highlighted, for instance, to provide a visual indication of which item from the list has been selected. The selected menu item may also be centered with other menu items to the left and/or right of the selected menu item. The submenu items that are displayed where the relocated menu items were displayed (before the relocation) are also selectable items. FIG. 2B shows an example of the user interface display in one embodiment with a relocated list of menu items. As shown, the menu items 202 are relocated to a second portion 208 of the display 206, for instance, above the first portion 204, and displayed horizontally. As described in more detail below, the second portion of the display may include many levels of menu items, for example, levels of past decisions and also the options in those levels that were not chosen. Thus, the number of levels of past decisions may be, e.g., 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, or more, e.g., 1-2, 1-3, 1-4, 1-5, 1-6, 1-7, 1-8, 1-9, 1-10, 1-11, 1-12, 1-13, 1-14, 1-15, 1-16, 1-17, 1-18, 1-19, 1-20, and nested ranges therein, e.g., 2-20, 2-19, 2-18, 3-20, 3-19, 3-18, etc. The second portion, for instance, visualizes a representation of a path of past decisions taken and other decisions (other paths) not taken.
In an embodiment, the past decisions taken (menu item chosen) may be aligned, e.g., vertically, e.g., in the center. The second portion 208 may be caused to display the first menu items 202 in response to a relocation command for a first menu of user-selectable choices to be displayed on the second portion 208 of the MUI display 206. As discussed above, the relocation command may be provided by the menu manager 1054 to the display manager 1050. The first menu of user-selectable choices may include both past-selected and past-unselected menu items. The first menu can include one or more of the first menu items 202 that are selectable by the user. The menu items 202 may be immediately selectable or non-immediately selectable. The second portion 208 may also include one or more decision-making zones. The relocation command may also specify that menu items displayed within the decision-making zone be emphasized or highlighted. In other aspects, the menu item(s) displayed within the decision-making zone may be bolded, italicized, highlighted using a different color than the background, or underlined. In other embodiments, the relocation command may specify that menu items displayed outside the decision-making zone be deemphasized or dehighlighted. The first portion 204 and the second portion 208 are displayed on the MUI display 206 so that they are both viewable, for example, viewable concurrently. The MUI display 206 may be presented via one or more physical display screens.

The second portion 208 may contain one or more menus, each menu including both past-selected and past-unselected menu items from previously navigated hierarchical menus. In the representation shown in FIG. 2C, the second portion 208 (historical portion) includes menu items 202 and submenu items 210, each of which was included in the first portion 204 in a previous MUI representation. The menu items 202 and sub-menu items 210 which were past-selected, i.e., those menu items that led to the sub-submenu items 212 being displayed in the first portion, may be highlighted or emphasized to indicate that they were previously selected. As shown in FIG. 2C, MENU ITEM 4 and SUBMENU ITEM 3 are highlighted to indicate that they were previously selected. The past-unselected menu and submenu items are displayed as selectable options. The past-selected menu item (or choice) also may be displayed as a selectable option, where both are displayed on the second portion 208 (e.g., a historical portion, which can include one or more menu items previously made available to a user). The historical portion contrasts with an active portion, which can include current, user-selectable choices (e.g., located on the first portion of the display) for the current hierarchical menu level. The historical portion can allow users to make selections as well, e.g., by making a selection among previously selected hierarchical levels and/or menus. In this manner, the historical second portion 208 may represent a "trail of breadcrumbs" showing to a user the ordered path of selections made to arrive at the current menu as displayed in the active first portion 204. Further details on selections made in the second portion 208 are provided below. In some embodiments, the first portion 204 may be adapted to take up a larger portion of the display area of the MUI than the second portion 208. The second portion 208 may be displayed across a smaller area than the first portion 204.
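One possible, non-authoritative way to model the historical portion described above is a stack of previously navigated levels, each recording its past-selected item and the past-unselected alternatives; selecting a past-unselected item jumps to that branch, as elaborated further below. All names in this sketch are assumptions:

    // Hypothetical model of the historical ("breadcrumb") portion; names assumed.
    interface PastLevel {
      items: string[];   // all items offered at that hierarchical level
      selected: string;  // the past-selected item, emphasized in the display
    }

    class HistoricalPortion {
      private levels: PastLevel[] = [];

      // Called when a current menu is relocated from the first portion.
      push(items: string[], selected: string): void {
        this.levels.push({ items, selected });
      }

      // Selecting a past-unselected item jumps to that branch: levels below the
      // chosen level are discarded and the new selection is recorded.
      jumpTo(levelIndex: number, item: string): void {
        this.levels = this.levels.slice(0, levelIndex + 1);
        this.levels[levelIndex].selected = item;
      }
    }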
The first portion 204 and second portion 208 may be adapted for display in a manner that provides contrast against a background on which they are displayed. For example, the first portion 204 and second portion 208 may be displayed in bright pixels against a dark background or dark pixels against a bright background. In other embodiments, a command (such as, for example, a relocation command) may be provided by the menu manager 1054 to move or relocate a menu from a portion of the MUI display 206 to another portion of the MUI display 206. In one embodiment, the moving or relocating of a menu and/or menu item(s) can include providing a command to move a menu from one portion of the display to another. In another embodiment, the moving or relocating of a menu can include issuing multiple commands, for example, one command to remove the menu from the first portion 204 of the display and another command to display the menu (either in the same format and/or orientation or in a different format and/or orientation) on a second portion 208 of the display. This relocation can occur, for example, in response to a user's selection from a menu (e.g., a first menu).

Referring back to FIG. 3, at operation 306, on the first portion of the user interface display, where the list of menu items was previously displayed before being relocated to the second portion, a first list of submenu items associated with the selected menu item is displayed in the first visual orientation. As shown in FIG. 2B, a first list of submenu items 210 is displayed in the first portion 204, for instance, vertically. Referring back to FIG. 3, at operation 308, responsive to detecting a selection of a submenu item from the first list of submenu items, the first list of submenu items is relocated to the second portion, wherein the first list of submenu items is displayed in the second visual orientation and stacked with the list of menu items displayed on the second portion. At operation 310, on the first portion of the user interface display, a second list of submenu items associated with the selected submenu item is displayed in the first visual orientation, for example, vertically. FIG. 2C shows an example of the user interface display in one embodiment with a second list of submenu items. As shown, the first list of submenu items 210 is relocated to the second portion 208, stacked for instance below the relocated list of menu items 202, for example, stacked horizontally. The second list of submenu items 212, i.e., sub-submenu items associated with the selected submenu item, is displayed in the first portion 204.

Depending on the depth of the menus or submenus navigated, the horizontal menu structure in the second portion 208 may accumulate a number of menu levels that exceeds the number that can be displayed together on the display portion at the second portion 208 (e.g., the number of levels stacked exceeds the screen portion allocated for the horizontal menu structure of the second portion 208). In one embodiment, the horizontal menu structure of the second portion 208 may show n number of menu levels, e.g., the last 3 submenus, allowing for scroll capability. For example, scrolling up allows the user to see the other menu items. The number n may be any number, not limited to 3, e.g., 2, 3, 4, 5, etc. In another embodiment, the top m (e.g., 2) menus may be displayed along with the bottom 1 sub-menu to provide top-level context to the last decision. The number m may be any number, not limited to 2, e.g., 2, 3, 4, 5, etc.
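The two display strategies just described, showing the last n levels or the top m levels together with the bottom level, might be sketched as follows; this is an illustration under assumed names rather than a prescribed implementation:

    // Hypothetical windowing of stacked menu levels in the second portion.
    // "lastN" shows the n most recent levels (older ones reachable by scrolling);
    // "topMPlusBottom" shows the top m levels plus the bottom level for context.
    function visibleLevels<T>(stack: T[], mode: "lastN" | "topMPlusBottom",
                              n = 3, m = 2): T[] {
      if (mode === "lastN") {
        return stack.slice(-n);
      }
      if (stack.length <= m + 1) {
        return stack.slice();
      }
      return [...stack.slice(0, m), stack[stack.length - 1]];
    }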
Scroll capability allows for displaying other menu items, e.g., a user can scroll to see other menu items. The user may also expand the entire multi-level set of menus and submenus. As shown in FIG. 2C, a subsequent level of menu choices, e.g., sub-submenu items 212, may be at least one hierarchical menu level (from a third menu) or more than one hierarchical menu level below (from a fourth, fifth, sixth, etc., menu) the first menu of menu items 202. In the example of FIG. 2C, sub-submenu items 212 represent a third menu that is two hierarchical levels below the first menu of menu items 202. The process of relocating, for example, menu items from one portion to another portion of the user interface display as menu items are selected may continue up or down the levels of the menu items. For instance, the processing at operations 308 and 310 in FIG. 3 may repeat for additional levels of submenus. In another aspect, selecting menu items from the relocated list of menu items may function as a "back" button, without a user having to explicitly click on a back button to return to the previous list of menu items.

Yet in another aspect, if the number of relocated lists of menu/submenu items that are stacked reaches a predefined number or threshold, for example, such that the stacked lists in the area of the second portion become too large and encroach into the area of the first portion, the stacks themselves may be displayed as a rotating wheel or slider, for instance, in the first visual orientation. Thus, for example, menu items in each of the stacked lists may be displayed in the second visual orientation (items are slidable in that direction, e.g., horizontally), while each list in the stacked lists is slidable in the direction of the first visual orientation (e.g., vertically). In this way, a vertical breadcrumb may be provided on the horizontal sliders and contextualized by the other options to the left and/or right of center (a selected item). Any layer may be adjusted in real time without having to go back. Such displaying of the vertical and horizontal sliders allows for proceeding through a tree of options and picking desired leaf options. In another aspect, the number of menu and/or submenu items can be collapsible and expandable. For instance, the bottom or last 'n' levels (e.g., 3 levels), which are the most recent, may be displayed with the rest of the levels collapsed. Those collapsed levels are made expandable, for example, by user input. As another example, the top 'm' levels (e.g., 2 levels) and the bottom level (e.g., '1' level) may be displayed, representing the top-level context with the most recent option or decision the user is working on (i.e., bottom level).

While FIGS. 2A-2C show the first visual orientation as vertical and the second visual orientation as horizontal, the orientations may be switched. For instance, the first visual orientation may be horizontal and the second visual orientation may be vertical. In another aspect, the first visual orientation and the second visual orientation may be of any other positional display orientations. As described above, the menu items and associated submenu items may be displayed as a slider graphical element, a rotating wheel graphical element, or another graphical user interface element. For example, concentric wheel elements, as described below with respect to FIGS. 2H-2J, may be employed. In embodiments, an ordering or arrangement of menu items within their menu levels may be determined according to attributes of the menu items.
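For illustration only, such attribute-driven ordering might be sketched as a scoring function over assumed attributes; the attributes themselves are elaborated in the paragraph that follows:

    // Hypothetical attribute-driven ordering; attribute names are assumptions.
    interface MenuItemAttrs {
      label: string;
      pastSelected: boolean;  // previously selected at this level
      selectable: boolean;    // currently available to the user
      matchesTyped: boolean;  // matches one or more characters typed by the user
    }

    // Higher-scoring items sort toward the central, emphasized positions.
    function orderItems(items: MenuItemAttrs[]): MenuItemAttrs[] {
      const score = (i: MenuItemAttrs) =>
        (i.pastSelected ? 4 : 0) + (i.matchesTyped ? 2 : 0) + (i.selectable ? 1 : 0);
      return [...items].sort((a, b) => score(b) - score(a));
    }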
The manner in which the menu items are displayed may be based on attributes selected from whether a menu item is a previously selected or previously unselected item, whether a menu item is selectable or unselectable, whether a menu item includes one or more characters typed by a user, whether a menu item is part of an advanced context menu (described in greater detail below), and/or whether a menu item has a position in a list more central relative to other items in the list. In embodiments, the way menu items are adapted to be displayed, i.e., the ordering, arrangement, coloring, and presentation of menu items, may be determined according to several different factors. For example, the menu manager 1054 and display manager 1050, in conjunction, may be configured to emphasize menu items that are selected or are past-selected, are currently available to the user (i.e., selectable), and/or are positioned in a decision-making zone of a first portion 204 or a second portion 208. The menu manager 1054 and display manager 1050 may further be configured to deemphasize menu items that are not selected or are past-unselected, that are currently unavailable to the user, and/or that are positioned away from the decision-making zone. In some embodiments, immediately selectable menu items may be emphasized while non-immediately selectable items may be deemphasized. In some embodiments, emphasizing or deemphasizing a menu item may include highlighting or dehighlighting the menu item, as discussed herein. Highlighting or emphasizing may include, for example, bolding, increasing in font size, changing fonts, underlining, changing brightness or contrast, or adjusting position on the display relative to other items. Dehighlighting or deemphasizing may include decreasing in font size, changing fonts, fading, changing brightness or contrast, or adjusting position on the display relative to other items.

The MUI allows the user to jump to a different path of menu items (e.g., by selecting one or more additional menu items at the same, higher, or lower hierarchical level of a menu) by allowing the user to select a past-unselected menu item from a previously navigated menu level displayed on the second portion 208 of the MUI display 206, as well as a newly displayed menu item(s) from the current menu being displayed on the first portion. As discussed above and with respect to FIG. 2C, previously navigated menu items (including submenu items, sub-submenu items, etc.) may be relocated to the second portion 208 after a menu item is selected. The previously selected menu items in the second portion 208 may be highlighted or emphasized to visually indicate the menuing path that has been taken to arrive at the menu or submenu currently displayed in the first portion 204. Previously unselected menu items from the second portion may be selected to permit the user to jump to that branch of a menu. In the example of FIG. 2C, a user has previously selected MENU ITEM 4 and SUBMENU ITEM 3. Selection of a new and previously unselected submenu item 210 from the second portion 208 would cause the menu manager 1054 to issue commands for a new list of sub-submenu items 212 associated with the newly selected submenu item 210 to be displayed as the current menu being displayed in the first portion 204.
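A non-limiting sketch of this jump behavior, including the save of the current menu state described in the next paragraph, might look like the following; the interfaces and method names are assumptions:

    // Hypothetical handling of a jump via a past-unselected item; names assumed.
    interface Display { show(items: string[]): void; }
    interface StateStore { save(key: string, value: unknown): void; }

    class JumpHandler {
      constructor(private display: Display, private store: StateStore,
                  private submenuFor: (item: string) => string[]) {}

      onPastItemSelected(item: string, currentMenuId: string,
                         currentState: unknown): void {
        // Save the state of the current menu before leaving it.
        this.store.save(`menu-state:${currentMenuId}`, currentState);
        // Display the submenu of the newly chosen branch as the current menu.
        this.display.show(this.submenuFor(item));
      }
    }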
Selection of a new and previously unselected menu item from the menu items 202 would cause the menu manager 1054 to issue commands to cause the display of a new list of submenu items associated with the newly selected menu item 202 as the current menu being displayed in the first portion 204. In this way, a user may actively jump between various portions of a menuing tree without having to navigate back through the previous decisions. When a previously unselected menu item (or submenu item, or sub-submenu item, etc.) is selected, a save command may be issued to store a state of the current menu in the first portion before the subsequent menu in the first portion is displayed. In embodiments, as disclosed in greater detail below, navigating through the menu items to a final branch in the menuing tree at the level of an executable menu allows a user to make one or more parameter selections. Should a user navigate away from an execution level menu, the parameters that were currently selected at the time the user navigated away may be stored via a save command issued by the menu manager 1054 to the data storage manager 1064. Accordingly, if a user should later wish to return to the execution level menu, the last selected parameters will be displayed. Previously unselected menu items may be selectable within the past menu of previously navigated menu items. In embodiments, previously unselected menu items may be immediately selectable, requiring only a click for selection, or may be non-immediately selectable, requiring another step to highlight the menu item prior to selection. In embodiments, the previously selected menu items may be unselectable, as the user has already selected them. In further embodiments, only the previously selected menu item from the lowest hierarchical level in the past menu (i.e., the menu immediately previous to the current first menu) is unselectable, while the previously selected menu items from higher hierarchical levels remain selectable. In the example provided by FIG. 2C, SUBMENU ITEM 3 may be unselectable while MENU ITEM 4 may be selectable.

In embodiments, the various menus are displayed on a background. In an embodiment, the menus are superimposed over the background. The background may consist of one or more colors. In an embodiment, at least a preset percentage of the background pixels may be monochromatic. For example, at least a preset percentage of the background pixels may be black. For instance, 75% of the background may be monochromatic (e.g., black, white, gray, etc.). The specific percentage has been described by way of example and other percentages may be used. In embodiments, display commands and relocation commands may specify the background, including the preset percentage and color, e.g., black, white, gray, etc. In certain embodiments, the background may also include areas of the menus other than text (e.g., menu items). In an embodiment, the text of the menus is displayed in a color to contrast or emphasize the text against the background. For example, when a black background is used, white or yellow may be used for the color of the text, although other colors may be used as well. In other embodiments, the backgrounds and/or text may be comprised of more than one color. In some embodiments, an initial or first menu, i.e., the starting current menu, may be a default menu that is displayed upon a login of a registered user. In an embodiment, a default menu may be customized for a specific user identifier.
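A hedged sketch of such default-menu resolution, anticipating the factors enumerated in the following paragraph, might be:

    // Hypothetical resolution of the default menu shown at login; names assumed.
    interface LoginContext {
      module: string;    // the MUI module being run
      location: string;  // e.g., "desktop" or "instrument"
      userId: string;
    }

    function defaultMenuFor(ctx: LoginContext,
                            perUserDefaults: Record<string, string>): string {
      // Per-user customization takes precedence, then device location, then module.
      if (perUserDefaults[ctx.userId]) return perUserDefaults[ctx.userId];
      if (ctx.location === "instrument") return "RUN EXPERIMENT MENU";
      return `${ctx.module} DEFAULT MENU`;
    }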
In other aspects, the default menu may be specific to a MUI module. For example, the default menu may include as menu items a list of assays, tests, runs, clinical trials, etc. In embodiments in accordance herewith, the default menu is determined according to one or more of the following: a MUI module being run, a location of a device running the MUI module, a user identifier, and an application of the menu. For example, a device located at a user's desktop may run a MUI module that defaults to a default menu suitable for selecting options for experimental design or experimental analysis. In another example, a device located at a clinical instrument may run a MUI module to provide a default menu suitable for selecting options to run an experiment and collect data. In embodiments, the default menu may be a first menu, second menu, third menu, and/or any other menu from a level in a hierarchical menu tree.

In an embodiment, any menu provided in any portion of the MUI display may include search functions. The search function enables a user to enter keywords or other inputs associated with menu items (options). A user input is received via the input manager 1052 and transferred to the menu manager 1054 for searching purposes. The searching allows for the functions (menu items) to be filtered using the entered keywords or other inputs, which shortens the time needed to find a desired menu item. An interface for the search function may be positioned in a central location of respective portions of the MUI display 206, or, in the alternative, in other portions of the MUI display 206. In further embodiments, no visual interface is provided for the search function. In such an embodiment, a user may access the search function merely by typing. In an embodiment, any menu item(s) that match or partially match the keyword(s) may be displayed to emphasize the menu item(s). For example, the menu item(s) may be displayed in a larger size than other menu items that do not match or partially match. In other embodiments, the menu item(s) may be bolded, italicized, highlighted using a different color than the background, or underlined. In other embodiments, menu item(s) not matching or partially matching the keyword(s) may be deemphasized, such as the menu item(s) being smaller or having faded text with respect to the text of menu item(s) that match or partially match. In embodiments hereof, sliders or wheels may be automatically advanced and/or rotated to display menu items matching search terms.

In an embodiment, a first menu selection may operate as a filter on a second menu. In a hierarchical tree, each of several items in a first menu may lead to the same second menu. However, the first menu selection that is made determines the menu items shown when the second menu is displayed. In a simple example, the first menu may include menu items pertaining to team roles while a second menu may include a menu pertaining to team responsibilities. The selection of a specific team role at the first menu may filter the second menu to only show team responsibilities that correspond to the selected role. In some embodiments, such filtering is performed by making specific items of the second menu unselectable. In an embodiment, any selection made in any menu operates as a filter on the menu items displayed in any other menu. For example, in an embodiment, a series of items in the first menu may be a series of category filters that each lead to a second menu, as elaborated following the sketch below.
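Both the keyword search and the menu-to-menu filtering described above amount to narrowing or re-weighting a list of menu items; a minimal sketch, with all names assumed, might be:

    // Hypothetical narrowing of menu items by keyword and by a prior selection.
    interface Item { label: string; selectable: boolean; emphasized: boolean; }

    // Keyword search: emphasize items that match or partially match the keyword
    // and deemphasize the rest.
    function applySearch(items: Item[], keyword: string): Item[] {
      const k = keyword.toLowerCase();
      return items.map(i => ({ ...i, emphasized: i.label.toLowerCase().includes(k) }));
    }

    // First-menu selection as a filter: second-menu items that do not correspond
    // to the selected role are made unselectable.
    function applyRoleFilter(items: Item[], role: string,
                             responsibilitiesOf: Record<string, string[]>): Item[] {
      const allowed = new Set(responsibilitiesOf[role] ?? []);
      return items.map(i => ({ ...i, selectable: allowed.has(i.label) }));
    }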
Each second menu leads to a series of submenus and, eventually, one or more execution menus, permitting the user to select parameters for the selected category filter. After selecting category filters in one or more of the category filter submenus, a user may then select another first menu item that provides a list of second menu items filtered according to the category filters that have previously been selected. In an embodiment, one or more menus or menu levels may be presented as exceptions to the hierarchical menu tree standard discussed herein. For example, a menu level may include a visual display and/or a video display rather than a text-based visual component. Exceptions may be implemented, for example, in situations where information may better be conveyed through alternate means. For example, as discussed above, an execution level menu may include a walkthrough, which may be best presented via a video or series of images. In another example, an execution level menu may be presented for data analysis, and may provide any combination of graphs, charts, tables, etc. to assist in data analysis.

In an embodiment, an advanced context menu may be provided via one or more commands issued by the menu manager 1054. FIG. 2P illustrates an example of a methodical user interface including an advanced context menu 270. The advanced context menu 270 contrasts with the first portion and the second portion, which together provide a "direct workflow mode." The advanced context menu 270 may be accessed via an advanced context menu selector 290, which may, in embodiments, be present on some or all screens of a methodical user interface. The advanced context menu 270 provides additional advanced menu items 271 beyond the items appearing in the current menu in the active first portion 204 or one or more past menus appearing in the historical second portion 208. The advanced context menu 270 may be accessed by clicking or hovering over the advanced context menu selector 290 or otherwise indicating a desire to access the advanced context menu 270. The advanced context menu 270 includes a selection of advanced menu items 271. The selection of advanced menu items 271 may include items displayed in the current menu in the first (active) portion 204 and items displayed in the previous menus in the second (historical) portion 208. In accordance with an embodiment hereof, advanced menu item(s) 271 of the advanced context menu 270 may be emphasized. For example, the advanced menu item(s) 271 may be displayed in a larger font size. In other embodiments, the menu item(s) may be bolded, italicized, highlighted using a different color than the background, or underlined. Other items included in the selection of items in the advanced context menu 270 may be items related to but not currently included in one of the displayed menus. That is, the selection of items in the advanced context menu 270 is driven by the current context of the UI display. For example, five menu items of a first menu may be displayed as the current menu in the active portion. Three additional menu items related to the five menu items of the first menu may be displayed in the advanced context menu 270. The three additional menu items may be items of the first menu that were excluded or limited (as discussed further below) from the current menu display for various reasons. The advanced context menu 270 offers the user a greater array of accessible menu items without causing clutter in the active portion or the historical portion.
In embodiments, some of the advanced menu items 271 in the advanced context menu 270 may be items that are infrequently selected, for example, in less than 50, 45, 40, 35, 30, 25, 20, 15, 10, or 5% of use cases. Advanced menu items 271 of the advanced context menu 270 may be selected according to patterns of user interaction with the MUI, as explained in greater detail below. In embodiments, the advanced context menu 270 may include three portions. A first, top portion 272 of the advanced context menu 270 may include advanced menu items 271 related to the currently active menu, as described above. A second, middle portion 273 of the advanced context menu 270 may include advanced menu items 271 pertaining to MUI modules available on the same workstation at which the advanced context menu 270 is selected. These options may permit a user to swap modules based on a desired task. A third, bottom portion 274 of the advanced context menu 270 may include global functions, such as login/logout functionality, user manuals and help, EULA information, and privacy policy information. The above-described ordering is not limiting, and any of the described advanced menu items 271 may be presented in a different order. In embodiments, when the advanced context menu 270 is selected, the MUI causes other graphics, text, etc. to become faded and/or blurred. The advanced context menu 270 is displayed on a transparent background so that the advanced context menu 270 and the rest of the background are the same (e.g., black). Accordingly, the MUI provides a dialog box adapted to be displayed on the foreground of the UI display to prompt a user for additional information or notify the user of an error, wherein the background of the dialog box is further adapted to match the background of the first and second portions of the UI display, further wherein one or more of text, graphics, photos, and videos displayed in the background of the first and second portions of the UI display are adapted to be displayed out of focus when the dialog box is being displayed on the foreground of the UI display.

In an embodiment, certain menu items included in a hierarchical menu tree, i.e., a first menu, second menu, third menu, etc., may be excluded or restricted from being displayed when that menu is being displayed. Exclusions and restrictions may be managed by the exclusion manager 1058 in conjunction with the menu manager 1054. Displaying any menu from a menu tree includes displaying one or more menu items from that menu but does not necessarily require display of all items from that menu. Menu items of a hierarchical menu level(s) may be excluded or restricted from being displayed based on an exclusion table. Exclusion tables may correspond to a user identifier, email address, username, team, and/or account number. In other embodiments, one or more entire menus from a menu tree may also be excluded based on an exclusion table. In certain embodiments, exclusion or restriction information may be stored in the storage device 1120. The exclusion or restriction information may be stored as a data structure. Any data structure described herein may be employed. Exclusion or restriction information may be used to exclude menu items from the view of a particular user, group of users, type of user, etc. For example, administrative menu items or menu levels may be excluded from view of a user or operator that is an engineer or technician. In another example, design menu items or menu levels may be excluded from view of a user or operator that is a lab assistant or lab technician.
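By way of a non-limiting sketch, an exclusion table keyed by user identifier (or email address, username, team, or account number) and the corresponding filtering step might be modeled as follows; the row shape and function names are assumptions:

    // Hypothetical exclusion table and filtering step; names are assumptions.
    interface ExclusionRow {
      key: string;             // user identifier, email, username, team, or account number
      excludedItems: string[]; // menu items never displayed for this key
    }

    function applyExclusions(menuItems: string[], table: ExclusionRow[],
                             userKey: string): string[] {
      const row = table.find(r => r.key === userKey);
      if (!row) return menuItems;
      // Remove the excluded items before the display command is issued.
      return menuItems.filter(item => !row.excludedItems.includes(item));
    }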
User identifiers, account numbers, and the menu item(s) and/or menus for exclusion may be input by an administrator. For example, an admin console module, discussed in greater detail below, may be used to manage and generate exclusion tables. The managing may be done when a user registers with the system. In other embodiments, the exclusion information may be added after registration and periodically updated. In embodiments, each time a user logs into the system, the hardware processor maintains a record of the login (and also a logout) via the data storage manager 1064. In an embodiment, this record, i.e., login historical data, may be in a form of any data structures described herein. In an embodiment, this login historical data may include the user identifier and/or account number, a login time/date, and a logout time/date. In an embodiment, upon receipt of the login information, the data storage manager 1064 adds the user identifier and/or account number and the login time/date to the login historical data.

In certain embodiments, before issuing a command for displaying any menu, the menu manager 1054 may check the exclusion table (for example, stored in the storage device 1120) to determine if any menu items in the initial display menu (e.g., default menu) are listed to be excluded from display for the user (or account number). In an embodiment, the menu manager 1054 may match the user identifier and/or account number of the user currently logged in with user identifiers and/or account numbers listed in the exclusion table. If there is a match, then the menu items listed in the exclusion table are to be excluded from being displayed in the initial display menu. This exclusion may be carried out through the issuance of a separate exclusion command and/or instruction, or, in the alternative, the exclusion can occur by modifying any display commands that cause the available menu item(s) to be displayed. The menu manager 1054 may remove the menu items included in the list from the menu items in the initial display menu (e.g., default menu) and issue the first command without the removed menu items. In certain embodiments, each time the input manager 1052 receives a selection of a menu item in the current menu, prior to issuing a relocation command, the menu manager 1054 may determine whether any menu item on a hierarchical menu level lower than the hierarchical menu level currently being displayed by the MUI display 206, as the current menu, is listed to be excluded (or whether a lower hierarchical menu is to be excluded). The determination may use the login historical data and the exclusion table. The login historical data may be used to confirm that the same user (user identifier or account number) is still logged in and match the same with user identifiers and account numbers in the exclusion table. In other embodiments, the menu manager 1054 may use a user identifier and account number received from the user manager 1056 instead of the login historical data for the determination. In other embodiments, a similar determination is made prior to issuing any relocation or display command. In yet other embodiments, different exclusion tables may be used depending on whether the menu items are to be displayed on the MUI display 206 in the first portion 204 or the second portion 208. In accordance with this embodiment, the exclusion table may have additional columns of information, one column for each portion (menu).
A column for the first portion lists menu items to be excluded when displayed on the first portion 204 of the MUI display 206, a column for the second portion 208 lists menu items to be excluded when displayed on the second portion of the MUI display 206, and columns for additional portions list additional menu items to be excluded when displayed on any additional portions of the MUI display 206. As described above, an account number may be associated with multiple users (user identifiers). Thus, when an account number is used as the basis of exclusion, all of the users associated with the account number may have the menu items excluded from being displayed on the MUI display 206. In another embodiment, since certain account numbers may be linked, when the account number is used, any account number linked with the account number may also have menu items excluded. In other embodiments, instead of excluding menu items, the menu items may be moved to a position of the respective menus to deemphasize the menu items with respect to other menu items. In accordance with this embodiment, the exclusion table may be used by the menu manager 1054 to reorder or change positions of the menu items on a hierarchical menu level. A subsequent command (first command, second command, and/or third command) may reflect the changed position for the menu items.

In other embodiments, menu items (or hierarchical menu levels) may be excluded based on a particular device or a location of a device. The device on which exclusion is based may be any of the one or more devices executing the various software instructions of the methodical user interface control system 1102. The exclusion or restriction information may be stored, for example, in storage device 1120 as a data structure. Each device may have an identifier such as a Media Access Control (MAC) address or other unique identifier. The identifier of the device is not limited to a MAC address, and other identifiers may be used, such as an Internet Protocol (IP) address, machine name, etc. In an embodiment, one column in the table may include the identifier, e.g., MAC address. A second column in the table may include the menu item(s) or hierarchical menu levels that are to be excluded from display, respectively, associated with the identifier, e.g., MAC address. In other embodiments, instead of a table (or tables), a list of menu items and/or hierarchical menu levels is stored in association with the identifier, e.g., MAC address. The device identifiers, such as the MAC address, and the menu item(s) and/or hierarchical menu levels for exclusion may be input by an administrator and/or one or more users with appropriate permissions. This exclusion information may be input when a first MUI module is installed into a device. In other embodiments, the exclusion information may be added after installation and periodically updated. In certain embodiments, upon receiving the login historical data or in response to receiving a notification, before issuing any command for displaying any menu (and menu items), the hardware processor executing the input manager 1052 may check the exclusion information in the storage device 1120 to determine if any menu items for the initial display menu or associated with the selection are to be excluded for the device(s). In an embodiment, the menu manager 1054 may compare the device identifier with the device identifier(s) listed in the exclusion information.
When there is a match, certain menu items are to be excluded from display on the MUI display 206. For example, when the initial display menu (e.g., default menu) or a hierarchical menu level lower than the hierarchical menu level currently being displayed on the MUI display 206 as the current menu, which is associated with a selection, includes one or more menu items listed to be excluded, the menu manager 1054 may remove the excluded menu item(s) from the menu prior to issuing a display command and then issue the display command with the menu items removed. In this example, the removed menu item will not be displayed on the MUI display 206. In other embodiments, certain menu items (or hierarchical menu levels) may be excluded based on what hierarchical menu level is currently being displayed as the current menu (in the first portion) or the previous menus (in the second portion). In an embodiment, one column in the exclusion table may include a menu identifier of a hierarchical menu level. A second column in the table may include the menu item(s) or hierarchical menu levels that are to be excluded from display, respectively, associated with the menu identifier. The menu identifier represents the hierarchical menu level that is displayable on either the first menu or second menu. The excluded menu items are menu items that are unavailable to be selected from a displayed hierarchical menu level. These menu items may be application-specific. In certain embodiments, when a hierarchical menu is displayed, as the current menu in the first portion 204 or the previous menu in the second portion 208, and a selection is made, prior to issuing a command, the menu manager 1054 checks the exclusion information to determine whether any menu items associated with the hierarchical menu level which is selected to be displayed should be excluded. Based on the determination, the menu manager 1054 may remove the excluded menu items from the menu prior to issuing a responsive command and then issue the responsive command with the menu items removed. This exclusion may be carried out through the issuance of a separate exclusion command and/or instruction, or, in the alternative, the exclusion can occur by modifying the first, second, and/or third display commands that provide the available menu item(s) to be displayed. In other embodiments, instead of a display or relocation command being issued with the menu items removed, an exclusion command may be issued by the exclusion manager 1058 in combination with the display or relocation command. In this embodiment, the display command would have all of the menu items associated with the menus and the exclusion command would cause the display manager 1050 to delete the excluded menu items included in the exclusion command prior to causing the display.

In other embodiments, a number of menu items to be displayed may be limited by the menu manager 1054 based on a frequency of usage. For example, in an embodiment, the number of menu items may be limited based on a frequency of selection. In certain embodiments, the frequency can be determined over a predetermined period of time. The frequency of selection can be preset or customizable, and can include, for example, a frequency of between 50% and 80%, although other frequencies of selection are contemplated as well. By limiting display of menu items to include only menu items that are used at greater than a specific threshold frequency, the amount of clutter in the menuing system is reduced and the menuing experience is streamlined.
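A highly simplified sketch of this frequency-based limiting, with assumed record shapes and names, might be:

    // Hypothetical frequency-based limiting; record shapes and names assumed.
    interface SelectionRecord { userId: string; item: string; timestamp: number; }

    // Frequency of selection: selections of the item versus total selections
    // by the user identifier within the tracked period.
    function selectionFrequency(records: SelectionRecord[], userId: string,
                                item: string): number {
      const mine = records.filter(r => r.userId === userId);
      if (mine.length === 0) return 0;
      return mine.filter(r => r.item === item).length / mine.length;
    }

    // Keep only items whose frequency meets the threshold (e.g., 0.5 to 0.8).
    function limitByFrequency(items: string[], records: SelectionRecord[],
                              userId: string, threshold: number): string[] {
      return items.filter(i => selectionFrequency(records, userId, i) >= threshold);
    }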
In accordance with the frequency-based embodiment above, the input manager 1052 tracks selection of all menu items and stores the same in the storage device 1120. In an embodiment, the list of previously selected menu items is stored in a data structure. For example, the data structure may be a menu item selection table or any other data structures (e.g., those specifically described herein). In certain embodiments, a user's or users' selections may be tracked over a preset period of time. The period of time may be one day, one week, one month, or other preset or customizable periods of time. The specific period of time may be based on an application, such as a clinical trial or type of research, type of test, type of organization (e.g., university, corporate), etc. The tracking may be repeated for each preset period of time. Each time a notification is received by the hardware processor executing the input manager 1052, within the preset period of time, the input manager 1052 may record the user identifier, username, email address, and/or account number, the selected menu item, and the time and date of the selection. The time and date may be obtained from a timestamp included in the notification. In an embodiment, the user identifier and account number may be obtained from the login history table. In other embodiments, the user identifier and account number may be included in the notification.

At the end of a specific period of time, the input manager 1052 determines a frequency of selection for each menu item. In an embodiment, the input manager 1052 may determine, for a user identifier, the frequency of selection. The frequency of selection is based on the number of times that the menu item was selected versus a total number of selections (within the specified period) by the user identifier. In other embodiments, the determination may be based on account number in addition to user identifier. For example, the input manager 1052 may determine a frequency of selection of a menu item by at least two user identifiers having the same account number. In this example, users form teams, where a single account number is associated and/or linked with two or more user identifiers. In another example, a team can include two or more account numbers associated and/or linked together. In still a further example, teams can be formed whereby N unique users are associated and/or linked with M unique account numbers, where N is greater than M. Identifying user identifiers having the same account number may be achieved using the shared account flag in the registration table in combination with the menu item selection table to determine that the at least two user identifiers made a selection within the period of time. For a menu item, a number of selections of the menu item is aggregated for the at least two user identifiers (as determined from the menu item selection table). Similarly, the total number of selections is aggregated for the at least two user identifiers (also as determined from the menu item selection table). The frequency is then based on the aggregated selections of the menu item and the aggregated total selections. In other embodiments, the frequency determination may be based on selections where the user identifier is associated with an account number that is linked to other account numbers (e.g., a team of users). In accordance with this embodiment, the input manager 1052 may identify the linked account numbers using the multiple account flag which is set to a specific value when the account number is linked.
Once identified, the input manager 1052 may determine the frequency of selection by using selections from a user identifier which is associated with one of the linked account numbers. In this embodiment, selections from other user identifiers or the same user identifier that is not associated with one of the linked account numbers (in the case where the same user identifier is associated with different account numbers) may be ignored (not used in the determination). Similar to above, the input manager 1052 may determine the number of selections of a menu item and the total number of selections to determine the frequency. In other embodiments, the methodical user interface control system 1102 may use selections from any user identifier(s) which is/are associated with one of the linked account numbers for the determination (and may aggregate the selections). In other embodiments, the frequency determination may be based on selections of at least two user identifiers where the user identifiers are associated with one or more account numbers that are linked to other accounts. In accordance with this embodiment, the hardware processor executing the input manager 1052 may identify the linked account numbers using the multiple account flag which is set to a specific value when the account number is linked. Once the linked account numbers are identified, the hardware processor executing the input manager 1052 may further identify at least two user identifiers (associated with the linked account numbers) that made selections within the period of time, using the menu item selection table. For the identified at least two user identifiers that made a selection, for a menu item, a number of selections of the menu item is aggregated for the at least two user identifiers (as determined from the menu item selection table). Similarly, the total number of selections is aggregated for the at least two user identifiers (also as determined from the menu item selection table). The frequency is then based on the aggregated selections of the menu item and the aggregated total selections. In other embodiments, the frequency determination may be based on all selections regardless of the user identifier and/or account numbers. In accordance with this embodiment, the input manager 1052, for each menu item, may determine the number of selections of the respective menu item versus the total number of selections (of any menu item) within the period of time to determine the frequency.

The frequency described above can be used in conjunction with a limiting command issued by the menu manager 1054. The functionality of the limiting command is similar to the functionality of the exclusion command, as discussed above. The limiting command serves to limit certain menu items to be displayed based on a criterion or two or more criteria. For example, the limiting command can be based on: (a) the frequency with which a user has previously selected the item while he/she was logged into his/her account. In one example, this determination can occur based on a given period of time. In another example, it can be based on the number of times a given user logged into his/her account. Another criterion includes: (b) the frequency with which at least two users have previously selected the item while they were logged into an account. In certain embodiments, this can include an amount of time for a given user or can be based on the total time the users were logged into their accounts.
Alternatively, it can be based on the total number of logins of a given user or the total number of logins in the aggregate. Still further, the criterion can include: (c) the frequency with which a user has previously selected the item while he/she was logged into an account associated with multiple accounts; or (d) the frequency with which at least two users have previously selected the item while they were logged into one or more accounts associated with multiple accounts. For both of these examples, as described with regard to examples (a) and (b), above, the frequency can be based on one or more combinations of the period of time one or more users remained logged into their accounts or the number of account logins. Still further, the criteria can include: (e) the frequency with which any users have previously selected the item while logged into any account; and/or (f) the frequency with which any users have previously selected the item while logged into any account associated with multiple accounts. In these two examples, the previously selected item can be tracked with the use of a data structure, such as a table (or any other data structure described herein), which can be periodically cleared after a given period of time elapses or a certain number of total logins by one or more users occurs. In certain embodiments, the criteria described in (c), (d), and (f), above, can be applied to team accounts, in particular, where users of those accounts are team members of one or more teams that are associated with multiple accounts.

When the determined frequency is greater than or equal to a threshold percentage, menu items may be limited for an immediately subsequent period of time. The threshold may be based on the application. In an embodiment, the threshold percentage may be 50% or more. In other embodiments, the threshold percentage may be 60% or more. In yet other embodiments, the threshold percentage may be 70% or more. In further embodiments, the threshold percentage may be 80% or more. In other embodiments, the threshold may be a percentage range. For example, the threshold percentage may be in a range of between 75% and 85%. The specific percentages have been described herein by way of example, and the threshold percentage is not limited to the same. Any threshold percentage or range may be used. In other embodiments, a ratio of selection may be used in place of a frequency of selection. The ratio is defined as the number of selections of the menu item divided by a number of selections of other menu items. For example, ratios of 9:1, 7:1, 5:1, 3:1, or any other suitable ratio may be used. In other embodiments, a number of times the menu item is selected may be used in place of a frequency of selection. For example, a specific selection threshold may be used instead of a percentage. The specific selection threshold may be 5, 10, 15, etc. Once it is determined that the menu items may be limited, the hardware processor may determine which menu items may be displayed on the MUI display 206 in the immediately subsequent period of time, and which menu item(s) are to be limited. In accordance with embodiments, any menu item determined to have a frequency above the threshold percentage may be displayed (e.g., not limited). In further embodiments, a display limitation may be based on menu items having a selection frequency below a certain threshold, e.g., below 50%, 40%, 30%, 20%, 10%, etc. In several embodiments, limiting commands can be issued based on various criteria.
For example, one or more menu item(s) could be excluded based on menu item(s) being designated as unavailable to a particular user. This can occur, for example, if a particular user has not selected one or more menu item(s) over a certain period of time. Similarly, one or more menu item(s) could be limited based on a menu item(s) being designated as unavailable to an aggregation of two or more users. In this example, the frequency of two or more users selecting or not selecting one or more menu item(s) over a period of time can affect whether a limiting command issues for those menu item(s). Other embodiments contemplate issuing limiting commands in a similar fashion for the previous two examples, but for individual teams and/or aggregations of teams (i.e., based on the frequency of selection of menu item(s) by users that are associated with teams). Still further, other embodiments can limit menu items based on a particular machine or aggregation of machines that are executing the computer application that one or more users have logged into.

In an embodiment, the menu manager 1054 may issue a limiting command to the hardware processor executing the display manager 1050. In accordance with this embodiment, the limiting command may include the menu items determined to have a frequency above the threshold percentage. The limiting command may be issued in conjunction with the one or more display commands. Upon receipt of the display command and the limiting command, the display manager 1050 may delete or remove menu items included in the display command that are not also included in the limiting command prior to causing the menu items to be displayed on the MUI display 206. In other embodiments, the limiting command may include menu items other than the menu items determined to have a frequency above the threshold percentage. Upon receipt of the display command and the limiting command, the display manager 1050 may delete or remove menu items included in the display command that are also included in the limiting command prior to causing the menu items to be displayed on the MUI display 206. In other embodiments, instead of a separate limiting command, the display command may be modified by the menu manager 1054 to remove menu items other than the menu items determined to have the frequency above the threshold percentage. Through use of the limiting command, menu items (user-selectable options or choices) may be limited to fewer than a number of menu items on the first menu and the second menu. For example, the first menu may include nine menu items, but the use of a limiting command restricts the total number of menu items to be displayed to be less than nine. For example, a total number of menu items (user-selectable options) may be fewer than or equal to seven (or fewer than seven), fewer than or equal to five, fewer than or equal to three, or fewer than or equal to any other number. The number of menu items (the limited number) described herein is just an example, and the number may be any number selected to provide a limited display to avoid or prevent the user from being overwhelmed with choices. In embodiments, menu items that are excluded from display due to a limiting command are provided in the advanced context menu 270. In embodiments, menu items excluded from display based on a limiting number may be selected according to frequency of selection.
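As a sketch only, the application of a limiting command by the display manager, together with a cap on the total number of displayed items, might look like the following; the top-k selection shown here corresponds to the highest-selection-frequency variant described in the next paragraph, and all names are assumptions:

    // Hypothetical application of a limiting command with a cap (e.g., seven items).
    function applyLimitingCommand(displayItems: string[], permittedItems: string[],
                                  frequencyOf: (item: string) => number,
                                  maxItems = 7): string[] {
      // Remove items of the display command not also present in the limiting command.
      const kept = displayItems.filter(i => permittedItems.includes(i));
      // If too many remain, keep those with the highest selection frequencies.
      return [...kept].sort((a, b) => frequencyOf(b) - frequencyOf(a))
                      .slice(0, maxItems);
    }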
In some embodiments, if, after determining the number of menu items that have a selection frequency greater than the threshold percentage, that number is greater than the limiting number, e.g., seven, the menu manager 1054 may increase the threshold percentage to lower the number of menu items that have a selection frequency greater than the threshold percentage. Thus, the menu manager 1054 may be configured to select and display a specific number of menu items having the highest selection frequencies. In an embodiment, the limiting function may operate as follows, as applied to any type of MUI module. The threshold percentage may be used to determine which menu items will be displayed (e.g., not limited). For example, a threshold percentage of 90% or 80% may be used, meaning that only menu items with a selection frequency higher than 90% or 80% are displayed. In an example, the selection frequency may be applied based on user login sessions, meaning that only menu items used 90% or 80% of the time that a user logs in are displayed. The limiting function may be applied to one or more menu levels, i.e., to a first menu level, a second menu level, etc. In some embodiments, the threshold may vary based on the menu level (e.g., lower levels may have lower frequency requirements for display; as there are often a greater number of options at lower levels, they may be selected less often). Those menu items that do not meet the threshold (e.g., used 10% or less, or used 20% or less) are displayed in the advanced context menu, which changes according to the current menu being displayed. In this manner, the user's choices are limited to those that are most frequently used throughout the MUI, permitting significantly faster navigation by the user. The 90%/10% and/or 80%/20% values are exemplary only and other values may be selected according to the MUI module being implemented.

In an example, the limiting function may also be based on a default protocol as compared to a user-customized protocol. For example, a vendor may market an assay kit including a standard protocol that further permits customer modifications. The standard protocol options may be included in the available menu items displayed in the active portion as the user moves through the menuing system, while the available customer modifications may be displayed in the advanced context menu. This division of menu items may be adjusted based on actual user operation after that particular assay kit has been used several times by a user. Similarly, by using the limiting command, menu items (user-selectable options) may be limited to fewer than a number of menu items on the first menu, the second menu, and the third menu. In certain embodiments, when the period of time expires, the menu item selection table may delete the selection history for a new determination. In this example, the menu item(s) that were previously excluded will again be made available.

In embodiments, the MUI may provide team integration via communications between multiple MUI modules. An integrated system managed by systems consistent with embodiments hereof may be managed by multiple MUI modules configured for completing different tasks by different operators. For example, using the example of a laboratory information management system (LIMS), an admin console module, an experimental design module, an inventory control module, an experimental analysis module, and an experimental procedure module may be provided.
The admin console module may provide the features and functionality to manage the various users, operators, instruments, and teams. The experimental design module may permit one or more members of a team to design experiments that other members of the team will conduct. The inventory control module may permit other team members to review inventory and order more consumables, taking into account experimental history and future scheduled experiments. The experimental procedure module may permit team members responsible for running the experiments to access the already designed experiments and implement them, through interaction between the MUI, the operator, and external systems. Finally, the experimental analysis module may permit other team members to access results of experiments after they have been conducted. Based on user and team set-up prepared via the admin console, each user may log in to the system and be provided with access to the requisite modules for completing the tasks that they are responsible for. In embodiments, the requisite modules may be installed on computing devices in appropriate locations for completing tasks (i.e., an experimental procedure module may be installed on a device connected to a laboratory instrument while an admin console module may be installed on a desktop device). Accordingly, the systems provided herein permit the integration of workflows between multiple team members through the use of a single and consistent interface. In embodiments, the display manager 1050 may be configured to provide one or more icons or animations to designate a "working" status of the methodical user interface control system 1102. When the methodical user interface control system 1102 is processing, a working status indication is provided to alert a user that processing is occurring to prevent impatience. In an embodiment, a working status indication may be provided via a light fountain display presented in a portion of the screen not occupied by active or historical portions. For example, a bottom portion of the screen, centered beneath the active portion, may be used for a light fountain display. The light fountain may provide a series of cascading bars shown in colors consistent with the remainder of the MUI. In an embodiment, the cascading bars may be presented in white and various shades of blue. In an embodiment, the bars are presented in four rows of elongated bars. Each row may contain a plurality of bars, for example between two and twenty, of varying lengths. When the system is processing, the bars may flash on and off in different shades of white and blue and in different lengths, giving the impression of a waterfall or light fountain. Embodiments described herein further include methods of designing user interface systems. For example, such methods may include the design of MUIs consistent with embodiments hereof. Methods of designing user interface systems may include generating hierarchical menu trees as described herein. Hierarchical menu trees may include a series of menus, each including menu items that lead to a subsequent series of menus. Methods of designing user interface systems may further include selecting execution menus to terminate branches of the hierarchical menu tree, wherein the execution menus are configured to execute one or more commands within the software, to provide one or more sets of instructions to a user, and/or to output one or more commands to a connected device, system, instrument, or machine.
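By way of a non-limiting illustration, one possible in-memory representation of such a hierarchical menu tree, with execution menus terminating branches, is sketched below in Python; the MenuNode name and the example labels are hypothetical assumptions, not the disclosed implementation.

from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class MenuNode:
    # One menu in the hierarchical menu tree; a node with an execute
    # callable is an execution menu that terminates its branch.
    label: str
    children: List["MenuNode"] = field(default_factory=list)
    execute: Optional[Callable[[], None]] = None

    def is_execution_menu(self) -> bool:
        return self.execute is not None

# Hypothetical example tree: one branch ends in an execution menu that
# could issue a command to software, a user, or a connected instrument.
root = MenuNode("Start", children=[
    MenuNode("Run Assay", children=[
        MenuNode("Read Plate", execute=lambda: print("issuing read command")),
    ]),
    MenuNode("Review Results", execute=lambda: print("opening review")),
])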
Methods of designing user interface systems may further include configuring each of the menus in the hierarchical menu tree with one or more display modes, including at least an active display mode for display in an active portion of a user interface and an historical display mode for display in an historical portion of the user interface. Further aspects of methods of user interface design may include design methods for any of the menu functionalities described herein. In further embodiments, MUIs consistent with the disclosure may provide integrated help options during hierarchical menu navigation. A user may request help with a given menu by pressing a particular key combination and/or by accessing a help option displayed by the advanced context menu. Integrated help options may include one or more dialog boxes designed to provide explanations to a user regarding the options presented. As discussed above, the MUI provides a large amount of blank or background space. Thus, help options may be presented as pop-ups or dialog boxes pointing to the portions of the MUI for which a user seeks help without compromising the original MUI display. In embodiments, enabling the help functionality may cause a dialog box to appear as a user hovers over or otherwise indicates any item in the MUI. In further embodiments, the MUI historical portion may be further adapted to display menu items of menus subsequent to the current menu. For example, as a user navigates a current menu, they may scroll a vertical wheel, causing different menu items to be highlighted or emphasized. A submenu related to the highlighted menu item may be displayed in the historical portion to provide a visual representation of a subsequent menu to the current menu including future items that can be subsequently selected. In embodiments, as discussed above, the first active portion and the second historical portion are each adapted for consistent display in a same portion of the MUI. Although the positioning of each of these portions is not limited to a specific place on the MUI, in certain embodiments, the location, once selected, is maintained. Accordingly, the active portion of the MUI display is adapted to be consistently displayed within a first same area of the UI display to optimize a user's focus while interacting with the UI display, and the historical portion of the MUI display is adapted to be consistently displayed within a second same area of the UI display to optimize a user's focus while interacting with the UI display. The prior description provides example menu configurations for providing a UI display of multiple menus in a hierarchical menu tree. FIGS. 2D-2M provide additional examples of menu display configurations. The following menu display configurations may be used, without limitation, in any combination with each other and with the menu configurations previously disclosed. For example, selection of a particular menu item anywhere in the hierarchical menu tree may cause the processor to execute commands to cause the UI display to shift to any of the menu configurations described herein. In particular, specific menu display configurations may be associated with specific menu selections. FIG. 2D shows another example of a menu display configuration in one embodiment. FIG. 2D illustrates a two-wheel configuration in which the first wheel option has sub-options in a second wheel.
For instance, selecting an option in a first wheel of options displays, in the second wheel, the sub-options associated with the selected option. In an embodiment, a first portion 214 of the display may initially display the first wheel, and responsive to a selection of an option from the first wheel, the first wheel with its options may be relocated to a second portion 216 adjacent to the first portion. The first portion may then display the second wheel with sub-options to the first option, for example, in a parallel fashion (first wheel displayed in parallel to the second wheel in the same visual orientation). In further embodiments of this embodiment, both the first wheel and the second wheel may be displayed in the first portion 214 of the MUI display 206. The first wheel may be displayed in a first sub-portion of the first portion 214 and the second wheel may be displayed in a second sub-portion of the first portion 214. As used herein, sub-portions may be divided portions of a larger portion. Sub-portions may also be used interchangeably with sub-sections. In embodiments, selection of a menu item in the first wheel may be caused simply by clicking on any menu item in the first wheel or by rotating any menu item in the first wheel to a prominent, emphasized position. Selection of an item from a first menu on the first wheel may cause the second menu displayed on the second wheel to be revised accordingly. In still further embodiments of this embodiment, the first portion 214 may be split into more than two sub-portions, with each sub-portion including a wheel displaying a corresponding menu. Thus, three wheels may display a first menu, a second menu, and a third menu, representing different levels of a hierarchical menu tree. In another example, three wheels may display a second, third, and fourth menu. Other examples may include any number of wheels. In further embodiments, multiple wheels may be displayed in multiple sub-portions of the first portion 214 to permit the user to select from multiple menus at a same hierarchical menu level. For example, selection of a specific menu item at one menu level may lead to the display of multiple submenus at the same level. Thus, selection of an item at a second menu level may lead to display of multiple third menus, each containing a plurality of third menu items. In embodiments, the multiple submenus displayed may be execution menus, permitting a user to make multiple execution menu selections concurrently. In embodiments where multiple submenus are displayed, the multiple submenus may be related or otherwise associated with one another. FIG. 2E shows yet another example of a menu display configuration in one embodiment. In this display configuration, two wheels are compressed into one wheel. A wheel option has sub-options which are expressed within the one wheel associated with the active wheel option. In this configuration, the first portion and the second portion of the display overlap, but all menu items remain visible (or can be made visible by expanding collapsed items, or by sliding or rotating a wheel of items). For instance, the second wheel of options may be displayed within the first wheel. The first wheel of options may be rotatable in one direction (e.g., vertically, up and down) while the second wheel of options may be rotatable in another direction (e.g., horizontally, sideways, left and right). The selected path is also made visible in the second portion.
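A minimal sketch, in Python and with hypothetical names, of the cascading behavior common to the two-wheel configurations of FIGS. 2D and 2E follows; it assumes only that selecting an item on one wheel relocates the selected path to the historical portion and repopulates the active wheel with the associated sub-options.

class WheelUI:
    # Hypothetical container: first portion 214 (active) and second
    # portion 216 (historical) of the MUI display.
    def __init__(self, first_wheel_items):
        self.active_items = first_wheel_items
        self.historical_portion = []

SUBMENUS = {"Wheel Option 1": ["Sub-option 1", "Sub-option 2"],
            "Wheel Option 2": ["Sub-option 3"]}

def on_wheel_select(ui, selection):
    # Relocate the selected path to the historical portion and display the
    # associated sub-options on the active wheel.
    ui.historical_portion.append(selection)
    ui.active_items = SUBMENUS.get(selection, [])

ui = WheelUI(list(SUBMENUS))
on_wheel_select(ui, "Wheel Option 1")   # active wheel now shows sub-options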
In the FIG. 2E example, for instance, selecting ‘Sub-option 2’ shown in the display moves that selected option below the ‘First Wheel Option 1’. FIGS. 2F-2G show still yet another example of a menu display configuration in one embodiment. The figures show switching of wheel options from horizontal to vertical. FIG. 2F shows a menu of options displayed in a graphical wheel, for example, whose displayed options are rotatable in a horizontal direction (left and right). The wheel is displayed in a first portion of a graphical user interface display. Upon selecting an option (a menu item in the list of options), the graphical wheel is switched to a vertically rotatable wheel. For instance, the wheel is moved or relocated to a second portion of the graphical user interface display, and the first portion of the graphical user interface display now displays a list of sub-options related to the option selected in the previous menu of options. In one embodiment, the second portion of the display may display up to a threshold number of menu levels, for example, after which a different visualization configuration may be employed for displaying the past menu levels, to keep the second portion from growing too large. For instance, referring to FIG. 2C, if there are more than a threshold number of menu levels (as an example, FIG. 2C shows 2 levels (202, 210)), a visualization mechanism may be employed that is able to visualize all past menu levels without having to grow the second portion of the display (e.g., FIG. 2C at 208). Consider, for example, a threshold number of 3. In that example, the second portion of the display may show 3 menu levels. When an additional choice for the next level is made (e.g., a 4th menu level), the second portion may show the most recent past 3 selections (the bottom 3 levels), with the items in the second portion made scrollable up and down. So, in this example, the first menu level choice is visible by scrolling on the second portion. As another example, the second portion may always show the top 2 levels, i.e., the first 2 decisions, and the last decision. In this way, the user is shown an overall context of a workflow, for instance, top-down. Tapping or scrolling the second portion allows the user to expand out the menu items, for example, like an accordion. In another aspect, a search function may be provided associated with a wheel. Search keywords allow for filtering the wheel options available to the user. The search function helps in handling a long wheel of options or a multi-wheel of options, which may take a long time to navigate. FIGS. 2H-2J show an example of the first portion and the second portion displayed as a series of concentric circles in one embodiment. Referring to FIG. 2H, a dial 220 may be rotated clockwise or counterclockwise to view, in an option window 218, a menu item or item to select. Tapping on the area of the dial (e.g., circle) 220 selects the option. Selecting an option, for example, viewed via an option window 218, transitions the user interface to a configuration shown in FIG. 2I. For instance, in FIG. 2I, concentric dials expand inside out, showing another concentric circle to represent another level (e.g., sub-level) of menu items or paths. Sub-options may be viewed via an option window 222 on that circle 224 (also referred to as a dial) by rotating that dial 224 clockwise or counterclockwise. Selection of an option in that level (shown as sub-option ‘n’) 222 may be made by tapping on the area of that circle 224 (that is non-overlapping with the inner circle or dial 220).
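By way of a non-limiting illustration, the rotate-and-tap selection model of FIGS. 2H-2I may be sketched as follows in Python; the Dial class and its option labels are hypothetical.

class Dial:
    # A rotatable dial: rotating changes which option is visible in the
    # option window; tapping selects the visible option.
    def __init__(self, options):
        self.options = options
        self.index = 0

    def rotate(self, steps):
        self.index = (self.index + steps) % len(self.options)

    def visible_option(self):
        return self.options[self.index]

outer = Dial(["Option 1", "Option 2", "Option 3"])        # dial 220
outer.rotate(1)
selected = outer.visible_option()                         # tap selects this
inner = Dial(["Sub-option %d" % n for n in range(1, 4)])  # next level, dial 224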
In another embodiment, selecting an option from the dial or circular menu user interface (for example, as shown in FIG. 2H) may transition the user interface state to the configuration shown in FIG. 2J. For instance, the next level of option selection billows out from the selected option, expanding the dial to show another inner dial 224 with an option window 222. In an embodiment, the number of options that can be viewed on an option window (e.g., 218 and 222) need not be limited, such that an unlimited number of options may be shown and selected as applicable for an application. In an embodiment, an option window (e.g., 218) may be enlarged to show a selected option (e.g., as highlighted) and one or more unselected options, for example, an unselected option that appears before the selected option and another unselected option that appears after the selected option. In another aspect, an option window (e.g., 219) may show more than one item or option at a time, for instance, 3 menu items or options. In this example, tapping on a menu item in the option window selects the option. After a selection is made, the selected option may be displayed in a highlighted format or another differentiating format, for instance, to distinguish the selected option from unselected options appearing in the option window. In another embodiment, the relocation command may specify that the second portion is concentric with the first portion and that the relocated menu be displayed adjacent to the first portion (and concentric with it), where the first portion and the second portion are to be displayed on the MUI display 206 as a series of concentric circles. For example, the first portion may be displayed as the center circle of the series of concentric circles, with a relocated menu level(s) of the hierarchy being displayed as the circles outside or surrounding the center circle. FIG. 2K shows a tree type of menu levels in an embodiment. The hierarchical menu tree shown in FIG. 2K includes a first menu of menu items, a second menu of submenu items, a third menu of sub-submenu items, and four execution menus. One execution menu is associated with submenu item 1 and three more are associated with sub-submenu items 1-3. Selection of menu item 1 from the first menu leads to display of the second menu of submenu items. Selection of submenu item 1 leads to an execution menu for submenu item 1 in which process parameters may be selected. Selection of submenu item 2 leads to a third menu of sub-submenu items. Selection of any one of sub-submenu items 1-3 leads to execution menus for these respective third menu items. FIG. 2L shows another example of a menu display configuration in one embodiment. A graphical element 242 such as a wheel or a slider (or another graphical element) is displayed in a portion 240 of a display screen. The graphical element 242, e.g., a wheel, is ordered with the most recent ‘n’ items first (reverse chronological) 244, with a search function such as a search box or area 246 next, followed by a list, for example, an alphanumerically sorted list, of all of the menu items 248. In another embodiment, the menu items shown at 248 appear as indexed, for instance, as a search term is entered in the search box 246. The entire wheel 242 is made scrollable. For instance, a user can scroll through the entire wheel 242 or enter a search string in the search box 246. Entering a search term in the search area 246 displays menu items that match the search term as each search character is entered.
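A minimal sketch, in Python and under the assumption that matching is a simple case-insensitive substring test, of the FIG. 2L search behavior follows; the function and field names are hypothetical.

def filter_wheel(all_items, recent_items, query=""):
    # With no query, one combined wheel: recent items first (reverse
    # chronological), then the alphanumerically sorted full list. Any typed
    # character bifurcates the wheel and filters the indexed list.
    if not query:
        return {"bifurcated": False,
                "wheel": list(recent_items) + sorted(all_items)}
    matches = sorted(i for i in all_items if query.lower() in i.lower())
    return {"bifurcated": True,
            "recent": list(recent_items), "indexed": matches}

state = filter_wheel(["Assay A", "Assay B", "Blocker"], ["Assay B"], "ass")
# state["indexed"] == ["Assay A", "Assay B"]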
On each character entered, for instance, one or more menu items closest to matching the search character(s) are indexed at 248. The wheel 242 is bifurcated into two independent wheels, one that displays recently chosen menu items 244 and another that displays the indexed list of all menu items 248. The two wheels 244 and 248 are scrollable or movable independently from one another. So, for example, the entire wheel 242 is made to move or scroll as one wheel. Responsive to receiving or detecting an entry of a search term or character in the search area 246, the wheel is bifurcated into two separate wheels 244 and 248, which can be independently scrolled. One of the two separate wheels, e.g., 248, shows a filtered list of menu items based on the search. FIGS. 2M-2O show examples of scrollable wheels that scroll or slide from a first menu item to a last menu item and back from the last menu item to the first menu item. In this embodiment, a graphical element (e.g., a wheel or slider) that shows the menu items does not revolve or rotate fully around, but stops at the last menu item (or at the first menu item, if rotating from the last menu item). In this way, for example, the beginning and end of the menu are always apparent because the two do not merge or connect. This technique reduces computer processing cycle time because the wheel and/or the slider is able to convey (and the user is able to immediately understand) the full menu of choices, with a clear indication as to which is the first menu item and which is the last menu item in the choices presented by the wheel and/or the slider, such that the wheel or the slider need not repeatedly scroll in an attempt to determine which menu item is the first and which is the last, or to determine whether all menu items have been visited. In embodiments, the wheel and/or the slider need not rotate fully; for example, it does not make a full revolution or complete circle. For instance, the wheel and/or the slider rotates or slides from a beginning menu item to an ending menu item, and reverses to rotate or slide back from the ending menu item to the beginning menu item. In this way, for example, the beginning and end of the menu are always apparent because the two are spaced apart so as not to merge or come together. This technique decreases processing time because the wheel and/or the slider is able to convey (and the user is able to immediately understand) the full menu of choices with a clear indication as to which is the first menu item and which is the last menu item in the choices presented by the wheel and/or the slider. Further, as the wheel and/or slider rotates, selectable choices can be displayed in a more prominent fashion, such as using larger text, bolder font, etc. Choices that were previously selectable when the wheel and/or slider was rotated/slid to a different position, or that will be selectable as the wheel and/or slider continues to rotate/slide, can be displayed in a less prominent fashion, such as by shrinking or fading the text. In one embodiment, the more prominently displayed choices can be displayed to appear as if they are closer to the user vis-à-vis the less prominent choices. Referring to FIG. 2M, the first menu item 252 is shown in the center of the wheel (or slider) 250. A menu item shown at the center may be shown in a highlighted format (e.g., bigger characters, different color font, etc.). Blank space appears before the first menu item (e.g., above the center of the wheel where the first menu item is displayed).
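By way of a non-limiting illustration, the non-wrapping scroll behavior of FIGS. 2M-2O reduces to clamping the centered index rather than wrapping it; the following Python sketch uses hypothetical names.

def clamp_scroll(index, steps, item_count):
    # Clamp rather than wrap: the wheel stops at the first or last menu
    # item, leaving blank space beyond each end.
    return max(0, min(item_count - 1, index + steps))

items = ["Item %d" % n for n in range(1, 9)]
pos = clamp_scroll(0, -3, len(items))    # stays at the first item (index 0)
pos = clamp_scroll(pos, 99, len(items))  # stops at the last item (index 7)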
Continuing with FIG. 2M, the next menu items (e.g., 254, 256) appear adjacent to (e.g., below) the first menu item. Scrolling the wheel (e.g., in a vertical direction) shows additional menu items, e.g., as shown in FIG. 2N. For instance, as shown in FIG. 2N, the next menu items are shown as the wheel 250 is scrolled up. FIG. 2O shows the last menu item at the center of the wheel, with the previous menu items appearing adjacent to (e.g., above) the last menu item. The wheel or the slider 250 in this embodiment does not rotate around to show the first menu item after the last menu item 258. Instead, the wheel stops rotating at the last menu item 258. Blank space is shown below the last menu item 258. Similarly, navigating back (e.g., scrolling the wheel in the opposite direction) shows the previous menu items up to the first menu item. While the example graphical wheel shown in FIGS. 2M-2O illustrates a vertical wheel, a horizontal wheel would function in a similar manner. For instance, a first menu item may appear at a center of a horizontal wheel with the next menu items appearing horizontally adjacent to the first menu item (e.g., right of the center). Scrolling the wheel to the left in this example would display additional menu items. When the last menu item is reached by scrolling, that last menu item appears at the center with blank space beyond the last menu item (e.g., to the right of the last menu item). In another aspect, the orientation of the rotation may be reversed: e.g., with a vertical wheel, scrolling down (instead of up) to navigate from the first to the last menu item; with a horizontal wheel, scrolling right to navigate from the first to the last menu item. The number of menu items (options) shown on a wheel at one time is configurable, for example, based on screen size and/or the area of the screen allocated for the wheel, etc., and is not limited to the 6 items shown in FIG. 2N. A non-limiting application of such a user interface is in selecting a channel to watch on television (TV). Broader categories may be displayed on a top horizontal area with finer categorizations stacked below, and leaf items may be displayed vertically, for example, on a vertical wheel. For example, referring to FIG. 2E, the ‘Wheel Option 1’ may represent a genre and the ‘Sub-options 1’ may represent shows and/or movies organized in a grid. In an embodiment, the methodical user interface control system 1102 provides an interface to a user for the running of a process. A process may include conducting an experiment, performing one or more manufacturing operations, or any other procedure. The following describes in detail various instructions for conducting experiments consistent with embodiments hereof. Instructions for conducting an experiment may be for manipulating, designing, performing, reviewing, measuring, analyzing, storing, and conducting any other task related to the experiment. The experiment may be, but is not limited to, one or more assays. The methodical user interface control system 1102 may be incorporated into and/or associated with an assay system and provide commands to generate a MUI display 206 for the system. The MUI display 206, in response to the commands, is able to display or provide a visual representation of a path of a workflow and/or menu items for the assay. The assays may include one or more electrochemiluminescence (ECL) assays. The methods of the present embodiments may be used in conjunction with a variety of assay devices and/or formats.
The assay devices may include, e.g., assay modules, such as assay plates, cartridges, multi-well assay plates, reaction vessels, test tubes, cuvettes, flow cells, assay chips, lateral flow devices, etc., having assay reagents (which may include targeting agents or other binding reagents) added as the assay progresses or pre-loaded in the wells, chambers, or assay regions of the assay module. These devices may employ a variety of assay formats for specific binding assays, e.g., immunoassay or immunochromatographic assays. Illustrative assay devices and formats are described herein below. In certain embodiments, the methods of the present embodiments may employ assay reagents that are stored in a dry state, and the assay devices/kits may further comprise or be supplied with desiccant materials for maintaining the assay reagents in a dry state. The assay devices preloaded with the assay reagents can greatly improve the speed and reduce the complexity of assay measurements while maintaining excellent stability during storage. The dried assay reagents may be any assay reagent that can be dried and then reconstituted prior to use in an assay. These include, but are not limited to, binding reagents useful in binding assays, enzymes, enzyme substrates, indicator dyes and other reactive compounds that may be used to detect an analyte of interest. The assay reagents may also include substances that are not directly involved in the mechanism of detection but play an auxiliary role in an assay including, but not limited to, blocking agents, stabilizing agents, detergents, salts, pH buffers, preservatives, etc. Reagents may be present in free form or supported on solid phases including the surfaces of compartments (e.g., chambers, channels, flow cells, wells, etc.) in the assay modules or the surfaces of colloids, beads, or other particulate supports. A wide variety of solid phases are suitable for use in the methods of the present embodiments including conventional solid phases from the art of binding assays. Solid phases may be made from a variety of different materials including polymers (e.g., polystyrene and polypropylene), ceramics, glass, composite materials (e.g., carbon-polymer composites such as carbon-based inks). Suitable solid phases include the surfaces of macroscopic objects such as an interior surface of an assay container (e.g., test tubes, cuvettes, flow cells, cartridges, wells in a multi-well plate, etc.), slides, assay chips (such as those used in gene or protein chip measurements), pins or probes, beads, filtration media, lateral flow media (for example, filtration membranes used in lateral flow test strips), etc. Suitable solid phases also include particles (including but not limited to colloids or beads) commonly used in other types of particle-based assays, e.g., magnetic, polypropylene, and latex particles; materials typically used in solid-phase synthesis, e.g., polystyrene and polyacrylamide particles; and materials typically used in chromatographic applications, e.g., silica, alumina, polyacrylamide, polystyrene. The material may also be a fiber, such as a carbon fibril. Microparticles may be inanimate or, alternatively, may include animate biological entities such as cells, viruses, bacteria and the like. The particles used in the present method may be comprised of any material suitable for attachment to one or more binding partners and/or labels, and that may be collected via, e.g., centrifugation, gravity, filtration or magnetic collection.
A wide variety of different types of particles that may be attached to binding reagents are sold commercially for use in binding assays. These include non-magnetic particles as well as particles comprising magnetizable materials which allow the particles to be collected with a magnetic field. In one embodiment, the particles are comprised of a conductive and/or semiconductive material, e.g., colloidal gold particles. The microparticles may have a wide variety of sizes and shapes. By way of example and not limitation, microparticles may be between 5 nanometers and 100 micrometers. Preferably, microparticles have sizes between 20 nm and 10 micrometers. The particles may be spherical, oblong, rod-like, etc., or they may be irregular in shape. The particles used in the present method may be coded to allow for the identification of specific particles or subpopulations of particles in a mixture of particles. Such coded particles have been used to enable multiplexing of assays employing particles as solid phase supports for binding assays. In one approach, particles are manufactured to include one or more fluorescent dyes, and specific populations of particles are identified based on the intensity and/or relative intensity of fluorescence emissions at one or more wavelengths. This approach has been used in the Luminex xMAP systems (see, e.g., U.S. Pat. No. 6,939,720) and the Becton Dickinson Cytometric Bead Array systems. Alternatively, particles may be coded through differences in other physical properties such as size, shape, imbedded optical patterns and the like. The methods of the embodiments can be used with a variety of methods for measuring the amount of an analyte and, in particular, measuring the amount of an analyte bound to a solid phase. Techniques that may be used include, but are not limited to, techniques known in the art such as cell culture-based assays, binding assays (including agglutination tests, immunoassays, nucleic acid hybridization assays, etc.), enzymatic assays, colorimetric assays, etc. Other suitable techniques will be readily apparent to one of average skill in the art. Some measurement techniques allow for measurements to be made by visual inspection; others may require or benefit from the use of an instrument to conduct the measurement. Methods for measuring the amount of an analyte include label-free techniques, which include but are not limited to i) techniques that measure changes in mass or refractive index at a surface after binding of an analyte to a surface (e.g., surface acoustic wave techniques, surface plasmon resonance sensors, ellipsometric techniques, etc.), ii) mass spectrometric techniques (including techniques like MALDI, SELDI, etc. that can measure analytes on a surface), iii) chromatographic or electrophoretic techniques, and iv) fluorescence techniques (which may be based on the inherent fluorescence of an analyte), etc. Methods for measuring the amount of an analyte also include techniques that measure analytes through the detection of labels which may be attached directly or indirectly (e.g., through the use of labeled binding partners of an analyte) to an analyte. Suitable labels include labels that can be directly visualized (e.g., particles that may be seen visually and labels that generate a measurable signal such as light scattering, optical absorbance, fluorescence, chemiluminescence, electrochemiluminescence, radioactivity, magnetic fields, etc.).
Labels that may be used also include enzymes or other chemically reactive species that have a chemical activity that leads to a measurable signal such as light scattering, absorbance, fluorescence, etc. The use of enzymes as labels has been well established in Enzyme-Linked ImmunoSorbent Assays, also called ELISAs, Enzyme ImmunoAssays or EIAs. In the ELISA format, an unknown amount of antigen is affixed to a surface and then a specific antibody is washed over the surface so that it can bind to the antigen. This antibody is linked to an enzyme, and in the final step a substance is added that the enzyme converts to a product that provides a change in a detectable signal. The formation of product may be detectable, e.g., due to a difference, relative to the substrate, in a measurable property such as absorbance, fluorescence, chemiluminescence, light scattering, etc. Certain (but not all) measurement methods that may be used with solid phase binding methods according to the embodiments may benefit from or require a wash step to remove unbound components (e.g., labels) from the solid phase. Accordingly, the methods of the embodiments may comprise such a wash step. Methods disclosed herein may be performed manually, using automated technology, or both. Automated technology may be partially automated, e.g., one or more modular instruments, or a fully integrated, automated instrument. Example automated systems are discussed and described in commonly owned International Patent Appl. Pub. Nos. WO 2018/017156 and WO 2017/015636 and International Patent Appl. Pub. No. WO 2016/164477, each of which is incorporated by reference herein in its entirety. Automated systems (modules and fully integrated) on which the methods herein may be carried out may comprise the following automated subsystems: computer subsystem(s) that may comprise hardware (e.g., personal computer, laptop, hardware processor, disc, keyboard, display, printer), software (e.g., processes such as drivers, driver controllers, and data analyzers), and database(s); liquid handling subsystem(s), e.g., sample handling and reagent handling, e.g., robotic pipetting head, syringe, stirring apparatus, ultrasonic mixing apparatus, magnetic mixing apparatus; sample, reagent, and consumable storing and handling subsystem(s), e.g., robotic manipulator, tube or lid or foil piercing apparatus, lid removing apparatus, conveying apparatus such as linear and circular conveyors and robotic manipulators, tube racks, plate carriers, trough carriers, pipet tip carriers, plate shakers; centrifuges; assay reaction subsystem(s), e.g., fluid-based and consumable-based (such as tube and multi-well plate); container and consumable washing subsystem(s), e.g., plate washing apparatus; magnetic separator or magnetic particle concentrator subsystem(s), e.g., flow cell, tube, and plate types; cell and particle detection, classification and separation subsystem(s), e.g., flow cytometers and Coulter counters; detection subsystem(s) such as colorimetric, nephelometric, fluorescence, and ECL detectors; temperature control subsystem(s), e.g., air handling, air cooling, air warming, fans, blowers, water baths; waste subsystem(s), e.g., liquid and solid waste containers; global unique identifier (GUI) detecting subsystem(s), e.g., 1D and 2D bar-code scanners such as flat bed and wand types; sample identifier detection subsystem(s), e.g., 1D and 2D bar-code scanners such as flat bed and wand types.
Analytical subsystem(s), e.g., chromatography systems such as high-performance liquid chromatography (HPLC), fast-protein liquid chromatography (FPLC), and mass spectrometers, can also be modules or fully integrated. Automated systems consistent with embodiments hereof may be controlled and/or managed by the methodical user interface control system 1102. Systems or modules that perform sample identification and preparation may be combined with (or be adjoined to or adjacent to or robotically linked or coupled to) systems or modules that perform assays and that perform detection or that perform both. Multiple modular systems of the same kind may be combined to increase throughput. Modular system(s) may be combined with module(s) that carry out other types of analysis such as chemical, biochemical, and nucleic acid analysis. The automated system may allow batch, continuous, random-access, and point-of-care workflows and single, medium, and high sample throughput. The system may include, for example, one or more of the following devices: plate sealer (e.g., Zymark), plate washer (e.g., BioTek, TECAN), reagent dispenser and/or automated pipetting station and/or liquid handling station (e.g., TECAN, Zymark, Lab systems, Beckman, Hamilton), incubator (e.g., Zymark), plate shaker (e.g., Q. Instruments, Inheco, Thermo Fisher Scientific), compound library or sample storage and/or compound and/or sample retrieval module. One or more of these devices is coupled to the apparatus via a robotic assembly such that the entire assay process can be performed automatically. According to an alternate embodiment, containers (e.g., plates) are manually moved between the apparatus and various devices (e.g., stacks of plates). The automated system may be configured to perform one or more of the following functions: (a) moving consumables such as plates into, within, and out of the detection subsystem, (b) moving consumables between other subsystems, (c) storing the consumables, (d) sample and reagent handling (e.g., adapted to mix reagents and/or introduce reagents into consumables), (e) consumable shaking (e.g., for mixing reagents and/or for increasing reaction rates), (f) consumable washing (e.g., washing plates and/or performing assay wash steps (e.g., well aspirating)), and (g) measuring ECL in a flow cell or a consumable such as a tube or a plate. The automated system may be configured to handle individual tubes placed in racks, and multiwell plates such as 96- or 384-well plates. Methods for integrating components and modules in automated systems as described herein are well-known in the art; see, e.g., Sargeant et al., Platform Perfection, Medical Product Outsourcing, May 17, 2010. In embodiments, the automated system is fully automated, is modular, is computerized, performs in vitro quantitative and qualitative tests on a wide range of analytes, and performs photometric assays, ion-selective electrode measurements, and/or electrochemiluminescence (ECL) assays. In embodiments, the system includes the following hardware units: a control unit, a core unit and at least one analytical module. In embodiments, the control unit uses a graphical user interface to control all instrument functions, and is comprised of a readout device, such as a monitor, input device(s), such as keyboard and mouse, and a personal computer using, e.g., a Windows operating system. In embodiments, the core unit is comprised of several components that manage conveyance of samples to each assigned analytical module.
The actual composition of the core unit depends on the configuration of the analytical modules, which can be configured by one of skill in the art using methods known in the art. In embodiments, the core unit includes at least the sampling unit and one rack rotor as main components. Conveyor line(s) and a second rack rotor are possible extensions. Several other core unit components can include the sample rack loader/unloader, a port, a barcode reader (for racks and samples), a water supply and a system interface port. In embodiments, the analytical module conducts ECL assays and includes a reagent area, a measurement area, a consumables area and a pre-clean area. The methods of the invention may be applied to singleplex or multiplex formats where multiple assay measurements are performed on a single sample. Multiplex measurements that can be used with the invention include, but are not limited to, multiplex measurements i) that involve the use of multiple sensors; ii) that use discrete assay domains on a surface (e.g., an array) that are distinguishable based on location on the surface; iii) that involve the use of reagents coated on particles that are distinguishable based on a particle property such as size, shape, color, etc.; iv) that produce assay signals that are distinguishable based on optical properties (e.g., absorbance or emission spectrum) and/or v) that are based on temporal properties of assay signal (e.g., time, frequency or phase of a signal). The invention includes methods for detecting and counting individual detection complexes. In embodiments, the surface comprises a plurality of binding domains, and each analyte forms a complex in a different binding domain of the plurality of binding domains. In embodiments, the surface is a particle. In embodiments, the surface is a bead. In embodiments, the surface is a plate. In embodiments, the surface is a well in a multi-well array. In embodiments, the surface comprises an electrode. In embodiments, the electrode is a carbon ink electrode. In embodiments, each binding domain for each analyte of the one or more additional analytes is on a separate surface, and the surfaces are beads in a bead array. In embodiments, each binding domain for each analyte of the one or more additional analytes is on a single surface, and the binding domains form the elements of a capture reagent array on the surface. In embodiments, the surface comprises an electrode and the detection step of the method comprises applying a potential to the electrode and measuring electrochemiluminescence. In embodiments, applying a potential to the electrode generates an electrochemiluminescence signal. In a specific embodiment, the surface comprises a plurality of capture reagents for one or more analytes that are present in a sample, and the plurality of capture reagents are distributed across a plurality of resolvable binding regions positioned on the surface. Under the conditions used to carry out and analyze a measurement, a “resolvable binding region” is the minimal surface area associated with an individual binding event that can be resolved and differentiated from another area in which an additional individual binding event is occurring. 
Therefore, the method consists of binding one or more analytes to one or more capture reagents on the surface, determining the presence or absence of the analytes in a plurality of resolvable binding regions on the surface, and identifying the number of resolvable binding regions that contain an analyte of interest and/or the number of domains that do not contain analyte. The resolvable binding regions can be optically interrogated, in whole or in part, i.e., each individual resolvable binding region can be individually optically interrogated and/or the entire surface comprising a plurality of resolvable binding regions can be imaged and one or more pixels or groupings of pixels within that image can be mapped to an individual resolvable binding region. A resolvable binding region may also be a microparticle within a plurality of microparticles. The resolvable binding regions exhibiting changes in their optical signature can be identified by a conventional optical detection system. Depending on the detected species (e.g., type of fluorescence entity, etc.) and the operative wavelengths, optical filters designed for a particular wavelength can be employed for optical interrogation of the resolvable binding regions. In embodiments where optical interrogation is used, the system can comprise more than one light source and/or a plurality of filters to adjust the wavelength and/or intensity of the light source. In some embodiments, the optical signal from a plurality of resolvable binding regions is captured using a CCD camera. Other non-limiting examples of camera imaging systems that can be used to capture images include charge injection devices (CIDs), complementary metal oxide semiconductor (CMOS) devices, scientific CMOS (sCMOS) devices, and time delay integration (TDI) devices, as will be known to those of ordinary skill in the art. In some embodiments, a scanning mirror system coupled with a photodiode or photomultiplier tube (PMT) can be used for imaging. In embodiments, the binding of each analyte to its corresponding capture reagent is performed in parallel by contacting the one or more surfaces with a single liquid volume comprising a plurality of analytes. In embodiments, the plurality of analytes includes the analyte and one or more additional analytes. In embodiments, each step of the method is carried out for each analyte in parallel. In embodiments, the method is a simultaneous multiplexed assay. Multiplexed measurement of analytes on a surface is described herein; see also, e.g., U.S. Pat. Nos. 10,201,812; 7,842,246 and 6,977,722, incorporated by reference herein in their entireties. In a specific embodiment, the methods of the invention can be used in a multiplexed format by binding a plurality of different analytes to a plurality of capture reagents for those analytes, the capture reagents being immobilized on coded beads, such that the coding identifies the capture reagent (and analyte target) for a specific bead. The method may further comprise counting the number of beads that have a bound analyte (using the detection approaches described herein). Alternatively or additionally, the capture reagents can be bound, directly or indirectly, to different discrete binding domains on one or more solid phases, e.g., as in a binding array wherein the binding domains are individual array elements, or in a set of beads wherein the binding domains are the individual beads, such that discrete assay signals are generated on and measured from each binding domain.
If capture reagents for different analytes are immobilized in different binding domains, the different analytes bound to those domains can be measured independently. In one example of such an embodiment, the binding domains are prepared by immobilizing, on one or more surfaces, discrete domains of capture reagents that bind analytes of interest. Optionally, the surface(s) may define, in part, one or more boundaries of a container (e.g., a flow cell, well, cuvette, etc.) which holds the sample or through which the sample is passed. In a preferred embodiment, individual binding domains are formed on electrodes for use in electrochemical or electrochemiluminescence assays. Multiplexed measurement of analytes on a surface comprising a plurality of binding domains using electrochemiluminescence has been used in the Meso Scale Diagnostics, LLC, MULTI-ARRAY® and SECTOR® Imager line of products (see, e.g., U.S. Pat. Nos. 10,201,812; 7,842,246 and 6,977,722, incorporated herein by reference in their entireties). Still further, the capture reagents can be bound, directly or indirectly, to an electrode surface, which optionally includes different discrete binding domains, as described above. The electrode surface can be a component of a multi-well plate and/or a flow cell. Electrodes can comprise a conductive material, e.g., a metal such as gold, silver, platinum, nickel, steel, iridium, copper, aluminum, a conductive alloy, or the like. They may also include oxide coated metals, e.g., aluminum oxide coated aluminum. The electrode can include working and counter electrodes which can be made of the same or different materials, e.g., a metal counter electrode and a carbon working electrode. In one specific embodiment, electrodes comprise carbon-based materials such as carbon, carbon black, graphitic carbon, carbon nanotubes, carbon fibrils, graphite, graphene, carbon fibers and mixtures thereof. In one embodiment, the electrodes comprise elemental carbon, e.g., graphitic carbon, carbon black, carbon nanotubes, etc. Advantageously, they may include conducting carbon-polymer composites, conducting particles dispersed in a matrix (e.g., carbon inks, carbon pastes, metal inks, graphene inks), and/or conducting polymers. One specific embodiment of the invention is an assay module, preferably a multi-well plate, having electrodes (e.g., working and/or counter electrodes) that comprise carbon, e.g., carbon layers, and/or screen-printed layers of carbon inks. In embodiments, each binding domain comprises a targeting reagent complement capable of binding to a targeting reagent, and each anchoring reagent and capture reagent comprises a supplemental linking reagent capable of binding to a linking reagent, and the method further comprises immobilizing a capture reagent and anchoring reagent in each binding domain by: (1) binding the capture and anchoring reagent through the supplemental linking reagent to a targeting reagent connected to the linking reagent; and (2) binding the product of step (1) to the binding domain comprising the targeting reagent complement, wherein (i) each binding domain comprises a different targeting reagent complement, and (ii) each targeting reagent complement selectively binds to one of the targeting reagents. Accordingly, in embodiments, the surface comprises the targeting reagent complement; the targeting reagent is connected to the linking reagent; and each of the capture reagent and anchoring reagent comprises the supplemental linking reagent.
Thus, in embodiments, the targeting reagent complement on the surface binds to the targeting reagent, which is connected to the linking reagent, which binds to the supplemental linking reagent on the capture reagent and the anchoring reagent. In embodiments, the linking reagent has more than one binding site for supplemental linking reagents, and the immobilization of the capture reagent and anchoring reagent further comprises: binding the capture and anchoring reagent through the supplemental linking reagent to a targeting reagent connected to the linking reagent; and binding the resulting product to the binding domain comprising the targeting reagent complement, wherein (i) each binding domain comprises a different targeting reagent complement, and (ii) each targeting reagent complement selectively binds to one of the targeting reagents. For example, in the case where the targeting agent is an oligonucleotide, the linking reagent is streptavidin and the supplemental linking agent is biotin, a biotin-labeled oligonucleotide can be bound to a first of the four biotin binding sites of a streptavidin to form the targeting reagent connected to a linking reagent. A biotin-labeled capture reagent (i.e., a capture reagent linked to the supplemental linking agent) can then bind to a remaining biotin binding site on the streptavidin to connect the targeting agent to the capture reagent. Exemplary targeting reagents and targeting reagent complements are described herein. In embodiments, the targeting reagent and targeting reagent complement are two members of a binding partner pair selected from avidin-biotin, streptavidin-biotin, antibody-hapten, antibody-antigen, antibody-epitope tag, nucleic acid-complementary nucleic acid, aptamer-aptamer target, and receptor-ligand. In embodiments, the targeting reagent is biotin and the targeting reagent complement is streptavidin. In embodiments, the linking reagent and supplemental linking reagent pair is a different binding partner pair than the targeting reagent and targeting reagent complement pair. In embodiments, the linking reagent is avidin or streptavidin, and the supplemental linking reagent is biotin. In embodiments, the targeting reagent and targeting reagent complement are complementary oligonucleotides. In embodiments, the methods of the invention are applied to singleplex or multiplex formats where multiple assay measurements are performed on a single sample. Multiplex measurements that can be used with the invention include, but are not limited to, multiplex measurements i) that involve the use of multiple sensors; ii) that use discrete assay domains on a surface (e.g., an array) that are distinguishable based on location on the surface; iii) that involve the use of reagents coated on particles that are distinguishable based on a particle property such as size, shape, color, etc.; iv) that produce assay signals that are distinguishable based on optical properties (e.g., absorbance or emission spectrum) or v) that are based on temporal properties of assay signal (e.g., time, frequency or phase of a signal). Exemplary assay formats include V-PLEX (www.mesoscale.com/en/products and services/assay kits/v-plex) and U-PLEX (www.mesoscale.com/en/products and services/assay kits/u-plex gateway, and U.S. Pat. Nos. 10,201,812 and 10,189,023, each of which is incorporated herein by reference in its entirety). Additional ultrasensitive assay formats include those disclosed in U.S. Provisional Appl. No. 62/812,928, filed Mar. 1, 2019, and U.S. Provisional Appl.
No. 62/866,512, filed Jun. 25, 2019, each of which is incorporated herein by reference in its entirety. Exemplary plate readers include the MESO SECTOR S 600 (www.mesoscale.com/en/products and services/instrumentation/sector_s_600) and the MESO QUICKPLEX SQ 120 (www.mesoscale.com/en/products and services/instrumentation/quickplex_sq_120), both available from Meso Scale Diagnostics, LLC, and the plate readers described in U.S. Pat. No. 6,977,722 and U.S. Provisional Patent Appl. No. 62/874,828, titled "Assay Apparatuses, Methods and Reagents" by Krivoy et al., filed Jul. 16, 2019, each of which is incorporated by reference herein in its entirety. The user interface methodology described above may also be incorporated into a user interface of an assay system. The assay system that is described below allows a user to perform assays via the user interface. The following describes an example of a user interface incorporated into the assay system for an assay method. The terms "system software" and "system," referred to below in describing the functions of the assay system and its user interface, refer to software that implements the assay system. The user interface is able to display or visualize a path of a workflow and/or menu items. The following terminologies are used in describing the assay system and its user interface workflow. Advanced Context Menu—A menu of options dependent on the particular context (such as current screen, sub-step, and the state of the screen) for advanced users. Assay Method—The method by which an assay is performed, including but not limited to: 1. Instrument protocol that should be executed and the parameters for execution of that protocol; 2. Test plate layouts; 3. Calibrator titration scheme such as dilution factors; 4. Control layout; and 5. Sample replicate schemes. Audit Log—A continuous record of events, both automated and user-initiated, that happened in the system that may impact the results generated. This record is used to trace issues and to ensure proper operations in controlled environments. The Audit Log is persistent and immutable. It includes a subset of the information in the Instrument Log. Compatible Protocols—Protocols are compatible if they have the same basic outline and steps, although dilution ratios, times of incubation, washing, and others, may vary between them. Protocols are considered compatible if they can run on an automated platform together during the same run. Completed Run—A run that has been aborted, completed with flag(s), or completed successfully. CV—Coefficient of Variation. Database Clean—Resets the entire database, restoring it to the state it was in at system installation. ECL—Electrochemiluminescence. A proprietary format for detecting molecules of biological interest. Existing Run—A run that has been planned, aborted, completed with flag(s), or completed successfully. Global Product Data (GPD)—Data that is for a specific item identified with a GPI. While the same data can be used for multiple items, the GPI allows for matching of data to one specific item. The GPD may comprise information used to identify at least one element including (i) an assay consumable, (ii) one or more test sites within the consumable, (iii) a reagent and/or sample that has been or will be used in the consumable, or (iv) combinations thereof. Further, the GPD can be used to distinguish a first test site within the consumable from a different test site within the consumable.
The GPD can comprise lot identification information, lot specific analysis parameters, manufacturing process information, raw materials information, expiration date, calibration data, threshold information, the location of individual assay reagents and/or samples within one or more test sites of the assay consumable, Material Safety Data Sheet (MSDS) information, or combinations thereof. The GPD can also comprise one or more analytical tools that can be applied by the system to analyze data generated during and/or after the conduct of an assay, assay system maintenance information, system-consumable promotional information, system and/or consumable technical support information, or combinations thereof. In addition, GPD includes consumable identification and/or configuration information, and one or more steps of an assay protocol that can be applied by the system in the conduct of an assay using the consumable. Test sites may also be referred to as spots. Spot layouts may refer to arrays of test sites, for example, within a single well of a test plate or assay plate. Global Product Identifier (GPI)—A system/instrument/consumable vendor-specified, unique identifier for an individual specific product such as an assay consumable. The identifier can take any number of configurations. In the case of consumables such as assay plates, the identifier may be an associated manufacturing barcode. Instrument Log—A detailed log file that records all actions carried out by the system and any failures or error states that have occurred during a run. The Instrument Log is a rolling circular log with stored information, limited by the amount of memory space allocated to this log file; for instance, older entries are overwritten over time. Instrument Software—Software that controls the instrument hardware. LED—Light-emitting diode. A light source. Normal State—The instrument is considered to be in a normal state if the software is functioning without any error or warning. The instrument is returned to a normal state once an error state is recovered and/or a warning message is acknowledged. Run—A run includes 0 or more named samples and 1 or more assay methods and tests the samples according to the information described in the assay methods. Run Owner—The user who created the run. Sample—A generic term encompassing materials to be analyzed, including Calibrators, Controls, Blanks, and Unknowns. Sample ID—The unique identifier for each sample. Sample Layout—The sample locations and sample IDs on a plate. Sample Type—The functional type of a sample, such as Calibrator, Control, Blank, or Unknown. Spot Layout—The analyte locations and names in a well on a plate. Step—One of a sequence of separate, consecutive stages in the progression towards a goal. Steps constitute broad stages that may consist of multiple sub-steps. Sub-step—One of a sequence of separate, consecutive stages in the progression towards completion of a step. Sub-steps constitute focused activities within a step. Unexpected Barcode—A barcode that is different than the one expected. A consumable may also be considered to have an "unexpected barcode" if no barcode is read. User Interface (UI)—The software interface that the user of the instrument interacts with to control and monitor the system. UI Warning Event—Any attention message that requires a user response. The user should fix the error and/or acknowledge the message before proceeding. For example, a UI Warning Event may be that the instrument is in a "Not Ready" state.
System Events Log—A persisted log of events that occurred in the software that are not instrument related. FIG. 4 is a flow diagram illustrating a first user login user interface for an assay system in one embodiment. At 402, the system software may check that the End User License Agreement (EULA) associated with the assay system has been accepted every time it starts. When the user first starts the system software, the EULA is presented. A record of the username and the date and time is created when the user accepts the agreement. If the user has not previously accepted the agreement, at 404, the EULA is displayed and allows the user to accept the agreement. At 406, if the user does not accept the agreement, the software closes. At 408, a splash screen is displayed that contains: system software branding, copyright, legal notice, and software version. The initial login screen requests the username at 410. In one embodiment, the system software may present the past usernames used to log in on the system to minimize error due to typing the username. The user is allowed to also enter a new username that has not previously been used to log in. After selecting (or receiving) the username at 412, the software prompts the user to enter the password for the username at 414. In one embodiment, the system software may also use biometric means such as facial recognition, voice, and/or fingerprint, to log in or to verify login. In another embodiment, the system software may use a badge keycard that contains information that can be scanned or read via near field communication. At 416, the system software receives the entered password. Once the username and password have been entered, the system software authenticates the user at 418. If the user is authenticated successfully, the user interface proceeds to a start screen at 420. Otherwise, the system software via the user interface prompts the user to retry. The system software in one embodiment requires all users to log in to access the software. In one embodiment, authentication may be performed through the Microsoft Windows® authentication facility and may be configured to authenticate through Active Directory. In this first user interface display, the username and password prompt may be displayed in one orientation, for example, horizontally on a horizontal wheel graphical element 422. FIG. 5 is a flow diagram illustrating a method of displaying a start user interface screen display in one embodiment. This display screen includes lists of menu items in two different visual orientations, for example, horizontal and vertical. Thus, for example, a more general category of menu items is displayed on a horizontal wheel 502 and the submenu items are displayed on a vertical wheel 504. For instance, the START option 506, which has been selected from the login screen (FIG. 4), is presented on the horizontal wheel 502. The second level of options stemming from the START option 506 is presented on the vertical wheel 504. In this example assay method, the start screen is the initial software screen displayed to the user. The workflows that a user can perform are listed as options (sub-options) on a vertical wheel from which they can select. In this assay method example, less common and advanced workflows may be grouped under an advanced menu.
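By way of illustration only, the login sequence of FIG. 4 can be summarized in the following minimal Python sketch. The sketch assumes hypothetical helper objects (ui, auth_service, eula_store) and function names that are not part of the system software itself; it is intended only to make the control flow of steps 402-420 concrete.

    # Minimal sketch of the FIG. 4 first-login flow. All helper objects and
    # methods are hypothetical placeholders, not the system software's API.
    def first_login(ui, auth_service, eula_store):
        # 402/404: the EULA acceptance is checked every time the software starts.
        if not eula_store.accepted():
            if not ui.show_eula_and_ask_acceptance():
                ui.close()  # 406: the user declined, so the software closes
                return None
            eula_store.record_acceptance()  # records username, date, and time
        ui.show_splash()  # 408: branding, copyright, legal notice, version

        while True:
            # 410/412: past usernames are offered to minimize typing errors,
            # and a new username may also be entered.
            username = ui.prompt_username(recent=auth_service.recent_usernames())
            password = ui.prompt_password(username)  # 414/416
            if auth_service.authenticate(username, password):  # 418
                return ui.show_start_screen()  # 420: proceed to the start screen
            ui.show_retry_message()  # otherwise, prompt the user to retry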
In this example assay method, the options for the workflows for the system include: Create a new run 508: when the user selects the create new run workflow 510, the user can create a run from scratch or base it on a previously defined run 512; Continue a run that was previously planned or started 514: when the user selects to continue a previously planned or started run 516, the software automatically resumes 518 from the last step the user completed in the run; and View the results of a completed run 520: when the user selects to view a completed run 522, the software brings the user to the review screen 524. After the user selects any of the options from the vertical wheel 504, the options on the vertical wheel are added to a new horizontal wheel at the top of the screen. This horizontal wheel allows the user to change their selection. For example, after selecting “Create New,” the options for Planned and Completed runs are moved to the horizontal wheel, allowing the user to change their mind. FIG. 6 is a diagram illustrating a workflow of a define assay method screen in one embodiment. In this example, the software requires an Assay Method in order to process the samples being considered. The processing shown in this screen may be performed responsive to the DEFINE option (FIG. 5, 512) being executed. The assay method defines: The assays on the plate; The plate layout; The number of calibrators and controls and the maximum number of samples; Control, calibrator, and sample dilutions; The number of replicates for controls, calibrators, and samples; and The instrument protocol (incubation time, whether to perform a blocker step, and/or others). A default Assay Method is provided for every kit, and the system software allows the user to create a custom Assay Method based on a default. In one embodiment, the Assay Method is distributed in the Global Product Data (GPD) file. The GPD contains, for example: Product barcode; Assays; Placement of assays in the well; Lot identification for the kit, plate, antibodies, calibrators, and controls; Measured concentrations of calibrators and controls; Instrument instructions on how to process the product; and Recommended plate layout. FIG. 7 is a diagram illustrating a user interface workflow for selecting an assay method in one embodiment. This user interface workflow may follow a selection or execution of defining an assay method, for instance, selected or executed as an option in the workflow shown in FIG. 6. Options under Define Assay Method may include an Assay Method selection option, a Samples option, and a Confirm option, which are shown in horizontal orientation, for example, on a horizontal wheel graphical element 702. The selected Assay Method option may be highlighted and/or centered on that horizontal wheel over other unselected options. The sub-level options below the Assay Method option may be displayed in vertical orientation, for example, on a vertical wheel graphical element 704. In this example, there may be three ways the user can select an Assay Method: a) selecting from recent Assay Methods used on the system, sorted in reverse chronological order; b) selecting from all available Assay Methods installed on the system, in which case the UI uses multiple wheels, each wheel filtering the results of the following wheel until the final wheel contains just the results; and c) searching for an Assay Method installed on the system, which can be done using free text search. When the user selects one of the sub-level options, the sub-level options move into the horizontal wheel to allow the user to change their Assay Method selection mode.
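The two-wheel navigation pattern described above (selections migrating from the vertical wheel to the horizontal wheel, where they remain available for revision) can be sketched as follows. This is a minimal, non-limiting Python illustration; the class and method names are assumptions and do not appear in the system software.

    # Illustrative model of the two-wheel menu: the horizontal wheel holds the
    # trail of past decisions, and the vertical wheel holds the current choices.
    class WheelNavigator:
        def __init__(self, root_options):
            self.trail = []               # horizontal wheel: (options, chosen) per level
            self.vertical = root_options  # vertical wheel: choices offered now

        def select(self, option, sub_options):
            # The current options join the horizontal wheel as a new level with
            # the chosen option highlighted; sub-options fill the vertical wheel
            # (e.g., START -> Create New -> Planned/Completed).
            self.trail.append((self.vertical, option))
            self.vertical = sub_options

        def change_selection(self, level):
            # Selecting an item at an earlier horizontal level re-opens the
            # options offered there and discards the deeper decisions, letting
            # the user change their mind without repeated back-presses.
            options, _chosen = self.trail[level]
            self.trail = self.trail[:level]
            self.vertical = options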
After the user makes the initial selection of the assay method, the user is allowed to select whether the user only wants to run a single assay method or multiple assay methods: Single assay method, wherein all Mesoscale Diagnostics test plates in the run use the same assay method; or Multiple assay methods, wherein there is at least one Mesoscale Diagnostics test plate per assay method in the run. FIG. 8 is a flow diagram illustrating a workflow of a user interface displayed for defining samples in one embodiment. Based on selection of the “Define Samples” option, that option is shown in horizontal orientation, for example, on a horizontal wheel graphical element 802, which may be stacked under the Define option, its parent menu item. The sub-level of options associated with the “Define Samples” option is displayed vertically, for example, on a vertical wheel graphical element 804. In the Define Samples screen the user interface allows the user to select to import samples or manually define the samples. These options move to the horizontal wheel after the user selects an option. When the user selects to import samples from a file, the software via the user interface presents the sample files the user can use in a vertical wheel. The system can alternatively import from a Laboratory Information System or a Laboratory Information Management System. The system can also import from a sample management system. When the user selects to manually define samples, the user may define the number of samples to run. The software automatically assigns sample IDs. FIG. 9 is a flow diagram illustrating a workflow of a user interface displayed for confirming a run definition in one embodiment. Based on selection of the “Confirm Run Definition” option, a submenu item of the “Define” option, the “Confirm Run Definition” option is displayed on a horizontal wheel graphical element, for example, stacked below its parent menu item, the “Define” option. After the user has defined the run in the previous steps, the system provides a summary of the run for the user to review and confirm. The following information is displayed to the user: the number of samples in the run. The user may also select the number of samples to view the sample identifiers (IDs), the number of Mesoscale Diagnostics plates in the run, the layout of the plates, and the name of the run. The system gives a default name to the run and allows the user to change it. Once the user has confirmed the run, the system prompts the user asking whether the user wants to continue to execute the run or return to the Start Goal. FIG. 10 is a flow diagram illustrating a workflow of a user interface displayed for notifying the user of the accomplished tasks in one embodiment. The system may walk the user through accomplishing tasks via the user interface in a wizard (an automated help feature). The major logical steps may be broken down into Goals. In this example, the system has three major goals in the wizard: Start, wherein the user begins and selects what the user wants to do in the system; Define, wherein after the user picks what the user wants to do, the wizard walks the user via the user interface through defining any information needed; and Execute, wherein the system walks the user through execution of the task the user has selected. FIG. 11 is a flow diagram illustrating a workflow of a user interface displayed for the execute/collect option in one embodiment. In this collect screen the system creates a list of items that the user needs to collect in order to perform the run.
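For the manual sample-definition case just described, in which the user supplies only a sample count and the software assigns the identifiers, a minimal sketch might look like the following. The ID format shown is an assumption for illustration; the actual system may use a different scheme.

    # Hypothetical sketch of automatic sample ID assignment; the "S001"-style
    # format is illustrative only.
    def define_samples_manually(count, prefix="S"):
        return [f"{prefix}{n:03d}" for n in range(1, count + 1)]

    # Example: define_samples_manually(3) returns ['S001', 'S002', 'S003']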
Each item has to be marked as collected before proceeding. The system also allows the user to print this list or use a tablet computer for collection. For each item to be collected, the item may optionally be scanned so the system can check that it is the correct item and verify its expiration date and lot information. For instance, the system may request barcode scans on items. This is done using the barcode (GPI) to retrieve the GPD. FIG. 12 is a flow diagram illustrating a workflow of a user interface displayed for the execute/prepare option in one embodiment. In this prepare screen, the system presents, in a wheel, a list of steps needed to prepare the items that were collected. For each step in the wheel the system presents the detailed instructions for that prepare step when it is selected. The detailed prepare step may include: Text that describes the actions to be taken; Images that visually indicate the actions; Video that demonstrates the actions; and Web content, as in Web pages, that provides further details or context about the actions. The user is prompted to indicate that all the prepare steps have been completed before proceeding to the next step. The user may also print the prepare steps or complete them using the tablet. FIG. 13 is a flow diagram illustrating a workflow of a user interface displayed for the execute/load option in one embodiment. In this load screen the system displays, in a wheel format, a list of the items to load onto the instrument. For each item the system displays graphically where the item should be loaded. The system provides a graphical indication of whether the item was loaded or is empty. The system checks whether all the items have been loaded before proceeding to the next screen. FIG. 14 is a flow diagram illustrating a workflow of a user interface displayed for the execute/run option in one embodiment. This run screen allows the user to indicate to the system to start the run, for example, via a run button UI control. This screen also allows the user to register other users for system update messages. The updates may be distributed through, for example, electronic mail (email), text messages such as short message service (SMS), social network applications and/or blogs, and/or others. Once the user initiates the run, the system transitions to present a timer of the estimated time to completion. In one embodiment, there are three modes for the timer: 1) estimated time in an analog watch format; 2) estimated time in a digital watch format; and 3) a live camera feed of the instrument. The user may also request the run to stop through the advanced context menu. FIG. 15 is a flow diagram illustrating a workflow of a user interface displayed for the execute/unload option in one embodiment. After a run completes, the system transitions to this unload screen. At the unload screen, a list of steps to unload the system is presented in a wheel. For each item the system displays graphically where the item should be unloaded. The system provides a graphical indication of whether the item is loaded or is unloaded. The user needs to unload all the items before proceeding to the next screen. FIG. 16 is a flow diagram illustrating a workflow of a user interface displayed for the execute/review option in one embodiment. At the review screen, the system presents the results of the run. The results are also automatically exported as a file, transmitted to a LIMS/LIS system, and/or sent by email. The results are presented and can be viewed: a) graphically as a plate representation.
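The collect-step check described above, in which a scanned barcode (GPI) is used to retrieve the GPD and verify the item, can be sketched as follows. The gpd_lookup mapping and its field names are assumptions for illustration only.

    # Minimal sketch of the collect-step barcode check: GPI -> GPD lookup,
    # followed by identity and expiration checks. Field names are hypothetical.
    from datetime import date

    def check_collected_item(scanned_gpi, expected_gpis, gpd_lookup):
        if scanned_gpi not in expected_gpis:
            return "unexpected barcode"      # wrong item, or no barcode read
        gpd = gpd_lookup[scanned_gpi]        # GPD: lot info, expiration, etc.
        if gpd["expiration_date"] < date.today():
            return f"expired (lot {gpd['lot_id']})"
        return "collected"                   # item may be marked as collected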
ECL or Calculated Concentration is displayed using a luminosity scale, where a dark/black color indicates a low result and a bright color indicates a high result. A scale is presented that maps the color luminosity to a numeric value; b) the results are also available as a table. This table can be exported as a file, transmitted to a LIMS/LIS system, and/or sent by email. The system records any unusual operations or results in a table, for instance, if the temperature during the run was not in the specified range. After the user is done reviewing the run data, the user may go to the Start goal to begin another run or view results. FIG. 17 is a flow diagram illustrating a workflow of a user interface displayed for the advanced context menu in one embodiment. In one embodiment, the system categorizes tasks the user may do into major workflows and advanced workflows. Major workflows are those the user will routinely perform and are optimized for ease of execution. The major workflows are represented in the wizard Goals and steps. Advanced workflows are present in the advanced context menu and represent workflows that are not routinely done or are restricted to the configuration manager. The user opens the advanced context menu by clicking on the Mesoscale Diagnostics Globe. The advanced context menu items are contained in a vertical wheel with three main groups: Functions related to the current screen—context-sensitive items, which change depending on the active screen; Modules that can be switched to; and Functions that are applicable across all modules, for instance login and logout of the software. In this screen, the selected option, the advanced menu, is displayed horizontally on a horizontal graphical wheel, while the sub-options of the advanced menu are shown vertically on a vertical graphical wheel. In one embodiment, the graphical user interface maximizes black space by making the background black, thereby minimizing the coloring of pixels in the user interface display (e.g., display screen), saving storage, and improving the speed of presentation. FIG. 20 is an example screen shot of a screen displaying a graphical wheel/slider, which maximizes screen black space, in one embodiment. Further screen shot examples consistent with embodiments hereof are shown in FIGS. 58-64RR. FIGS. 58A-58HH are example non-limiting embodiments of the reader module. FIGS. 59A-59T are example non-limiting embodiments of an experiment module. FIGS. 60A-60I are example non-limiting embodiments of a maintenance module. FIGS. 61A-61Q are example non-limiting embodiments of an admin console module. FIGS. 62A-P are example non-limiting embodiments of generic screenshots applicable to multiple modules herein. FIG. 63 is an example non-limiting embodiment of an audit trail module. FIGS. 64A-64RR are example non-limiting embodiments of screenshots applicable to an assay method module. Further screen shot examples consistent with embodiments hereof are included in U.S. Design patent application No. 29/675,777, titled “Display Screen with Graphical User Interface,” and filed on Jan. 4, 2019, which is incorporated by reference herein in its entirety.
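The luminosity scale described in item a) maps a numeric result onto a dark-to-bright color. A minimal sketch, assuming a simple linear mapping (the actual system may use a different transfer function), is:

    # Hypothetical linear luminosity mapping: 0 = black (low result),
    # 255 = bright (high result). Values outside [lo, hi] are clamped.
    def luminosity(value, lo, hi):
        if hi <= lo:
            return 0
        fraction = min(max((value - lo) / (hi - lo), 0.0), 1.0)
        return round(255 * fraction)

    # Example: luminosity(5000, lo=0, hi=10000) returns 128 (mid-gray)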
As described above, the user interface in the present disclosure, whether used in an assay system or another system, is capable of presenting the complete trail on a single screen of a user interface display, for example, on a graphical wheel, and allowing the user to select any item on any level to go back from the current path of selected items, for example, instead of having to press a back button repeatedly on a keyboard or another input device. The user interface allows past decisions to be visible, for example, the primary decision and the last n recent decisions (the history of decisions may be made visible by scrolling through the graphical wheel or another graphical element such as a graphical slider). In one embodiment, the graphical user interface minimizes the number of menu choices the user needs to make in order to navigate through the assay system. For instance, the order in which the menu choices are presented may minimize the number of user options. In one embodiment, by minimizing the options or choices presented to the user and the input received from those choices, computer processing time may be improved. The user interface leads the user through the next step in the application, while providing the minimal number of choices the user needs to make. The following discussion provides additional embodiments and implementations of the system as presented herein. The user interface systems discussed above may be broadly applicable to a variety of applications, including manufacturing environments, testing environments, instrumentation environments, experimental environments, and others. In a series of embodiments, the user interface systems discussed above may be employed to provide a user interface into a comprehensive bioinstrumentation system encompassing software, hardware, testing equipment, and all additional required features. The following discusses such a comprehensive bioinstrumentation system. In particular, the following discusses an embodiment of the systems described herein as a cloud-based platform. The embodiments discussed below, e.g., with respect to FIGS. 21-50, may also be implemented via alternative networked hardware and software platforms. The description herein is made with reference to the figures for purposes of convenience only; it is not restrictive as to the scope of embodiments hereof. The ensuing description is adaptable to a variety of analytical applications, including, without limitation, bioanalytical applications, chemical analytical applications, radiological analytical applications, and the like. The components shown may include computer-implemented components, for instance, implemented and/or run on one or more hardware processors, or coupled with one or more hardware processors. One or more hardware processors, for example, may include components such as programmable logic devices, microcontrollers, memory devices, and/or other hardware components, which may be configured to perform respective tasks described in the present disclosure. Processors and cloud-based processing systems as disclosed in FIGS. 21-50 may be examples of processor 1110. Coupled memory devices may be configured to selectively store instructions executable by one or more hardware processors. Memory devices and cloud-based storage systems as disclosed in FIGS. 21-50 may be examples of storage device 1120.
Examples of a processor may include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), a cloud-based processing unit, another suitable processing component or device, or one or more combinations thereof. FIG. 21 is an embodiment of a cloud-based system providing seamless integration of other systems, computers, and instruments, e.g., bioinstruments, supporting and optimizing users doing analytical work, e.g., bioanalytical work. 21100 is the system boundary around the other systems, computers, and instruments that either wholly or partly make up the analytical computing system 21100, wherein the operating system on each computer and/or instrument that, in whole or in part, makes up the analytical computing system 21100 can include, e.g., Windows™, UNIX, Linux, MacOS™, iOS™, Android™, and/or any other commercial, open-source, and/or special-purpose operating system. At 21101 is an analytical user environment including one or more servers, desktop computers, laptop computers, tablets, and/or mobile devices, of which one or more of same can be used in system 21100. One or more analytical user environments 21101 can use the analytical system 21100. At 21102 is a support provider environment including one or more servers, desktop computers, laptop computers, tablets, and/or mobile devices, of which one or more of same can be used in system 21100, supporting instruments, consumables, and/or software used by analytical users in analytical user environment 21101. There can be one or more support provider environments at 21102 using the analytical computing system 21100. At 21103 is a consumable provider environment including one or more servers, desktop computers, laptop computers, tablets, and/or mobile devices, of which one or more of same can be used in analytical computing system 21100, for providing consumables to be used by users in analytical user environment 21101, optionally in conjunction with instrumentation included in instrumentation environment 21106. There can be one or more consumable provider environments at 21103 using the analytical computing system 21100. At 21105 is an analytical instrumentation provider environment for a provider of instrumentation that can be used in instrumentation environment 21106 and that includes one or more servers, desktop computers, laptop computers, tablets, and/or mobile devices, of which one or more of same can be used in analytical computing system 21100, for providing, e.g., selling or otherwise transferring, instruments to be used by users in analytical user environment 21101. There can be one or more instrumentation provider environments at 21105 using the analytical computing system 21100. At 21104 is an analytical computing system provider environment for the provider of analytical computing system 21100, which includes one or more servers, desktop computers, laptop computers, tablets, and/or mobile devices, of which one or more of same can be used in system 21100 to manage the business interaction with analytical computing system 21100 to be used by analytical users in analytical user environment 21101. Each of the “providers” associated with the environments at 21102, 21103, 21104, and 21105 can include one or more entities, including without limitation, a multiplicity of independent businesses, a single independent business, a combination of different independent businesses, or one or more businesses within any one of the “providers” herein.
At 21106 is an instrumentation environment including one or more instruments, each with at least one computer, that in one practice can be at least partially used by analytical computing system 21100 to run tests on samples for users in an analytical user environment 21101. At 21107 is a cloud platform leveraged to connect, e.g., bi-directionally connect, through computers, networking, and software, some or all of the computers in analytical computing system 21100, having, in one practice, a common computing, software services, and data architecture such that data can be collected and shared by any computer having associated software of the analytical computing system 21100, wherever a particular computer with associated software in analytical computing system 21100 is located throughout the world, in a secure manner, wherein cloud platform 21107, in the preferred embodiment, is hosted by a public-cloud provider providing a shared computing environment, for example, Amazon™ Web Services, Google™ Cloud, Microsoft™ Azure, or others. In other embodiments, the cloud platform 21107 can be hosted by the analytical computing system provider at 21104; or it can be self-hosted by an analytical user environment being a user of the analytical computing system 21100; or it can be hosted by a private-cloud provider providing a dedicated computing environment, for example, Oracle™ Cloud, IBM™ Cloud, Rackspace, or others; or it can be hosted on some combination of public-cloud, private-cloud, self-hosted, and hosted by the analytical computing system provider 21104. All communication with cloud platform 21107 can, in the preferred embodiment, be done over a secure communication protocol, such as, without limitation, Hypertext Transfer Protocol Secure (HTTPS), to encrypt all communication between sender and receiver; but an unsecure communication protocol, such as, without limitation, Hypertext Transfer Protocol (HTTP), can be used as well, optionally using, in either the secured or unsecured case, connected technologies, such as Ethernet for local area network (LAN), metropolitan area network (MAN), and/or wide area network (WAN) configurations, and/or unconnected technologies, such as WIFI, Bluetooth, and/or other like technologies for a distributed LAN. Additionally, analytical computing system 21100 can be wholly deployed on one computer such that all operations of analytical computing system 21100 occur on that computer, with the only external communication occurring between computers and associated software running outside of analytical computing system 21100. FIG. 22 is an embodiment of a cloud-based system as shown in FIG. 21 that provides seamless integration of other systems, computers, and instruments supporting and optimizing users doing analytical work. 21100 depicts the boundary of the analytical computing system that encompasses the other systems, computers, and instruments that either wholly or partly make up the system bounded by 21100. At 21101 is an analytical user environment including one or more servers, desktop computers, laptop computers, tablets, and/or mobile devices, of which one or more of same can be used in analytical computing system 21100. Administrator computer(s) 22202 includes one or more computers with software used by system administrators to manage the use of system 21100 by users in analytical user environment 21101 through services and data storage/retrieval provided by the cloud platform 22223.
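The secure-communication arrangement described above can be illustrated with a short sketch in which a client sends a request to the cloud platform over HTTPS, so that the payload is encrypted in transit. The URL and token below are hypothetical placeholders, not actual endpoints of the analytical computing system.

    # Minimal sketch of an HTTPS call to a cloud-platform service; the endpoint
    # and bearer token are illustrative placeholders only.
    import json
    import urllib.request

    def post_to_cloud(payload, token):
        req = urllib.request.Request(
            "https://cloud.example.com/api/service",  # https:// encrypts in transit
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json",
                     "Authorization": f"Bearer {token}"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)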
Analytical user computers 22203 include one or more computers with software used to perform analytical tasks by users in an analytical user environment at 21101 through services and data storage/retrieval provided by the cloud platform 22223. Data integration computers 22204 include one or more computers with software used to integrate, e.g., bi-directionally integrate, other business systems 22224 in analytical user environment 21101 with the analytical computing system 21100, providing services for the analytical user business systems 22224 through services and data storage/retrieval provided by cloud platform 22223. Analytical user business system 22224 can be hosted internally, externally, and/or by some combination of internally and externally to analytical user environment 21101 and can include one or more computer systems optionally with software, examples being laboratory information systems (LIMS), data analysis applications, data visualization applications, data reporting applications, business productivity applications, relational and/or non-relational databases, file servers, and/or any other systems providing access to the data of the analytical computing system 21100 to users directly using the analytical computing system 21100, to users not directly using the analytical computing system 21100, and/or to one or more other computer systems included with the business system 22224 not directly interfacing with the analytical computing system 21100. Support provider environment 21102 is a support provider for users of analytical computing system 21100, users of consumables from a consumable provider, and/or instrumentation in instrumentation environment 21106, including one or more servers, desktop computers, laptop computers, tablets, and/or mobile devices, of which one or more of same can be used in the analytical computing system 21100, supporting instruments, consumables, and/or software used by analytical users in the analytical user environment 21101. Support user computer 22206 includes one or more computers with software provided to users associated with a support provider environment 21102 that, among other things, can monitor, manage, and/or report on activity on the analytical computing system 21100 through services and data storage/retrieval provided by the cloud platform 22223; and support data integration computer 22207 includes one or more computers with software and/or firmware used to integrate other support business systems 22208 in support provider environment 21102 with analytical computing system 21100, providing services for support business systems 22208 through services and data storage/retrieval provided by the cloud platform 22223. Support business systems 22208 can be hosted internally, externally, and/or by some combination of internally and externally to support provider environment 21102 and can include one or more computer systems optionally with software, examples being customer relationship management, enterprise data systems, data analysis applications, data visualization applications, data reporting applications, business productivity applications, relational and/or non-relational databases, file servers, and/or any other systems providing access to the data of analytical computing system 21100 to users directly using the support user computer(s) 22206, to users not directly using the support user computer(s) 22206, and/or one or more other computer systems included with support business system 22208 not directly interfacing with the analytical computing system 21100.
Consumable provider environment 21103 is a consumable provider environment including one or more servers, desktop computers, laptop computers, tablets, and/or mobile devices, of which one or more of same can be used in analytical computing system 21100, for a provider of consumables to users in analytical user environment 21101, which can be optionally used in conjunction with instrumentation in instrumentation environment 21106, for providing consumables to users in analytical user environment 21101 to optionally be used with instruments in instrumentation environment 21106. Consumable information upload computer 22210 includes one or more computers with software used to deliver consumable information regarding provided consumables from consumable provider business systems 22211 to analytical computing system 21100 through services and data storage provided by cloud platform 22223. Consumable information, as used herein, may include, but is not limited to, global product data (GPD). Consumable provider business system 22211 can be hosted internally, externally, and/or by some combination of internally and externally to consumable provider environment 21103 and can include one or more computer systems optionally with software, examples being customer relationship management, enterprise data systems, data reporting applications, business productivity applications, relational and/or non-relational databases, file servers, and/or any other systems supporting business operations for the consumable provider to support delivery of consumable information to the analytical computing system 21100, or which are not used at all in the delivery of consumable information to the analytical computing system 21100. Analytical computing system provider environment 21104 is the analytical computing system provider environment for the provider of analytical computing system 21100, including one or more servers, desktop computers, laptop computers, tablets, and/or mobile devices, of which one or more of same can be used in the analytical computing system 21100, for providing analytical computing system 21100 to users in analytical user environment 21101 and instrumentation in instrumentation environment 21106, as well as for the various providers at 21102, 21103, and 21105, wherein account information upload computer(s) 22213 includes one or more computers with software used to prepare and control the use of analytical computing system 21100 by users in analytical user environment 21101 and instrumentation in instrumentation environment 21106 through services and data storage provided by cloud platform 22223. Computing system provider business system 22214 can be hosted internally, externally, and/or by some combination of internally and externally to analytical computing system provider environment 21104 and can include one or more computer systems optionally with software, examples being customer relationship management, enterprise data systems, data reporting applications, business productivity applications, relational and/or non-relational databases, file servers, and/or any other systems supporting business operations for the analytical computing system provider to support preparing and controlling the use of analytical computing system 21100, or not used at all in preparing and controlling the use of the analytical computing system 21100.
Instrumentation provider environment 21105 includes one or more servers, desktop computers, laptop computers, tablets, and/or mobile devices, of which one or more of same can be used in analytical computing system 21100, for a provider of instrumentation to users in analytical user environment 21101, which can optionally be used as instrumentation in instrumentation environment 21106 for processing samples under test, optionally with one or more consumables provided by consumables provider environment 21103. The instrument information upload computer(s) 22216 includes one or more computers with software used to deliver instrument information regarding provided instrumentation from an instrumentation provider business system 22217 to analytical computing system 21100 through services and data storage provided by the cloud platform 22223. Instrumentation provider business system 22217 can be hosted internally, externally, and/or by some combination of internally and externally to instrumentation provider environment 21105 and can include one or more computer systems optionally with software, examples being customer relationship management, enterprise data systems, data reporting applications, business productivity applications, relational and/or non-relational databases, file servers, and/or any other systems supporting business operations for the instrumentation provider to support delivery of instrument information to the analytical computing system 21100, or not used at all in the delivery of instrument information to the analytical computing system 21100. Instrumentation environment 21106 includes one or more instruments, with each instrument being either an individual-operation instrument 22221, a coordinated-operation instrument 22222, or a workflow-aid instrument 22226, provided by instrumentation provider environment 21105, which can be leveraged by users in analytical user environment 21101 to process samples, optionally in conjunction with consumables provided by consumable provider environment 21103, to generate data for analysis by users in analytical user environment 21101, wherein an individual-operation instrument 22221 can have an individual-operation instrument computer 22219 providing integration between the individual-operation instrument 22221 and the analytical computing system 21100 through services and data storage provided by the cloud platform 22223, as well as optionally providing operational control over the individual-operation instrument 22221; a coordinated-operation instrument 22222 can also have a coordinated-operation instrument computer 22220 that provides integration between the coordinated-operation instrument 22222 and the analytical computing system 21100 through services and data storage provided by the cloud platform 22223, as well as optionally providing operational control over the coordinated-operation instrument 22222; and workflow-aid instrument 22226 can have a workflow-aid instrument computer 22225 that provides integration between the workflow-aid instrument 22226 and the analytical computing system 21100 through services and data storage provided by the cloud platform 22223, as well as optionally providing operational control over workflow-aid instrument 22226. Examples of an individual-operation instrument 22221 include, without limitation, a plate reader, plate washer, plate incubator, plate shaker, plate incubator-shaker, pipetting system, or any other type of instrument used in analytical sample testing.
Coordinated-operation instrument 22222 can combine some or all of the functions provided by one or more of the individual-operation instruments 22221 into an integrated platform automating execution of the individual operations of individual-operation instruments 22221, thereby relieving a user from executing the various individual operations of individual-operation instruments 22221. Workflow-aid instrument 22226 can provide support to a user leveraging either individual-operation instrument(s) 22221 and/or coordinated-operation instruments 22222 to test assays on samples in the instrumentation environment 21106, where the support includes, but is not limited to, collecting various consumables stored at various temperatures, potentially in different physical locations, preparing consumables to be used in the processing of one or more assays, and/or leading a user through the overall assay steps using one or more of the individual-operation instruments 22221. In the alternative, the consumable provider environment 21103 can assist with other tests in addition to or in place of the assay tests and/or plate-based tests described herein. Instrumentation in instrumentation environment 21106 can include zero or more individual-operation instruments 22221, each with their corresponding individual-operation instrument computer 22219; zero or more coordinated-operation instruments 22222, each with their corresponding coordinated-operation instrument computers 22220; and/or zero or more workflow-aid instruments 22226, each with their corresponding workflow-aid instrument computers 22225. A preferred embodiment for instrumentation environment 21106 includes providing a separate computer integrating zero or more individual-operation instruments 22221, zero or more coordinated-operation instruments 22222, zero or more workflow-aid instruments 22226, zero or more individual-operation instrument computers 22219, zero or more coordinated-operation instrument computers 22220, and zero or more workflow-aid instrument computers 22225 to analytical computing system 21100 through services and data storage provided by cloud platform 22223. In FIG. 23 is an embodiment of system architecture for cloud platform 22223 as part of the analytical computing system 21100, providing a common computing, software services, and data architecture such that data are collected and shared by any computer anywhere in the world having associated software of the analytical computing system 21100 (FIG. 21), wherein one or more services servers 23302 provide a scalable, robust, and high-performing computing and associated software platform to support services specific to the analytical computing system 21100 for retrieving, storing, transferring, and/or transforming data associated with the use of the analytical computing system 21100; one or more database servers 23309 (e.g., including one or more team databases 23310 and one or more system databases 23311) providing a scalable, robust, and high-performing computing and associated software platform for one or more structured databases used for storing and/or retrieving data produced by and/or for users of the analytical computing system 21100, as well as for storing and/or retrieving data produced and/or used by the analytical computing system 21100 for its preparation for use as well as through its use, wherein the database technology can be relational in nature, e.g., SQL Server, Oracle, MySQL, Postgres, Aurora, and/or other like relational database technologies, and/or can be non-relational in nature, e.g.,
Dynamo DB, Mongo DB, and/or other like non-relational database technologies; with one or more bulk data servers 23315, which may include system content 23312, instrument content 23313, and consumable content 23314, providing a scalable, robust, and high-performing computing and associated software platform for storing and retrieving file-based data provided for use of the analytical computing system 21100 and/or produced through the use of the analytical computing system 21100. The services server(s) 23302 has associated with it, in one embodiment, a logical collection of services, namely: admin 23303, including a logical collection of services to support administration of the use of analytical computing system 21100; dashboard 23304, including a logical collection of services to support monitoring and control of the use of analytical computing system 21100; upload 23305, including a logical collection of services supporting upload of consumable and instrument information to analytical computing system 21100; system 23306, including a logical collection of services supporting various non-user-specific functions associated with overall use of analytical computing system 21100; application 23307, including a logical collection of services supporting typical scientific use of analytical computing system 21100 by analytical users; and authenticate 23308, including a logical collection of services supporting secure log-in to analytical computing system 21100 as well as log-out from analytical computing system 21100. In one practice, services server(s) 23302 is an easily scaled computing infrastructure of one or more servers, as represented by services server(s) 23302, wherein, in a preferred embodiment, each server has deployed the full logical collection of services 23303, 23304, 23305, 23306, 23307, and 23308 to enable a load balancer to equally distribute requests for services across the one or more servers represented by services server(s) 23302 to optimize user interaction. This load balancing technique can be effectuated, e.g., if the logical collections of services 23303, 23304, 23305, 23306, 23307, and 23308 are designed using a RESTful (representational state transfer) design pattern, i.e., each provided service is stateless, i.e., does not store or hold data, and therefore any request made on the service can be fulfilled by any available server on which the service is deployed in services server(s) 23302, based on demand at the time of the request. To support optimal deployment and operation of the logical collections of services 23303, 23304, 23305, 23306, 23307, and 23308 on one computer or on many computers, the preferred embodiment is for these services to be built on a distributed-object platform such as, e.g., Java Platform Enterprise Edition to be able to support cross-platform computing architectures, .NET Framework for Windows-only computing architectures, or another like distributed-object platform, or leveraging some combination of one or more of these distributed-object platforms. Database Server 23309 can include one or more databases, for example, Team Database 23310 and System Database 23311. Team Database 23310 is adapted to store information, data, and/or metadata as they relate to Teams (e.g., Team name, members, permissions, etc.). System Database 23311 can include files, data, and/or other information as they relate to system functionalities.
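The statelessness property that makes this load balancing possible can be illustrated with a minimal sketch: because the handler keeps no per-request state between calls, any server running it can answer any request. The endpoint behavior shown is illustrative only and is not the system's actual service interface.

    # Minimal sketch of a stateless (RESTful) service: everything needed to
    # fulfil a request arrives with the request itself, and nothing is stored
    # on the server between requests, so any server behind a load balancer can
    # handle any request. Paths and responses are illustrative placeholders.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class StatelessServiceHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = json.dumps({"service": self.path, "status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), StatelessServiceHandler).serve_forever()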
Further, Bulk Data Server 23315 can include various content, e.g., System Content 23312, e.g., data or content relating to the system's functionality, etc.; Instrument Content 23313, e.g., type of instrument, parameters, etc.; and Consumable Content 23314, e.g., type of consumable, quantities, etc. In FIG. 24 is an embodiment of an administrator using an Administrator Computer 24401 to run administrator app software 24402 to perform administrative functions provided by the analytical computing system 21100 through services provided through the cloud platform 22223. The administrator app software, as discussed herein, may employ a MUI as described above to facilitate user access to the functionality provided. As such, embodiments of the methodical user interface control system 1102 may be provided by a combination of the administrator computer 24401 and the cloud platform 22223. By way of example, one or more services servers 24411 can provide various functionalities such as authentication, one or more other administrative functionalities, capability for uploading data, e.g., to one or more database servers, one or more system functionalities, one or more applications, e.g., app functions, and/or graphical visualization support via a dashboard. The administrator app 24402 can be executed on the administrator computer 24401 through either a distinct application installed on the administrator computer 24401, or accessed via an internet browser installed on the administrator computer 24401 pointing to a Uniform Resource Locator (URL) with a web site part of the services provided by cloud platform 22223, logically organized with admin 24408 in this embodiment but not limited to that organization. In one embodiment, the first interaction between an administrator and the cloud platform occurs through use of administrator app 24402 requesting login services, e.g., via a user manager 1056 of methodical user interface control system 1102, through service link 24403 to authenticate 24404, with appropriate credentials that, e.g., include a unique username and password and/or biometric identification and can also include an optional or required additional authentication input, commonly referred to as two-factor authentication, previously configured for use by an administrator. In this embodiment, the login service retrieves a user's encrypted credentials, through a service link 24405, from system database 24406 to verify that the administrator can access and administer analytical computing system 21100, with the login updating data stored in system database 24406 through the service link 24407 to track usage of analytical computing system 21100. An administrator can also reset their own password via administrator app 24402 through service link 24403 to authenticate 24404 if they forgot or do not know their password, with the password reset updating data stored in system database 24406 through service link 24407 to track usage of analytical computing system 21100. Administrators can also configure their additional authentication input via administrator app 24402 through service link 24403 to authenticate 24404 so as to retrieve and change the configuration of their additional authentication input through service link 24405 to system database 24406, with the configuration change updating data stored in system database 24406 through the service link 24407 to track usage of analytical computing system 21100.
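The two-step verification just described (primary credentials first, then an optional, previously configured additional authentication input) can be sketched as follows. The credential-storage scheme shown (salted PBKDF2 hashes and a stored expected code) is an assumption for illustration; the actual system may store and verify credentials differently.

    # Hypothetical sketch of login with optional two-factor authentication.
    import hashlib
    import hmac

    def verify_login(username, password, second_factor, credential_db):
        record = credential_db.get(username)
        if record is None:
            return False
        # Verify the primary credentials against a salted hash.
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                     record["salt"], 100_000)
        if not hmac.compare_digest(digest, record["password_hash"]):
            return False
        # Check the additional authentication input if it has been configured.
        if record.get("two_factor_enabled"):
            return hmac.compare_digest(second_factor, record["expected_code"])
        return True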
After an administrator is authenticated on login, they can use services provided by admin 24408 through use of administrator app 24402 through service link 24407 to perform administrative functions of the analytical computing system 21100, wherein these services, as required, use the service link 24407 to create, read, update, and/or delete data stored in system database 24406, e.g., via data storage manager 1064, with the use of these services also creating and updating data stored in system database 24406 through the service link 24407 to track usage of analytical computing system 21100. Additionally, an administrator in performing administrative functions for analytical computing system 21100, as provided by administrator app 24402, can create one or more new groups of users whose use of the analytical computing system 21100 is through a shared team database 24414 through a service link 24413, as well as create a new database server 24415 through a service link 24412 to which the new team database 24414 can be added so as to optimize performance of database server(s) 24415. Ultimately, an administrator can log out from use of analytical computing system 21100 via administrator app 24402 through service link 24403 to terminate their current use of analytical computing system 21100, with the logout service of authenticate 24404 updating the administrator's login information through a service link 24409 to the system database 24406, with the logout updating data stored in system database 24406 through service link 24407 to track usage of analytical computing system 21100. Analytical Computing System 21100 can include one or more Services Servers 24411. These servers are adapted to host various applications and/or modules, including system modules, application modules, authentication modules, administrative modules, dashboard modules, and upload modules. In one embodiment, the authentication and administration modules allow users to communicate, e.g., through one or more service links, with System Database 24406 and/or the Team Database 24414 through the Administrator App 24402. FIG. 25 is an embodiment of an analytical user using an analytical user computer 25502 to run analytical user app software 25503 to perform analytical functions provided by an analytical computing system 21100 through services provided through a cloud platform at 22223. The analytical user app software 25503, as discussed herein, may employ a MUI as described above to facilitate user access to the functionality provided. As such, embodiments of the methodical user interface control system 1102 may be provided by a combination of the analytical user computer 25502 and the cloud platform 22223. The analytical user app 25503 can be executed on analytical user computer 25502 through either a distinct application installed on the analytical user computer 25502 or accessed via an internet browser installed on the analytical user computer 25502 pointing to a URL with a web site part of the services provided by cloud platform 22223, logically organized with an application 25509 in this embodiment but not limited to that organization.
In one practice, the first interaction for an analytical user with cloud platform 22223 is through use of an analytical user app 25503 requesting login services through a service link 25504 to authenticate 25505, with appropriate credentials that can include a unique username and password and/or other information such as biometric identification and can also include an optional or required additional authentication input, commonly referred to as two-factor authentication, previously configured for use by an administrator, wherein the login service can retrieve a user's encrypted credentials through a service link 25506 from a system database 25507 to verify that the analytical user may access and use analytical computing system 21100, with the login updating data stored in system database 25507 through the service link 25506 to track usage of analytical computing system 21100. An analytical user can also reset their password via analytical user app 25503 through service link 25504 to authenticate 25505 if they forgot or do not know their password, with the password reset updating data stored in system database 25507 through service link 25506 to track usage of analytical computing system 21100. An analytical user can also configure their additional authentication input via analytical user app 25503 through service link 25504 to authenticate 25505 so as to retrieve and change the configuration of their additional authentication input through service link 25506 to system database 25507, with the configuration change updating data stored in system database 25507 through the service link 25506 to track usage of analytical computing system 21100. After an analytical user is authenticated on login, they can use services provided by an application 25509 through use of the analytical user app 25503 through a service link 25508 to perform analytical functions provided by application 25509, wherein these services, as required, use the service link 25510 to create, read, update, and/or delete data stored in team database 25511, with the use of these services also creating and updating data stored in system database 25507 through the service link 25510 to track usage of analytical computing system 21100. Ultimately, an analytical user can log out from use of the analytical computing system 21100 via analytical user app 25503 through service link 25504 to terminate their current use of analytical computing system 21100, with the logout service of authenticate 25505 updating the analytical user login information through service link 25506 to the system database 25507, with the logout updating data stored in system database 25507 through the service link 25506 to track usage of analytical computing system 21100. In FIG. 26 is an embodiment of a data integration computer 26602 running data integration app software 26603 to perform data integration functions provided by an analytical computing system 21100 through services provided through a cloud platform at 22223 between the analytical computing system 21100 and, optionally, computing system(s) not part of analytical computing system 21100. The data integration app software 26603, as discussed herein, may employ a MUI as described above to facilitate user access to the functionality provided. As such, embodiments of the methodical user interface control system 1102 may be provided by a combination of the data integration computer 26602 and the cloud platform 22223. The data integration app 26603 can be provided as part of analytical computing system 21100 and/or can be provided by an analytical user or someone working with an analytical user.
In one practice, the first interaction for data integration app 26603 with cloud platform 22223 is requesting login services through a service link 26604 to authenticate 26605 with appropriate credentials configured by an administrator that preferably include a unique username and password and can also include an optional or required additional authentication input, commonly referred to as two-factor authentication, previously configured for use by an administrator, wherein the login service can retrieve the encrypted credentials for data integration app 26603 through service link 26606 from a system database 26607 to verify that the data integration app 26603 can access and use analytical computing system 21100, with the login updating data stored in system database 26607 through the service link 26606 to track usage of the analytical computing system 21100. After a data integration app 26603 is authenticated on login, it can use services provided by application 26609 through use of data integration app 26603 through a service link 26608 to perform analytical functions provided by the application 26609, wherein these services, as required, use a service link 26610 to create, read, update, and/or delete data stored in a team database 26611, with the use of these services also creating and updating data stored in system database 26607 through the service link 26610 to track usage of the analytical computing system 21100. Ultimately, a data integration app can log out from use of the analytical computing system 21100 via data integration app 26603 through service link 26604 to terminate the current use of analytical computing system 21100, with the logout service of authenticate 26605 updating the data integration app login information through the service link 26606 to system database 26607, with the logout updating data stored in system database 26607 through the service link 26606 to track usage of the analytical computing system 21100. In FIG. 27 is an embodiment of a user monitoring the use of an analytical computing system 21100 using a support user computer 27702 to run the monitoring user app software 27703 to perform monitoring functions provided by the analytical computing system 21100 through services provided through a cloud platform 22223. The monitoring user app software 27703, as discussed herein, may employ a MUI as described above to facilitate user access to the functionality provided. As such, embodiments of the methodical user interface control system 1102 may be provided by a combination of the support user computer 27702 and the cloud platform 22223. The monitoring user app 27703 can be executed on the support user computer 27702 through either a distinct application installed on the support user computer 27702 or accessed via an internet browser installed on the support user computer 27702 pointing to a URL with a web site part of the services provided by cloud platform 22223, logically organized with a dashboard 27709 in this embodiment, but not limited to that organization.
In one practice, the first interaction for a support user computer with the cloud platform is through use of the monitoring user app27703requesting login services through a service link27704to authenticate27705with appropriate credentials that preferably include a unique username and password and/or biometric identification and could also include an optional or required additional authentication input, commonly referred to as two-factor authentication, previously configured for use by an administrator, wherein, the login service can retrieve a user's encrypted credentials through a service link27706from a system database27707to verify that the monitoring user can access and monitor the analytical computing system21100with the login updating data stored in system database27707through the service link27706to track usage of analytical computing system21100. A monitoring user can also reset their password via the monitoring user app27703through service link27704to authenticate27705if they have forgotten or do not know their password with the password reset updating data stored in system database27707through the service link27706to track usage of analytical computing system21100. A monitoring user can also configure their additional authentication input via the monitoring user app27703through service link27704to authenticate27705so as to retrieve and change the configuration of their additional authentication input through the service link27706to the system database27707with the configuration change updating data stored in system database27707through the service link27706to track usage of analytical computing system21100. After a monitoring user is authenticated on login, they can use services provided by a dashboard27709through use of the monitoring user app27703through a service link27708to perform monitoring functions of the analytical computing system21100, wherein, these services, as required, use a service link27710to create, read, update, and/or delete data stored in system database27707with the use of these services also creating and updating data stored in system database27707through the service link27710to track usage of the analytical computing system21100. Ultimately, a monitoring user can logout from use of the analytical computing system21100via monitoring user app27703through a service link27704to terminate their current use of the analytical computing system21100with the logout service of authenticate27705updating the monitoring user's login information through the service link27706to the system database27707with the logout updating data stored in system database27707through the service link27706to track usage of the analytical computing system21100. InFIG.28is an embodiment of a support data integration computer28802running monitoring data integration app software28803to perform monitoring data integration functions provided by the analytical computing system21100through services provided through a cloud platform at22223between analytical computing system21100and, optionally, computing system(s) not part of the analytical computing system21100. The monitoring data integration app software28803, as discussed herein, may employ a MUI as described above to facilitate user access to the functionality provided. As such, embodiments of the methodical user interface control system1102may be provided by a combination of the support data integration computer28802and the cloud platform22223.
Thus, the monitoring data integration app software is adapted to track, review, and/or monitor one or more features of the data integration functions described herein. In one practice, the first interaction for a monitoring data integration app28803with the cloud platform22223is requesting login services through a service link28804to authenticate28805with appropriate credentials configured by an administrator that preferably include a unique username and password and can also include an optional or required additional authentication input, commonly referred to as two-factor authentication, previously configured for use by an administrator, wherein, the login service can retrieve the encrypted credentials for a monitoring data integration app28803through a service link28806from a system database28807to verify that the monitoring data integration app can access and use the analytical computing system21100with the login updating data stored in system database28807through the service link28806to track usage of the analytical computing system21100. After a monitoring data integration app28803is authenticated on login, it can use services provided by a dashboard28809through use of the monitoring data integration app28803through a service link28808to perform monitoring functions provided by dashboard28809, wherein, these services, as required, use a service link28810to create, read, update, and/or delete data stored in system database28807with the use of these services also creating and updating data stored in system database28807through the service link28810to track usage of the analytical computing system21100. Ultimately, a monitoring data integration app28803can logout from use of the analytical computing system21100via monitoring data integration app28803through a service link28804to terminate the current use of the analytical computing system21100with the logout service of authenticate28805updating the monitoring data integration app login information through the service link28806to the system database28807with the logout updating data stored in system database28807through the service link28806to track usage of the analytical computing system21100. InFIG.29is an embodiment of a consumable information upload computer29902running consumable information upload app software29903to perform consumable information upload functions provided by analytical computing system21100via services provided through a cloud platform at22223between the analytical computing system21100and, optionally, computing system(s) not part of the analytical computing system21100. The consumable information upload app software29903, as discussed herein, may employ a MUI as described above to facilitate user access to the functionality provided. As such, embodiments of the methodical user interface control system1102may be provided by a combination of the consumable information upload computer29902and the cloud platform22223.
In one practice, the first interaction for a consumable information upload app29903with the cloud platform at22223is requesting login services through a service link29904to authenticate29905with appropriate credentials configured to preferably include a unique username and password, wherein, the login service can retrieve the encrypted credentials for a consumable information upload app29903through a service link29906from a system database29907to verify that the consumable information upload app can access and use the analytical computing system21100with the login updating data stored in system database29907through the service link29906to track usage of analytical computing system21100. After a consumable information upload app29903is authenticated on login, it can use services provided by upload29909through use of the consumable information upload app29903through a service link29908to perform consumable information upload functions provided by upload29909, wherein, these services, as required, use the service link29910to push data to be stored in consumable content29911associated with a particular customer account for subsequent storage to one or more team databases29913that are associated with a particular customer account by upload29909transferring the data via the service link29912with the use of these services also creating and updating data stored in system database29907through the service link29906to track usage of the analytical computing system21100. Once upload is complete, a consumable information upload app29903can logout from use of the analytical computing system21100via consumable information upload app29903through a service link29904to terminate the current use of analytical computing system21100with the logout service of authenticate29905updating the consumable information upload app login information through the service link29906to the system database29907with the logout updating data stored in system database29907through the service link29906to track usage of the analytical computing system21100. InFIG.30is an embodiment of an account information upload computer301002running account information upload app software301003to perform account update functions provided by analytical computing system21100via services provided through a cloud platform at22223between the analytical computing system21100and, optionally, computing system(s) not part of the analytical computing system21100. The account information upload app software301003, as discussed herein, may employ a MUI as described above to facilitate user access to the functionality provided. As such, embodiments of the methodical user interface control system1102may be provided by a combination of the account information upload computer301002and the cloud platform22223. The account update functions can include adding, modifying, and/or deleting information as it relates to one or more given accounts including, for example, usernames, passwords, permissions, and other attributes associated with one or more individual or team accounts.
In one practice, the first interaction for the account information upload app301003with cloud platform22223is requesting login services through a service link301004to authenticate301005with appropriate credentials configured that preferably include a unique username and password, wherein, the login service can retrieve the encrypted credentials for an account information upload app301003through a service link301006from a system database301007to verify that the account information upload app can access and use the analytical computing system21100with the login updating data stored in system database301007through the service link301006to track usage of the analytical computing system21100. After an account information upload app301003is authenticated on login, it can use services provided by admin301009through use of the account information upload app301003through a service link301008to perform the account information upload functions provided by admin301009, wherein, these services, as required, use a service link301010to push data to be stored in system database301007associated with creating or updating a customer account with the use of these services also creating and updating data stored in system database301007through the service link301010to track usage of the analytical computing system21100. Once upload is complete, an account information upload app301003can logout from use of the analytical computing system21100via account information upload app301003through a service link301004to terminate the current use of the analytical computing system21100with the logout service of authenticate301005updating the account information upload app login information through the service link301006to the system database301007with the logout updating data stored in system database301007through the service link301006to track usage of the analytical computing system21100. InFIG.31is an embodiment of an instrument information upload computer311102running instrument information upload app software311103to perform instrument information upload functions provided by analytical computing system21100via services provided through a cloud platform at22223between the analytical computing system21100and, optionally, computing system(s) not part of analytical computing system21100. The instrument information upload app software311103, as discussed herein, may employ a MUI as described above to facilitate user access to the functionality provided. As such, embodiments of the methodical user interface control system1102may be provided by a combination of the instrument information upload computer311102and the cloud platform22223. The first interaction for an instrument information upload app311103with the cloud platform at22223is requesting login services through a service link311104to authenticate311105with appropriate credentials configured that preferably include a unique username and password, wherein, the login service can retrieve the encrypted credentials for an instrument information upload app311103through a service link311106from a system database311107to verify that an instrument information upload app can access and use the analytical computing system21100with the login updating data stored in system database311107through the service link311106to track usage of the analytical computing system21100.
After an instrument information upload app311103is authenticated on login, it can use services provided by upload311109through use of an instrument information upload app311103through a service link311108to perform instrument information upload functions provided by upload311109, wherein, these services, as required, use the service link311110to create a new instrument on first occurrence of an instrument and push data to be stored in instrument content311111associated with the instrument for a particular customer account for subsequent storage to an account in the system database311107through the service link311106with the use of these services also creating and updating data stored in system database311107through the service link311106to track usage of the analytical computing system21100. Once upload is complete, an instrument information upload app311103can logout from use of the analytical computing system21100via instrument information upload app311103through a service link311104to terminate the current use of the analytical computing system21100with the logout service of authenticate311105updating the instrument information upload app login information through the service link311106to the system database311107with the logout updating data stored in system database311107through the service link311106to track usage of the analytical computing system21100. InFIG.32is an embodiment of a coordinated-operation instrument computer321202running coordinated-operation instrument app software321203to perform coordinated-operation instrument functions provided by analytical computing system21100via services provided through a cloud platform at22223associated with instrumentation processing where a coordinated-operation instrument provides an integration of one or more individual-operation instruments; an integration of custom-designed hardware; or a combination of one or more individual-operation instruments with custom-designed hardware. The coordinated-operation instrument app software321203, as discussed herein, may employ a MUI as described above to facilitate user access to the functionality provided. As such, embodiments of the methodical user interface control system1102may be provided by a combination of the coordinated-operation instrument computer321202and the cloud platform22223. In one practice, the first interaction for a coordinated-operation instrument app321203with the cloud platform at22223is requesting login services through a service link321204to authenticate321205with appropriate credentials configured that preferably include a unique username and password, wherein, the login service can retrieve the encrypted credentials for a coordinated-operation instrument app321203through a service link321206from a system database321207to verify that a coordinated-operation instrument app321203can access and use the analytical computing system21100with the login updating data stored in system database321207through the service link321206to track usage of the analytical computing system21100. The coordinated-operation instrument computer321202running coordinated-operation instrument app software321203can communicate with a system component321213of a services server via a service link321212, which may communicate with a team database321211via a service link321210; the coordinated-operation instrument computer321202running coordinated-operation instrument app software321203can communicate with an application component321209of a service server via a service link321208.
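The "create a new instrument on first occurrence" behavior described above amounts to a get-or-create step before uploaded content is attached; below is a minimal sketch, assuming simple dictionary stand-ins for the system database and the instrument content store, with all names invented for illustration.

    def upload_instrument_info(system_db, instrument_content, serial, account, payload):
        # first occurrence of this serial number: create the instrument record
        if serial not in system_db.setdefault("instruments", {}):
            system_db["instruments"][serial] = {"account": account}
        # every occurrence: attach the uploaded content to the instrument
        instrument_content.setdefault(serial, []).append(payload)

    system_db, content = {}, {}
    upload_instrument_info(system_db, content, "SN-001", "acct-7", {"fw": "1.0"})
    upload_instrument_info(system_db, content, "SN-001", "acct-7", {"fw": "1.1"})
    assert len(content["SN-001"]) == 2 and "SN-001" in system_db["instruments"]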
One or more services server components, e.g.,321213,321209,321205, may communicate with a bulk data server, e.g., access instrument content321215, via a service link321214. InFIG.33Ais an embodiment of an individual-operation instrument computer331302running individual-operation instrument app software331303to perform individual-operation instrument functions provided by analytical computing system21100via services provided through a cloud platform at22223associated with instrumentation processing. The individual-operation instrument app software331303, as discussed herein, may employ a MUI as described above to facilitate user access to the functionality provided. As such, embodiments of the methodical user interface control system1102may be provided by a combination of the individual-operation instrument computer331302and the cloud platform22223. In one embodiment, an individual-operation instrument performs one or more logical assay steps on one or more samples in a stepwise process to collect data about the samples under test. In this embodiment, the individual-operation instrument does not perform all assay steps, which can include, without limitation, steps that relate to a plate reader, plate washer, plate incubator, plate shaker, plate incubator-shaker, pipetting system, or any other type of instrument used in support of analytical sample testing. In other embodiments, the individual-operation instrument can perform all assay steps. The first interaction for an individual-operation instrument app331303with the cloud platform at22223is requesting login services through a service link331304to authenticate331305with appropriate credentials configured that preferably include a unique username and password, wherein, the login service can retrieve the encrypted credentials for an individual-operation instrument app331303through a service link331306from a system database331307to verify that an individual-operation instrument app331303can access and use the analytical computing system21100with the login updating data stored in system database331307through the service link331306to track usage of the analytical computing system21100. In the alternative, the individual-operation instrument computer331302can assist in performing other functions in addition to or in place of the assay steps and/or plate-based tests described herein. InFIG.33Bis an embodiment of an individual-operation instrument computer331302running workflow-aid instrument app software331331to perform individual-operation instrument functions provided by analytical computing system21100via services provided through a cloud platform at22223associated with instrumentation processing. The individual-operation instrument app software331331, as discussed herein, may employ a MUI as described above to facilitate user access to the functionality provided. As such, embodiments of the methodical user interface control system1102may be provided by a combination of the individual-operation instrument computer331330and the cloud platform22223. The workflow-aid instrument helps a user perform collection of assay components used in the processing of the assays in an associated experiment, as well as preparing bioassay components that require preparation prior to being used in the processing of an assay, for example but not limited to, rehydrating lyophilized reagents, thawing frozen reagents, pretreating samples, and/or any other step required to prepare constituent components to be used in processing one or more assays in a chosen experiment.
The first interaction for a workflow-aid instrument app331331with the cloud platform22223is requesting login services through a service link331332to authenticate331333with appropriate credentials configured preferably with a unique username and password, wherein, the login service would retrieve the encrypted credentials for a workflow-aid instrument app331331through a service link331334from system database331335to verify that a workflow-aid instrument app331331may access and use the analytical computing system21100with the login updating data stored in system database331335through the service link331334to track usage of the analytical computing system21100. After workflow-aid instrument app331331is authenticated on login, it will use services provided by application331337through use of a workflow-aid instrument app331331through a service link331336to perform workflow-aid instrument app functions provided by application331337, wherein, these services, as required, use the service link331338to retrieve experiments ready to be processed331339; to store data331339as an experiment is processing; and/or to store data331339after an experiment completes processing, with the use of these services also creating and saving data stored in system database331335through the service link331334to track usage of the analytical computing system21100. Once a user has completed use of a workflow-aid instrument app331331, they could logout from use of the analytical computing system331300via workflow-aid app331331through a service link331332to terminate the current use of the analytical computing system21100with the logout service of authenticate331333updating a workflow-aid instrument app login information through the service link331334to the system database331335with the logout updating data stored in system database331335through the service link331334to track usage of the analytical computing system21100. InFIG.34AandFIG.34Bis a combined embodiment of software architecture for services deployed on cloud platform22223. Scalable computing service341401associated with cloud platform22223provides a secure computing environment for scaling the server utilization of services servers341406as system computing requirements change, as well as supporting the building of failure-resilient applications and isolating them from common failure scenarios. Bulk storage service341402associated with cloud platform22223provides unlimited data storage space in a highly available and durable way for any kind of data to be stored, such as images, video, documents, binary data files, and/or other types of files. Database service341403associated with cloud platform22223provides a secure, distributed, scalable database system used to store structured data for applications and system, supporting and easily distributing one or more databases across one or more servers. Lambda function service341404associated with cloud platform22223provides an event-driven computing platform for running special-built utility applications in response to configured events, while automatically managing computing resources required by these special-built utility applications.
Load balancing service341405associated with cloud platform22223provides distribution of incoming service requests from clients across multiple services servers341406to meet continuous performance demands, as well as performing health checks on the scalable computing service341401to ensure the service is operational before sending requests and providing an extra security layer by isolating the scalable computing service341401from direct access from the internet. Logical collection of authenticate services341407deployed on services servers341406provides login services341408supporting a user logging in with username and password with strong password policy and customizable password expiration period; and services341409supporting two-factor authentication (2FA) for a supplemental method of confirming a user's claimed identity by using another form of authentication other than a username and password, for example using a Time-based One-time Password algorithm (TOTP) optionally configured on or off for an account. Logical collection of admin services341410deployed on services servers341406provides account services341411supporting account preparation, team creation, team administration, software releases, and instrument service plan; and team services341412supporting managing team membership, defining role permissions per each module, assigning users one or more roles, and notifying users via email of key events in the system pertinent to them. Logical collection of dashboard services341413deployed on services servers341406provides performance data services341414supporting gathering use and performance data from all instruments and consumables in the field operating within the system for analysis and presentation, as well as supporting export of this data to external systems; and account services341415providing visibility into the structure and operation of various teams and their users in an account plus across all accounts and supporting export of this data to external systems, as well as providing ability to monitor and disable suspicious and/or undesired behavior. Logical collection of upload services341416deployed on services servers341406provides instrument information services341417supporting import of instrument information from external systems for an account and teams associated with an account; and consumable information services341418supporting import of consumable information from external systems for an account and teams associated with an account. Logical collection of system services341419deployed on services servers341406providing performance data upload services341420supporting storing instrument and consumable performance data from instruments in the field operating within the system to be stored using bulk storage service341402; and content services341421supporting dissemination of user manuals and application installers for various versions of applications.
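The two-factor authentication services341409name a Time-based One-time Password algorithm (TOTP); as an illustration, such a check can be performed with the third-party pyotp package. The library choice is an assumption, since the document does not name one.

    import pyotp

    secret = pyotp.TOTP  # noqa: see next line; the per-user secret is stored at 2FA enrollment
    secret = pyotp.random_base32()

    totp = pyotp.TOTP(secret)      # 30-second time step by default

    code = totp.now()              # what the user's authenticator app would display
    assert totp.verify(code)       # server-side check during login
    # a stale or incorrect code, e.g. totp.verify("000000"), is rejected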
Logical collection of application services341422deployed on services servers341406providing plate services341423supporting storing plate data for a user, including signal, plate identifier, username of the user who processed the plate, timestamp of execution, and plate type, in a team database associated with the requesting user; audit log services341424supporting capturing time-stamped events linked to a user's actions with data and services in the system to be stored in the team's database associated with a user performing the actions; experiment services341425supporting creating an experiment with selected assay methods and samples to process, committing an experiment for execution and storing to a requesting user's team database, retrieving plate data from instruments to store with an experiment in a requesting user's team database, retrieving a collection of recent or all experiments from a requesting user's team database, initiating calculation of results using one or more associated analysis methods and storing to a requesting user's team database, and retrieving a specific experiment with its plate data including calculated results from a requesting user's team database; assay method services341426supporting retrieving a collection of recent or all assay methods from a requesting user's team database, retrieving a specific assay method from a requesting user's team database with assay method configuration data including but not limited to assay method name, associated assays to be tested, layout of different sample types being optionally calibrators, optionally controls, optionally blanks, and optionally samples (i.e., unknowns), analysis method assignments to the assay method as well as optionally to one or more assays associated with the assay method, and protocol parameters configuring the performance of the assay method either manually or automatically, and committing the assay method for use, storing it in the requesting user's team database; and data analysis services341427supporting retrieving a collection of recent or all analysis methods from a requesting user's team database, retrieving a specific analysis method from a requesting user's team database with analysis method configuration data including but not limited to analysis method name, algorithm and associated configuration, background detection configuration, and limits of detection configuration, and committing the analysis method for ultimate use, storing it in the requesting user's team database. In the alternative, the logical collection of application services341422can assist in performing other services in addition to or in place of the assay services and/or plate based tests described herein.
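As a rough illustration of the experiment services341425call sequence, the sketch below creates an experiment from an assay method and samples, commits it to a team database stand-in, attaches plate data, and computes placeholder results; the function names and the trivial analysis step are assumptions for illustration only.

    def run_experiment(team_db, name, assay_method, samples):
        exp = {"name": name, "assay_method": assay_method, "samples": samples,
               "plates": [], "results": None, "committed": False}
        exp["committed"] = True                       # commit the experiment for execution
        team_db.setdefault("experiments", []).append(exp)
        exp["plates"].append({"signal": [150, 300]})  # plate data retrieved from an instrument
        # placeholder for the associated analysis method's calculation of results
        exp["results"] = [s * 2 for s in exp["plates"][0]["signal"]]
        return exp

    db = {}
    exp = run_experiment(db, "exp-1", "assay-A", ["S1", "S2"])
    assert exp["results"] == [300, 600] and db["experiments"][0]["committed"]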
InFIG.35Ais an embodiment of a logical design for system data for the analytical computing system with the data entities organized in logical groups as defined by Account and Team351500, Instrument and Service Plan351501, User and Role351502, Software Release351503, Multi-Region351504, and Audit Trail351505, wherein, Account and Team351500includes one or more data entities associated with managing an account of users organized into teams on the analytical computing system; Instrument and Service Plan351501includes one or more data entities associated with instrumentation used in conjunction with the analytical computing system; User and Role351502includes one or more data entities associated with managing the users assigned to zero or more teams on the analytical computing system; Software Release351503includes one or more data entities associated with managing new releases of software for the analytical computing system; Multi-Region351504includes one or more data entities associated with managing deployment of an account on a cloud platform providing support for a geographically distributed computing environment providing localized performance improvement and/or meeting governmental restrictions such that an account could be located to a desired geographical location; and Audit Trail351505includes one or more data entities associated with capturing a log of actions performed by administrators of the analytical computing system. Account and Team351500has a data entity351506representing one or more accounts on the analytical computing system where each account has associated with it a data entity351509representing one or more teams organized within an account and a data entity351507representing a company establishing a specific account for which there is a data entity351508representing a contact who is responsible for preparing use of the account for a company, such that each company351507can have more than one account associated with it as prepared by a specific primary contact351508. Instrument and Service Plan351501has a data entity351513representing one or more instruments to be used in conjunction with an account on the analytical computing system where each instrument has associated with it a data entity351511representing the ship-to address associated with a data entity351510representing a parent company to which the associated instrument was shipped, such that a Parent Company351510may have associated with it one or more Ship-To Addresses351511that in turn can have associated with it one or more Instruments351513that have associated with each instrument a data entity351512representing a service plan either active or inactive for an instrument that itself is associated with an Account351506to aid an administrator in managing service plan renewals of one or more instruments potentially in use with the account on the analytical computing system.
User and Role351502has a data entity351514representing a distinct user of the analytical computing system associated with one or more teams351509prepared for an Account351506where each user has for a team an association with a data entity351515representing a role in the use of the analytical computing system with a prescribed set of software function permissions as defined in the associated data entity351516derived from the permissions defined by the data entity351518associated with the data entity351517representing each software module in the analytical computing system, such that, a distinct User351514may participate with one or more teams351509where each team could be in the same or different accounts351506and the user assuming one or more roles351515for each team that enables and disables one or more functions of the software as configured for each role351516. Software Release351503has a data entity351519representing the overarching software release of a collection of one or more applications in the analytical computing system as represented by the data entity351520, such that each Account351506is using a particular software release351519and may upgrade to one of one or more new software releases351519, but all users351514associated with an Account351506have to upgrade to the new software release351519when an Account351506performs the upgrade. Multi-Region351504has a data entity351522representing the geographical region supported by the cloud platform and an associated data entity351523representing the database server used for creating databases, such that, an Account351506is associated with a specific geographical region351522to which all of its Teams351509will have their associated databases351523created for use by each respective team so that only Users351514assigned to a Team351509may access the assigned database created351523. Audit trail351505includes data entity351524representing an audit event. Software release351503can include version control351521, which is adapted to document, maintain, and/or track previous versions of the Software release351503. In one embodiment, version control351521includes information as it relates to the existing version and all previous versions of Software release351503along with information as to changes to the software that propagated through the various versions of the software. Moreover, version control351521can include information as it relates to future plans for additional revisions to Software release351503. Audit trail351505can further include an audit event351524, which can be used to trigger a system audit and/or audit various user- or instrument-based operations. InFIG.35Bis an embodiment of a mapping between one or more business entities351570and351576using an analytical computing system351581to the analytical computing system351581through the cloud platform351582used in delivering electronic information about purchased products as the products are prepared for physical shipping.
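A compact way to picture the Account, Team, User, Role, and Multi-Region entities described above is a set of data classes in which team membership carries one or more roles and an account is pinned to a region; this is an illustrative sketch of the logical design, not its actual schema, and every name is an assumption.

    from dataclasses import dataclass, field

    @dataclass
    class Role:
        name: str
        permissions: set           # software function permissions configured per role

    @dataclass
    class Team:
        name: str
        members: dict = field(default_factory=dict)   # username -> list[Role]

    @dataclass
    class Account:
        name: str
        region: str                # Multi-Region: account pinned to a geographical region
        teams: dict = field(default_factory=dict)     # team name -> Team (unique per account)

    acct = Account("Account 1", region="eu-west")
    acct.teams["Team"] = Team("Team")
    acct.teams["Team"].members["user1"] = [Role("lab manager", {"design", "operate"})]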
Business entity351570can include, but is not limited to, a corporation, limited liability company, sole proprietorship, non-profit company, academic institution, government agency or affiliate, private individual or group of individuals, research institute, or any other type of entity using the analytical computing system351581, wherein the business entity351570is described by parent business information351571that can have associated with it zero or one or more ship-to addresses351572and351574, wherein the ellipse351573illustrates the potential for zero or one or more ship-to addresses, where each ship-to is a unique address to which the business entity351570wants products it purchases to be delivered for ultimate use within the targeted business entity. The ship-to351572labelled “A” and the ship-to351574labelled “Z” are merely illustrative as any number of ship-to addresses associated with any business entity are contemplated. A business entity351570or351576can have zero ship-to's associated with it such that it purchases one or more products to be delivered to some other business entity; regardless, each business entity can have an account with the analytical computing system351581. As shown by element351575, there can be zero or more business entities as depicted351570and351576using the analytical computing system351581, but the parent information351571labelled “A” and the parent information351577labelled “Z” illustrates up to 26 business entities351570and351576with their respective parent information351571and351577, but any other number is contemplated as well. Similarly, the business entity351576can be described by parent business information351577that can have associated with it zero or one or more ship-to addresses351578and351580, wherein the ellipse351579illustrates the potential for zero or one or more ship-to addresses, where each ship-to is a unique address to which the business entity351576wants products it purchases to be delivered for use within the targeted business entity.
The ship-to351578labelled “A” and the ship-to351580labelled “Z” illustrate up to 26 ship-to's associated with any business entity, but any other number is contemplated as well. The analytical computing system351581includes its associated computing platform351582, its system database351583, and consumable content351585, wherein the system database351583has stored with it in part a collection of data351584being account information of business entities using the analytical computing system351581having an auto-generated unique identifier from cloud platform351582for tracking a business entity's use of the analytical computing system351581along with the account's identifier being associated with the unique identifier of the business entity in this example of an embodiment being either Parent A ID for business entity351570or Parent Z ID for business entity351576, while also being depicted that a business entity could have more than one account on the analytical computing system351581since Parent Z ID is repeated; and consumable content351585having stored with it a collection of purchased consumable content being the general and lot-specific content for a purchased product as shipped to a business entity, Parent A ID being the business entity351570and Parent Z ID being the business entity351576, to particular ship-to addresses of the associated business entity351570and351576, where the ship-to addresses are unique within a business entity351570and351576, but not necessarily unique between the two business entities, which is to say, two different business entities can share a common ship-to address, such that the cloud platform351582may transfer consumable content to each account on the analytical computing system that can desire to use a purchased consumable available to it as determined through the PARENT ID mechanism of ACCOUNT351584mapped to the associated ship-to's as defined in consumable content351585. InFIG.35Cis an embodiment of a logical design for data representing plate data generated by a user for a team using instrumentation in conjunction with the analytical computing system with the data entities351599logically organized having a data entity351594representing a physical plate processed and/or analyzed by instrumentation where each plate has an association with a data entity351595representing a definition of how many measurements of the possible measurements this plate will provide for each tested sample; an association with a data entity351596representing the collected data produced from the plate; an association with a data entity351597representing the configuration of the instrumentation used to produce the collected data; and an association with a data entity351598representing any abnormal events that might have occurred on the instrumentation in the process of producing the collected data351596from the plate351594. Although this embodiment describes plate-reader operations and/or applications, the methods described herein can be applied in the alternative to the logical design of other experiments and tests in the alternative.
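The PARENT ID mechanism described above, by which consumable content keyed to a parent company and ship-to is disseminated to every account carrying that parent's unique identifier, can be sketched as a simple lookup; the identifiers and lot values below are invented examples.

    accounts = [                                   # stand-in for the ACCOUNT collection 351584
        {"account_id": "acct-1", "parent_id": "Parent A ID"},
        {"account_id": "acct-2", "parent_id": "Parent Z ID"},
        {"account_id": "acct-3", "parent_id": "Parent Z ID"},  # one parent, two accounts
    ]
    consumable_content = [                         # stand-in for consumable content 351585
        {"parent_id": "Parent A ID", "ship_to": "A", "lot": "L-100"},
        {"parent_id": "Parent Z ID", "ship_to": "Z", "lot": "L-200"},
    ]

    def accounts_for(content):
        # every account sharing the content's parent identifier receives the content
        return [a["account_id"] for a in accounts if a["parent_id"] == content["parent_id"]]

    assert accounts_for(consumable_content[1]) == ["acct-2", "acct-3"]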
InFIG.35Dis an embodiment of a logical design for data representing methods for performing assays in the analytical computing system with the data entities351569logically organized having a data entity351560representing a named assay method to be performed using consumables and instrumentation where each assay method has an association with a data entity351561representing a named method by which data collected from instrumentation is post-analyzed to provide assay context to the collected data in association with a data entity351566representing the configuration of settings relevant to a prescribed analysis as well as an optional association with a data entity351567representing the configuration of settings relevant to a prescribed analysis leveraging a curve fitting technique; an association with a data entity351562representing a definition of how many measurements of the possible measurements this plate will provide for each tested sample; an association with a data entity351563representing a definition of the available measurements each plate will provide for each tested sample in association with a data entity351568representing the specific assay to be measured; an association with a data entity351564representing general information about a consumable to be used with the assay method; and an association with a data entity351565representing a definition of the layout of various types of samples to be dispensed on a plate where the types of samples are calibrators, controls, blanks, and samples (also referred to as unknowns or samples under test), such that, the collection of these data entities provides the assay-specific context to help a user determine what the measured data collected from instrumentation means about the samples they are testing. Although this embodiment describes methods for performing assays and/or plate-based tests, other experiments and tests are contemplated as well. InFIG.35Eis an embodiment of a logical design for data representing a collection of plates organized to be processed together or independently, either way a unit of schedulable work referred to as a run, with the data entities351592logically organized having a data entity351586representing a run to be processed using instrumentation with the run having an association with a data entity351594representing a physical plate processed by instrumentation; an association with a data entity351560representing each assay method used with each corresponding plate for the run; an association with a data entity351588representing a run record; and an association with a data entity351589representing a system record. The data entity351588has associations with a data entity351590representing a kit inventory record and with a data entity351591representing a sample layout. Although this embodiment describes plate-based operations and/or applications, the methods described herein can be applied in the alternative to the review of other experiments and tests in the alternative. 
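Where an analysis method leverages a curve fitting technique, a common choice for calibrator-based assays is the four-parameter logistic (4PL) model; the sketch below fits 4PL with SciPy and back-calculates an unknown sample's concentration. The model choice and the data points are assumptions for illustration, not values from the document.

    import numpy as np
    from scipy.optimize import curve_fit

    def four_pl(x, a, b, c, d):
        # a: response at zero dose, d: response at infinite dose,
        # c: mid-point concentration (EC50), b: slope factor
        return d + (a - d) / (1.0 + (x / c) ** b)

    conc = np.array([0.1, 1, 10, 100, 1000])        # calibrator concentrations
    signal = np.array([22, 120, 900, 4100, 5200])   # measured instrument signal

    params, _ = curve_fit(four_pl, conc, signal, p0=[20, 1, 50, 5500], maxfev=10_000)

    def back_calculate(y, a, b, c, d):
        # invert the 4PL model to recover concentration from a measured signal
        return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

    unknown = back_calculate(2000.0, *params)       # concentration of an unknown sample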
InFIG.35Fis an embodiment of a logical design for data representing the definition of a collection of sample(s) to be measured, assay method(s) to be used to prepare the samples to be measured on plates, and analysis results determined from the measured data using analysis algorithm(s) defined in association with an assay method and/or its constituent assays, all referred to as an experiment, with data entities351542logically organized having a data entity351535representing an experiment to be processed using instrumentation with the experiment having an association with a data entity351536representing a plate processed in the context of an experiment where one or more of these processed plates are associated with a run351586and associations with data entities351539and351541used to post-analyze the measured data from a plate using assay method setup to determine results351540; an association with a data entity351537representing a specification of the structure of data table to be presented for the measured data from plates; and an association with a data entity351538representing a collection of sample statistics determined from the measured and analyzed data. Although this embodiment describes methods for performing assays and/or plate-based experiments, other experiments and tests are contemplated as well. InFIG.36Ais an embodiment of an example of account structure for users of analytical computing system361600, but this example is not intended to limit the account structure in any way since the analytical computing system361600can support an infinite number of accounts, an infinite number of teams in an account, and an infinite number of administrators and/or other users assigned to accounts and/or teams. In the example for this embodiment there are shown four accounts prepared on analytical computing system361600as represented by Account 1361601, Account 2361602, Account 3361603, and Account 4361604, with these account names used to aid in the example. Each account has associated with it one or more teams where Account 1361601has one team named Team with a team database361607dedicated to that team; Account 2361602has one team named Team to illustrate team names are only unique within an account, also with its dedicated team database361611; Account 3361603has two teams named Team 1361615with its dedicated team database361617and Team 2361616with its own dedicated team database361618; and Account 4361604has two teams named Team A361626with its dedicated team database361628and Team B361627with its own dedicated team database361629. The users in the various accounts and teams are uniquely named as well to illustrate users can be uniquely named for easy end-user identification with User 1361605and361608, User 2361609and361623, User 3361612, User 4361613, User 5361614, User 6361619and361622, User 7361620and361633, User 8361621, User 9361624and361630, User 10361625, User 11361631, and User 12361632, but in the preferred embodiment the username would be a fully expressed unique email address and/or username of a user, simplified for this example.
Additionally, User 1361605illustrates a user could be an account admin, a team admin, and/or a team member; User 2361609illustrates a user could only be an account admin and a team admin; User 2361623illustrates a user could only be an account admin and also illustrates a user could be an admin of more than one account; User 5361614illustrates a user could be a team admin for more than one team; User 6361619and361622illustrates a user could be a team member of more than one team in an account; User 7361620and361633illustrates a user may be a team member of more than one team in more than one account; User 9361624illustrates a user could be a team admin and a team member (e.g., as shown at361630); User 10361625illustrates a user could only be a team admin; User 3361612, User 4361613, User 6361619, User 7361620and361633, User 8361621, User 11361631, and User 12361632illustrate users could only be assigned as a team member with no administrative permissions; and, although not explicitly illustrated, it should be obvious that there are no constraints placed by the system on how a particular user is assigned to an account and teams associated with an account, since user assignment is fully the responsibility of the person or people managing an account and team(s) associated with an account. Additionally, the analytical computing system361600in a preferred embodiment would provide a role-based permission mechanism beyond account administrator and team administrator to help control access to various system functions of the analytical computing system361600for various users on a team in an account where predefined and user-changeable user roles could include, but are not limited in name and/or number to, lab manager, designer, associate, operator, and maintenance technician; such that, an account administrator would have permissions associated with managing an account shared by users on all teams; a team administrator would have permissions associated with managing user participation on a team; a lab manager could have permissions with greatest responsibility compared to other roles for users on a team; a designer and associate could have permissions different than each other befitting each of their normal job functions; an operator could only have permissions for normal operation of instrumentation; and a maintenance technician could only have permissions for maintenance and diagnosis of instrumentation, where a user could be assigned more than one role and given permissions aggregated across the assigned roles, hence, as an example User 1361605would have account permissions for Account 1361601, team administration of Team361606, plus whatever other permissions based on the role(s) User 1361605assigned themselves as the team administrator. InFIG.36Bis an embodiment of the computing flow for creation and update of an account on the analytical computing system361654. The computing flow of the account information upload app361653may be managed, for example, by a MUI provided via methodical user interface control system1102operating, at least in part, on the account information upload computer361652and the analytical computing system361654. For example, an admin module may be used to manage the interface embodiments of the computing flow.
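The aggregation of permissions across assigned roles described above is, in effect, a set union; below is a minimal sketch using the role names from the example, with the permission strings invented for illustration.

    ROLE_PERMISSIONS = {
        "account admin": {"manage_account"},
        "team admin": {"manage_team_membership"},
        "lab manager": {"design_assay", "run_experiment", "review_results"},
        "operator": {"run_experiment"},
        "maintenance technician": {"run_diagnostics"},
    }

    def effective_permissions(assigned_roles):
        # a user assigned several roles receives the union of each role's permissions
        perms = set()
        for role in assigned_roles:
            perms |= ROLE_PERMISSIONS.get(role, set())
        return perms

    # e.g., a user who is an account admin, team admin, and lab manager:
    assert "design_assay" in effective_permissions(
        ["account admin", "team admin", "lab manager"])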
The flow is represented in a “swim lane” diagram depicting independent computing systems, analytical computing system provider business system361651, account information upload computer361652, and cloud platform361655, each operating concurrently to each other with processing swim lanes for analytical computing system provider business system361651depicted between lines361659and361660, processing swim lanes for account information upload computer361652with its software application account information upload app361653depicted between lines361660and361661, and processing swim lanes for cloud platform361655with its software account services361658depicted between lines361661and361662. The processing of analytical computing system provider business system361651is depicted as out of scope for the analytical computing system361654with the dotted-line outline of analytical computing system provider environment361650, but other embodiments are contemplated as well. Analytical computing system provider business system361651can cause generation of a request for a new account361663or an update to an existing account361669. The interface mechanism for processing between analytical computing system provider business system361651and account information upload app361653occurs through a messaging mechanism361664that can be a file share, a message queue like Java Messaging Service, Microsoft Message Queue or some other queuing service, email, or some other mechanism for software applications to communicate with each other, wherein the processing at step361663can be to prepare a message with proper format and content per a prescribed interface definition with information about an account defined in the analytical computing system provider business system361651and post it to the messaging mechanism361664for ultimate processing by the account information upload app361653. The first flow to be described is account creation as initiated at step361663to generate a new account request based on an event that occurs in the analytical computing system provider business system361651by posting a message via messaging mechanism361664with information including, but not limited to, the account number as managed by analytical computing system provider business system361651, primary contact information including but not limited to name, business contact address and phone number, the email address of the first person the analytical computing system361654automatically contacts to initiate their setup and use of the analytical computing system361654, the unique identifier for the account in the analytical computing system provider business system361651, and any other information deemed necessary for an account. The message is received at step361665and checked for the type of message being received, first for a request to create an account at step361666, then for updating an account at step361671, and, if neither, posting an error message at step361676to messaging mechanism361664and returning to wait for the next message at step361665.
On receiving a create account request at step361666, a create account request is constructed at step361667from the message content received from messaging mechanism361664to post at step361668using the cloud platform361655, e.g., using services server361656which may include admin functionality or component361657, wherein on receipt of the post it is verified at step361669to ensure the request has all relevant content, on failure returning an error response at step361668, and on success creating the account at step361670and storing all of the account information in the create post in the system database on the cloud platform361655, making the primary contact identified in the create post the first account administrator for the new account, emailing the primary contact with instructions on how to log into the analytical computing system361654, returning success to the requester at step361668, and returning at step361667the account information upload app361653to waiting for a message at step361665. On receiving an update account request at step361671, an update account request is constructed at step361672from the message content received from messaging mechanism361664to post at step361673using the cloud platform361655, wherein on receipt of the post it is verified at step361674to ensure the request has all relevant content, on failure returning an error response at step361673, and on success updating the account at step361675and storing all of the account information in the update post in the system database on the cloud platform361655, returning success to the requester at step361673, and returning at step361672the account information upload app361653to waiting for a message at step361665. InFIG.36Cis an embodiment of the computing flow for instrument association with an account on the analytical computing system361654. The computing flow of the instrument information upload app361637may be managed, for example, by a MUI provided via methodical user interface control system1102operating, at least in part, on the instrument information upload computer361636and the analytical computing system361654. For example, an admin module may be used to manage the interface features of the computing flow. The flow is represented in a “swim lane” diagram depicting independent computing systems, instrumentation provider business system361635, instrument information upload computer361636, and cloud platform361655(which may include services server361656providing, e.g., an admin functionality or component361657), each operating concurrently to each other with processing swim lanes for instrumentation provider business system361635depicted between lines361659and361698, processing swim lanes for instrument information upload computer361636with its software application instrument information upload app361637depicted between lines361698and361699, and processing swim lanes for cloud platform361655with its software account services361658depicted between lines361699and361662. The processing of instrumentation provider business system361635is depicted as out of scope for the analytical computing system361654with the dotted-line outline of instrumentation system provider environment361634, but other embodiments are contemplated as well. Instrumentation provider business system361635results in the generation of a request for a new instrument purchase at step361638, a request for an instrument evaluation at step361648, or a request for an instrument lease at step361649, wherein, each request results in a shipment of the instrument at step361639.
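As an illustration of the swim-lane flow above, the sketch below models the messaging mechanism as a queue whose consumer dispatches create-account and update-account messages to a stand-in for the cloud platform's account services and posts an error message for anything else; all names are assumptions, not the actual interface definition.

    import queue

    def account_services(request):
        # stand-in for cloud-side verification and storage of the account
        if not request.get("account_number"):
            return {"ok": False, "error": "missing content"}
        return {"ok": True}

    def process_one(msgs: "queue.Queue"):
        msg = msgs.get()                             # wait for the next message
        if msg["type"] in ("create_account", "update_account"):
            return account_services(msg["body"])     # post to the cloud platform and verify
        msgs.put({"type": "error", "body": msg})     # post an error message back
        return {"ok": False, "error": "unknown message type"}

    q = queue.Queue()
    q.put({"type": "create_account",
           "body": {"account_number": "A-100", "primary_contact": "admin@example.com"}})
    assert process_one(q)["ok"]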
The interface mechanism for processing between instrumentation provider business system361635and instrument information upload app361637occurs through a messaging mechanism361640that can be a file share, a message queue like Java Messaging Service, Microsoft Message Queue or some other queuing service, email, or some other mechanism for software applications to communicate with each other, wherein the processing at step361638and at step361648and at step361649can be to prepare a message with proper format and content per a prescribed interface definition with information about an instrument purchase at step361638, evaluation at step361648, or lease at step361649including the ship-to address defined in the instrumentation provider business system361635and post it to the messaging mechanism361640for ultimate processing by the instrument information upload app361637. The resulting flow on purchase at step361638, evaluation at step361648, or lease at step361649is identical, so the description can focus on a new instrument purchase as initiated at step361638to generate a new instrument purchase request based on an event that occurs in the instrumentation provider business system361635by posting a message via messaging mechanism361640with information including, but not limited to, the account number of the analytical computing system to which the instrument will be assigned as managed by instrumentation provider business system361635, instrument serial number, the unique identifier of the parent company of the organization expecting the instrument(s), and the unique identifier of the ship-to location to which the instrument will be shipped as managed by the instrumentation business system361635, the service plan details associated with duration of the plan and the available number of seats for users to use the analytical computing system361654, and any other information deemed necessary for an account on the analytical computing system361654. The message is received at step361641, checking the message at step361642to confirm it is assigning an instrument to an account; if the message is assigning an instrument to an account, then processing continues at step361643, but if not, processing continues at step361647to post an error message to messaging mechanism361640and returning to get messages at step361641. On receipt of a correct instrument assignment request361642, processing continues at step361643to construct a request from the message content received from messaging mechanism361640and put it at step361644using the cloud platform361655, wherein on receipt of the put it is verified at step361645to ensure the request has all relevant content, on failure returning an error response at step361644, and on success assigning the instrument to the account at step361646and storing all of the instrument information in the put request in the system database for the account on the cloud platform361655, returning success to the requester at step361644, and returning at step361643the instrument information upload app361637to waiting for a message at step361641. InFIG.36Dis an embodiment of the computing flow for consumable association with an account on the analytical computing system361654. The computing flow of the consumable information upload app361683may be managed, for example, by a MUI provided via methodical user interface control system1102operating, at least in part, on the consumable information upload computer361682and the analytical computing system361654.
For example, an admin module may be used to manage the interface features of the computing flow. The flow is represented in a “swim lane” diagram depicting independent computing systems, consumable provider business system 361681, consumable information upload computer 361682, and cloud platform 361655, each operating concurrently with the others, with the processing swim lane for consumable provider business system 361681 depicted between lines 361659 and 361696, the processing swim lane for consumable information upload computer 361682 with its software application consumable information upload app 361683 depicted between lines 361696 and 361699, and the processing swim lane for cloud platform 361655 (e.g., services server 361656, which may include admin functionality or component 361657) with its software account services 361658 depicted between lines 361699 and 361662. The processing of consumable provider business system 361681 is depicted as out of scope for the analytical computing system 361654 by the dotted-line outline of consumable system provider environment 361680, but other embodiments are contemplated as well. Analytical computing system 361654 generates a request for a new consumable purchase at step 361684, with each request resulting in a shipment of a consumable at step 361685. The interface mechanism for processing between consumable provider business system 361681 and consumable information upload app 361683 is a messaging mechanism 361686 that can be a file share; a message queue such as Java Messaging Service, Microsoft Message Queue, or some other queuing service; email; or some other mechanism for software applications to communicate with each other. The processing at step 361685 can be to prepare a message with proper format and content per a prescribed interface definition, with information about a consumable purchase as well as lot-specific content associated with the consumable(s) being purchased, including the unique identifier of the parent company expecting the consumable(s) and the unique identifier of the ship-to address defined in the consumable provider business system 361681, and post it to the messaging mechanism 361686 for ultimate processing by the consumable information upload app 361683. The resulting flow on purchase at step 361684 generates a new consumable purchase request based on an event that occurs in the consumable provider business system 361681 by posting a message via messaging mechanism 361686 with information including, but not limited to: the barcodes of constituent components associated with a consumable; general and lot-specific content associated with the consumable; the unique identifier of the parent company; the unique identifier of the ship-to location to which the consumable(s) will be shipped as managed by the consumable provider business system 361681; and any other information deemed necessary for an account on the analytical computing system 361654. The message is received at step 361687 and checked at step 361688 to confirm it is assigning a consumable to a site account; if the message is assigning a consumable to a site account, processing continues at step 361689, but if not, processing continues at step 361693 to post an error message to messaging mechanism 361686 before returning to get messages at step 361687.
On receipt of a correct consumable purchase request at step 361688, processing continues at step 361689 to construct a request from the message content received from messaging mechanism 361686 and post it at step 361690 using the cloud platform 361655. On receipt of the post at step 361690, it is processed to store the new consumable information to consumable content on the cloud platform, organizing the content by the parent account provided with the new consumable information for ultimate dissemination to the instrument(s) and account(s) associated with the ship-to associated with the consumable, posting an event to trigger that dissemination, and returning success to the requester at step 361690, with the consumable information upload app 361683 returning at step 361689 to waiting for a message at step 361687. At step 361692, processing triggered by an event delivered at step 361691 initiates the deployment of all new consumable information to one or more accounts associated with ship-to's of the new consumables via the unique identifier of the parent company expecting the consumable(s). In FIG. 37 is an embodiment of software modules in administrator app 371700 forming the primary user interface experience for administrative work, typically but not limited to using data configured and generated through the use of services provided by cloud platform 371704 to create, read, update, and/or delete any and all data relevant to each module's processing, as well as any other services needed for each module's processing, wherein admin console module 371701 can be the active module by default when the administrator app 371700 starts. Admin audit trail module 371702 provides visibility into the actions various account admins and/or team admins perform in the administrator app 371700. The collection of system functions 371703 provides typical utilities in support of use of a system, such as but not limited to logging off, viewing help information, viewing the user guide, viewing legal notices and/or documents, changing the user password, and/or other utilities. The collection of system functions 371703 may be provided as a separate MUI module and/or a series of software protocols that operate alongside the other discussed MUI modules. As discussed above, the administrator app 371700 may employ a MUI supplied by a methodical user interface control system 1102 for interface purposes. The admin console module 371701, the admin audit trail module 371702, and the system functions 371703 may all employ a MUI for user interface purposes. A user will log into the administrator app 371700 through system functions 371703 using services provided by cloud platform 371704. If authentication of an administrator by a login service on cloud platform 371704 returns that the administrator has more than one account, the administrator could be required to select a default account, but if the administrator does not belong to more than one account and/or team, the service on the cloud platform 371704 can auto-assign the administrator to the sole account for that administrator. On completing login, the user lands at the start of the admin console module 371701 and begins using the administrator app 371700 as they need.
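Purely as a non-limiting sketch of the event-triggered dissemination described above (steps 361691 and 361692), new consumable content keyed by the parent-company identifier might be deployed as follows; the in-memory structures here are hypothetical stand-ins for the cloud-platform consumable content and account databases:

    def on_consumable_event(event, consumable_content, accounts_by_parent):
        # Deploy newly stored consumable information to every account whose
        # ship-to is associated with the parent company named in the event.
        parent_id = event["parent_company_id"]
        new_items = consumable_content.get(parent_id, [])
        for account in accounts_by_parent.get(parent_id, []):
            for item in new_items:
                # Copy general and lot-specific content into the account's
                # database for use by its associated instrument(s).
                account.setdefault("consumables", []).append(item)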
In FIG. 38A is an embodiment of a user experience flow through the admin console module for an account admin whose responsibilities are to administer the overall account for an analytical computing system, as well as to administer all teams associated with an account, using administrator app at 381800 running on an admin's computer, with each step through the user interface numbered sequentially 1 through ‘n’ to represent the stepwise flow from begin (1) to end (‘n’) for an admin, as depicted in administrator app at 381800 being labelled “1.” The user experience flow of FIG. 38A may be managed by a MUI as discussed herein. FIGS. 38D-38H provide screenshots illustrating embodiments of the experience flow illustrated in FIG. 38A. At 381801 an admin is requested to log in, and in this case the authentication service on the cloud platform recognizes that the user logging in is identified as an account administrator per user configuration, allowing the user to log in; if not recognized as an account administrator, the user is denied access with an error message informing the user. At 381810 the user interface auto-transitions to presenting a first menu of options including prepare teams at 381811, define administrators at 381812, manage teams at 381815, and/or update account at 381822. On selecting prepare teams at 381811, the user interface presents an execution menu including information on the number of seats available for an account, the maximum number of teams an account may have, and the current set of named teams, if any. A field to enter a new team name is provided with an execution function that will initiate the creation of new teams. The user may type in a unique team name and press enter. The team name, if unique, is added to the set of teams ready to be created for the account on initiating the execution function, with the execution function invoking service(s) on the cloud platform to create each new named team for the account in the system database and create a new team database on a database server using the database service, as well as updating the new team database(s) through a lambda service invoked on the cloud platform to populate consumable information from consumable content for potential use by each new team. Subsequent to execution, the user interface transitions back to start at 381810 to display the first menu again. Additionally, at 381811 an account admin can change the name of a previously created team. On selecting define administrators at 381812, the MUI transitions the user interface to present the set of account admins, as well as admins for each team created in prepare teams at 381811; a second menu of options is presented including account administrators at 381813 and team administrators at 381814. The first menu may be relocated to an historical portion of the MUI. A user can optionally navigate to an execution menu under account administrators at 381813 to add users named by unique username to the set of account admins or to remove a previously defined account admin, for which, on completion of the add or remove, a service is invoked on the cloud platform to update the account admin information in the system database and notify the added account admin via email and/or other electronic communication mechanism.
The user may also optionally navigate to an execution menu under team administrators at 381814 for one or more teams to add users named by unique username to the set of the associated team's admins or remove previously defined team admins, for which, on completion of the add or remove, a service is invoked on the cloud platform to update the team admin information in the system database and notify the added team admin(s) via email and/or other electronic communication mechanism, where by default each account admin would be assigned as a team admin to each team to simplify team admin setup. On selecting manage teams at 381815 from the first menu, the system relocates the first menu to a historical portion and presents a list of the one or more teams being administered as a second menu (not shown). After selecting a team from the second menu, a third menu of items is presented, including define roles and permissions at 381816, add/remove members at 381817, assign roles to members at 381818, and/or authorize and notify members at 381819. On selecting define roles and permissions at 381816, a user is provided an execution menu presenting options to configure each role in the system on a module-by-module basis based on all of the modules available in the analytical computing system. A user may also change one or more of the default role names to whatever they want. On selecting add/remove members at 381817, a user is provided an execution menu presenting the collection of usernames identified as members of the team, as well as the open seats available for new members, and enabling an account administrator to enter new member usernames to add members if there are open seats and/or remove existing members, using services on the cloud platform to update the account team configuration on each request and update the open seats available. On selecting assign roles to members at 381818, a user is provided an execution menu presenting the collection of members with the ability to turn on or off each role available for the account, member by member, using services on the cloud platform to update the account team configuration on each request. On selecting authorize and notify members at 381819, a user is provided an execution menu presenting a synopsis of all members and their assigned roles with an authorize and notify option to notify new members of being added to a team, if any, and/or inform existing members of changes to their assigned role(s), if any. The notification may be invoked through a service request on the cloud platform causing an email and/or other electronic communication to be sent to each affected user, and on completion the user interface transitions back to manage teams at 381815, also shown at 381821. On selecting update account at 381822, the MUI transitions the user interface to present a second menu of items to view software releases and renewals associated with the account. On selection of releases at 381823, the account administrator is presented information displaying the status of the current release as well as available new releases. On selecting to upgrade to a new software release affecting the entire account, the user interface transitions to an execution menu for scheduling the software update at 381824, presenting an account admin a function to set the date and time for the update to occur.
On acceptance of an admin's configuration, invoking a service on the cloud to store the scheduled update in the system database, the MUI transitions back to releases at 381823, displays the scheduled date and time associated with the view of software releases, and notifies all account members via email and/or other electronic communication mechanism of the impending update, periodically notifying the account members at various configurable date/time intervals so they are forewarned of an approaching update. When the update occurs, the system locks the account from use until such time as the software and database(s) for the account have been updated per the software release. Additionally, an account admin may cancel or change the date and time of an update at any time prior to the update occurring by selecting the scheduled date and time for a software release to transition to schedule update at 381824 to either cancel or change the date. On selecting renewals at 381825, the account administrator is presented renewal status for all instrumentation associated with the account, as well as the available number of user seats for the account. In FIG. 38B is an embodiment of a user experience flow through the admin console module for a team admin whose responsibilities are to administer one or more teams associated with an account, with administrator app at 381800 running on an admin's computer and each step through the user interface numbered sequentially 1 through ‘n’ to represent the stepwise flow from begin (1) to end (‘n’) for an admin, as depicted in administrator app at 381800 being labelled “1.” as the first step. The user experience flow of FIG. 38B may be managed by a MUI as discussed herein. Thus, as an admin works through the flow of the user interface, they may easily backtrack to one or more previous steps through historical portions displaying previous menus. At 381801 an admin is requested to log in, and in this case the authentication service on the cloud platform recognizes that the user logging in is identified as a team administrator per user configuration, allowing the user to log in; if not recognized as a team administrator, the user is denied access with an error message informing the user. At 381810 the user interface automatically selects manage teams at 381815 as the first menu item because the user is identified as only a team administrator with no additional account administration permissions. The team administrator is then presented with a second menu (not shown) permitting the selection of a team. After selection of a team from the second menu, the MUI may move to the third menu, which displays options for managing the team selected in the second menu, the options for each managed team being define roles and permissions at 381816, add/remove members at 381817, assign roles to members at 381818, and/or authorize and notify members at 381819. If only one team is managed by the administrator, the MUI may skip the second menu and jump immediately to the third menu. On selecting define roles and permissions at 381816, the MUI transitions the user interface to an execution menu presenting options to configure each role in the system on a module-by-module basis based on all of the modules available in the analytical computing system as pre-configured in system content.
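For illustration only, a module-by-module role configuration of the kind just described might be represented by a structure such as the following; the role, module, and permission names are examples drawn from embodiments described elsewhere herein, and the schema itself is merely a sketch rather than a prescribed implementation:

    # Each role maps to modules, and each module to the permissions enabled
    # for that role; admins may rename roles or toggle permissions per team.
    DEFAULT_ROLES = {
        "Lab Manager": {
            "Experiment": ["Create Experiment", "Edit Layout", "View Experiment"],
            "Assay Method": ["Run Assay Method"],
            "Audit Trail": ["View Audit Trail App"],
        },
        "Operator (Base)": {
            "Assay Engine": ["Run Instrument"],
            "Maintenance": ["Run Maintenance"],
        },
    }

    def has_permission(roles, role, module, permission):
        # True if the named role grants the permission within the module.
        return permission in roles.get(role, {}).get(module, [])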
On selecting add/remove members at 381817, the MUI transitions the user interface to an execution menu presenting the collection of usernames identified as members of the team, as well as the open seats available for new members, enabling a team administrator to enter new member usernames to add members if there are open seats and/or remove existing members, using services on the cloud platform to update the account team configuration on each request and update the open seats available. On selecting assign roles to members at 381818, the MUI transitions the user interface to an execution menu presenting the collection of members with the ability to turn on or off each role available for the account, member by member, each member potentially having one or more roles with the corresponding permissions module-by-module, using services on the cloud platform to update the account team configuration on each request. On selecting authorize and notify members at 381819, the MUI transitions the user interface to an execution menu presenting a synopsis of all members and their assigned roles with an authorize and notify option to notify new members of being added to a team, if any, and/or inform existing members of changes to their assigned role(s), if any. In FIG. 38C is an embodiment of a user experience flow through logging in to use any admin or user application in the analytical computing system, beginning with login at 381801, with each step through the login user interface numbered sequentially 2 through ‘n’ to represent the stepwise flow from begin (1) to end (‘n’) for any user, as depicted in login at 381801 being labelled “1.” as the first step of login; also, as a user works through the flow of logging in, they can easily backtrack to one or more previous steps. The user experience flow of FIG. 38C may be managed by a MUI, as discussed herein. At 381801 a user is first presented an option to enter a unique username at 381802 as either an email address, a user-provided name, or a system-provided name. On entering or selecting a username, the username can be checked through a service request to the cloud platform to confirm it is a known username, and on confirmation of being a known username the interface transitions to password at 381803 for the user to provide the secure password associated with the unique username that uniquely confirms the authenticity of the user logging in, passing username and password through a service request on the cloud platform to provide authentication. On proper authentication a user is permitted to use the application they wish to use. When authentication is not possible, an error message is presented to inform the user they were not granted access. Optionally, at 381804 a user may be required to provide two-factor authentication (2FA) credentials to further secure access to the analytical computing system because an account admin has configured this security feature on for the account they administer. If 2FA is configured on for an account, a user logging in for the first time would have to perform a setup function at 381805, typically, but not limited to, scanning a barcode or typing a code provided in 2FA setup at 381805 into a separate 2FA application running on another computing device, mobile or otherwise, that synchronizes the user's use of an account with the separate 2FA application to provide another unique, independent credential to further confirm the identity of the user logging in.
Completing setup at 381805 causes a transition to enter code at 381806 for the user to use the separate 2FA application to produce a one-time unique code to enter into login, the code being passed through a service request on the cloud platform to perform the final authentication of the user logging in, on success granting access and on failure presenting an error message informing the user that access is not granted. At 381807, the user may be allowed to proceed, for example, to choose an account and/or team. In further embodiments, the admin console module 371701 can be used to create, modify, and/or delete teams and/or accounts; add, remove, and modify individual users within teams and/or accounts; and set, modify, and/or remove permissions for one or more individual users, teams, instruments, and/or accounts. Once these administrative procedures have been carried out (e.g., by one or more administrators), notifications and/or instructions can be transmitted to one or more of the users, accounts, and/or teams, for example, via electronic mail or through the cloud. In certain embodiments, users, accounts, and/or teams can receive these notifications and/or instructions through a uniquely assigned email address. Referring specifically to FIG. 38D, in certain embodiments, first portion 381821 can include a first menu of user-selectable choices, including one or more of the following choices: Prepare Teams, Define Administrators, and Manage Teams (i.e., a first set of choices). In another embodiment (not shown), first portion 381821 can include a first menu of user-selectable choices including a Define Roles and Permissions choice, an Add/Remove Members choice, an Assign Members to Roles choice, and an Authorize and Inform Members choice, i.e., a second set of choices. Certain features and/or particular embodiments of these choices are described in additional detail in conjunction with FIGS. 38A and 38B, above. One feature of the admin console module allows users to prepare and define teams. For example, regarding the first menu, in response to a selection of the Prepare Teams choice, the second menu of user-selectable choices includes one or more previously added teams. Previously defined teams can be viewed in this aspect, and additional teams can be created and/or defined. Teams can be defined, and permissions can be assigned, based on role, experiment type, user, etc. The previously added teams may have been added by the same user, e.g., an administrator, or by other users who have access to the admin console module, e.g., with appropriate permissions. In addition to displaying previously added teams, in response to a selection of the Prepare Teams choice, the second menu of user-selectable choices is adapted to receive one or more new teams to add among the one or more previously added teams. These new teams can be added, for example, by a user manually entering the information into the MUI through an input device, such as a keyboard, touchscreen, etc. Further, new teams can be added through an automated process, such as with a barcode reader, or an input file that contains a list of one or more of the new teams the user wishes to add. In one example, the team name can be preassigned. Once teams have been added, in response to the Prepare Teams choice, the user can add, modify, remove, or otherwise define certain aspects of one or more of the teams.
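A minimal sketch of the Prepare Teams behavior described above, enforcing team-name uniqueness and the account's team limit before execution, might read as follows (the structure and field names are hypothetical):

    def add_team(account, new_name):
        # Add a team name to the set ready for creation, enforcing uniqueness
        # and the maximum number of teams permitted for the account.
        if new_name in account["teams"]:
            raise ValueError("team name must be unique")
        if len(account["teams"]) >= account["max_teams"]:
            raise ValueError("account team limit reached")
        account["teams"].append(new_name)
        # On executing, service calls would create each team in the system
        # database, create a team database, and populate consumable content.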
Referring specifically to FIG. 38H, for example, in response to the Prepare Teams choice, the first portion 381821 can be adapted to display the previously entered teams in a second menu of user-selectable choices. In the embodiment provided in this figure, the user has selected Team3, as designated by the choice being represented in all capital letters, although a user's selection can be depicted in other manners, for example, any of those described herein for displaying a particular choice in a more prominent fashion as described in greater detail above. In this embodiment, the second portion 381830 is adapted to display one or more of a number of available teams defined, a number of available seats assigned, a total number of available teams, and a total number of available seats as additional information. In the embodiment shown in this figure, this may be displayed as Team Availability Information 381831. Although particular numbers of teams defined, total teams, seats assigned, and total seats are depicted in this embodiment, other examples, e.g., totals, are contemplated as well. As users add, modify, and/or remove teams and seats, the numbers provided in the Team Availability Information 381831 will vary accordingly and will be updated as such. Further, certain users, e.g., an administrator, can override and/or change the totals defined. The example in FIG. 38H shows first menu 381829 having been moved from the first portion 381821 to the second portion 381830 as a previous menu. In this embodiment, the first menu 381829 illustrates the first set of choices, with the “Prepare” choice highlighted as past-selected. In response to a user's selection of the “Define” choice within the first menu (which, in this example, equates to the Define Administrators item from the first menu when the first menu was displayed in the first portion 381821), a second menu of user-selectable choices of usernames and/or e-mail addresses for defining administrators may be displayed in the first portion 381821. Further, the usernames and/or email addresses displayed are adapted to be deleted from the second menu of user-selectable choices in response to a user's deletion input. Moreover, in response to the Define Administrators choice, the second menu of user-selectable choices is adapted to receive new usernames and/or email addresses to add among the previously added usernames and/or email addresses. These aspects, e.g., the adding, deleting, user's deletion inputs, etc. for usernames and/or email addresses, are described in greater detail below, e.g., in conjunction with the Add/Remove Members choice. In an embodiment, in response to a selection of the Define Administrators choice, a menu of the one or more previously added teams, e.g., Team1, Team2, Team3, may be displayed in either the first portion 381821 or the second portion 381830. In this example, the previously added usernames and/or email addresses can be associated with a particular team among the one or more previously added teams from that menu of choices. Further, in response to the Define Administrators choice, the first portion is adapted to display an execution menu having an Authorize and Email choice. With this feature, authorizations and/or team-assignment information is adapted to be transmitted to the previously added email addresses in response to a selection of the Authorize and Email Install Instructions choice.
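As a non-limiting sketch of this notification step, transmitting authorization and team-assignment information to the previously added email addresses might resemble the following; smtplib is used here only as a familiar stand-in for whatever electronic communication mechanism a given embodiment employs, and the sender address and host are hypothetical:

    import smtplib
    from email.message import EmailMessage

    def notify_admins(addresses, team, smtp_host="localhost"):
        # Email authorization/team-assignment information to each added admin.
        with smtplib.SMTP(smtp_host) as smtp:
            for addr in addresses:
                msg = EmailMessage()
                msg["From"] = "noreply@example.com"   # hypothetical sender
                msg["To"] = addr
                msg["Subject"] = "You have been made an administrator of " + team
                msg.set_content("Install instructions and login details follow.")
                smtp.send_message(msg)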
This Authorize and Email choice is described in greater detail below in conjunction with FIG. 38G, e.g., as applied to the Authorization Summary 381828 described below. Just as the Authorization Summary 381828 relates to providing authorization, instructions, and/or notification vis-à-vis users' defined roles, the Authorize and Email choice described in conjunction with the Define Administrators features relates to authorization, instructions, and/or notification of teams and administrator functions. By utilizing the Define Administrators feature, users can establish and/or create teams based on particular users, accounts, etc., so that groups of individuals can work collaboratively and/or cohesively as a team. In response to a selection of, for example, a particular team from the second menu and a specific action from a third menu, the first portion 381821 can be adapted to display two or more subsections of user-selectable choices, e.g., from successive hierarchical menus. Regarding the multiple-subsection embodiments, as illustrated in FIG. 38E, three subsections can be displayed in the first portion 381821, including first subsection 381824, second subsection 381825, and third subsection 381826, respectively. In certain embodiments, the user-selectable choices available in these subsections will depend upon the selection from the first menu, e.g., the original three choices discussed previously in connection with FIG. 38D. In other embodiments, the choices are static, so that the user can be presented with the same choices no matter which choice was previously selected. In the example shown in FIG. 38E, the choices available are successive hierarchical levels below the first menu, in which manage teams may have been selected, the second menu, in which a particular team was selected, and a third menu, in which define roles/permissions was selected. Although three subsections are depicted in this example, fewer or greater numbers of subsections of user-selectable choices can be adapted to be displayed as well. Further, their display configuration is not necessarily limited to the horizontal arrangement illustrated in this figure, as other configurations, such as those provided by way of example herein, e.g., vertical, concentric, etc., are contemplated as well. In response to the user-selectable choices available in the multiple subsections, the user-selectable choices displayed in one or more of the other subsections can change depending on the previous selection. Specifically, one feature of the admin console is to define roles of individual users and/or teams, and to assign permissions to those one or more users and/or teams. Teams can be formed and permissions can be assigned based on role, experiment type, user, etc. These actions can be performed through the Define Roles and Permissions menu. For example, in response to a selection of the Define Roles and Permissions choice, the first subsection 381824 of user-selectable choices can include one or more of the following choices: Lab Manager, Designer, Associate, Operator (Base), and Maintenance Tech (Base). In this particular embodiment, if the user selects one or more of the Lab Manager, Designer, or Associate choices, the second subsection 381825 of user-selectable choices can include one or more of the following choices: Analysis Method, Assay Method, Experiment, Assay Engine, Audit Trail, Maintenance, Reader, and System.
In contrast, if the user selects one or more of the Operator (Base) and Maintenance Tech (Base) choices, the second subsection 381825 of user-selectable choices can include one or more of the following choices: Assay Engine, Audit Trail, Maintenance, Reader, and System. User-selectable options displayed in the third, fourth, etc. subsections can further depend on choices previously made from one or more of the other subsections. For example, in response to a selection of an Analysis Method choice from the second subsection 381825, the third subsection 381826 of user-selectable choices can include a Run Analysis Method choice. Similarly, in response to a selection of the Assay Method choice from the second subsection 381825, the third subsection 381826 of user-selectable choices can include a Run Assay Method choice. Still further, in other examples, the third subsection 381826 can include multiple user-selectable choices. By way of example, in response to a selection of the Experiment choice from the second subsection 381825, the third subsection 381826 can include the following choices: Create Experiment, Edit Layout, Exclude/Include Data Points, Export Data Table, Export Sample Result Table, and View Experiment. Additional exemplary, non-limiting embodiments are presented below. In response to a selection of the Assay Engine choice from the second subsection 381825, the third subsection 381826 can include the following choices: Export Data Table; Modify Instrument Settings; Override Mesoscale Diagnostics Kit Lot Assignment; Retry Inventory Validation; Run Instrument; and Show ECL for Unverified Run. In response to a selection of the Audit Trail choice from the second subsection 381825, the third subsection 381826 can include a View Audit Trail App choice. In response to a selection of the Maintenance choice from the second subsection 381825, the third subsection 381826 can include the following choices: Run Maintenance; Run Maintenance Method; and View Maintenance Records. In response to a selection of the Reader choice from the second subsection 381825, the third subsection 381826 can include the following choices: Manage Database; Modify Instrument Settings; and Run Instrument. In response to a selection of the System choice from the second subsection 381825, the third subsection 381826 can include the following choices: Modify System Settings; and Unlock App Locked by Any User. The foregoing examples are non-limiting, as other user-selectable choices can be made available for display as well through the multiple subsections of the first portion. In some embodiments, one or more of the subsections and/or user-selectable choices within the one or more subsections can be user-customizable, e.g., by an administrator, team leader and/or member, user with permission, etc. Another feature of the admin console module is to add and/or remove members, such as from a team or other grouping of one or more users and/or accounts. Teams can be formed, and permissions can be assigned, based on role, experiment type, user, etc. These actions can be performed through the Add/Remove Members choice. For example, in response to a selection of the Add/Remove Members choice, a first or second portion of the MUI (FIG. 38H, 381830) displays a menu including previously added usernames and/or email addresses. These previously added usernames and/or email addresses could have been added by the same user or by other users who have access to the admin console module.
In an embodiment, the usernames and/or email addresses can be modified or deleted in response to a user's deletion input, assuming the user accessing them has the appropriate permissions, either by overwriting the previously entered information or by making a selection, e.g., clicking a portion of the MUI display 206, such as an “x”, to remove the username and/or email address entirely. In other embodiments, any user logged into the admin console module can modify or delete the usernames and/or email addresses regardless of permissions. The previously added usernames and/or email addresses, and the ones that have been modified, can then later be associated with particular teams, accounts, instruments, etc. through the admin console module. Turning to the embodiment depicted in FIG. 38F, in response to a user's deletion input (as described above), the first portion 381821 is adapted to display a confirmation choice 381827 before removing one or more of the users and/or teams. A similar confirmation choice is described below in conjunction with the reader module (e.g., FIG. 43F) for issuing a stop instrument command. In the context of the admin console module, a similar confirmation process can be employed with regard to deleting one or more of the users and/or teams. The confirmation choice (FIG. 38F, 381827) can be adapted to be displayed to provide one or more users with the ability to confirm whether they want to delete the current user from a particular team, account, roles, etc. When this confirmation choice 381827 is displayed, the user can be presented with a choice as to whether he wishes to delete the selected user, the user in this example being represented by the [email protected] email address. In this example, the user can either select “Cancel” from the menu, thereby terminating the decision to remove this member, or select “OK,” thereby removing the member. These options are merely exemplary, as other choices and/or command prompts are contemplated as well. In addition to deleting and modifying members, in response to the Add/Remove Members choice at a third menu, the first portion 381821 may be configured to display an execution menu for receiving new usernames and/or email addresses to add among the previously added usernames and/or email addresses. These new members can be added, for example, by a user manually entering the information into the MUI display 206 through an input device, such as a keyboard, touchscreen, etc. Further, new members can be added through an automated process, such as with a barcode reader, or an input file that contains a list of one or more of the new members the user wishes to add. Another feature of the admin console module is to assign members to roles, e.g., based on title, responsibility, application performed, etc. These actions can be performed through the Assign Members to Roles choice at a third menu. For example, in response to a selection of this choice, an execution menu of user-selectable items may include previously added usernames and/or email addresses displayed in a first subsection 381824. These previously added usernames and/or email addresses can, for example, be displayed in a similar manner to those described in conjunction with the Add/Remove Members choice, above. In response to the Assign Members to Roles choice, the second subsection 381825 can include one or more of the following role-assignment choices: Lab Manager, Designer, Associate, Operator (Base), and Maintenance Tech (Base).
These are merely exemplary, and additional and/or hybrid roles can be included in addition to or in place of these particular roles. In one embodiment, in response to selecting the Assign Members to Roles choice, the first subsection, FIG. 38E, 381824, can include the previously entered usernames and/or email addresses, and the second subsection, FIG. 38E, 381825, can include the role-assignment choices, such as the five provided above. In this embodiment, a one-to-one correspondence can be displayed between the username and/or email address and its respective role assignments. In this regard, selections from the first and second subsections (FIG. 38E, 381824 and 381825, respectively) are adapted to create an association among one or more of the previously added usernames and/or email addresses with one or more of the role-assignment choices. For example, if the user selects a first username, the second subsection (FIG. 38E, 381825) can display all the roles to which that particular user is currently assigned. Additionally, the second subsection (FIG. 38E, 381825) can display additional roles to which that particular user is not currently assigned. This is described in greater detail below in conjunction with FIG. 38G. Whether the user is designated to a particular role can, for example, be displayed through an indicator associated with each role to indicate whether the user is assigned (or not assigned) to that particular role. The indicator can include, for example, a checkbox, although other indicators are contemplated as well, such as text-based indicators, e.g., an “x,” “1,” “0,” etc. In the checkbox embodiment, a box can be displayed as unchecked if that user is not currently assigned to that particular role, and the box can be checked, or otherwise marked in some fashion, if that user is currently assigned to that particular role. The marking or checking can occur, for example, by a user's input, e.g., mouse click, touchscreen, etc. In this example, the user accessing the admin console module can select and deselect one or more role assignments, by adding, removing, etc. roles to be associated with the given user, through interaction with the MUI display 206. Notably, the marking or checking selection process described with regard to this particular aspect of the admin console module can be applied to other selections from within this module, or any other module and/or user-selectable choices described herein. Another feature of the admin console module is to authorize user-specific roles and inform those users of the roles to which they have been assigned. These actions can be performed through the Authorize and Inform choice. As described in greater detail in conjunction with FIG. 38E, an association among one or more of the users, e.g., by way of their usernames and/or email addresses, can be created with one or more of the role-assignment choices. In one embodiment, the association of one or more of these users to their one or more roles can be displayed in response to a selection of the Authorize and Inform choice. Turning to the embodiment depicted in FIG. 38G, an Authorization Summary 381828 can be displayed, for example, in the first portion of the MUI display 206, in response to the Authorize and Inform choice, such that a table is created, although other structures and/or formats are contemplated as well, that summarizes those assignments.
In this embodiment, two columns are created, e.g., a User column and a Roles column, although additional columns are contemplated as well, that provide a one-to-one correspondence of user to assigned role, although other correspondences are contemplated as well. The rows depicted in this example each represent an individual user, although teams, accounts, etc. could be included as well. Additionally, the Authorization Summary 381828 is adapted to display an Authorize and Email Install Instructions choice, located at the lower portion of the Authorization Summary 381828, although it is not limited to this embodiment. In response to a user's selection of the Authorize and Email Install Instructions choice, the role-assignment information and/or instructions are adapted to be transmitted to the previously added email addresses, or alternatively through the cloud. Thus, by selecting the Authorize and Email Install Instructions choice, the user can inform one or more of the users of the role or roles for which they have been selected, and/or provide those users with information and instructions as they relate to their assigned roles. Accordingly, an Admin Console MUI provides an operator with a wide array of access control abilities, e.g., by using teams, individual user permissions, role assignments, specific permissions, and other functionality. The Admin Console is not specific to a laboratory setting and may be applied for adjusting user permissions in further settings such as manufacturing settings, parental controls over computer and media use, and others. In a particular embodiment, in response to a user's selection of the advanced context menu selector 381822 (FIG. 38D), the advanced context menu 381832 (FIG. 39I) can be outputted to the MUI display 206. The advanced context menu 381832 may include various commands and/or user-selectable choices. For example, with specific reference to FIG. 38I, this menu can include a Resend Install Instructions command 381833. This command, when selected, will resend installation instructions, e.g., to install the application that runs one or more of the modules as described herein, to one or more selected users, including the user who selected this command. Those instructions can be transmitted via electronic mail, e.g., to the users' email addresses, or over the cloud. The Import command 381834, when selected, allows users to import names and/or email addresses of users, account information, team information, etc. without the need to manually input that information. Further, the Change Team Name command 381835 and the Change Account Name command can be used to allow the user to change the team name or account name, respectively, for one or more users, accounts, and/or teams. Finally, the Change Password command 381837 allows the user to change the password for her account. In other embodiments, depending on permissions, this command will allow a user, such as an administrator, to change the password of one or more additional users as well. In FIG. 39A is an embodiment of a user experience flow through an admin audit trail module beginning with administrator app at 391900 running on an admin's computer, with each step through the user interface numbered sequentially 1 through ‘n’ to represent the stepwise flow from begin (1) to end (‘n’) for an admin, as depicted in administrator app at 391900 being labelled “1.” as the first step; also, as an admin works through the flow of the user interface, they can easily backtrack to one or more previous steps.
The user experience flow of FIG. 39A may employ a MUI for interface purposes as discussed herein. At 391901 an admin can select a module to access. In this illustration, an admin audit trail module is selected, and the MUI transitions the application to start at 391902, providing an admin a first menu including options to view all admin events at 391903, view account admin events at 391904, or view team-specific events at 391905. On selection of an option, the MUI transitions to the selected second menu at 391903, 391904, or 391905. At 391903 only an account admin is presented all events captured across the entire account, including, but not limited to, account and team-specific actions taken by every admin, with each event including, but not limited to, the username of the originator, date and timestamp, the source, and information pertaining to the event as returned from a service request made via the cloud platform. At 391904 only an account admin is presented all account admin events captured across the entire account, including only overall account admin events for account actions taken by every admin, with each event including, but not limited to, the username of the originator, date and timestamp, the source, and information pertaining to the event as returned from a service request made via the cloud platform. At 391905 either an account admin or a team admin is presented all team-specific events captured for each team, one team at a time, for which the admin has administrative rights, including team-specific administrative actions taken by every team-specific admin and account admin, with each event including, but not limited to, the username of the originator, date and timestamp, the source, and information pertaining to the event as returned from a service request made via the cloud platform, wherein an admin can easily transition to view other team-specific events without leaving the view at 391905. The menus at 391903, 391904, and 391905 each enable a user to sort the view in forward or reverse order by username, date and timestamp, source, or information about the event. The menus at 391903, 391904, and 391905 each allow an admin to access an execution menu permitting export of the entire collection of events provided at 391903, 391904, or 391905 to a file format easily importable to other computer applications like Excel, Word, and/or any other computer application, such as CSV, tab-delimited text, JSON, XML, and/or any other format. At 391903, 391904, or 391905 an admin may also use an execution menu to export the entire collection of events provided at 391903, 391904, or 391905 to a computer operating system mechanism used for copying and pasting content from one computer application to another, often referred to as a clipboard. Based on the execution menu selection, the appropriate event export function (e.g., 391906, 391907, 391908) can be executed for exporting the event or events. At interfaces 5 (e.g., 5a, 5b, 5c) the information and/or data related to the events specified at interfaces 4 (e.g., 4a, 4b, 4c) can be exported. For example, at 391906, all the events from interface 4a can be exported. Similarly, at 391907 and 391908, account events and team-specific events can be exported, respectively. This exportation can be provided in a user-readable format or in a file format easily importable to other computer applications such as, without limitation, Excel, Word, and/or any other computer application, e.g., CSV, tab-delimited text, JSON, XML, and/or any other format. Further examples of the audit trail feature are disclosed with respect to FIGS. 39B-39E.
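The export functions at 391906, 391907, and 391908 might, purely for illustration, serialize the displayed events as follows; the field names mirror the event attributes described above, and the function is a sketch rather than a prescribed implementation:

    import csv
    import json

    def export_events(events, path, fmt="csv"):
        # Write audit events (username, timestamp, source, info) to a file
        # easily importable by other applications such as Excel or Word.
        if fmt == "csv":
            with open(path, "w", newline="") as f:
                writer = csv.DictWriter(
                    f, fieldnames=["username", "timestamp", "source", "info"])
                writer.writeheader()
                writer.writerows(events)
        elif fmt == "json":
            with open(path, "w") as f:
                json.dump(events, f, indent=2)
        else:
            raise ValueError("unsupported format")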
The audit trail module can be adapted to provide a summary of information as it relates to one or more users' and/or teams' interactions with the UI display specifically, or more generally based on actions that users have performed while logged into their accounts. For example, the audit trail module can include the usernames and/or email addresses of the users that have logged in to an account, the time of each login, the IP address of the computing device from which the users accessed their account, which instruments the users used while logged in, etc. In one embodiment, as shown in FIG. 39B, the audit trail module can be accessed through the advanced context selector 381822 as part of the advanced context menu 381832. In this example, the advanced context menu 381832 is adapted to be displayed in response to a selection of an advanced context selector 381822 when outputted to the UI display 381823. When displayed, advanced context menu 381832 can include a plurality of commands and/or user-selectable choices arranged in a menu that may take the form of various configurations, e.g., vertical, horizontal, etc. In addition, one or more menu dividers 391910 can be employed to group and/or divide particular commands and/or choices. For example, the menu divider 391911 can be used to group commands on one portion of the advanced context menu 381832 and user-selectable choices on the other. In other embodiments, one or more of these dividers can be used to group and/or divide according to other attributes or traits of the menu items. In some embodiments, the location, grouping, and/or division of the menu items can be user-customizable. Advanced context selector 381822 is described in greater detail above; thus, details are omitted here aside from examples and/or embodiments provided below as they relate to the audit trail module feature. In a particular embodiment, in response to a user's selection of the advanced context selector 381822, the advanced context menu 381832 can be outputted to the MUI display 206, e.g., by the menu manager 1054. The advanced context menu 381832 can include various commands and/or user-selectable choices. For example, with specific reference to FIG. 39C, this menu can include an export command 391911, or a command to copy data to a clipboard (not shown). It further can include user-selectable choices including an admin console choice 391912, which, when selected, can allow a user to access the admin console module, as described in greater detail above in conjunction with that module, or an admin audit trail choice 391913, which, when selected, will allow a user to access the audit trail module described herein. Other commands and/or user-selectable choices are available to users as well in the advanced context menu 381832, for example, and with reference to FIG. 39C, Terms of Use 391914, Privacy Policy 391915, and a command to log the users out of their accounts, e.g., Log Out command 391916. Although the advanced context selector 381822 is depicted in FIG. 39B near the top left portion of MUI display 206, with the advanced context menu 381832 directly below it, other configurations are contemplated as well. With specific reference to FIGS. 39C and 39D, in response to a selection of the Admin Audit Trail choice 391913, the first portion 391920 of the MUI display 206 is adapted to display audit information 391917, divided into one or more fields 391918. The audit information 391917 can be arranged as a table, or in the alternative, as a list, or any other suitable arrangement of data.
The embodiment illustrated by FIG. 39D depicts the displayed audit information 391917 as including fields 391918 as the columns of a table, with each entry provided vertically in rows. Fields 391918 of audit information 391917 can include, for example, one or more of a timestamp, a username and/or email address, module, record ID, type, message, category, code, and IP address of a user. The timestamp can include when (e.g., date and/or time) the audit was generated. In one example, this field can be presented in MM/dd/yyyy HH:mm:ss format, although other formats are contemplated as well, including those that convey more or less information than this particular format (e.g., just recording the date, but not the time). The timestamp can also record each instance a particular user logged into her account, how long she was logged in for, and when she logged out. This information can be tracked by either the username, email address, and/or any other user, team, and/or account indicator. For example, the Username field will record the username of the user that was logged in when the event was generated. The Module field can include the module that generated the audit event, e.g., Reader, Experiment, etc. In this manner, this field can be populated with one or more of the modules that were utilized during that particular log-in instance. For example, if a user utilized the Assay Method and Experiment modules, this field would indicate the same. In some embodiments, multiple modules can be displayed on a single row for that particular log-in instance, and in other embodiments, audit information 391917 can be arranged across multiple rows, one for each module that particular user utilized while logged in. The Record ID field may be included to show the ID of the record associated with the audit event. By way of example, if an experiment relates to the use of plates, the Record ID can include the plate barcode. It further can include information as it relates to what experiments, assays, and/or functions a particular user performed while logged in, more generally. For example, it can include the file name, either default or user-customizable, associated with a particular experiment. In other examples, it can include information relating to actions performed while analyzing an assay plate, such as the plate's loading, reading, ejection, etc. The Type field can include the type of audit event including, for example, Info, Action, Warning, or Error. This field can relate to other information summarized in the audit information 391917, for example, whether the user received a warning and/or an error while logged in. Or it can include additional information related to the users' actions and/or interactions with the application, equipment, and/or experiments. Further, it can convey that an analysis was performed manually or automatically, etc. The Message field can include one or more of a static, dynamic, and/or user-customizable message that relates to the audit event. A separate field is provided for the category, e.g., the Category field, of the audit event, e.g., record, system, equipment, etc. In one example, the Category field provides additional characterizations of the messages provided in the Message field. Further, the IP Address field can provide the IP address of the computing device, e.g., laptop, desktop, tablet, etc., from which the users accessed their account, which instruments the users used while logged in, etc.
The Code field can be related to the IP Address in some embodiments, or unrelated in others, and provides a unique numerical value, such as an integer, for identifying the event. In some embodiments, this identifier can be predetermined. In other examples, it can be user-defined, such as by an administrator. In the latter example, the Code field can be customized to accommodate one or more users' specific needs. Additional fields such as permissions, team identifiers, etc. are contemplated as well. Thus, the audit information 391917 can be arranged in such a manner that associates one or more of these fields to provide a trail of information that summarizes the tasks, equipment, and/or instruments associated with one or more users' experiences while logged into their accounts. In several embodiments, the amount of information displayed can vary depending on the user's preferences. For example, a user can filter the audit information 391917 such that the information is limited to one or more users, accounts, and/or teams, e.g., teams previously added by utilizing the Admin Console module as described above. An example of this is depicted in the embodiment shown in FIG. 39D. An audit menu 391919 can be outputted to the MUI display 206, shown here by way of example in the second portion 391921 of the MUI display 206, that can be used to filter this information. In this embodiment, a user has selected to filter the audit information 391917 by team, which is illustrated by depicting the “Team1” selection in all capital letters in this particular embodiment, although a user's selection can be depicted in other manners, for example, any of those described throughout for displaying a particular choice in a more prominent fashion as described in greater detail herein. In this example, only User1 and User4 are members of this particular team (i.e., Team1), and, thus, the audit information 391917 has been filtered by that team. In other embodiments, all audit information can be made available for display, or a user can narrow the audit information to be displayed by one or more users, accounts, teams, and/or instruments. In one example, the audit menu 391919 can be outputted to the MUI display 206 in response to a command to display an audit menu 391919 issued by selecting the Admin Audit Trail choice. In addition to being displayed by the MUI display 206, the audit information 391917 can be copied and/or exported. For example, in response to an export command 391911 (FIG. 39C), the audit information 391917 can be outputted to an export file, such as a Microsoft Excel® file or other data processing and/or spreadsheet software format. Alternatively, it can be provided in a comma-separated file, e.g., a CSV file. In response to the export command 391911, the requested file containing audit information 391917 can be exported to a user, either by the user selecting and/or viewing the exported file. Alternatively, it can be exported by emailing it to the user and/or transmitting it over the cloud. Further, in response to the copy to clipboard command (e.g., as depicted within the advanced context menu 381832 as shown in FIG. 39B), all or a subset of the data including the audit information 391917 can be temporarily stored to a buffer, such that the user can later access and/or view it (e.g., using a cut-and-paste command).
In this example, the user is not confined to the formats in which the data are presented in the exported file, providing users with the ability to customize the data format and/or utilize one or more applications of their choice to access, modify, and delete those data. In FIG. 40 is an embodiment of software modules in an analytical user app 402000 forming the primary user interface experience for analytical work, typically, but not limited to, work using data generated through the use of instrumentation, with each module using services provided by cloud platform 402006 to create, read, update, and/or delete any and all data relevant to each module's processing, as well as any other services needed for each module's processing, wherein experiment module 402001 would be the active module by default when the analytical user app 402000 starts. As discussed above, the analytical user app 402000 may employ a MUI supplied by a methodical user interface control system 1102 for interface purposes. The experiment module 402001, assay method module 402002, analysis method module 402003, audit trail module 402004, and the system functions 402005 may all employ a MUI for user interface purposes. An analysis method module 402003 provides a construct referred to as an analysis method to be used in post-read analysis of signal collected from a test plate by a plate reader, wherein an analysis method is used to configure an existing process and/or create a new process by which data collected from tested samples using instrumentation and/or consumables can be transformed through an algorithm configured by associated parameters into a quantitative or qualitative determination. Assay method module 402002 is used to configure an existing process and/or create a new process by which samples will be processed using consumables and/or instrumentation to generate data from the samples under test so they may be appropriately analyzed using a prescribed analysis method. Experiment module 402001 is used to design a test of one or more samples using one or more selected assay method(s) to collect the data from the samples through the use of instrumentation and/or consumables that may be reviewed and analyzed to ensure the experiment ran properly, as well as to learn from the data collected from the tested samples. Audit trail module 402004 is used to view all events generated through use of the analytical computing system by users from the same team who are creating, modifying, and/or deleting electronic records associated with the analytical computing system. The collection of system functions 402005 provides typical utilities in support of use of the system such as, but not limited to, logging off, viewing help information, viewing the user guide, viewing legal notices and/or documents, changing software configuration, changing user password, and/or other utilities. The collection of system functions 402005 may be provided as a separate MUI module and/or a series of software protocols that operate alongside the other discussed MUI modules. A user can log into the analytical user app 402000 through system functions 402005 using services provided by cloud platform 402006. If, on authentication of a user by the login service on cloud platform 402006, the service returns that a user has more than one account and/or team, a user will be required to select the default account and/or team, but if a user does not belong to more than one account and/or team, the service on the cloud platform 402006 would auto-assign a user to the sole account and team for that user.
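The account/team auto-assignment rule described above reduces to a small branch. A sketch, assuming the login service returns a list of (account, team) pairs (the return shape is an assumption):

```python
def resolve_account_and_team(pairs):
    # pairs: list of (account, team) tuples returned by the login service.
    # Exactly one pair: auto-assign it; otherwise the caller must prompt
    # the user to select a default account and/or team.
    if len(pairs) == 1:
        return pairs[0], False   # auto-assigned, no selection needed
    return None, True            # multiple memberships: user must choose

# assignment, needs_selection = resolve_account_and_team(
#     [("AccountA", "Team1"), ("AccountA", "Team2")])  # -> (None, True)
```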
On completing login, the user lands at the start of the experiment module 402001 and begins using the analytical user app 402000 as they need. In the alternative, the analytical user app 402000 can assist in performing other experiments in addition to or in place of the assay and/or plate-based experiments described herein. In FIG. 41 is an embodiment of a user experience flow through an analysis method module beginning with the analytical user app at 412100 running on a user's computer, with each step through a user interface numbered sequentially 1 through 'n' to represent the stepwise flow from begin (1) to end ('n') for a user, as depicted in the analytical user app at 412100 being labelled "1." as the first step. The experience flow of FIG. 41 may be provided via a MUI as discussed herein. At some point in the flow a user could have alternate flows based on a decision they are to make, as denoted by a lowercase letter after a numbered step as depicted at 412112, 412113, and 412114, where a user chooses between configuring calibration curve at 412112, background correction at 412113, and/or limits of detection at 412114. Also, as a user works through the flow of a user interface, they could easily backtrack to one or more previous steps through the use of a historical portion of the MUI. At 412101 a user may select a user interface mechanism presenting one or more options including, but not limited to, module-specific functions, modules to select, and/or system functions, being either a horizontal menu and/or toolbar, a vertical menu and/or toolbar, a scroll-wheel menu and/or toolbar, a dropdown menu and/or toolbar, a keyboard function, a voice-activated command, and/or any other like user interface mechanism to choose an option, in this case choosing the analysis method module and transitioning the application to start at 412102. At 412103 a user is presented one option, to design an analysis method, and a user on choosing to do so transitions to design at 412104. At 412104 a first menu is presented, allowing the user to select between menu items analysis method 412105, calibration curve 412112, background correction 412113, limits of detection 412114, and confirm 412115. Upon selecting analysis method at 412105 a second menu is presented including options to select from recent analysis methods at 412106 or available analysis methods at 412107. A default may be recent analysis methods at 412106. The MUI may auto-transition to all analysis methods at 412109 if recent analysis methods at 412106 is empty. At 412106, on selection of recent analysis methods, a user is presented a configurable amount, for example twenty-five, of the most recently used analysis methods at 412108 as returned from a service request made via the cloud platform. Alternatively, selection of available at 412107 presents to a user a new execution menu of all analysis methods at 412109 as returned from a service request made via the cloud platform, with the analysis methods organized by system-provided default analysis methods and user-provided analysis methods, enabling a user to browse the various analysis methods and to select the analysis method of choice. On selection of an analysis method at the execution menus 412108 or 412109 the user interface returns to the first menu at 412104, presenting the options of configuring calibration curve at 412112, background correction at 412113, and limits of detection at 412114.
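The recent-versus-all default described here (412106 falling through to 412109) recurs throughout the flows below. A minimal sketch of that rule, with hypothetical names:

```python
def resolve_method_menu(recent, available):
    # Default to the "recent" list; auto-transition to the full list
    # when the recent list comes back empty from the cloud service.
    if recent:
        return "recent", recent[:25]   # configurable cap, e.g., twenty-five
    return "all", available
```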
In embodiments, calibration curve at 412112 is the default, and the MUI is configured to walk the user through the subsequent menus 412112, 412113, and 412114 as the user executes a selection in each. On selection of calibration curve at 412112 a user is given options on the view to select an algorithm from the available set, being system-provided algorithms 4PL, 5PL, Linear, Log-Log, Exponential, or any other algorithm potentially provided by the system, as well as any user-provided algorithms. The 4PL algorithm may be calculated as y = b1 + (b2 − b1)/(1 + (x/b3)^b4), where y is the response signal from a plate reader, x is the concentration, b1 is the maximum response plateau or calculated top, b2 is the minimum response plateau or calculated bottom, b3 is the concentration at which 50% of the maximal response is observed or calculated mid-point, and b4 is the slope or shape parameter or calculated Hill Slope. The 5PL algorithm may be calculated as y = b1 + (b2 − b1)/((1 + (x/b3)^b4)^b5), where y is the response signal from a plate reader, x is the concentration, b1 is the maximum response plateau or calculated top, b2 is the minimum response plateau or calculated bottom, b3 is the concentration at which 50% of the maximal response is observed or calculated mid-point, b4 is the slope or shape parameter or calculated Hill Slope, and b5 is the asymmetry factor or calculated asymmetry factor. The Linear algorithm may be calculated as y = mx + b, where y is the response signal from a plate reader, x is the concentration, m is the slope or calculated Hill Slope, and b is the y-axis intercept or calculated y intercept. The Log-Log algorithm may be calculated as log10(y) = m(log10(x)) + b, where y is the response signal from a plate reader, x is the concentration, m is the slope or calculated Hill Slope, and b is the y-axis intercept or calculated y intercept. The Exponential algorithm may be calculated as y = a·e^(bx), where y is the response signal from a plate reader, x is the concentration, a is the plate reader response signal at minimum response or calculated y intercept, and b is a constant describing the magnitude of increase or decrease or Hill Slope; with selection of an algorithm making it the default for the analysis method being configured. On selection of an algorithm in calibration curve at 412112, a user may then define a weighting factor for the chosen algorithm to be used in calculations to compensate for the differences in magnitude of the residuals at low and high analyte concentrations, with options 1/y^2, 1/y, or none; then a user may choose the input signal, with options to use from the calibrators the raw input signal or the background-corrected signal; and finally a user defines whether to calculate replicates individually or as an average of the replicates. At 412113 a user is provided a view for selection of background correction configuration that provides options for a user each for calibrators, controls, and unknowns (i.e., samples under test), where a user may choose to do no signal correction, or, in calculating a corrected signal, the software would adjust the raw signal from a plate reader by subtracting from it, or dividing it by, the background count of the plate reader. At 412114 the selection of limits of detection provides options for a user in determining the limits of detection using the standard deviation of the high and low calibrators or as a percentage of the ECL counts above or below the high and low calibrators.
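The five system-provided algorithms, as reconstructed above, are straightforward to express directly. The following sketch implements each equation with the parameter names used in the text; curve fitting itself, i.e., estimating b1 through b5 from calibrator data, is out of scope here:

```python
import math

def four_pl(x, b1, b2, b3, b4):
    # 4PL: y = b1 + (b2 - b1) / (1 + (x / b3)**b4)
    return b1 + (b2 - b1) / (1.0 + (x / b3) ** b4)

def five_pl(x, b1, b2, b3, b4, b5):
    # 5PL: y = b1 + (b2 - b1) / ((1 + (x / b3)**b4)**b5)
    return b1 + (b2 - b1) / ((1.0 + (x / b3) ** b4) ** b5)

def linear(x, m, b):
    # Linear: y = m*x + b
    return m * x + b

def log_log(x, m, b):
    # Log-Log: log10(y) = m*log10(x) + b, solved here for y
    return 10.0 ** (m * math.log10(x) + b)

def exponential(x, a, b):
    # Exponential: y = a * e**(b*x)
    return a * math.exp(b * x)

def weighting_factor(y, scheme="none"):
    # Weighting options offered with the chosen algorithm:
    # "1/y^2", "1/y", or "none".
    if scheme == "1/y^2":
        return 1.0 / (y * y)
    if scheme == "1/y":
        return 1.0 / y
    return 1.0
```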
At 412115 selection of confirm by a user presents a user the option to use a system-provided name for the new analysis method or provide their own name, and to accept the new analysis method for inclusion in the set of user-provided analysis methods, with any changes to the analysis method at 412112, at 412113, and/or at 412114 resulting in a service request made via the cloud platform creating a new analysis method as defined for a user's current team database and a user transitioning at 412116 back to start at 412102. A user may also, at confirm at 412115 or at any other step along the flow, reject their changes to the analysis method and return to start at 412102, not creating a new analysis method. Although these embodiments describe plate-based tests and/or experiments, the methods described herein can be applied in the alternative to the review of other experiments and tests. In FIG. 42A is an embodiment of a user experience flow through an assay method module focused on designing an assay method beginning with the bioanalytical user app at 922200 running on a user's computer, with each step through a user interface numbered sequentially 1 through 'n' to represent the stepwise flow from begin (1) to end ('n') for a user, as depicted in the bioanalytical user app at 922200 being labelled "1." as the first step. The user experience flow of FIG. 42A may be implemented via a MUI as discussed herein. At 922201 a user may select a user interface mechanism presenting one or more options including but not limited to module-specific functions, modules to select, and/or system functions, being either a horizontal menu and/or toolbar, a vertical menu and/or toolbar, a dropdown menu and/or toolbar, a keyboard function, a voice-activated command, and/or any other like user interface mechanism to choose an option, choosing the assay method module. On selection of assay method at 922201 the application transitions at 922202 to the start of the assay method module presenting at 922203 and at 922204 an option to design an assay method or review an assay method. If the user opts for design assay at 922203, the flow continues as discussed below. FIG. 42B shows the process flow after 922204 is selected. On selection of design at 922203 a user may be presented a next menu including manual assay method at 922206 and automated assay method at 922205. Should the user select manual assay method at 922206, they are presented options to select from recent assay methods at 922207 or available assay methods at 922210. The default is recent assay methods at 922207, and the MUI may auto-transition to all assay methods at 922211 if the recent assay methods, as returned from a service request made via the cloud platform, are empty. At 922207, on selection of recent assay methods, a user is presented a configurable amount, for example twenty-five, of the most recently used assay methods at 922208 as returned from a service request made via the cloud platform. Alternatively, selection of available assay methods at 922210 presents to a user all assay methods at 922211 as returned from a service request made via the cloud platform.
The assay methods are organized by source, such as, but not limited to, an overall catalog of available assay methods, purchased consumables associated with available assay methods, and the username of each user who has created new assay methods; then by the consumable family that organizes assay methods based on a common use model; and then by assay method name, enabling a user to efficiently browse the various assay methods and to select the assay method on which to base their new assay method design. On selection of a specific assay method at either 922208 or 922211 the user interface transitions to 922213 to present the user the assay configuration on the test plate associated with the assay method as returned from a service request made via the cloud platform, wherein the user may alter the test plate and the assay assignment, using either purchased assays or user-provided assays, to various spots in a well on the associated type of test plate, including being able to disable desired assay assignments, and on completion of edits to the assay configuration a user will select layout at 922214, storing the changes via web service(s) on the cloud platform before transitioning. At 922214 a user is presented a graphical representation of a test plate and a visual representation of where on the test plate, i.e., which spots in which wells, various types of samples are allocated, which is key for properly processing an assay method on a test plate. In the layout menu at 922214, the user is presented items to select a previously defined layout of samples on a plate at 922215 or an edit layout function at 922223. The previously defined layout selection at 922215 provides recently used layouts at 922216 as a carousel of plates with the sample layout and layout name, being a configurable set of up to, but not intended to be limited to, 25 layouts, or all available layouts at 922219. A user may, from the select layout menu at 922215, also select to create a new layout from scratch at 922222, which advances the user to the edit layout function at 922223. On selecting a layout at 922217 or 922220 a user transitions back to 922214 to see the selected layout. Any time a user transitions to layout at 922214 they may edit the layout via edit layout at 922223. On choosing to edit layout at 922223, a user is presented a collection of options of equal importance to enable a user to efficiently navigate to the various options requiring attention. Thus, these options may be presented as a series of concurrently adjustable menus. At 922224 a user may configure parameters associated with calibrators used in the assay method, most notably the number of calibrators and number of replicates, which dictates how many wells on the test plate will be taken up by calibrators. At 922226 a user may configure parameters associated with controls used in the assay method, most notably the number of controls and number of replicates, which dictates how many wells on the test plate will be taken up by controls. At 922228 a user may configure parameters associated with blanks used in the assay method, purposefully representing the expectation of very low signal, most notably the number of blanks and number of replicates, which dictates how many wells on the test plate will be taken up by blanks.
At 922229 a user may configure parameters associated with samples used in the assay method, representing a placeholder for samples that will be tested when this assay method is used in an experiment, most notably the number of samples and number of replicates, which dictates how many wells on the test plate will be taken up by samples; by default samples take up all remaining wells on the plate after accounting for calibrators and/or controls and/or blanks, but a user is enabled to set a specific number at or below the maximum number of unused wells on the test plate. On completing configuration of the various types of samples that are expected to be on a test plate for the assay method, a user at 922230 may edit the layout of the different sample types on the test plate, manipulating where wells are located by moving rows in total, and/or columns in total, and/or moving individual sample types assigned to a well. A user may then select to define one or more groups at 922231 to provide one or more named groups that subdivide a test plate into one or more sub-plates, each named as a user provides at 922231. Once groups are defined, at 922224, at 922226, at 922228, and at 922229 each group may have a sub-definition associated with it per the number of defined and named groups, for which a user may configure, or not, one or more of the prescribed sample types, with an additional capability to assign to one group the calibration curve of another group to allow sharing of calibrators across one or more groups on the plate, and an additional capability to assign blanks in one group to allow sharing of blanks across one or more groups on the plate. On completion of all of the edits under layout at 922214, a user may select a confirm option at 922232. Although this confirm option at 922232 is shown as a submenu of the edit layout function at 922223, it may also be accessible as a submenu of the layout function at 922214. At 922232 a user is presented a summary view of the layout for the assay method they have designed, enabling a user to navigate to previous steps to alter any decisions they made in the process of designing the layout, and, if all their decisions are in line with their expectations, they would select confirm, storing their layout via web service(s) to the cloud platform for future use in an experiment, and on completion of the invocation of web service(s) the MUI transitions back to the assay menu at 922213, where the user may further select assay analysis methods at 922233. At 922233 a user is presented the assignment of analysis methods to either the assay method and/or the one or more assays assigned to the assay method, with the option to select a particular analysis method to cover all assays in the assay method, which on selection automatically applies the chosen analysis method to all assays in the assay method. A user may also alter the analysis method for any or all individual assays in the assay method by choosing the analysis method assigned to an assay, with the user interface presenting the available system-default analysis methods as well as any user-provided analysis methods from which the user chooses the desired analysis method for the assay. A user may use this previously disclosed mechanism of analysis method selection for selecting an analysis method at the assay method level to assign the same analysis method to all assays in the assay method.
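As a rough illustration of the well accounting behind such a layout (default rule: samples occupy all wells left after calibrators, controls, and blanks), consider the following sketch; the function and its argument names are hypothetical:

```python
def allocate_wells(plate_wells, n_cal, cal_reps, n_ctrl, ctrl_reps,
                   n_blank, blank_reps, n_samples=None, sample_reps=1):
    # Wells consumed by each configured sample type.
    used = n_cal * cal_reps + n_ctrl * ctrl_reps + n_blank * blank_reps
    remaining = plate_wells - used
    if remaining < 0:
        raise ValueError("layout exceeds plate capacity")
    if n_samples is None:
        # Default: samples take up all remaining wells on the plate.
        n_samples = remaining // sample_reps
    elif n_samples * sample_reps > remaining:
        raise ValueError("more sample wells requested than unused wells")
    return {"calibrators": n_cal * cal_reps, "controls": n_ctrl * ctrl_reps,
            "blanks": n_blank * blank_reps, "samples": n_samples * sample_reps,
            "unused": remaining - n_samples * sample_reps}

# E.g., a 96-well plate, 7 calibrators and 2 controls in duplicate, no blanks:
# allocate_wells(96, 7, 2, 2, 2, 0, 1)  # -> 78 wells left for samples
```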
On completion of analysis method assignment at 922233 a user may select protocol configuration at 922234, with the software automatically storing the user's selections via web service(s) on the cloud platform before transitioning at 922234. At 922234 a user is presented the various parameters associated with the processing of the assay method, either on a coordinated-operation instrument or manually leveraging one or more individual-operation instruments. The parameter set would be instrument specific but could include, but is not intended to be limited to, incubation time(s), wash cycle(s), read buffer incubation(s), reagent addition(s), and/or any other step in the processing of a protocol that could be parameterized and configured. In some embodiments, an assay method may have no protocol defined for it, and therefore this step may not be shown to a user for an assay method with no protocol. On completion of protocol configuration at 922234 a user may select confirm at 922235; although this is shown as a submenu of the protocol menu at 922234, it may also be accessible as a submenu of the assay menu at 922213, with the software automatically storing the user's selections via web service(s) on the cloud platform before transitioning at 922235. At the confirmation menu of 922235 a user is presented a summary view of the assay method they have designed to confirm they made all the right choices, enabling a user to navigate to previous steps to alter any decisions they made in the process of designing the assay method, and, if all their decisions are in line with their expectations, they may select confirm, storing their assay method via web service(s) to the cloud platform for future use in an experiment, and on completion of the invocation of web service(s) the user interface would transition back to start at 922202. In further embodiments, an assay method module may operate as follows. A first menu may be a design assay menu. Upon selection, the design assay menu is relocated to the historical portion and a second menu is presented providing a user with an option to select a manual assay method or an automatic assay method. Selecting manual assay method provides a third menu including recent assay methods and available assay methods as options. Selecting recent assay methods provides a third menu including names of recent assay methods. Selecting an assay name moves the assay to the historical portion and provides a new current assay design menu including "assay," "layout," "analysis method," and "confirm" menus. The assay menu provides multiple sub-portions within the active portion. A first sub-portion provides spot layout and lists of assays by spot assignment (i.e., test sites) in the selected method applied to an editable list of analytes. The first sub-portion may include test plate type on a horizontal wheel, e.g., 96 Wells 1 Small Spot, 96 Wells 1 Small Spot High Bind, 96 Wells 1 Small Spot Q, 96 Wells 1 Spot, 96 Wells 1 Spot High Bind, 96 Wells 1 Spot Q, 96 Wells 10 Spot, 96 Wells 10 Spot High Bind, 96 Wells 10 Spot Q, 96 Wells 10 Spot Q High Bind. If a 10-plex plate is chosen in the first sub-portion, then a middle sub-portion appears that lists 1-PLEX through 10-PLEX. If a 10-plex plate is not chosen, then a right-side sub-portion appears that lists assays, which can be searchable depending on the highlighted assay method or the existence of an unassigned spot position in the first sub-portion. The layout menu provides a plate layout showing where sample types are located.
The analysis menu provides a subsequent menu having sub-portions allowing a user to select from a first sub-portion listing assays in the selected assay method and algorithm types for each assay in a second sub-portion. The confirm menu shows, in a first sub-portion, a spot layout and list of assays by spot assignment in the selected assay method and, in a second sub-portion, assay method name, plate layout, and a confirm option. Selecting available assay options provides a third menu showing multiple sub-portions. The first sub-portion presents the options of assays purchased from the consumable manufacturer ("MSD Purchased"), available from the consumable manufacturer ("MSD Catalog"), and usernames. The second sub-portion provides assay method types filtered by the highlighted item in the first sub-portion: Bio-dosimetry, Custom Sandwich Immunoassay, Immunogenicity, PQ, Pharmacokinetic, S-PLEX, U-PLEX, U-PLEX Dev Pack, Utility, V-PLEX, where Utility is less than an entire assay protocol performed by an automated instrument, e.g., wash, add read buffer, read; or add read buffer, read. The third sub-portion provides assay methods filtered by the highlighted item in the first and second sub-portions. After selection of an assay method via this process, a new menu is provided according to the assay design menu as described above. If, at the second menu, the user selects automated assay method, they are provided with a choice between recent assay methods and available assay methods, as described above. The only difference in the "available assay methods" flow as compared to the recent assay methods flow is in the protocol menu, described below. Selecting recent assay methods provides a third menu including names of recent assay methods. Selecting an assay name moves the assay to the historical portion and provides a new current assay design menu including "assay," "layout," "analysis method," and "confirm" menus similar to those described above. The assay design menu also includes a protocol menu option. The protocol menu option provides options for a coating menu, blocking, capture, detection, detection incubation, and secondary detection incubation. The coating menu provides options in a first sub-portion for Enable Coating, Wash Before Coating Step, Linker Volume, Capture Antibody Volume, Stop Solution Volume, Coating Species Volume, Volume of Diluent in Capture Blend, Coupled Antibody Volume in Blend, Coating Blend Dispensed Per Well, Coupling Incubation Duration, Stopper Incubation Duration, Coating Incubation Duration, each with an On/Off toggle or adapted to be editable to enter a number. The coating menu provides a second sub-portion that appears for editing numbers related to the first sub-portion. The blocking menu provides a first sub-portion for Enable Blocking, Wash Before Blocking Step, Blocking Volume, Blocking Incubation Duration, with On/Off toggle or adapted to be editable to enter a number. The blocking menu provides a second sub-portion that appears for editing numbers related to the first sub-portion. The capture menu provides a first sub-portion: Assay Volume, Wash Before Test Plate Incubation, Sample Incubation Duration, Test Plate Incubation Duration, with On/Off toggle or adapted to be editable to enter a number. The capture menu provides a second sub-portion that appears for editing numbers related to the first sub-portion. The detection menu provides a first sub-portion: Detect Volume, Detection Incubation Duration, with On/Off toggle or adapted to be editable to enter a number.
The detection menu provides a second sub-portion that appears for editing numbers related to the first sub-portion. The detection incubation menu provides a first sub-portion: Wash Before Detection Step, Detection Species Volume, Detection Incubation Duration, with On/Off toggle or adapted to be editable to enter a number. The detection incubation menu provides a second sub-portion that appears for editing numbers related to the first sub-portion. The secondary detection incubation menu provides a first sub-portion including Enable Secondary Detection, Wash Before Secondary Detection Step, Secondary Detection Species Volume, Detection Incubation Duration, with On/Off toggle or adapted to be editable to enter a number. The secondary detection incubation menu provides a second sub-portion that appears for editing numbers related to the first sub-portion. The read buffer menu provides a first sub-portion: Read Buffer Volume, Read Buffer Incubation Duration, with On/Off toggle or adapted to be editable to enter a number. The read buffer menu provides a second sub-portion that appears for editing numbers related to the first sub-portion. In FIG. 42B is an embodiment of a user experience flow through an assay method module focused on reviewing an assay method beginning with the analytical user app at 422200 running on a user's computer, with each step through a user interface numbered sequentially 1 through 'n' to represent the stepwise flow from begin (1) to end ('n') for a user, as depicted in the analytical user app at 422200 being labelled "1." as the first step. The experience flow of FIG. 42B may be facilitated by a MUI as discussed herein. At 422201 a user may select a user interface mechanism presenting one or more options including, but not limited to, module-specific functions, modules to select, and/or system functions, being either a horizontal menu and/or toolbar, a vertical menu and/or toolbar, a scroll-wheel menu and/or toolbar, a dropdown menu and/or toolbar, a keyboard function, a voice-activated command, and/or any other like user interface mechanism to choose an option, choosing the assay method module. On selection of assay method at 422201 the MUI transitions at 422202 to the start of the assay method module presenting a first menu including options at 422203 and at 422204 to design an assay method or review an assay method, respectively. The illustrated workflow shows the results of a selection of 422204, to review an assay method, with the user in this case choosing review at 422204. On selection of review at 422204 a user is requested to choose an assay method at 422206 from a next menu presenting options including recent assay methods at 422207 or available assay methods at 422208. The default may be recent assay methods at 422207. The MUI may auto-transition to all assay methods at 422208 if recent at 422207 is empty as returned from a service request made via the cloud platform. On selection of recent assay methods at 422207, a user is presented a configurable amount, for example twenty-five, of the most recently used assay methods at 422209 as returned from a service request made via the cloud platform. Alternatively, selection of available at 422208 presents to a user all assay methods at 422211 as returned from a service request made via the cloud platform.
The assay methods may be organized by the source they are from, including, but not limited to, an overall catalog of available assay methods, purchased consumables associated with available assay methods, and the username of each user who has created new assay methods; then by the consumable family that organizes assay methods based on a common use model; and then by assay method name, enabling a user to efficiently browse the various assay methods and to select the assay method on which to base their new assay method design. On selection of an assay method at either 422211 or 422209, the MUI transitions to 422213 to present the user a summary graphical view of the layout for a plate to be used in an experiment using the assay method's definition as returned from a service request made via the cloud platform. The display at 422213 may also be reached from the review assay method menu at 422204, where it will display a currently selected menu. Although this embodiment describes methods for performing assays and/or plate-based experiments, other experiments and tests are contemplated as well. In FIG. 43A is an embodiment of a user experience flow through an experiment module focused on experiment design beginning with the analytical user app at 432300 running on a user's computer, with each step through a user interface numbered sequentially 1 through 'n' to represent the stepwise flow from begin (1) to end ('n') for a user, as depicted in the analytical user app at 432300 being labelled "1." The user experience flow of FIG. 43A may be managed by a MUI as discussed herein. The experiment module may be implemented via a methodical user interface control system 1102 operating as part of or in conjunction with the analytical user app 432300. At 432301 a user logs into the analytical user app 432300. After the login process the user interface transitions to start at 432305, since the experiment module is envisioned in this embodiment to be the default first module after a user logs in, where now the user has a menu of three options: either 1) design an experiment at 432307, 2) review an experiment at 432308, or 3) select a user interface mechanism at 432306. The user interface mechanism at 432306 permits a user to adjust a user interface by presenting one or more options including, but not limited to, module-specific functions, modules to select, and/or system functions, being either a horizontal menu and/or toolbar, a vertical menu and/or toolbar, a scroll-wheel menu and/or toolbar, a dropdown menu and/or toolbar, a keyboard function, a voice-activated command, and/or any other like user interface mechanism to choose an option. The review experiment option at 432308 provides a workflow as discussed below with respect to FIG. 43B. In choosing to design an experiment at 432307, the MUI transitions to a second (or next) menu with the user asked to choose to design a new experiment at 432309 or use a previous experiment at 432310 on which to base the new experiment. On selection at 432309 of a new design, the MUI transitions to a design setup menu at 432321 (discussed further below). On selection at 432310 of an existing design, the MUI transitions to design at 432313. The design menu at 432313 asks the user to choose an experiment at 432314, with options to select from recent experiments at 432315 or available experiments at 432316.
The default is recent experiments at 432315, but the MUI may auto-transition to all experiments at 432318 if recent experiments at 432315 is empty as returned from a service request made via the cloud platform. At 432315, on selection of recent experiments, a user is presented a configurable amount, for example twenty-five, of the most recently run experiments at 432317 as returned from a service request made via the cloud platform. Alternatively, selection of available at 432316 presents to a user all experiments at 432318 as returned from a service request made via the cloud platform, with the experiments organized by username and/or email address, date and time of creation, and experiment name, enabling a user to browse the various experiments and to select the experiment on which to base the new experiment. On selection of an experiment at either 432317 or 432318 the MUI transitions back to the design menu at 432313 and auto-highlights design setup at 432321 as a next step. At 432321 a user is provided options to name an experiment, starting with a unique default name provided by the system, for example, but not limited to, a concatenation of username, date, and timestamp, that a user may edit, as well as choosing whether the experiment will be performed on a coordinated-operation instrument (also referred to as automation) or an individual-operation instrument(s) (also referred to as manual). On a user making their decisions at 432321 the user interface advances to assay method selection at 432322, which asks the user to choose an assay method with options to select from recent assay methods at 432323 or available assay methods at 432325. The default is recent at 432323, but the MUI may auto-transition to all assay methods at 432326 if recent at 432324 is empty as returned from a service request made via the cloud platform. At 432322, on selection of recent at 432323, a user is presented a configurable amount, for example twenty-five, of the most recently used assay methods at 432324 as returned from a service request made via the cloud platform. Alternatively, selection of available at 432325 presents to a user all assay methods at 432326 as returned from a service request made via the cloud platform, with the assay methods organized by source, being, but not limited to, an overall catalog of available assay methods, purchased consumables associated with available assay methods, and the username of each user who has created new assay methods; then by the consumable family that organizes assay methods based on a common use model; and then by assay method name, enabling a user to efficiently browse the various assay methods and to select the assay method to be used with the new experiment. By default, an experiment may have assigned to it one assay method.
But while choosing the assay method a user could select, at 432306, the function selection (as used herein, the "function selection" menus of various embodiments refer to advanced context menus) to view an option to allow the experiment to have defined for it multiple assay methods, which on selection initiates the action at 432332 to enable a user to select more than one assay method for an experiment and conversely toggle back to single assay method selection, where multiple assay method selection is used to broaden even further the assays to run against a collection of samples, with the potential to limit the number of assay methods that may be selected and/or not limit the number of assay methods that may be selected, dependent on operational constraints of available instruments or arbitrary limits a user may want to place on an experiment. Once a user has completed selecting the assay methods for the experiment, the user interface is transitioned to sample definition at 432327, where the user is presented with options either to enter the number of samples to test at 432328, with the system auto-generating sample identifiers from 1 to the number of samples the user has entered, limited by the sample configuration in the selected assay method(s), or to import the sample definition from an import file as provided by an external system at 432329. On manual sample definition at 432328 or import of samples at 432329, the user interface transitions to the final design step of confirming the experiment is ready to process at 432330. At 432330 a user is presented with the collection of one or more plates dependent on the number of samples being processed using the one or more selected assay methods, where each plate is assigned one assay method with an assigned set of samples to be processed on the respective plate, with a user being able to view the sample assignments to plates through a function at 432333 initiated through the function selection at 432306 and on completion returning at 432330. If a user selects one assay method for an experiment, then the defined samples will result in one or more plates, each with the same assay method, where the samples are distributed from 1 to whatever number was defined or imported, resulting in however many plate-assay method pairings are required to be able to process the total set of samples defined, to create a run of plates-assay methods-samples; but the number of plate-assay method pairings could be limited by the type of experiment, automated or manual, selected in setup at 432321, dependent on physical or arbitrary constraints placed on the system. If a user selects more than one assay method for an experiment, then the defined samples will be limited to the least number of samples provided for in any of the selected assay methods, where the samples are distributed from 1 to the least number of samples provided for in any of the selected assay methods on each plate, with one plate-assay method pairing for each of the selected assay methods of the experiment, to create a run of plates-assay methods-samples. In either the single assay method or multiple assay method experiment, the samples to test could result in more than one run of plates-assay-methods-samples, such that there could be no limit on the number of samples a user defined for an experiment, where each run of plates-assay methods-samples would be repeated to cover the complete processing of the full set of samples defined.
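A sketch of the plates-assay methods-samples pairing logic described in this passage, under the stated rules (single method: as many plates of that method as needed; multiple methods: one plate per method per run, with samples per run bounded by the least-capacity method). The function name and the capacity map are assumptions for illustration:

```python
import math

def plan_runs(num_samples, assay_methods, capacity):
    # capacity: assay method name -> samples accommodated per plate.
    if len(assay_methods) == 1:
        method = assay_methods[0]
        plates = math.ceil(num_samples / capacity[method])
        # One or more plates, all with the same assay method.
        return [[method] for _ in range(plates)]
    # Multiple methods: each run pairs one plate with each method; the
    # samples per run are capped by the least-capacity selected method.
    per_run = min(capacity[m] for m in assay_methods)
    runs = math.ceil(num_samples / per_run)
    return [list(assay_methods) for _ in range(runs)]

# plan_runs(150, ["AM-1"], {"AM-1": 80})
#   -> [["AM-1"], ["AM-1"]]                  (two plates, one method)
# plan_runs(60, ["AM-1", "AM-2"], {"AM-1": 80, "AM-2": 40})
#   -> [["AM-1", "AM-2"], ["AM-1", "AM-2"]]  (two runs of paired plates)
```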
Once a user has established the designed experiment is as expected, they would select the confirm function on the user interface at 432330, which on selection creates the experiment ready to be processed by a team through a service request made via the cloud platform, and at 432331 the user interface transitions back to start at 432305. Setup components shown at 432311, 432312, 432319, and 432320 function similarly to 432321. In the alternative, the analytical user app can assist in performing other experiments in addition to or in place of the assay experiments and/or plate-based tests described herein. In FIG. 43B is an embodiment of a user experience flow through an experiment module focused on reviewing an experiment beginning with the analytical user app at 432300 running on a user's computer, with each step through a user interface numbered sequentially 1 through 'n' to represent the stepwise flow from begin (1) to end ('n') for a user, as depicted in the analytical user app at 432300 being labelled "1." as the first step. The experience flow of FIG. 43B may be facilitated by a MUI as discussed herein. At 432301 is a user login step. After the login process the user interface transitions to start at 432305, since the experiment module is envisioned in this embodiment to be the default first module after a user logs in, where now the user has three options: either 1) design an experiment at 432307, 2) review an experiment at 432308, or 3) select a user interface mechanism at 432306. The user interface mechanism at 432306 presents one or more options including, but not limited to, module-specific functions, modules to select, and/or system functions, being either a horizontal menu and/or toolbar, a vertical menu and/or toolbar, a scroll-wheel menu and/or toolbar, a dropdown menu and/or toolbar, a keyboard function, a voice-activated command, and/or any other like user interface mechanism to choose an option. In choosing to review an experiment at 432308, the MUI transitions the application to 432340 and presents the user with a first menu permitting a user to select review of experiments at 432341 or of specific plates at 432348. Upon selecting experiments at 432341, a next menu permitting a user to select from recent experiments at 432342 or available experiments at 432343 is presented. The default may be recent experiments at 432342, but the MUI may auto-transition to all experiments at 432345 if recent experiments at 432344 is empty as returned from a service request made via the cloud platform. At 432342, on selection of recent experiments, a user is presented with a configurable amount, for example twenty-five, of the most recently run experiments at 432344 as returned from a service request made via the cloud platform. Alternatively, selection of available at 432343 presents to a user all experiments at 432345 as returned from a service request made via the cloud platform. The experiments may be organized by username, date and time of creation, and experiment name, enabling a user to browse the various experiments and to select the experiment to view. On selection of an experiment either at 432344 or at 432345, the MUI transitions to review plates associated with the chosen experiment at 432348. At 432348 a menu presents a collection of one or more plates in the order of their addition to the experiment, labeled with the assay method name assigned to each plate. Accessing the plates menu after selection of experiments serves as a filter to the plates menu, and only those plates corresponding to the selected experiment will be displayed.
On selection of a plate at 432348 the MUI transitions to offer a next menu permitting a user to select from a plate data graphical view at 432349, a plate data tabular view at 432350, a plate flag tabular view at 432351, a sample menu at 432352, a calibrators menu at 432353, a controls menu at 432354, and execution menus for an editing lot data function at 432355, an assigning plate function at 432366, and an editing layout function at 432367. Selection of 432349 causes the MUI to present the selected specific plate in the experiment with a heat map representation of signal, or calculated concentration if available, for all assays (or spots) in the assay method in each well of the plate, where a user may choose a particular assay to view, narrowing down the data to just that one assay, and a user may select a particular well to see the specific signal value for a sample in the selected well for the selected assay, while being able to change the high and/or low signal or concentration range for the plate to alter the intensity of the heat map across all samples visible on the plate. In addition to viewing a heat map of a plate at 432349, a user has other options available for viewing plate data at 432350, at 432351, at 432352, at 432353, and at 432354. At 432350 a user is presented a well-by-well table view of the data presenting, but not limited to, sample identifier, assay, signal (log and linear), concentration (log and linear) if available, and statistics associated with the generated data. In embodiments, the columns presented in the table of data may include: Plate, Sample, Assay, Well, Spot, Dilution, Conc., Conc. Unit, Signal, Adj. Signal, Mean, Adj. Signal Mean, CV, Calc. Conc., Calc. Conc. Mean, Calc. Conc. CV, % Recovery, % Recovery Mean. Each of the data presentations at 432349-432354 may be presented in the active portion in three sub-portions. The first sub-portion may allow the user to select spots from a visual representation of a well. The second sub-portion may allow the user to select wells from a visual representation of a plate. The third sub-portion may provide data from the selected spot. At 432351 a user is optionally presented a table view of flags denoting abnormal events that may have occurred during processing of one or more plates, potentially bringing the data's quality into question for a user; this view is only available to a user if there was at least one flag generated for a plate. At 432352 a user may select a scatter plot at 432356 of sample signal, or concentration if available, for each assay on all of the plates, and may select to switch between viewing signal or concentration, if available, through a toggle function at 432358 and at 432359. At 432352 a user may also select to view the data in tabular form at 432357. At 432353 a user is presented calibration curve plots one assay method at a time, with one plot for each assay in the assay method if the assay method is using an analysis method that produces sample concentrations, with up to five plates visible on each plot, providing a user interface mechanism to enable a user to change the five visible plates if there are more than five plates. The user may further select the option at 432360 to change the assay method for which to view the calibration curves and additionally select the option to drill down on a particular assay calibration curve plot at 432362 to expand that plot to see its one or more plates of visible data.
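Several of the tabulated statistics above (Adj. Signal, CV, % Recovery) reduce to short formulas. A sketch, assuming the conventional definitions, since the text names the columns but does not spell out the arithmetic:

```python
import statistics

def adjusted_signal(raw, background, mode="subtract"):
    # Background-corrected signal, per the correction options described
    # earlier: subtract the background count or divide by it.
    return raw - background if mode == "subtract" else raw / background

def percent_cv(replicates):
    # Coefficient of variation (%) across replicate wells.
    mean = statistics.mean(replicates)
    return 100.0 * statistics.stdev(replicates) / mean

def percent_recovery(calc_conc, expected_conc):
    # Control recovery: calculated concentration as a percentage of the
    # nominal concentration (conventional definition, assumed here).
    return 100.0 * calc_conc / expected_conc

# percent_cv([10500, 10120, 10390])  -> replicate CV in percent
# percent_recovery(240.0, 250.0)     -> 96.0
```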
Also provided is a mechanism to view a small table of signal and concentration data for one or more selected points on a curve, including the ability to exclude calibrator points if a calibrator appears to have an abnormal response, as well as to select the group to view on each plate if the assay method for the viewed assay has defined for it more than one group on its plate layout. At 432354 a user is presented percent recovery plots of controls one assay method at a time, with one plot for each assay in the assay method if the assay method is using an analysis method that produces sample concentrations, with up to five plates visible on each plot, providing a user interface mechanism to enable a user to change the five visible plates if there are more than five plates. The user is further given the option at 432363 to change the assay method for which to view the percent recovery plots, and additionally is provided the option to drill down on a particular assay percent recovery plot at 432365 to expand that plot to see its one or more plates of visible data, while also being provided a mechanism to select the group to view on each plate if the assay method for the viewed assay has defined for it more than one group on its plate layout. Execution menus are provided at 432355 to edit provided lot-specific data associated with calibrators and/or controls, at 432356 to assign a plate manually in an experiment when the processing of an experiment cannot automatically assign processed plates to the experiment, and at 432357 to edit the layout for a specific plate being viewed in the case where a user needs to make a correction to a layout for an experiment. Supplemental functions not depicted in FIG. 43B include, but are not limited to, exporting various tables and/or charts for import into other software applications and copying various charts to be pasted into other software applications. In an alternative, the analytical user app can assist in reviewing other experiments in addition to or in place of the assay experiments and/or plate-based tests described herein. Interface 12 at 432341 provides an interface that displays possible experiments associated with the analytical user app. Further, following interface 14a at 432344, interface 16 at 432348 provides a visual representation of the plates associated with the experiments at interface 432344. Similarly, following interface 14b at 432345, interface 16 at 432348 provides a visual representation of the plates associated with the experiments at interface 432345. Interfaces 19a at 432361 and 432364 provide interfaces that display all analytes associated with a given assay method. In embodiments, a reader module for running a designed experiment may be provided. The reader module may be adapted to allow a user to perform necessary functions, steps, and/or commands as they relate to the loading, reading, and unloading of plates, such as those used for ECL assays, although other experiments and/or assays are contemplated as well. In other embodiments, the Reader module relates to other equipment and/or instruments, such as medical equipment. By way of example, for medical equipment, the Reader module could be used for a Magnetic Resonance Imaging (MRI) device to assist doctors, other medical professionals, and/or technicians while using the machine. Other applications are contemplated as well. Referring specifically to FIG. 43H, in certain embodiments, first portion 381821 can include a first menu of user-selectable choices including a Read choice and a Review Recent Results choice (although other choices may also be included).
The latter is explained in greater detail above with regard to the Experiment Module. In response to a selection of the Read command, a first portion or a second portion of the MUI display 206 is adapted to output a Play Button 432370 as shown, for example, in FIG. 43C. The Play Button 432370 can be embodied as a graphical-based selectable input as shown in this figure, or it can take other forms as well, including a non-graphical and/or text-based selection. When embodied as a graphical selection, other geometric shapes may be employed in addition to the ones shown in this figure. In response to a selection of the Play Button 432370, a plate reader is adapted to begin reading and/or analyzing one or more plates. The read process is described in greater detail herein in conjunction with one or more of the other modules described herein. As the one or more plates are read, the MUI display 206 is adapted to display a timer 432371 as shown in FIG. 43D. The timer 432371 is adapted to indicate, for example, in a visual representation one or more of: (a) the total amount of time to load the one or more plates; (b) the total amount of time to read the one or more plates; (c) the total amount of time to unload the one or more plates; (d) the time remaining to complete the loading of the one or more plates; (e) the time remaining to complete the reading of the one or more plates; and (f) the time remaining to complete the unloading of the one or more plates. In the embodiment shown in this figure, the timer includes three circles, each of which can provide a separate timer for the load, read, and unload processes, e.g., first, second, and third circles, respectively. In certain embodiments, the load process includes the time it takes a plate reader or other instrument to automatically load the plate to be read. Similarly, the unload process can include the time to automatically unload the plate after it has been read. Timers for these processes are not necessarily limited to automated plate-reading instruments but apply to manually fed instruments as well. In some embodiments, the timer 432371 can toggle between a logo, e.g., a logo containing three circles, and a countdown timer wherein the perimeter of each circle can be modified as time elapses to signify a countdown. For example, a completed circle can represent the beginning time, and the perimeter forming the circle can be deleted in a clockwise or counterclockwise fashion to represent that time has elapsed. This can continue until the entire perimeter of the circle vanishes, representing that the entire timer has elapsed. In other examples, the perimeter lines forming the circle can fade vis-à-vis the unexpired portions of the timer as time elapses so as to illustrate that time has elapsed, while still maintaining the perimeter line of each circle. In other embodiments, rather than fading, the lines can be highlighted and/or colored to signify how much time has elapsed and how much time still remains for each of the load, read, and unload processes until they are complete. In other embodiments, other geometric shapes can be used for these timers, either all the same, or one or more being of a different shape from the others. In some embodiments, fewer or greater than three of these geometric shapes can be utilized for the timer function. In one embodiment, as shown in FIG. 43E, the additional aspects and/or features of the Reader module can be accessed via the advanced context selector 381822 as part of the advanced context menu 381832.
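The shrinking-perimeter countdown described above amounts to mapping elapsed time onto the fraction of each circle still drawn. A sketch of that mapping (the rendering itself is left to the display layer; names are hypothetical):

```python
def remaining_arc_degrees(elapsed_s, total_s):
    # Degrees of a timer circle's perimeter still drawn: 360 at the
    # start of a load, read, or unload phase, shrinking to 0 (clockwise
    # or counterclockwise) as the phase elapses.
    if total_s <= 0:
        return 0.0
    remaining = max(0.0, total_s - elapsed_s)
    return 360.0 * remaining / total_s

# Three circles, one per phase; e.g., 90 s into a 120 s read phase:
# remaining_arc_degrees(90, 120)  -> 90.0 degrees still drawn
```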
In this example, the advanced context menu 381832 is adapted to be displayed in response to a selection of an advanced context selector 381822 (FIG. 39B) when outputted to the MUI display. When displayed, advanced context menu 381832 can include a plurality of commands and/or user-selectable choices arranged in a menu that may take the form of various configurations, as described in greater detail above in conjunction with the Audit Trail module. The advanced context menu 381832 can include one or more commands and/or user-selectable choices. For example, for the embodiment depicted in FIG. 43E, the one or more commands and/or user-selectable choices can include eject plate 432372, partial plate 432373, set plate run 432374, stop instrument 432375, lock UI 432376, and view plate information 432377, although additional commands and/or user-selectable choices are contemplated as well. In response to the eject plate choice 432372, the plate currently loaded into a plate-reading instrument is ejected. In one example, the plate is ejected automatically and fully from the plate-reading instrument. In the alternative, this eject choice can release the plate from the instrument so that a user, such as a technician, lab manager, etc., can manually remove the plate from the plate-reading instrument. In response to the partial plate choice 432373, the first portion (FIG. 43H, 381821) is adapted to receive bar code information as it relates to a plate selected among one or more plates. For example, if the current plate does not contain a barcode, if the barcode itself is unreadable, or if only a portion of it can be read, a user can manually input the barcode information to designate the plate that the reader module is currently working in conjunction with. This information can be inputted via a touchscreen, keyboard, or any other input manner as described herein. In other examples, the barcode could be inputted automatically with the aid of a barcode reader or the like. The first portion (FIG. 43H, 381821) is further adapted to display a user-selectable option for verifying the authenticity of the bar code information after it is received. When selected, the reader module can verify the inputted barcode against one or more of the databases, e.g., one or more of the databases described herein, of barcode information to determine if there is a match. If a match occurs, the plate can be verified. If no match occurs, the user can either try to input the barcode information again, e.g., in case of transcription error, or can elect to proceed with the unverified plate. Additionally, in response to the partial plate choice 432373, a graphical representation of the current plate can be displayed on the MUI display 206, either without regard to particular sectors, e.g., groups of wells, or on a sector basis by overlaying an outline defining one or more sectors of the plate. In further response to the partial plate choice 432373, the advanced context menu 381832 can include one or more additional commands and/or choices. For example, in the embodiment depicted in FIG. 43F, the menu can include a save partial plate choice 432378 and a cancel partial plate choice 432379, which can allow users to save the partial plate information or cancel the plate, e.g., terminate the current use of the plate, respectively. In response to the set plate run choice 432374, the first portion (FIG. 43H, 381821) is adapted to receive a name for a plate run associated with a plate.
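The barcode verification step described here is, in essence, a lookup against known plate barcodes. A minimal sketch under that assumption:

```python
def verify_barcode(entered: str, known_barcodes: set) -> bool:
    # Match a manually entered plate barcode against the database of
    # known barcodes. On a match the plate is verified; otherwise the
    # user may re-enter the code (e.g., after a transcription error)
    # or elect to proceed with the unverified plate.
    return entered.strip().upper() in known_barcodes

# verified = verify_barcode(" ab-1234 ", {"AB-1234"})  # -> True
```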
In response to the set plate run choice 432374, the first portion (FIG. 43H, 381821) is adapted to receive a name for a plate run associated with a plate. For example, a user can enter, e.g., through an input device, e.g., touchscreen, keyboard, etc., the name for the run associated with one or more plates that are to be read by the plate-reading instrument. In some embodiments, this information can already be encoded in the barcode, and thus the run name will automatically populate. The run can be used for various reasons, for example, to allow users to associate several plates together from a single experiment, to allow teams to more easily collaborate on experiments, assays, or analyses that involve one or more plates common to the team, etc. In response to the stop instrument choice 432375, the first portion (FIG. 43H, 381821) is adapted to display a confirmation choice before issuing a stop instrument command. For the example shown in FIG. 43G, the first portion 381821 can include a confirmation choice 381827. This choice can be adapted to be displayed to provide one or more users with the ability to confirm whether they want to abort the current run of the plate. When presented with this confirmation choice 381827, the users can be presented with a choice as to whether they wish to abort the current run of a plate by issuing the stop instrument command, e.g., selecting "Yes" from this menu, or continue the run by disregarding the stop instrument command, e.g., selecting "No" from the menu. These options are merely exemplary, as other choices and/or command prompts are contemplated as well. If the stop instrument command is issued, the users can be automatically prompted on the MUI display 206 with a menu of choices that are available in response to the Review Recent Results choices as described above, thus allowing the user to review the results of previously completed plates. In other words, in this example, by issuing the stop instrument command, the user will be directed automatically to the Review Recent Results menu as described above. If the stop instrument command is disregarded, the timer 432371 (FIG. 43D) as described above can be re-displayed on the MUI display 206 throughout the remaining duration of the run in accordance with that feature as described above. In response to the lock UI choice 432376, the MUI display 206 is adapted to be locked from user selections until receiving the current user's password. In this manner, input will be received from a user, whether through command and/or choice selections or other inputs, e.g., mouse clicks or scrolling, keyboard strokes, touchscreen inputs, etc., but those selections will not cause any modification to what is outputted to the MUI display 206, nor will commands be received based on this user input, other than the password to unlock the MUI display 206. After this choice is selected, the MUI display 206 will remain locked throughout the duration of the plate run and will automatically unlock once the run is complete. In other embodiments, the MUI display 206 will remain locked until the current user's password is received.
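One way the lock behavior described above might be realized is sketched below; the event types and the password-check callable are hypothetical placeholders rather than the actual MUI implementation.

```python
# Illustrative sketch of the lock UI choice: while locked, every input is
# received but ignored except a password submission for the current user.
class UiLock:
    def __init__(self, check_password):
        self.locked = False
        self._check_password = check_password   # callable(str) -> bool (assumed)

    def lock(self) -> None:
        self.locked = True

    def handle_event(self, event_type: str, payload: str = "") -> bool:
        """Return True if the event may modify what is outputted to the display."""
        if not self.locked:
            return True
        # Clicks, scrolls, keystrokes, and touches are received but discarded.
        if event_type == "password_submit" and self._check_password(payload):
            self.locked = False
            return True
        return False
```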
In response to the view plate information choice 432377, information that relates to one or more plates can be displayed. The information includes one or more of: the plate run name, as described in greater detail above; plate barcode, e.g., the barcode provided by the plate manufacturer; long side customer barcode, e.g., a customer-specific barcode affixed to the long side of the plate; short side customer barcode, e.g., a customer-specific barcode affixed to the short side of the plate; plate type, e.g., single well, multi-well, assay time, coating type, etc.; operator, e.g., user, team, account, etc.; and read time, e.g., read time of one or more individual plates and/or total read time of the plates for a given plate run. FIG. 44 illustrates an embodiment of a user experience flow through an audit trail module beginning with the analytical user app at 442400 running on a user's computer, with each step through a user interface numbered sequentially 1 through 'n' to represent the stepwise flow from begin (1) to end ('n') for a user, as depicted in the analytical user app at 442400 being labelled "1." as the first step. The user experience flow of FIG. 44 may be facilitated by a MUI as described herein. At 442401, a user may select a user interface mechanism presenting one or more options including, but not limited to, module-specific functions, modules to select, and/or system functions, being either a horizontal menu and/or toolbar, a vertical menu and/or toolbar, a scroll-wheel menu and/or toolbar, a dropdown menu and/or toolbar, a keyboard function, a voice-activated command, and/or any other like user interface mechanism to choose an option. At 442402, a work flow starts and the MUI auto-transitions to all audit events at 442403 to present to a user a view of all events captured for a user's current team, with each event including, but not limited to, the username and/or email address of the originator, date and timestamp, the source, and information pertaining to the event, as returned from a service request made via the cloud platform. This view at 442403 enables a user to sort the view in forward or reverse ordering by username, date and timestamp, source, or information about the event. At 442403, a user may use the function selection mechanism at 442401 or an export command in the menu provided at 442403 to export the entire collection of events at 442404 to a file format easily importable into other computer applications such as, without limitation, Excel, Word, and/or any other computer application; example formats include CSV, tab-delimited text, JSON, XML, and/or any other format. At 442403, a user may also use the function selection mechanism at 442401 to export the entire collection of events at 442404 to a computer operating system mechanism used for copying and pasting content from one computer application to another computer application, often referred to as a clipboard.
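A minimal sketch of the export step described above, assuming an in-memory list of event records and CSV as the chosen format, follows; the field names mirror the event attributes listed above but are otherwise assumptions.

```python
# Hypothetical sketch: exporting the collection of audit events to CSV so it
# can be imported into other computer applications.
import csv

def export_events_csv(events: list[dict], path: str) -> None:
    fields = ["username", "timestamp", "source", "information"]
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fields, extrasaction="ignore")
        writer.writeheader()
        # Forward or reverse ordering by any field is possible; most recent
        # first is shown here as one example.
        for event in sorted(events, key=lambda e: e["timestamp"], reverse=True):
            writer.writerow(event)
```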
FIG. 45 illustrates an embodiment of software modules in a coordinated-operation instrument app 452500 forming the user interface experience for the use of a coordinated-operation instrument, with each module using services provided by cloud platform 452504 to create, read, update, and/or delete any and all data relevant to each module's processing and commanding and controlling physical hardware integrated with, or separate from, the coordinated-operation instrument, as well as any other services needed for each module's processing. Use of the coordinated-operation instrument app 452500 may be facilitated by a MUI as discussed herein. Accordingly, the coordinated-operation instrument app 452500 may include a methodical user interface control system 1102 or may operate in conjunction with a methodical user interface control system 1102. Operation module 452501 may be the active module by default when the coordinated-operation instrument app 452500 starts. The operation module 452501 provides the interface for executing experiments on an instrument to collect data for samples using assay methods defined in an experiment. Maintenance module 452502 provides the interface for executing maintenance functions on the instrument to ensure optimal operation of the instrument. A collection of system functions 452503 provides typical utilities in support of use of the coordinated-operation instrument such as, but not limited to, logging off, viewing help information, viewing user guide, viewing legal notices and/or documents, changing software configuration, changing user password, and/or other utilities. The collection of system functions 452503 may be provided as a separate MUI module and/or a series of software protocols that operate alongside the other discussed MUI modules. A user can log into a coordinated-operation instrument app 452500 through system functions 452503 using services provided by cloud platform 452504. If authentication of a user by a login service on cloud platform 452504 returns that a user has more than one account and/or team, a user will be required to select the default account and/or team, but if a user does not belong to more than one account and/or team, the service on the cloud platform 452504 would auto-assign a user to the sole account and team for that user. On completing login, the user lands at start of the operation module and begins using the coordinated-operation instrument app as they need. In an alternative, the coordinated-operation instrument app 452500 can assist in performing other experiments in addition to or in place of the assay experiments described herein. FIG. 46 illustrates an embodiment of a user experience flow through an operation module in the coordinated-operation instrument app at 462600 running on an instrument's computer, with each step through a user interface numbered sequentially 1 through 'n' to represent the stepwise flow from begin (1) to end ('n') for a user, as depicted in the coordinated-operation instrument app at 462600 being labelled "1." The user experience flow of FIG. 46 may be facilitated via a MUI as discussed herein. At 462601, a user logs into the coordinated-operation instrument app. After the login process the user interface transitions to start at 462602, and on login the MUI presents a menu of items including 1) selecting an experiment to run at 462604, 2) reviewing recent results of previous runs at 462611, 3) selecting a user interface mechanism at 462603, 4) processing a run at 462613, and 5) reviewing runs at 462620. The user interface mechanism at 462603 presents one or more options including, but not limited to, module-specific functions, modules to select, and/or system functions, being either a horizontal menu and/or toolbar, a vertical menu and/or toolbar, a scroll-wheel menu and/or toolbar, a dropdown menu and/or toolbar, a keyboard function, a voice-activated command, and/or any other like user interface mechanism to choose an option. In choosing to select an experiment to run at 462604, the MUI presents a next menu of options to the user to select from recent experiments at 462605 or available experiments at 462606, with the default being recent at 462607. The MUI may auto-transition to all experiments at 462608 if recent at 462607 is empty as returned from a service request made via the cloud platform. At 462605, on selection of recent experiments, a user is presented a configurable amount, for example twenty-five, of the most recently designed experiments to run at 462607 as returned from a service request made via the cloud platform, although any other number of recently designed experiments is contemplated as well.
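The recent-versus-available default, with auto-transition to all experiments when the recent list is empty, reduces to a small selection rule. The sketch below is illustrative only; fetch_recent and fetch_available stand in for the service requests made via the cloud platform.

```python
# Illustrative sketch of the "default to recent, fall back to available"
# menu behavior described above.
def choose_experiment_menu(fetch_recent, fetch_available, limit: int = 25):
    recent = fetch_recent(limit)   # e.g., the 25 most recently designed experiments
    if recent:
        return "recent", recent
    # Auto-transition: if recent is empty, present all designed experiments,
    # organized for browsing by username, creation date and time, and name.
    return "available", fetch_available()
```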
Alternatively, selection of available at 462606 presents to a user all designed experiments ready to be run at 462608 as returned from a service request made via the cloud platform, with the experiments organized by username, date and time of creation, and experiment name, enabling a user to browse the various experiments and to select the experiment to run. On selecting an experiment to run either at 462607 or at 462608, the MUI transitions to process the experiment run that has just been selected at 462613, through the user interface leading a user through loading consumables and samples onto the appropriate locations on the instrument for the experiment's run at 462614 and, on completing the load, automatically transitioning to selecting to run the experiment at 462615. On selecting to run the experiment at 462615, the software initiates an automated inventory check of what was loaded by the user at 462614 through scanning of barcodes as appropriate, presenting errors to the user so that any issues that arise can be corrected (a sketch of this check follows this passage); on confirmation of 100% correctness of what was loaded, the software initiates the run and the automated execution of the experiment's assigned assay methods against the samples under test, with one or more types of timers presenting the time remaining in the run while also enabling a user to see live video of the instrument running. On completion of the run, the MUI presents instructions at 462616 for the user to unload the instrument, leading the user through the process of removing consumables and samples from the instrument, as well as emptying bulk containers used in the processing. On completion of the unload, the MUI transitions to present to the user plate results at 462622, viewing the entire set of plates processed in the run at 462622, then choosing a plate to review in greater detail either at 462629, at 462630, and/or at 462631, and finally enabling a user to return to start at 462602 to perform another experiment run. In an alternative to selecting an experiment to run at 462604, the user may choose to review recently run experiments at 462611, causing the MUI to present a next menu of items to the user to select from runs at 462621 or plates at 462622. Upon selecting runs at 462621, a next menu provided by the MUI permits the user to select from recently run experiments at 462623 or available previously run experiments at 462624, with the default being recent at 462623. The MUI may auto-transition to available experiments at 462624 if recent at 462625 is empty as returned from a service request made via the cloud platform. At 462623, on selection of recent, a user is presented a configurable amount, for example twenty-five, of the most recently run experiments to review at 462625 as returned from a service request made via the cloud platform. Alternatively, selection of available experiments at 462624 presents to a user all previously run experiments ready to be reviewed at 462626 as returned from a service request made via the cloud platform, with the experiments organized by username, date and time of creation, and experiment name, enabling a user to browse the various experiments and to select the experiment to review. On selecting an experiment to review either at 462625 or at 462626, the user interface transitions to present to the user plate results at 462622, viewing the entire set of plates processed in the run at 462622, then choosing a plate to review in greater detail either at 462629, at 462630, and/or at 462631. Although this embodiment describes methods for performing assays and/or plate-based tests, other experiments and tests are contemplated as well.
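The automated pre-run inventory check at 462615 might look, in skeleton form, like the following; the scanner callable, the location-to-barcode mapping, and the error format are hypothetical.

```python
# Hypothetical sketch of the pre-run inventory check: every expected
# consumable and sample must scan correctly before the run is initiated.
def inventory_check(expected: dict[str, str], scan) -> list[str]:
    """expected maps instrument location -> barcode; scan(location) returns
    the barcode actually read there. Returns a list of error messages."""
    errors = []
    for location, barcode in expected.items():
        seen = scan(location)
        if seen != barcode:
            errors.append(f"{location}: expected {barcode}, found {seen or 'nothing'}")
    return errors

# The run starts only on confirmation of 100% correctness: an empty error
# list permits the run; otherwise the errors are presented for correction
# and the check is repeated.
```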
FIG. 47 illustrates an embodiment of a user experience flow through a maintenance module focused on maintaining an instrument, beginning with the coordinated-operation instrument app at 472700 running on a user's computer, with each step through a user interface numbered sequentially 1 through 'n' to represent the stepwise flow from begin (1) to end ('n') for a user, as depicted in the coordinated-operation instrument app at 472700 being labelled "1." as the first step. The experience flow of FIG. 47 may be implemented via a MUI as described herein. At 472701, a user may select a user interface mechanism presenting one or more options including, but not limited to, module-specific functions, modules to select, and/or system functions, being either a horizontal menu and/or toolbar, a vertical menu and/or toolbar, a dropdown menu and/or toolbar, a keyboard function, a voice-activated command, and/or any other like user interface mechanism to choose an option, choosing the maintenance module. On selection of the maintenance module at 472701, the application transitions at 472702 to the start of the maintenance module and presents at 472703 an option to run a maintenance method or at 472704 an option to review results of a previously run maintenance method. On selecting at 472703, a user is presented on the user interface the set of maintenance methods to run, organized in a left set of top-level maintenance categories including, but not limited to, initializing the instrument, issuing a component command, and running a component test; associated with each item in the left set is a right set of one or more maintenance methods, pertinent to the instrument being maintained and associated with the left maintenance category, from which a user would select the maintenance method to perform (see the sketch following this passage). Once a maintenance method is selected at 472703, the software transitions to process the maintenance method at 472705, presenting the user a run button to initiate the processing; on tapping the run button, the software initiates the robotic processing associated with the maintenance method, presenting a user an hours:minutes:seconds countdown timer in various animations that a user could toggle through based on their preferences, as well as an option to view live video of the robotic processing associated with the maintenance method. The maintenance method process menu at 472705 may be accessed via the start menu or may be auto-transitioned to after completion of the menu at 472703. Once the maintenance method's robotic processing completes at 472706, the user interface transitions to 472707 for a user to review (also reachable via the start menu 472702) any results reported by the maintenance method, presented in a table sorted by most recently run maintenance method and showing the username of the person who ran the maintenance method, the name of the maintenance method, the date and time of completion of the maintenance method, and an optional result of the maintenance method if it reports a result. A user may select start at 472702 to return to the option of running another maintenance method at 472703 or reviewing maintenance results at 472704, or select a different module to switch to at 472701. On selecting reviewing maintenance results at 472704, the user interface is transitioned to 472707 to present to a user the previously disclosed maintenance method results table for FIG. 47.
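The left/right, category-to-method selection referenced above can be pictured as a simple two-pane mapping. The category labels below come from the examples given above; the method names under each are invented placeholders, not the instrument's actual methods.

```python
# Illustrative two-pane maintenance menu: a left set of top-level categories
# and, for the chosen category, a right set of pertinent maintenance methods.
MAINTENANCE_MENU = {
    "initialize the instrument": ["full initialization"],             # hypothetical
    "issue a component command": ["home plate stage", "eject plate"],  # hypothetical
    "run a component test": ["camera self-test", "fluidics leak test"],  # hypothetical
}

def right_pane_for(category: str) -> list[str]:
    # The right set shown depends on the selected left-hand category and on
    # the instrument being maintained.
    return MAINTENANCE_MENU.get(category, [])
```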
FIG. 48 illustrates an embodiment of software modules in an individual-operation instrument app 483000 forming the user interface experience for the use of an individual-operation instrument, with each module using services provided by cloud platform 483003 to create, read, update, and/or delete any and all data relevant to each module's processing and commanding and controlling physical hardware integrated with, or separate from, the individual-operation instrument, as well as any other services needed for each module's processing. Operation module 483001 may be the active module by default when an individual-operation instrument app 483000 starts. Operation module 483001 provides the interface for executing an operation provided by the instrument in support of processing a defined assay method on samples for ultimate collection of data from the samples under test. A collection of system functions 483002 provides typical utilities in support of use of the individual-operation instrument such as, but not limited to, logging off, viewing help information, viewing user guide, viewing legal notices and/or documents, changing software configuration, changing user password, and/or other utilities. The collection of system functions 483002 may be provided as a separate MUI module and/or a series of software protocols that operate alongside the other discussed MUI modules. As discussed above, the individual-operation instrument app 483000 may employ a MUI supplied by a methodical user interface control system 1102 for interface purposes. The operation module 483001 and the system functions 483002 may all employ a MUI for user interface purposes. A user will log into an individual-operation instrument app 483000 through system functions 483002 using services provided by cloud platform 483003. If authentication of a user by a login service on cloud platform 483003 returns that a user has more than one account and/or team, a user will be required to select the default account and/or team, but if a user does not belong to more than one account and/or team, the service on the cloud platform 483003 would auto-assign a user to the sole account and team for that user. On completing login, the user lands at start of the operation module 483001 and begins using the individual-operation instrument app 483000 as needed. In the alternative, the software modules in an individual-operation instrument app 483000 can support other experiments in addition to or in place of the assay experiments described herein.
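The account/team rule applied at login here, and in the other apps described herein, can be sketched as follows; the shape of the login service response and the selection callable are assumptions.

```python
# Illustrative sketch of the login rule: auto-assign when a user has exactly
# one account and team; otherwise require selection of a default.
def resolve_team(accounts: list[tuple[str, str]], choose):
    """accounts is a list of (account, team) pairs returned by the login
    service; choose is a callable that presents a selection to the user."""
    if len(accounts) == 1:
        return accounts[0]   # sole account and team: auto-assigned
    return choose(accounts)  # user selects the default account and/or team
```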
FIG. 49A illustrates an embodiment of a user experience flow through an operation module in the individual-operation instrument app at 493100 running on an instrument's computer, with each step through a user interface numbered sequentially 1 through 'n' to represent the stepwise flow from begin (1) to end ('n') for a user, as depicted in the individual-operation instrument app at 493100 being labelled "1." The experience flow depicted in FIG. 49A may be implemented or facilitated by a MUI as discussed herein. At 493101, a user logs into the individual-operation instrument app 483000. After the login process the user interface transitions to start at 493102 and the user is presented with a first menu of items, including 1) perform the operation at 493104, 2) review recent results of previous performances of the operation at 493105, or 3) select a user interface mechanism at 493103. The user interface mechanism 493103 presents one or more options including, but not limited to, module-specific functions, modules to select, and/or system functions, being either a horizontal menu and/or toolbar, a vertical menu and/or toolbar, a scroll-wheel menu and/or toolbar, a dropdown menu and/or toolbar, a keyboard function, a voice-activated command, and/or any other like user interface mechanism to choose an option. On selection by a user to run a process at 493104, the MUI transitions to 493106 to prepare the instrument to execute the process. The MUI presents a progress indicator to keep the user apprised of progress and ultimate completion. The software may further provide a mechanism to perform the operation in a continuous mode, or repeat the operation on a new plate, if a user chooses to stack or batch plates up for processing. On completion of the operation over one or more plates, the data collected from the operation may be uploaded through services provided on the cloud platform to the user's team for review via the cloud, as well as storing performance data gathered from the operation of the instrument for monitoring and support by the provider of the instrument; the user interface would then automatically transition to review at 493105, presenting instrument-specific results at 493107 of the one or more operations that just completed. As an alternative to performing the operation at 493104, the user could choose review at 493105 to cause the MUI to transition to present instrument-specific results at 493108, where the user is presented a maximum set of recent results that could be the last 'n' plates (e.g., 25) processed, the last 'n' days (e.g., 30), or any other desired configuration for presenting a chronologically recent set of results provided by the instrument. In an alternative to performing the operation at 493104 or reviewing recent results at 493105, the user could choose one or more functions at 493103, including configuring the operation of the instrument for ultimate use. A user may perform the operation at 493104 time and time again, then review the results at 493105 to determine if the instrument performed as expected. FIG. 49B illustrates an embodiment of the flow of results review in an operation module 403120 specifically for a plate reader as an individual-operation instrument. The plates menu at 493121 is a collection of one or more plates in the order of operation execution, and on selection of a plate at 493121 the MUI transitions to present options at 493122, at 493123, and at 493124. At 493122, a user is presented a specific plate in the experiment with a heat map representation of signal for all data locations in each well of the plate. A user may choose a particular data location to view across all wells of the plate, narrowing down the data to just that one data location; in addition, a user may select a particular well to see the specific signal value for a sample in the selected well while being able to change the high and/or low signal range for the plate to alter the intensity of the heat map across all samples visible on the plate. At 493123, a user is presented a well-by-well table view of the data presenting, but not limited to, sample identifier, data location, and signal. At 493124, a user is optionally presented a table view of flags denoting abnormal events that may have occurred during processing of a plate, potentially bringing the data's quality into question; this view is only available to a user if there was at least one flag generated for a plate. Although this embodiment describes plate-reader operations and/or applications, the methods described herein can be applied in the alternative to the review of other experiments and tests.
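The adjustable high/low signal range that alters the heat map intensity, as described above, amounts to clamping and normalizing each signal. The sketch below is illustrative; the mapping of intensity to an actual color scheme is left as an assumption.

```python
# Illustrative heat map normalization: signals are clamped to the
# user-adjustable [low, high] range and mapped to an intensity in [0, 1].
def heat_intensity(signal: float, low: float, high: float) -> float:
    if high <= low:
        raise ValueError("high must exceed low")
    clamped = min(max(signal, low), high)
    return (clamped - low) / (high - low)

# Narrowing the range (raising low or lowering high) increases the contrast
# of the heat map across all samples visible on the plate.
```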
FIG. 50 illustrates an embodiment of software modules in a workflow-aid instrument app at 503200 forming the user interface experience for the use of a workflow-aid instrument, with each module using services provided by cloud platform at 503203 to create, read, update, and/or delete any and all data relevant to each module's processing and potentially commanding and controlling physical hardware integrated with the workflow-aid instrument, as well as any other services needed for each module's processing, wherein the collect and prepare module at 503201 would be the active module by default when the workflow-aid instrument app at 503200 starts. The workflow-aid instrument app 503200 may employ or be implemented along with a MUI to provide user interfaces for the collect and prepare module 503201 and the system functions 503202. At 503201 is a collect and prepare module providing the interface for gathering constituent components stored in potentially different climate-controlled or room-temperature environments to be used in processing one or more assays in a chosen experiment, for example, but not limited to, kits, antibody sets, bulk solutions, plastic-ware such as tips and microtiter plates, and/or any other component required to be used in processing one or more assays in a chosen experiment; and for preparing constituent components requiring pre-processing prior to being used in the processing of one or more assays defined for an experiment, for example, rehydrating lyophilized reagents, thawing frozen reagents, pretreating samples, and/or any other step required to prepare constituent components to be used in processing one or more assays in a chosen experiment. At 503202 is a collection of system functions providing typical utilities in support of use of the workflow-aid instrument such as, but not limited to, logging off, viewing help information, viewing user guide, viewing legal notices and/or documents, changing software configuration, changing user password, and/or other utilities. A user will log into a workflow-aid instrument app at 503200 through system functions at 503202 using services provided by cloud platform at 503203. If authentication of a user by a login service on cloud platform at 503203 returns that a user has more than one account and/or team, a user will be required to select the default account and/or team, but if a user does not belong to more than one account and/or team, the service on the cloud platform at 503203 would auto-assign a user to the sole account and team for that user.
On completing login, the user lands at start of the collect and prepare module and begins using the workflow-aid instrument app as they require. FIG. 51 illustrates an embodiment of a user experience flow through a collect and prepare module in the workflow-aid instrument app at 513300 running on an instrument's computer, with each step through a user interface numbered sequentially 1 through 'n' to represent the stepwise flow from begin (1) to end ('n') for a user, as depicted in the workflow-aid instrument app at 513300 being labelled "1." as the first step. The experience flow of FIG. 51 may be implemented via a MUI as discussed herein. At 513301, a user logs into the workflow-aid instrument app. After the login process the user interface transitions to start at 513302, since the collect and prepare module is envisioned in this embodiment to be the default first module after a user logs in, where on login the user has four options: either 1) select an experiment ready to begin collect and prepare at 513304, 2) select an in-progress experiment to continue collect and prepare at 513305, 3) select an experiment that was previously collected and prepared at 513306, or 4) select a user interface mechanism at 513303 presenting one or more options including, but not limited to, module-specific functions, modules to select, and/or system functions, being either a horizontal menu and/or toolbar, a vertical menu and/or toolbar, a dropdown menu and/or toolbar, a keyboard function, a voice-activated command, and/or any other like user interface mechanism to choose an option. On selection by a user to begin an experiment ready to be collected and prepared at 513304, the user interface presents the set of experiments ready to be processed by calling a cloud service returning the available experiments and presenting the set of returned experiments; on user selection of a particular experiment, the user interface transitions to collect at 513307. On transition by the MUI to 513307, the user is presented options to collect under various temperature storage modalities as required by the assay methods chosen for the experiment, in this embodiment, but not limited to, −20 C at 513308, −80 C at 513309, 4 C at 513310, and room temperature at 513311. The collect menu at 513307 is an example of a walk-through type execution menu, as described herein. Under each temperature zone the user will be presented a collection of assay methods, each with one or more assay components to be collected from that temperature zone, as returned by a call to a cloud service for the experiment being collected. The collection could be presented as a linear list to lead the user through the collection one item at a time, requiring the user to check off each item as items are collected, or a user could jump to the end of collection in a temperature zone by choosing to select a check-all control because they do not need to be led through collect. As a user works through the list of items to collect, they could be presented, to the right of the list, a photo or graphic representation of the item to be collected, with a breakdown of its one or more constituent components if an item has one or more constituent components. To aid quick selection of an item, the user could scan a barcode on the item that will automatically detect the item being collected and check it off in the list of items to be collected, regardless of the item's position in the list. On checking off an item in the list, a cloud service is called to store this information and the list automatically transitions to the next item to collect.
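The barcode-driven check-off just described can be sketched as a small stateful list; the item shape and the store callable, standing in for the cloud service, are assumptions.

```python
# Illustrative collect checklist: scanning an item's barcode checks it off
# regardless of its position in the list, then the list advances.
class CollectList:
    def __init__(self, items: list[dict], store):
        self.items = items   # each item: {"barcode": str, "done": bool, ...} (assumed shape)
        self.store = store   # stand-in for the cloud service persisting check-offs

    def scan(self, barcode: str) -> bool:
        for item in self.items:
            if item["barcode"] == barcode and not item["done"]:
                item["done"] = True
                self.store(item)           # persist this check-off
                return True
        return False                       # unknown or already-collected item

    def next_to_collect(self):
        # The list automatically transitions to the next unchecked item.
        return next((i for i in self.items if not i["done"]), None)
```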
It is important to note that a user could choose to jump around from one temperature zone to another as they wish; as well, a function could be provided under function selection at 513303 to re-arrange the order of the temperature zones if a user wants a different order. A user may also be provided a function under function selection at 513303 (e.g., an advanced context menu) to print out the steps of collect if they prefer to have a paper copy, as well as a function under function selection at 513303 to export the steps of collect to some third-party software application. Once all items have been collected in a particular temperature zone, a cloud service is called to update the experiment with its collection being completed and the user interface transitions to the next temperature zone, continuing the process of collection until such time as the last item in the last temperature zone has been collected, transitioning the user interface to prepare at 513312. The prepare menu at 513312 is an example of a walkthrough type execution menu. On transition to prepare at 513312, the user is presented, in this embodiment, an aggregate list of the steps to be performed across all assay methods chosen for the experiment, as returned by a call to a cloud service to retrieve the chosen assay methods for the experiment, with the list ordered by the first step to last step defined for an assay method; assay methods sharing a common type of step would, in this embodiment, provide a sub-step selection for each common step type such that a user could perform the step for each assay method, checking it off for each assay method, or the user could check it off once for the step, covering all associated assay methods. An alternative to the sub-step approach, but not intended to be limiting, would be a one-level list with one step for each step and assay method pairing. Regardless of how the steps are presented to a user, the one or more actions to be taken for the one active step to be performed by a user would, in this embodiment, but not intended to be limiting, be presented to the right of the list of steps, where the actions would be presented to a user as a video, one or more graphical representations, and/or a text description, with the intention that this information helps a user properly perform the one or more actions of the step. As a user completes the actions of a step, whether for one assay method or many assay methods, they would check off the step, causing a call to a cloud service to store the completed state for the step for all assay methods associated with that step. Once all steps for all assay methods have been completed, denoted by checking off the step, prepare will be complete, with the user asked via a modal dialog to confirm completion, where positive confirmation of completion causes a call to a cloud service to update the state of the experiment to indicate it has been prepared, returning the user interface to start at 513302 with the experiment now ready to be processed, and negative confirmation of completion returns the user to the last performed step in prepare at 513312. A supplemental function available in any stage of collect and prepare under function selection at 513303 is the ability to display, export to third-party software, and/or print the one or more sample input plates associated with an experiment.
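The aggregation of common step types across assay methods, described above, might be sketched as a grouping operation; the data shapes and the store callable are assumptions for illustration.

```python
# Illustrative aggregation: steps from all chosen assay methods are grouped
# by step type so one check-off can cover every method sharing that step.
from collections import OrderedDict

def aggregate_steps(methods: dict[str, list[str]]) -> "OrderedDict[str, list[str]]":
    """methods maps assay method name -> ordered list of step types; returns
    step type -> list of methods sharing it, in first-to-last step order."""
    grouped: "OrderedDict[str, list[str]]" = OrderedDict()
    for method, steps in methods.items():
        for step in steps:
            grouped.setdefault(step, []).append(method)
    return grouped

def check_off(state: dict, step: str, methods: list[str], store) -> None:
    for m in methods:        # one check-off covers all associated methods
        state[(m, step)] = True
    store(step, methods)     # stand-in for the cloud service call
```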
FIG. 52 illustrates an embodiment of the computing flow of software automatic update for bioanalytical user computers at 513408 in the analytical computing system at 513402. The flow is represented in a "swim lane" diagram depicting independent computing systems operating concurrently to each other, being the computing system provider business system at 513401, the cloud platform at 513403 with its software release services at 513406, and the bioanalytical user computers at 513408 with their bioanalytical user update service at 513423, with the processing swim lane for the computing system provider business system at 513401 depicted above the dotted line at 513412, the processing swim lane for the software release services depicted between the dotted lines at 513412 and 513419, and the processing swim lane for the bioanalytical user update service at 513423 depicted below the dotted line at 513419. The processing of the computing system provider business system at 513400 is depicted as out of scope for the analytical computing system at 513402 with the dotted-line outline of the analytical computing system provider environment at 513400, but in this embodiment software updates originate there at 513409 when a new release of software is produced for deployment, with one or more files associated with the new release bundled at 513410 and pushed to the cloud platform through file upload services inherent to the cloud platform, then transitioning at 513411 to call a web service on the cloud platform at 513403 to request an auto-update of the software on various bioanalytical user computers at 513408. The processing of software release services at 513406 has two concurrent services: one service to create a software update record at 513413, provided for an outside agent to notify the analytical computing system at 513401 that an auto-software update is requested, which in this embodiment occurs at 513411, and a second service for bioanalytical user computers to check for availability of an auto-software update at 513416. The service at 513413 receives a request to create the software update record, and confirms at 513414 that the request is a valid service request from an appropriately credentialed requester; if not valid, the request is rejected and not processed, but if proper, a new auto-software update is created for the version of software at 513415. The service at 513416 receives a request to check if there is an active auto-software update, and confirms at 513417 that the request is a valid service request from an appropriately credentialed requester; if not valid, the request is rejected and not processed, but if valid, the download link to the software update is returned to the requester. The processing of the bioanalytical user update service at 513423 is a periodically executed service requesting availability of updates at 513420 via a web service call at 513416 and, on receipt of a response, checking the response at 513424 to either repeat the service request if no update is available or process the software update if available at 513421 by downloading the software update via the download link provided by the web service call at 513416; on completion of the download, the software install is executed at 513422, and after completion of the install the bioanalytical user computer software is updated.
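The periodically executed update service described above reduces to a poll-download-install loop. The endpoint, the plain-text response format, and the installer invocation in the sketch below are placeholders, not the actual software release services.

```python
# Hypothetical sketch of the auto-update client: periodically ask the
# release service for an active update, then download and install it.
import time
import urllib.request

def check_for_update(service_url: str) -> str | None:
    """Return a download link if an auto-software update is active, else
    None. The endpoint and plain-text response are assumptions."""
    with urllib.request.urlopen(service_url) as resp:
        link = resp.read().decode().strip()
    return link or None

def update_loop(service_url: str, install, period_s: float = 3600.0) -> None:
    while True:
        link = check_for_update(service_url)
        if link:
            path, _ = urllib.request.urlretrieve(link)   # download the bundle
            install(path)                                # execute the software install
            break
        time.sleep(period_s)                             # repeat if no update is available
```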
FIG. 53 illustrates an embodiment of the computing flow of software automatic update for bioanalytical instrument computers in the analytical computing system at 513502. The term "bioanalytical instrument" is used in this context to represent any and all of the before-mentioned coordinated-operation instrument, individual-operation instrument, and/or workflow-aid instrument, generalized in FIG. 53 for simplicity of description since they operate the same in this regard. The flow is represented in a "swim lane" diagram depicting independent computing systems operating concurrently to each other, being the computing system provider business system at 513501, the cloud platform at 513503 with its software release services at 513506, and the bioanalytical instrument computers at 513508 with their instrument update service at 513523, with the processing swim lane for the computing system provider business system at 513501 depicted above the dotted line at 513512, the processing swim lane for the software release services depicted between the dotted lines at 513512 and 513519, and the processing swim lane for the instrument update service at 513523 depicted below the dotted line at 513519. The processing of the computing system provider business system at 513500 is depicted as out of scope for the analytical computing system at 513502 with the dotted-line outline of the analytical computing system provider environment at 513500, but in this embodiment software updates originate there at 513509 when a new release of software is produced for deployment, with one or more files associated with the new release bundled at 513510 and pushed to the cloud platform through file upload services inherent to the cloud platform, then transitioning at 513511 to call a web service on the cloud platform at 513503 to request an auto-update of the software on various bioanalytical instrument computers at 513508. The processing of software release services at 513506 has two concurrent services: one service to create a software update record at 513513, provided for an outside agent to notify the analytical computing system at 513501 that an auto-software update is requested, which in this embodiment occurs at 513511, and a second service for bioanalytical instrument computers to check for availability of an auto-software update at 513516. The service at 513513 receives a request to create the software update record, and confirms at 513514 that the request is a valid service request from an appropriately credentialed requester; if not valid, the request is rejected and not processed, but if proper, a new auto-software update is created for the version of software at 513515. The service at 513516 receives a request to check if there is an active auto-software update, and confirms at 513517 that the request is a valid service request from an appropriately credentialed requester; if not valid, the request is rejected and not processed, but if valid, the download link to the software update is returned to the requester. The processing of the instrument update service at 513523 is a periodically executed service requesting availability of updates at 513520 via a web service call at 513516 and, on receipt of a response, checking the response at 513524 to either repeat the service request if no update is available or process the software update if available at 513521 by downloading the software update via the download link provided by the web service call at 513516; on completion of the download, the software install is executed at 513522, and after completion of the install the bioanalytical instrument computer software is updated. The methods, techniques, and systems are described herein particularly with respect to instrumentation and bioinstrumentation. The methods, techniques, and systems, however, are not limited to such applications. MUIs as provided herein may be applied to any activity or process that may be structured according to a hierarchical process flow.
MUIs as provided herein may be applied to processes in a variety of additional fields, including, for example, home and interior design, furniture assembly, cooking and meal design, travel planning, business planning, graphic design (e.g., business cards, invitations, crafts such as quilting, knitting, and sewing, web pages, etc.), financial planning, taxes, wills, video game design, video editing, media navigation (e.g., Netflix®, tv channel navigation), car purchase, home purchase, beer brewing, manufacturing, etc. FIG. 54 illustrates an embodiment of an example of a non-bioanalytical use of the disclosed architecture for software modules in a chef app at 513600 forming the primary user interface experience for creating a meal for one or more people, with each module using services provided by cloud platform at 513606, assuming relevant chef-related services are available on cloud platform at 513606, to create, read, update, and/or delete any and all data relevant to each module's processing, as well as any other services needed for each module's processing, wherein the meal planner module at 513601 would be the active module by default when the chef user app at 513600 starts, guiding a chef through the planning of the meal they wish to create. The chef app 513600 may be implemented in conjunction with a MUI to provide a user interface. At 513602 is an ingredient collection module providing the interface for guiding a chef and/or their designee through the purchasing and/or retrieval of all ingredients and/or anything required for the execution of the meal, either used in meal preparation and/or used during eating the meal. At 513603 is a meal preparation module used to guide a chef and/or their designee through the steps of cooking the meal. At 513604 is a meal execution module used to guide a chef and/or their designee in setting the stage and mood for the meal, as well as the timing of various courses of the meal. At 513605 is a collection of system functions providing typical utilities in support of use of the system such as, but not limited to, logging off, viewing help information, viewing user guide, viewing legal notices and/or documents, changing software configuration, changing user password, and/or other utilities. A user will log into the chef user app at 513600 through system functions at 513605 using services provided by cloud platform at 513606. On completing login, the user lands at start of the meal planner module at 513601 and begins using the chef user app at 513600 as they need. Only the meal planner module at 513601 will be further disclosed for the purpose of illustration of an example of a non-bioanalytical use. FIG. 55 illustrates an embodiment of a user experience flow through a meal planner module beginning with the chef app at 513700 running on a user's computer, with each step through a user interface numbered sequentially 1 through 'n' to represent the stepwise flow from begin (1) to end ('n') for a user, as depicted in the chef app at 513700 being labelled "1." as the first step. The user experience flow of FIG. 55 may be implemented via a MUI as described herein.
After the login process the user interface transitions to start at 513701, since the meal planner module is envisioned in this embodiment to be the default first module after a user logs in, with two options: to design a meal at 513703, or at 513704 to select a user interface mechanism presenting one or more options including, but not limited to, module-specific functions, modules to select, and/or system functions, being either a horizontal menu and/or toolbar, a vertical menu and/or toolbar, a dropdown menu and/or toolbar, a keyboard function, a voice-activated command, and/or any other like user interface mechanism to choose an option. In this embodiment, at 513703 a user is presented one option to design a meal plan, with a user choosing to do so transitioning the user interface to a choice of creating a new meal plan from scratch at 513705 or creating a meal plan from a pre-existing meal plan at 513706, where choosing new at 513705 transitions the user interface directly to meal plan design setup at 513712, and choosing to base the new meal on a pre-existing meal plan at 513706 transitions the user interface to a choice of recent meal plans at 513707 or available meal plans at 513708, with the default being recent at 513707 but auto-transitioning to available at 513708 if recent at 513707 is empty as returned from a service request made via the cloud platform. At 513707, on selection of recent, a user is presented a configurable amount, for example twenty-five, of the most recently used meal plans at 513709 as returned from a service request made via the cloud platform. Alternatively, selection of available at 513708 presents to a user all meals at 513710 as returned from a service request made via the cloud platform, with the meal plans organized by the names of the users creating meal plans and the name of the meal plans each user created, enabling a user to browse the various meal plans to select the meal plan of choice. On selection of a meal plan at either 513709 or 513710, the user interface transitions to meal plan design setup at 513712. At 513712, a user is presented a system-provided default name that a user may accept or edit, but a plan must have a name; a number of diners for the meal, with a default of 2 and a range of 1 to 10000; and an optional monetary budget, with a default of no limit and accepting any monetary value; wherein, on either accepting the defaults or editing one or more of the options, a user would then select cuisine at 513713, causing a service call on the cloud platform to store the decisions made by the user for the options before transitioning the user interface.
At 513713, a user is presented a two-part selection user interface mechanism showing on the left a cuisine origin and on the right cuisine options for the chosen origin; for example, the left selection would be, but is not limited to, American, European, Mexican, South American, Middle Eastern, Asian, or Other, wherein the right selection for American would be, but is not limited to, Southern, New England, Amish, or Southwestern; for European would be, but is not limited to, French, Italian, German, Greek, Spanish, Portuguese, British Isles, or Scandinavian; for Mexican would be, but is not limited to, Traditional or Tex-Mex; for South American would be, but is not limited to, Peruvian or Brazilian; for Middle Eastern would be, but is not limited to, Turkish, Lebanese, or Persian; for Asian would be, but is not limited to, Chinese, Japanese, Thai, Vietnamese, Korean, or Indian; and Other would be, but is not limited to, Caribbean or Name Your Own for a user to provide their own cuisine style. On user selection of a cuisine option, the selection is saved via a service to the cloud platform and the user interface transitions to dietary restrictions at 513714. At 513714, a user is presented potential dietary restrictions in a scrollable outline format, where at each level of the outline a user is enabled to click something as a restriction that, on clicking, will check-on the chosen restriction plus everything embedded underneath it in the outline, wherein the outline would be, but is not limited to:
Vegetarian;
Vegan;
Allergic (Tree nuts, Write in option);
Health (Lactose, Gluten, Write in option);
Religious;
Kosher (pork, shellfish), No dairy, meat okay, No meat, dairy okay, Pareve (no meat or dairy);
Halal, Write in option; and/or
Taste, Write in option.
After a user completes checking all restrictions they know of, they would choose compose meal at 513715, causing their selections to be stored via a web service to the cloud platform and therefore eliminating certain ingredients from meal preparation based on their selections before transitioning the user interface.
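The check-on-with-descendants behavior of the restrictions outline above might be sketched as a recursive walk; the tree below reproduces a fragment of the outline and is otherwise an assumption, not the app's actual data model.

```python
# Illustrative sketch: checking a restriction also checks everything
# embedded underneath it in the outline.
RESTRICTIONS = {
    "Religious": {
        "Kosher": {"No dairy, meat okay": {}, "No meat, dairy okay": {},
                   "Pareve (no meat or dairy)": {}},
        "Halal": {},
    },
    "Allergic": {"Tree nuts": {}},
}

def check_on(tree: dict, name: str, checked: set) -> bool:
    """Find `name` anywhere in the outline and check it plus its subtree."""
    for label, children in tree.items():
        if label == name:
            _check_subtree(label, children, checked)
            return True
        if check_on(children, name, checked):
            return True
    return False

def _check_subtree(label: str, children: dict, checked: set) -> None:
    checked.add(label)
    for child, grandchildren in children.items():
        _check_subtree(child, grandchildren, checked)
```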
At 513715, a user is presented three options for planning the meal: defining the courses at 513716, selecting side dishes at 513717, and/or selecting drinks at 513718. On selecting courses at 513716, a user is presented the three system-provided defaults in expected ultimate meal execution order of appetizer course at 513719, main course at 513720, and dessert course at 513721, but the user could alter the course selection and/or order by choosing function selection 513704 to see two function options: an add/remove course function at 513722 to either add one or more courses to the meal and/or remove one or more courses from the meal, as well as a function to rearrange the courses of the meal for when executing the preparation and/or execution of the meal. At 513719, a user is presented a left-right selection control with the left side being types of dishes to be provided, being, but not limited to, Soup, Salad, Finger Foods, Dips/Sauces, and Other for one or more user-provided choices, where, when a preset option is clicked, the user interface presents a collection of options set by the cuisine and dietary restrictions defined previously by the user, with the options retrieved from web service(s) provided on the cloud platform, from which the user may select one or more options. On completion of option selections and/or definitions at 513719, a user would select main course at 513720, with the software automatically storing the user's selections via web service(s) on the cloud platform before transitioning the user interface. At 513720, a user is presented a left-right selection control with the left side being types of dishes to be provided, being, but not limited to, Poultry, Pork, Beef, Fish, Vegetarian, and Other for one or more user-provided choices, where, when a preset option is clicked, the user interface presents a collection of options set by the cuisine and dietary restrictions defined previously by the user, with the options retrieved from web service(s) provided on the cloud platform, from which the user may select one or more options. On completion of option selections and/or definitions at 513720, a user would select dessert course at 513721, with the software automatically storing the user's selections via web service(s) on the cloud platform before transitioning the user interface. At 513721, a user is presented a left-right selection control with the left side being types of desserts to be provided, being, but not limited to, Cake, Cookies, Pie, Ice Cream, Pastry, and Other for one or more user-provided choices, where, when a preset option is clicked, the user interface presents a collection of options set by the cuisine and dietary restrictions defined previously by the user, with the options retrieved from web service(s) provided on the cloud platform, from which the user may select one or more options. On completion of option selections and/or definitions at 513721, a user would select the next course if one is available until the last course is defined, then select side dishes at 513717, with the software automatically storing the user's selections via web service(s) on the cloud platform before transitioning the user interface. At 513717, a user is presented a left-right selection control with the left side being types of side dishes to be provided, being, but not limited to, Bread, Starch, Vegetable, Dips/Sauces, and Other for one or more user-provided choices, where, when a preset option is clicked, the user interface presents a collection of options set by the cuisine and dietary restrictions defined previously by the user, with the options retrieved from web service(s) provided on the cloud platform, from which the user may select one or more options. On completion of option selections and/or definitions at 513717, a user would select drinks at 513718, with the software automatically storing the user's selections via web service(s) on the cloud platform before transitioning the user interface. At 513718, a user is presented a left-right selection control with the left side being types of drinks to be provided, with sub-options for alcohol at 513724 and non-alcohol at 513725, with options for alcohol being, but not limited to, Wine, Beer, Liquor, and Other for one or more user-provided choices, and options for non-alcohol being Soda, Juice, Water, and Other for one or more user-provided choices, where, when a preset option is clicked, the user interface presents a collection of options set by the cuisine and dietary restrictions defined previously by the user, with the options retrieved from web service(s) provided on the cloud platform, from which the user may select one or more options, as well as optionally associate each specific drink with a specific course if the user desires that specificity. On completion of option selections and/or definitions at 513718, a user would select confirm at 513726 because their meal is now defined, with the software automatically storing the user's selections via web service(s) on the cloud platform before transitioning the user interface.
At 513726, a user is presented a summary view of the meal they have planned to confirm they made all the right choices, enabling a user to navigate to a previous step to alter any decisions they made in the process of planning the meal; if all their decisions are in line with their expectations, they would select confirm, storing their meal plan via web service(s) to the cloud platform for future use, and on completion of the invocation of the web service(s) the user interface would transition back to start at 513702. At 513702, a user could choose a function selection user interface mechanism at 513704, seeing they are in a meal planner module and having three other modules available to them, namely an ingredient collection module, a meal preparation module, and a meal execution module, helping them follow through on their new meal plan using one or more of these other modules. In another example, in a cooking and meal design MUI module, a process flow may be structured as follows. A first menu may permit a user to select a type of meal, dinner, lunch, breakfast, formal, informal, etc., that is being prepared. Selection of a type of meal may lead to a next menu permitting a user to select a number of dishes to be prepared. A next menu may permit a user to select a cuisine style. A next menu may permit a user to select dish options, filtered by the cuisine style, for each dish. After completion of menu design, a recipe module may be selected. The recipe module may use a MUI as discussed herein to permit a user to quickly navigate between recipes of dishes selected for a menu. For example, a first menu may include each dish. A second menu may include options for ingredient lists and recipe steps. In this manner, a user might access the first menu in the historical portion to quickly jump between recipes while navigating ingredients and steps of each individual recipe in the active portion of the MUI. In another example, a cooking and meal design MUI module may operate as follows. A first menu may permit a user to select and define a plurality of meal parameters. For example, in a first menu, a user may select from menu items including cuisine selection, dietary restrictions, number of diners, meal design, wine pairing, and meal preparation. Selecting the cuisine selection option permits a user access to a second menu of cuisine options, including, e.g., American, European, Mexican, Caribbean, South American, Middle Eastern, and Asian. Selecting from among the second menu options may lead to a third menu; for example, the American selection may lead to Southern, Southwestern, Texan, New England, Amish, Californian, etc., the European selection may lead to French, Italian, German, Greek, Spanish, Portuguese, British Isles, Scandinavian, etc., the South American selection may lead to Peruvian, Brazilian, etc., and the Asian selection may lead to Chinese, Japanese, Vietnamese, Thai, Korean, Indian, etc. In embodiments, a user may select more than one cuisine option from the second menus, which may provide a filter for other menus that a user may interact with. Selecting the dietary restrictions option from the first menu permits a user to select from a second menu including options such as vegetarian, vegan, pescatarian, ovolacto vegetarian, allergies, health, religious, and taste. The vegetarian, vegan, pescatarian, and ovolacto vegetarian menus may be execution menus permitting the user to apply these restrictions as filters to meal choices and/or ingredients.
Selecting the dietary restrictions option from the first menu permits a user to select from a second menu including options such as vegetarian, vegan, pescatarian, ovolacto vegetarian, allergies, health, religious, and taste. The vegetarian, vegan, pescatarian, and ovolacto vegetarian menus may be execution menus permitting the user to apply these restrictions as filters to meal choices and/or ingredients. The allergies and health menus lead to execution menus permitting a user to filter out ingredients that should be restricted for allergy or health reasons, such as tree nuts and shellfish (allergy) or lactose and gluten (health). Both menus may further permit a user to write in additional options. The religious menu permits a user to access menus that filter based on religious dietary laws, such as Kosher or Halal restrictions. The Kosher menu selection offers a user an execution menu including meat (filtering out all dairy options), pareve (filtering out all dairy and meat options), dairy (filtering out all meat options), and Passover (filtering out all options including Chametz and/or Kitniyot). Executing any Kosher menu further serves to eliminate all non-Kosher ingredients, such as pork, shellfish, etc. The Halal menu selection offers a user an execution menu permitting the filtering of menu ingredients according to Halal restrictions. The taste menu is an execution menu permitting a user to filter out ingredient selections by diner taste.

The number of diners menu is an execution menu permitting a user to select a number of diners. Selecting the number of diners allows the module to modify recipe amounts to match the number of people eating. In embodiments, the number of diners menu may also allow a user to select options such as light, medium, and heavy as a further modifier on the amount of food to be prepared.

The meal design or meal composition selection offers a second menu of appetizer (which in turn offers a third menu of soup, salad, and other), main course (which in turn offers a third menu of poultry, pork, beef, fish, and vegetarian), side dishes (which in turn offers a third menu of bread, starch (rice, potatoes, other, etc.), and vegetable), and dessert. As the user drills down through these menus, they may reach additional menus providing menu items that correspond to the filters selected in the other second menus (cuisine, dietary restrictions, etc.). In embodiments, dishes may be eliminated according to the filters. In a further embodiment, dishes may include substitutes or eliminations based on the filters, e.g., oil for butter in a no-dairy dish. Each menu item leads to one or more recipe selection execution menus permitting the user to add the recipe to the final meal for preparation. The choices described here are by way of example only, and the meal composition submenu may include additional and/or different menus and hierarchy.

A wine pairing selection of the first menu offers a user a second menu permitting selection of wines to match the selected dishes, e.g., by appetizer, main course, dessert, etc. After selecting a course to which a user will pair wines, execution menus may be provided for a user to actively select wines by varietal, style, label, and other features according to the selected dishes for that course.
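Functionally, the dietary restriction menus act as composable filters over candidate dishes, and the number-of-diners menu rescales recipe quantities. The sketch below illustrates both behaviors under stated assumptions; the Dish shape, the tag vocabulary, and the example restriction predicates are hypothetical stand-ins for real recipe metadata.

    // Dishes carry ingredient-class tags; restrictions are predicates over tags.
    // The tag vocabulary and dish shape are illustrative assumptions.
    interface Dish {
      name: string;
      tags: Set<string>;               // e.g., "meat", "dairy", "shellfish", "gluten"
      baseServings: number;
      amounts: Record<string, number>; // ingredient -> quantity for baseServings
    }

    type Restriction = (dish: Dish) => boolean; // true => dish is allowed

    // Example predicates modeled on the menus described above.
    const kosherMeat: Restriction = (d) =>
      !d.tags.has("dairy") && !d.tags.has("pork") && !d.tags.has("shellfish");
    const noTreeNuts: Restriction = (d) => !d.tags.has("treeNuts");
    const glutenFree: Restriction = (d) => !d.tags.has("gluten");

    // Apply all active restrictions as an AND-composed filter.
    function allowedDishes(dishes: Dish[], active: Restriction[]): Dish[] {
      return dishes.filter((d) => active.every((r) => r(d)));
    }

    // Scale ingredient amounts to the selected number of diners.
    function scaleForDiners(dish: Dish, diners: number): Record<string, number> {
      const factor = diners / dish.baseServings;
      return Object.fromEntries(
        Object.entries(dish.amounts).map(([ing, qty]) => [ing, qty * factor]),
      );
    }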
The meal preparation selection of the first menu offers the user a combined walkthrough of meal preparation using the MUI menuing system. The walkthrough provides a series of second menu items including ingredient requirements, make-ahead dishes, and day-of dishes. The ingredient requirements selection provides a shopping list permitting a user to eliminate items they already have. The make-ahead dish menu and day-of dish menu are similar, and both allow the user to select between integrated preparation and/or parallel preparation. The make-ahead dish menu offers a user access to preparation steps for all dishes and ingredients that may be prepared ahead of time, while the day-of dish menu provides a user access to preparation steps that are preferably not performed ahead of time. The parallel preparation menu permits a user access to each selected recipe in its entirety. The integrated preparation menu permits a user access to the recipes in an integrated format. In the integrated preparation menu, a submenu is provided based on timing, e.g., 4 hours prior to mealtime, 3 hours prior to mealtime, 2 hours prior to mealtime, etc. For example, accessing the "4 hours prior" submenu provides the user with a list of tasks to complete 4 hours prior to the meal. The "3 hours prior" submenu provides tasks for completion 3 hours prior to the meal, and so on. In this way, the multiple tasks from each recipe can be combined and integrated in the most efficient manner possible, for example, where the same ingredient needs chopping for more than one dish. In another embodiment, an integrated preparation submenu may be provided with menu items such as start main course, start appetizer, start side dish, complete main course, complete appetizer, complete side dish, etc. Accordingly, a chef's MUI module may permit a user to design a meal and then may provide a full integration of the preparation steps.
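The integrated preparation submenu described above amounts to merging time-stamped steps from several recipes into a single timeline keyed by hours before mealtime. A minimal sketch of that merge follows; the RecipeStep shape and the hours-before-mealtime field are assumptions made for illustration.

    // Merge per-recipe steps into one timeline keyed by hours before mealtime.
    // The RecipeStep shape is an illustrative assumption.
    interface RecipeStep {
      recipe: string;
      task: string;            // e.g., "chop onions"
      hoursBeforeMeal: number; // e.g., 4 => shown in the "4 hours prior" submenu
    }

    function integratePreparation(recipes: RecipeStep[][]): Map<number, string[]> {
      const timeline = new Map<number, string[]>();
      for (const step of recipes.flat()) {
        const slot = timeline.get(step.hoursBeforeMeal) ?? [];
        slot.push(`${step.task} (${step.recipe})`);
        timeline.set(step.hoursBeforeMeal, slot);
      }
      // A further pass could merge identical tasks within a slot so shared work
      // (e.g., chopping the same ingredient for two dishes) is done once.
      // Return slots sorted from earliest (largest offset) to latest.
      return new Map([...timeline.entries()].sort((a, b) => b[0] - a[0]));
    }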
In yet another example, a MUI as described herein may be implemented as an operating system or as an overlay to an operating system (OS). The MUI, as described herein, makes user interaction with any system or workflow more efficient by limiting the exposure of items that are infrequently used. This design principle and the hierarchical menu flow may be applied, for example, to any aspect of an OS. For example, file tree navigation in Windows, Linux, Apple OS, etc., may be organized as a hierarchical menu tree as described herein, with lesser-used options being limited from exposure and moved to a different menu, e.g., an advanced context menu. As discussed herein, lesser-used options may refer to options not meeting the threshold percentage of usage frequency, e.g., 70%, 80%, 90%, or any other figure discussed herein. A user, therefore, would only see the file tree options that they interact with most frequently unless they take steps to view other options. Apps on a mobile device operating system, iOS, Android, etc., may be arranged in the same way. Instead of being presented with multiple screens full of app icons, as is conventional, the system may categorize a user's app icons and present the apps to a user according to a hierarchical menu tree with limited exposure of lesser-used apps.

In another example, the exposure-limiting design principles discussed in accordance with the MUI may be applied to push notifications. In the hierarchical menu trees, menu items that do not meet a threshold percentage of user interaction have their exposure limited. Similarly, push notifications to a user, e.g., alerts and notifications related to text messages, e-mails, app alerts, etc., may be limited based on user interaction. For example, the split of 90%/10% or 80%/20%, or any other split discussed herein, may be applied, where the types of push notifications, as characterized, e.g., by sender, subject, recipients, etc., that a user interacts with most frequently are prioritized and other notifications are moved to an auxiliary menu. The push notifications that a user interacts with or accesses 90% of the time or 80% of the time, or any other suitable number, may receive prioritized treatment, including vibration alerts, ring alerts, and immediate display. Other push notifications may be collected in a menu accessed only through direct user action.
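Routing a notification under this scheme reduces to comparing its type's historical interaction rate against the configured threshold. The following sketch assumes a simple per-type interaction log; the shapes and names are illustrative only.

    // Route notification types by historical interaction rate: types the user
    // engages with at or above the threshold get immediate alerts; the rest are
    // collected into an auxiliary menu. The log shape is an assumption.
    interface InteractionStats {
      delivered: number;
      interacted: number;
    }

    function routeNotification(
      type: string,                      // e.g., "sms:alice", "app:news"
      stats: Map<string, InteractionStats>,
      threshold = 0.8,                   // e.g., the 80%/20% split
    ): "immediate" | "auxiliaryMenu" {
      const s = stats.get(type);
      if (!s || s.delivered === 0) return "immediate"; // unseen types start exposed
      return s.interacted / s.delivered >= threshold ? "immediate" : "auxiliaryMenu";
    }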
In another example, a MUI as described herein may be employed for home design or remodeling. A first menu may permit a user to select a type of room, kitchen, bath, etc., to be remodeled or designed. A second menu may permit a user to select from multiple styles, modern, contemporary, traditional, etc., while a third menu may permit a user to begin selecting individual aspects of the room to be remodeled, i.e., in the case of a kitchen, cabinets, flooring, countertops, etc. In an example such as this, the MUI may interact and/or interface with more conventional design software to build and maintain a model of a user's design as they make selections and develop a design.

In yet another example, a MUI as described herein may be applied to media content navigation for selecting television programs or movies to watch. For example, a first menu may permit a user to select a category, e.g., genre, release date, popularity, starring actors/actresses, etc., by which they will browse media content. In some embodiments, each successive menu may provide similar options to the first menu, permitting the user to successively filter each next menu. In a MUI applied to media content, exclusion tables may be used, for example, as a content filter to ensure that certain viewers do not have access to inappropriate content. Limitation lists, as discussed herein, may be used to filter and alter menus according to a user's typical viewing habits.

Further embodiments include:

Embodiment 1 is a method of interactively navigating a user through a path of menu choices on a user interface in leading the user through a computer application, the method performed automatically by at least one hardware processor, the method comprising: displaying a current menu of choices on a first portion of a user interface display; allowing a user to select a menu item from the current menu of choices displayed on the first portion of the user interface display and to drill down through levels of menu choices based on selecting a menu item from a prior level of menu choices; displaying on a second portion of the user interface display, past selected and past unselected menu items of the drilled-down levels, wherein the past unselected menu items are displayed as selectable options; and allowing the user to jump to a different path of menu choices by allowing the user to select a past unselected menu item from a previously navigated menu level displayed on the second portion of the user interface display, wherein the first portion and the second portion are viewable concurrently on the user interface display.

Embodiment 2 is the method of embodiment 1, wherein responsive to detecting a selection of a menu item from the current menu of choices, relocating the current menu of choices to the second portion of the user interface display, and displaying on the first portion of the user interface display a next level of menu choices based on the selection of the menu item, wherein the relocated current menu of choices is shown on the second portion of the user interface display as the past selected and past unselected menu items of a past menu level, and the next level of menu choices is shown on the first portion as the current menu of choices.

Embodiment 3 is the method of embodiment 1 or 2, wherein the current menu of choices is displayed in a first visual orientation on the first portion of the user interface display and the drilled-down levels of menu choices comprising the past selected and past unselected menu items are displayed on the second portion of the user interface display in a second visual orientation.

Embodiment 4 is the method of embodiments 1 to 3, wherein the second visual orientation is substantially orthogonal to the first visual orientation.

Embodiment 5 is the method of embodiments 1 to 4, wherein the first visual orientation is a vertical orientation and the second visual orientation is a horizontal orientation.

Embodiment 6 is the method of embodiment 4 or 5, wherein the first visual orientation is a horizontal orientation and the second visual orientation is a vertical orientation.

Embodiment 7 is the method of embodiments 4 to 6, wherein the drilled-down levels of menu choices relocated to the second portion are displayed as a stack of menu levels.

Embodiment 8 is the method of embodiments 3 to 7, wherein the current menu of choices is displayed as a graphical rotating wheel that rotates the choices in a direction of the first visual orientation.

Embodiment 9 is the method of embodiments 3 to 8, wherein a drilled-down level in the drilled-down levels of menu choices is displayed as a graphical rotating wheel that rotates choices of the drilled-down level in a direction of the second visual orientation.

Embodiment 10 is the method of embodiments 1 to 9, wherein the past selected menu items in the drilled-down levels displayed on the second portion of the user interface display are displayed highlighted relative to the past unselected menu items of the drilled-down levels displayed on the second portion of the user interface display.

Embodiment 11 is the method of embodiments 1 to 10, wherein the first portion and the second portion are displayed as a series of concentric circles.

Embodiment 12 is the method of embodiments 1 to 11, wherein the first portion and the second portion are displayed in a graphical decision tree configuration.

Embodiment 13 is the method of embodiments 1 to 12, wherein the first portion and the second portion are shifted to substantially center the first portion displaying the current menu of choices on the user interface display while fitting both the first portion and the second portion on the user interface display.
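A compact model of the navigation behavior recited in embodiments 1 and 2 can help fix ideas: drilling down relocates the current menu, with its selection marked, onto a historical stack where past unselected items remain selectable, and picking one of those items jumps the user onto a different path. The sketch below is a minimal, hypothetical rendering of that state machine, not the claimed implementation.

    // Sketch of the navigation state of embodiments 1 and 2: the active (first)
    // portion shows the current menu; each drill-down pushes that menu onto the
    // historical (second) portion, where past unselected items stay selectable.
    interface MenuLevel {
      items: string[];
      selected?: string; // the past-selected item, highlighted in the UI
    }

    class MuiNavigator {
      active: MenuLevel;            // first portion: current menu of choices
      historical: MenuLevel[] = []; // second portion: stack of past levels

      constructor(rootItems: string[], private nextMenu: (item: string) => string[]) {
        this.active = { items: rootItems };
      }

      // Drill down: relocate the current menu to the historical portion and
      // show the next level of choices in the active portion (embodiment 2).
      select(item: string): void {
        this.historical.push({ ...this.active, selected: item });
        this.active = { items: this.nextMenu(item) };
      }

      // Jump paths via a past unselected item on a previously navigated level
      // (embodiment 1); deeper history along the old path is discarded.
      jump(levelIndex: number, item: string): void {
        const level = this.historical[levelIndex];
        if (!level || !level.items.includes(item)) return;
        this.historical.length = levelIndex; // truncate that level and deeper
        this.active = { items: level.items };
        this.select(item);                   // re-drill along the new path
      }
    }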
Embodiment 14 is a user interface system comprising: at least one hardware processor; and a memory device operatively coupled to the hardware processor, the hardware processor operable to retrieve from the memory device a current menu of choices and to display the current menu of choices on a first portion of a user interface display, the hardware processor further operable to allow a user to select a menu item from the current menu of choices displayed on the first portion of the user interface display and to drill down through levels of menu choices based on selecting a menu item from a prior level of menu choices, the hardware processor displaying on a second portion of the user interface display, past selected and past unselected menu items of the drilled-down levels, wherein the past unselected menu items are displayed as selectable options, the hardware processor further operable to allow the user to jump to a different path of menu choices by allowing the user to select a past unselected menu item from a previously navigated menu level displayed on the second portion of the user interface display, wherein the first portion and the second portion are viewable concurrently on the user interface display.

Embodiment 15 is the system of embodiment 14, wherein responsive to detecting a selection of a menu item from the current menu of choices, the hardware processor relocates the current menu of choices to the second portion of the user interface display, and displays on the first portion of the user interface display a next level of menu choices based on the selection of the menu item, wherein the relocated current menu of choices is shown on the second portion of the user interface display as the past selected and past unselected menu items of a past menu level, and the next level of menu choices is shown on the first portion as the current menu of choices.

Embodiment 16 is the system of embodiment 15, wherein the current menu of choices is displayed in a first visual orientation on the first portion of the user interface display and the drilled-down levels of menu choices comprising the past selected and past unselected menu items are displayed on the second portion of the user interface display in a second visual orientation.

Embodiment 17 is the system of embodiment 16, wherein the second visual orientation is substantially orthogonal to the first visual orientation.

Embodiment 18 is the system of embodiment 17, wherein the first visual orientation is a vertical orientation and the second visual orientation is a horizontal orientation.

Embodiment 19 is the system of embodiment 17 or 18, wherein the first visual orientation is a horizontal orientation and the second visual orientation is a vertical orientation.

Embodiment 20 is the system of embodiments 17 to 19, wherein the drilled-down levels of menu choices relocated to the second portion are displayed as a stack of menu levels.

Embodiment 21 is the system of embodiments 16 to 20, wherein the current menu of choices is displayed as a graphical rotating wheel that rotates the choices in a direction of the first visual orientation.

Embodiment 22 is the system of embodiments 16 to 21, wherein a drilled-down level in the drilled-down levels of menu choices is displayed as a graphical rotating wheel that rotates choices of the drilled-down level in a direction of the second visual orientation.
Embodiment 23 is the system of embodiments 14 to 22, wherein the past selected menu items in the drilled-down levels displayed on the second portion of the user interface display are displayed highlighted relative to the past unselected menu items of the drilled-down levels displayed on the second portion of the user interface display.

Embodiment 24 is the system of embodiments 14 to 23, wherein the first portion and the second portion are displayed as a series of concentric circles.

Embodiment 25 is the system of embodiments 14 to 24, wherein the first portion and the second portion are displayed in a graphical decision tree configuration.

Embodiment 26 is the system of embodiments 14 to 25, wherein the first portion and the second portion are shifted to substantially center the first portion displaying the current menu of choices on the user interface display while fitting both the first portion and the second portion on the user interface display.

Embodiment 27 is a computer readable storage medium storing a program of instructions executable by a machine to perform a method of interactively navigating a user through a path of menu choices on a user interface in leading the user through a computer application, the method comprising: displaying a current menu of choices on a first portion of a user interface display; allowing a user to select a menu item from the current menu of choices displayed on the first portion of the user interface display and to drill down through levels of menu choices based on selecting a menu item from a prior level of menu choices; displaying on a second portion of the user interface display, past selected and past unselected menu items of the drilled-down levels, wherein the past unselected menu items are displayed as selectable options; and allowing the user to jump to a different path of menu choices by allowing the user to select a past unselected menu item from a previously navigated menu level displayed on the second portion of the user interface display, wherein the first portion and the second portion are viewable concurrently on the user interface display.

Embodiment 28 is the computer readable storage medium of embodiment 27, wherein responsive to detecting a selection of a menu item from the current menu of choices, relocating the current menu of choices to the second portion of the user interface display, and displaying on the first portion of the user interface display a next level of menu choices based on the selection of the menu item, wherein the relocated current menu of choices is shown on the second portion of the user interface display as the past selected and past unselected menu items of a past menu level, and the next level of menu choices is shown on the first portion as the current menu of choices.

Embodiment 29 is the computer readable storage medium of embodiment 28, wherein the current menu of choices is displayed in a first visual orientation on the first portion of the user interface display and the drilled-down levels of menu choices comprising the past selected and past unselected menu items are displayed on the second portion of the user interface display in a second visual orientation.

Embodiment 30 is the computer readable storage medium of embodiment 29, wherein the second visual orientation is substantially orthogonal to the first visual orientation.
Embodiment 31 is the computer readable storage medium of embodiment 30, wherein the first visual orientation is a vertical orientation and the second visual orientation is a horizontal orientation.

Embodiment 32 is the computer readable storage medium of embodiments 30 to 31, wherein the first visual orientation is a horizontal orientation and the second visual orientation is a vertical orientation.

Embodiment 33 is the computer readable storage medium of embodiments 30 to 32, wherein the drilled-down levels of menu choices relocated to the second portion are displayed as a stack of menu levels.

Embodiment 34 is the computer readable storage medium of embodiments 29 to 33, wherein the current menu of choices is displayed as a graphical rotating wheel that rotates the choices in a direction of the first visual orientation.

Embodiment 35 is the computer readable storage medium of embodiments 29 to 34, wherein a drilled-down level in the drilled-down levels of menu choices is displayed as a graphical rotating wheel that rotates choices of the drilled-down level in a direction of the second visual orientation.

Embodiment 36 is the computer readable storage medium of embodiments 27 to 35, wherein the past selected menu items in the drilled-down levels displayed on the second portion of the user interface display are displayed highlighted relative to the past unselected menu items of the drilled-down levels displayed on the second portion of the user interface display.

Embodiment 37 is the computer readable storage medium of embodiments 27 to 36, wherein the first portion and the second portion are displayed as a series of concentric circles.

Embodiment 38 is the computer readable storage medium of embodiments 27 to 37, wherein the first portion and the second portion are displayed in a graphical decision tree configuration.

Embodiment 39 is the computer readable storage medium of embodiments 27 to 38, wherein the first portion and the second portion are displayed in parallel in a same visual orientation.

Embodiment 40 is the computer readable storage medium of embodiments 27 to 39, wherein the first portion and the second portion are shifted to substantially center the first portion displaying the current menu of choices on the user interface display while fitting both the first portion and the second portion on the user interface display.

Embodiment 41 is the computer readable storage medium of embodiments 27 to 40, wherein the user interface navigates the user through an assay system while presenting a minimal number of menu choices the user needs to make for navigating through the assay system.
Embodiment 42 is a method of interactively navigating a user through a path of menu choices on a user interface in leading the user through a computer application, the method performed automatically by at least one hardware processor, the method comprising: displaying a current menu of choices on a first portion of a user interface display; allowing a user to select a menu item from the current menu of choices displayed on the first portion of the user interface display and to drill down through levels of menu choices based on selecting a menu item from a prior level of menu choices; and displaying on a second portion of the user interface display, past selected and past unselected menu items of the drilled-down levels, wherein the past unselected menu items are displayed as selectable options; wherein the first portion and the second portion are viewable concurrently on the user interface display; and wherein the graphical user interface maximizes black space by making a background of the user interface display black to thereby save storage and improve speed of presentation.

Embodiment 43 is the method of embodiment 42, wherein responsive to detecting a selection of a menu item from the current menu of choices, relocating the current menu of choices to the second portion of the user interface display, and displaying on the first portion of the user interface display a next level of menu choices based on the selection of the menu item, wherein the relocated current menu of choices is shown on the second portion of the user interface display as the past selected and past unselected menu items of a past menu level, and the next level of menu choices is shown on the first portion as the current menu of choices.

Embodiment 44 is the method of embodiment 42 or 43, wherein the current menu of choices is displayed in a first visual orientation on the first portion of the user interface display and the drilled-down levels of menu choices comprising the past selected and past unselected menu items are displayed on the second portion of the user interface display in a second visual orientation.

Embodiment 45 is the method of embodiment 43 or 44, wherein the second visual orientation is substantially orthogonal to the first visual orientation.

Embodiment 46 is the method of embodiment 44 or 45, wherein the first visual orientation is a vertical orientation and the second visual orientation is a horizontal orientation.

Embodiment 47 is the method of embodiments 44 to 46, wherein the first visual orientation is a horizontal orientation and the second visual orientation is a vertical orientation.

Embodiment 48 is the method of embodiments 44 to 47, wherein the drilled-down levels of menu choices relocated to the second portion are displayed as a stack of menu levels.

Embodiment 49 is the method of embodiments 43 to 48, wherein the current menu of choices is displayed as a graphical rotating wheel that rotates the choices in a direction of the first visual orientation.

Embodiment 50 is the method of embodiments 42 to 49, wherein a drilled-down level in the drilled-down levels of menu choices is displayed as a graphical rotating wheel that rotates choices of the drilled-down level in a direction of the second visual orientation.
Embodiment 51 is the method of embodiments 42 to 50, wherein the past selected menu items in the drilled-down levels displayed on the second portion of the user interface display are displayed highlighted relative to the past unselected menu items of the drilled-down levels displayed on the second portion of the user interface display.

Embodiment 52 is the method of embodiments 42 to 51, wherein the first portion and the second portion are displayed as a series of concentric circles.

Embodiment 53 is the method of embodiments 42 to 52, wherein the first portion and the second portion are displayed in a graphical decision tree configuration.

Embodiment 54 is the method of embodiments 42 to 53, wherein the first portion and the second portion are shifted to substantially center the first portion displaying the current menu of choices on the user interface display while fitting both the first portion and the second portion on the user interface display.

Embodiment 55 is the method of embodiments 42 to 54, further comprising allowing the user to jump to a different path of menu choices by allowing the user to select a past unselected menu item from a previously navigated menu level displayed on the second portion of the user interface display.

Embodiment 56 is a method of interactively navigating a user through a path of menu choices on a user interface in leading the user through a computer application, the method performed automatically by at least one hardware processor, the method comprising: displaying a current menu of choices on a first portion of a user interface display; allowing a user to select a menu item from the current menu of choices displayed on the first portion of the user interface display and to drill down through levels of menu choices based on selecting a menu item from a prior level of menu choices; and displaying on a second portion of the user interface display, past selected and past unselected menu items of the drilled-down levels, wherein the past unselected menu items are displayed as selectable options, wherein the first portion and the second portion are viewable concurrently on the user interface display, wherein at least the first portion includes a search function box, a sub-first area, and a sub-second area, wherein the first portion is scrollable as a whole and shows the current menu of choices, and wherein responsive to detecting an entry of a search term in the search function box, the first portion is bifurcated into the sub-first area and the sub-second area, which are scrollable individually.

Embodiment 57 is the method of embodiment 56, wherein the sub-second area displays a subset of the current menu choices that matches the search term.

Embodiment 58 is the method of embodiment 56 or 57, wherein the sub-second area displays the current menu of choices.

Embodiment 59 is the method of embodiments 56 to 58, wherein the sub-first area displays a recently chosen menu item.

Embodiment 60 is the method of embodiments 56 to 59, wherein the first portion is rendered on the user interface display as a graphical wheel.
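By way of illustration of embodiments 56 to 60, entering a search term bifurcates the first portion into two individually scrollable sub-areas, with the sub-areas populated per embodiments 57 and 59. A minimal sketch of that state change follows, under assumed data shapes.

    // Sketch of embodiment 56: entering a search term bifurcates the first
    // portion into two individually scrollable sub-areas. Shapes are assumed.
    interface FirstPortionState {
      bifurcated: boolean;
      subFirst: string[];  // e.g., recently chosen menu items (embodiment 59)
      subSecond: string[]; // menu choices matching the search term (embodiment 57)
    }

    function onSearchTermChange(
      allChoices: string[],
      recentChoices: string[],
      term: string,
    ): FirstPortionState {
      if (term.trim() === "") {
        // No term: a single scrollable area showing the full current menu.
        return { bifurcated: false, subFirst: [], subSecond: allChoices };
      }
      const needle = term.toLowerCase();
      return {
        bifurcated: true,
        subFirst: recentChoices,
        subSecond: allChoices.filter((c) => c.toLowerCase().includes(needle)),
      };
    }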
Embodiment 61 is a method of interactively navigating a user through a path of menu choices on a user interface in leading the user through a computer application, the method performed automatically by at least one hardware processor, the method comprising: displaying a current menu of choices on a first portion of a user interface display; allowing a user to select a menu item from the current menu of choices displayed on the first portion of the user interface display and to drill down through levels of menu choices based on selecting a menu item from a prior level of menu choices; and displaying on a second portion of the user interface display, past selected and past unselected menu items of the drilled-down levels, wherein the past unselected menu items are displayed as selectable options, wherein the first portion and the second portion are viewable concurrently on the user interface display, wherein the current menu of choices is displayed as a graphical rotating wheel that rotates the choices, wherein the graphical rotating wheel is rotatable from a first menu item in the current menu of choices to a last menu item in the current menu of choices, and the graphical rotating wheel is further rotatable from the last menu item to the first menu item, and the first menu item and the last menu item do not connect in the graphical rotating wheel's rotation.

Embodiment 62 is the method of embodiment 61, wherein the graphical rotating wheel is a vertical wheel that rotates vertically.

Embodiment 63 is the method of embodiment 61 or 62, wherein the graphical rotating wheel is a horizontal wheel that rotates horizontally.
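The wheel recited in embodiments 61 to 63 rotates between its first and last menu items without connecting them, i.e., the selection index clamps at the ends rather than wrapping around. A minimal sketch, with hypothetical item labels:

    // Embodiment 61: a rotating wheel whose ends do not connect; rotation past
    // either end clamps instead of wrapping around.
    class MenuWheel {
      private index = 0;
      constructor(private items: string[]) {}

      rotate(steps: number): string {
        // Clamp to [0, items.length - 1]; first and last items never connect.
        this.index = Math.min(this.items.length - 1, Math.max(0, this.index + steps));
        return this.items[this.index];
      }
    }

    const wheel = new MenuWheel(["Assay", "Layout", "Protocol", "Confirm"]);
    wheel.rotate(10); // stops at "Confirm" rather than wrapping to "Assay"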
Embodiment 64 is a method executed by at least one hardware processor for navigating a path of hierarchical menu levels outputted to a graphical user interface (GUI), the method comprising: providing a first command for a first menu of user-selectable choices to be displayed on a first portion of a user interface (UI) display; and providing a second command for a second menu of user-selectable choices to be displayed on the first portion of the UI display in response to a user's selection, wherein the second portion includes one or more of a past-selected and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the first portion.

Embodiment 65 is the method of embodiment 64, further comprising an advanced context menu, wherein the advanced context menu is adapted to be displayed in response to a selection of an advanced selector.

Embodiment 66 is the method of embodiment 64 or 65, wherein the advanced context menu includes one or more of the following user-selectable choices: Export; Admin Console; Admin Audit Trail; Terms of Use; Privacy Policy; and Log out.

Embodiment 67 is the method of embodiments 64 to 66, wherein in response to a selection of the Admin Audit Trail choice, the first portion is adapted to display audit information that includes one or more of the following: a timestamp, a user name and/or email address, module, record ID, type, message, category, code, and IP address of a user.

Embodiment 68 is the method of embodiment 67, wherein the audit information includes information relating to one or more of all users, accounts, and teams.

Embodiment 69 is the method of embodiment 67 or 68, wherein the audit information includes information relating to a particular team selected among previously added teams.

Embodiment 70 is the method of embodiment 69, further comprising providing a command to display an audit menu that includes the previously added teams in response to a selection of the Admin Audit Trail choice.

Embodiment 71 is the method of any of embodiments 64 to 70, wherein the audit information is adapted to be exported to a user in response to a selection of the Export choice.

Embodiment 72 is the method of embodiments 64 to 71, wherein the second portion is adapted to display an audit menu for a user to select all audit information to be available for display, or to narrow the audit information to be displayed by one or more teams and/or instruments.

Embodiment 73 is the method of embodiments 66 to 72, wherein in response to a selection of the Admin Audit Trail choice, the advanced context menu includes one or more of the following choices: Export and Copy to Clipboard.

Embodiment 74 is the method of embodiments 64 to 73, further comprising an advanced context menu, wherein the advanced context menu is adapted to be displayed in response to a selection of an advanced selector.

Embodiment 75 is the method of embodiments 64 to 74, wherein the first menu of user-selectable choices includes one or more of the following choices: Read; and Review Recent Results.

Embodiment 76 is the method of embodiment 75, wherein in response to a selection of the Read command, the second portion of the UI display is adapted to output a Play Button.

Embodiment 77 is the method of embodiment 76, wherein in response to a selection of the Play Button, a plate reader is adapted to begin reading one or more plates.

Embodiment 78 is the method of embodiment 77, wherein in further response to the selection of the Play Button, the UI display is adapted to display a timer, wherein the timer is adapted to indicate one or more of: the total amount of time to load the one or more plates; the total amount of time to read the one or more plates; the total amount of time to unload the one or more plates; the time remaining to complete the loading of the one or more plates; the time remaining to complete the reading of the one or more plates; and the time remaining to complete the unloading of the one or more plates.

Embodiment 79 is the method of embodiments 74 to 78, wherein the advanced context menu includes one or more of the following user-selectable choices: Eject Plate; Partial Plate; Set Plate Run; Stop Instrument; Lock UI; and View Plate Information.

Embodiment 80 is the method of embodiment 79, wherein in response to a selection of the Partial Plate choice, the first portion is adapted to receive bar code information as it relates to a plate selected among the one or more plates.

Embodiment 81 is the method of embodiment 80, wherein the first portion is further adapted to display a user-selectable option that, when selected, verifies the authenticity of the received bar code information.

Embodiment 82 is the method of embodiments 79 to 81, wherein further in response to the Partial Plate choice, the advanced context menu includes one or more of the following choices: Save Partial Plate; and Cancel Partial Plate.

Embodiment 83 is the method of embodiments 79 to 82, wherein in response to a selection of the Set Plate Run choice, the first portion is adapted to receive a name for a plate run associated with a plate.
Embodiment 84 is the method of embodiments 79 to 83, wherein in response to a selection of the Stop Instrument choice, the first portion is adapted to display a confirmation choice before issuing a stop instrument command.

Embodiment 85 is the method of embodiment 84, wherein the confirmation choice, when selected, is adapted to either abort the current run of a plate by issuing the stop instrument command or continue the run by disregarding the stop instrument command.

Embodiment 86 is the method of embodiments 79 to 85, wherein in response to a selection of the Lock UI choice, the UI display is adapted to be locked from user selections until receiving the current user's password.

Embodiment 87 is the method of embodiments 79 to 86, wherein in response to a selection of the View Plate Information choice, the first portion is adapted to display Plate Information including one or more of the following: plate run name, plate barcode, long side customer barcode, short side customer barcode, plate type, operator, and read time.

Embodiment 88 is the method of embodiments 79 to 87, wherein in response to a selection of the Eject Plate choice, a plate is ejected from a plate-reading instrument.

Embodiment 89 is the method of embodiments 64 to 88, wherein the first menu of user-selectable choices includes one or more of the following choices: Define Roles and Permissions; Add/Remove Members; Assign Members to Roles; and Authorize and Inform Members.

Embodiment 90 is the method of embodiment 89, wherein the first portion includes two or more subsections of user-selectable choices from the second menu.

Embodiment 91 is the method of embodiment 90, wherein in response to a selection of Define Roles and Permissions, a first subsection of user-selectable choices includes one or more of the following choices: Lab Manager; Designer; Associate; Operator (Base); and Maintenance Tech (Base).

Embodiment 92 is the method of embodiment 91, wherein in response to a selection of one or more of: (a) Lab Manager; (b) Designer; or (c) Associate, a second subsection of user-selectable choices includes one or more of the following choices: Analysis Method; Assay Method; Experiment; Assay Engine; Audit Trail; Maintenance; Reader; and System.

Embodiment 93 is the method of embodiment 91 or 92, wherein in response to a selection of one or more of (a) Operator (Base) or (b) Maintenance Tech (Base), a second subsection of user-selectable choices includes one or more of the following choices: Assay Engine; Audit Trail; Maintenance; Reader; and System.

Embodiment 94 is the method of embodiments 91 to 93, wherein in response to a selection of Analysis Method, a third subsection of user-selectable choices includes a Run Analysis Method choice.

Embodiment 95 is the method of embodiments 91 to 94, wherein in response to a selection of Assay Method, a third subsection of user-selectable choices includes a Run Assay Method choice.

Embodiment 96 is the method of embodiments 91 to 95, wherein in response to a selection of Experiment, a third subsection of user-selectable choices includes one or more of the following choices: Create Experiment; Edit Layout; Exclude/Include Data Points; Export Data Table; Export Sample Result Table; and View Experiment.
Embodiment 97 is the method of one of embodiments 91 to 96, wherein in response to a selection of Assay Engine Method, a third subsection of user-selectable choices includes one or more of the following choices: Export Data Table; Modify Instrument Settings; Override Mesoscale Diagnostics Kit Lot Assignment; Retry Inventory Validation; Run Instrument; and Show ECL for Unverified Run.

Embodiment 98 is the method of one of embodiments 91 to 97, wherein in response to a selection of Audit Trail, a third subsection of user-selectable choices includes a View Audit Trail App choice.

Embodiment 99 is the method of one of embodiments 91 to 98, wherein in response to a selection of Maintenance, a third subsection of user-selectable choices includes one or more of the following choices: Run Maintenance; Run Maintenance Method; and View Maintenance Records.

Embodiment 100 is the method of one of embodiments 91 to 99, wherein in response to a selection of Reader, a third subsection of user-selectable choices includes one or more of the following choices: Manage Database; Modify Instrument Settings; and Run Instrument.

Embodiment 101 is the method of one of embodiments 91 to 100, wherein in response to a selection of System, a third subsection of user-selectable choices includes one or more of the following choices: Modify System Settings; and Unlock App Locked by Any User.

Embodiment 102 is the method of embodiments 90 to 101, wherein in response to a selection of the Add/Remove Members choice, the second menu of user-selectable choices includes previously added usernames and/or email addresses, further wherein one or more of the usernames and/or email addresses are adapted to be deleted from the second menu of user-selectable choices in response to a user's deletion input.

Embodiment 103 is the method of embodiment 102, wherein in further response to the selection of the Add/Remove Members choice, the second menu is adapted to receive new usernames and/or email addresses to add among the previously added usernames and/or email addresses.

Embodiment 104 is the method of embodiment 103, wherein in response to the user's deletion input, a confirmation screen is adapted to be displayed on the first portion of the user interface display, further wherein the confirmation screen is adapted to display user-selectable choices that include Cancel and OK.

Embodiment 105 is the method of embodiments 90 to 104, wherein in response to a selection of Assign Members to Roles, a first subsection of user-selectable choices includes previously added usernames and/or email addresses and the second subsection includes one or more of the following role-assignment choices: Lab Manager; Designer; Associate; Operator (Base); and Maintenance Tech (Base).

Embodiment 106 is the method of embodiment 105, wherein selections from the first and second subsections are adapted to create an association among one or more of the previously added usernames and/or email addresses with one or more of the role-assignment choices.

Embodiment 107 is the method of embodiment 106, wherein the association among the one or more of the previously added usernames and/or email addresses with one or more of the role-assignment choices is adapted to be displayed on the UI in response to a selection of the Authorize and Inform Members choice.

Embodiment 108 is the method of embodiment 107, wherein in response to a selection of the Authorize and Inform Members choice, the first portion is adapted to display an Authorize and Email Install Instructions choice.
Embodiment 109 is the method of embodiment 108, wherein role-assignment information and/or instructions are adapted to be transmitted to the previously added email addresses in response to a selection of the Authorize and Email Install Instructions choice.

Embodiment 110 is the method of embodiments 64 to 109, wherein the first menu of user-selectable choices includes one or more of the following choices: Prepare Teams; Define Administrators; and Manage Teams.

Embodiment 111 is the method of embodiment 110, wherein in response to a selection of the Prepare Teams choice, the second menu of user-selectable choices includes one or more previously added teams.

Embodiment 112 is the method of embodiments 110 to 111, wherein in response to a selection of the Prepare Teams choice, the second menu of user-selectable choices is adapted to receive one or more new teams to add among the one or more previously added teams.

Embodiment 113 is the method of embodiments 110 to 112, wherein in response to a selection of the Prepare Teams choice, the second portion is adapted to display one or more of a number of available teams defined, a number of available seats assigned, a total number of available teams, and a total number of available seats.

Embodiment 114 is the method of embodiments 110 to 113, wherein in response to a selection of the Define Administrators choice, the second menu of user-selectable choices includes previously added usernames and/or email addresses, further wherein one or more of the usernames and/or email addresses are adapted to be deleted from the second menu of user-selectable choices in response to a user's deletion input.

Embodiment 115 is the method of embodiments 110 to 114, wherein further in response to a selection of the Define Administrators choice, the second menu of user-selectable choices is adapted to receive new usernames and/or email addresses to add among the previously added usernames and/or email addresses.

Embodiment 116 is the method of embodiments 111 to 115, wherein the second portion is adapted to display the one or more previously added teams as a menu of choices.

Embodiment 117 is the method of embodiment 116, wherein the previously added usernames and/or email addresses are associated with a particular team among the one or more previously added teams from the menu of choices.

Embodiment 118 is the method of embodiment 117, wherein in response to a selection of the Define Administrators choice, the first portion is adapted to display an Authorize and Email choice.

Embodiment 119 is the method of embodiment 118, wherein authorizations and/or team-assignment information are adapted to be transmitted to the previously added email addresses in response to a selection of the Authorize and Email Install Instructions choice.

Embodiment 120 is the method of embodiments 111 to 119, wherein the first menu of user-selectable choices identified in embodiment 89 is displayed in response to a selection of the Manage Teams choice.

Embodiment 121 is the method of embodiments 64 to 120, further comprising an advanced context menu, wherein the advanced context menu is adapted to be displayed in response to a selection of an advanced selector.

Embodiment 122 is the method of embodiment 121, wherein in response to a selection of the Admin Audit Trail choice, the advanced context menu includes one or more of the following choices: Resend Install Instructions; Import; Change Team Name; Change Account Name; and Change Password Expiration.
Embodiment 123 is the method of embodiment 64, further comprising providing, by the at least one processor, a relocation command for the first menu to be relocated to the second portion of the UI display in response to the user's selection, wherein the second menu comprises a subsequent level of menu items for the user to select.

Embodiment 124 is the method of any of embodiments 64 or 123, wherein the subsequent level of menu items comprises one or more user-selectable menu items at least one hierarchical menu level lower than the first menu.

Embodiment 125 is the method of any of embodiments 64 or 123 to 124, wherein the subsequent level of menu items comprises one or more user-selectable menu items at more than one hierarchical menu level lower than the first menu.

Embodiment 126 is the method of any of embodiments 64 or 123 to 125, wherein the past-unselected menu item includes a previously navigated hierarchical menu level.

Embodiment 127 is the method of any of embodiments 64 or 123 to 126, wherein the first portion comprises an active portion, which includes one or more current, user-selectable menu items, and the second portion comprises a historical portion, which includes menu items previously made available to a user.

Embodiment 128 is the method of any of embodiments 64 or 123 to 127, wherein the first portion and the second portion are adapted to be displayed in a first visual orientation and a second visual orientation, respectively.

Embodiment 129 is the method of any of embodiments 64 or 123 to 128, wherein the second visual orientation is substantially orthogonal to the first visual orientation.

Embodiment 130 is the method of any of embodiments 64 or 123 to 129, wherein the first visual orientation is a vertical orientation and the second visual orientation is a horizontal orientation.

Embodiment 131 is the method of any of embodiments 64 or 123 to 130, wherein the first visual orientation is configured to provide one or more user-selectable menu items in one or more of a vertical, horizontal, or concentric orientation.

Embodiment 132 is the method of any of embodiments 64 or 123 to 131, wherein the second visual orientation is configured to provide user-selectable menu items in one or more of a vertical, horizontal, or concentric orientation.

Embodiment 133 is the method of any of embodiments 64 or 123 to 132, wherein a manner in which the menu items are adapted to be displayed is based on an attribute selected from one or more of: (a) being the selected menu item; (b) having a position in a list more central relative to other menu items in the list; (c) being available or unavailable to the user; (d) containing one or more characters typed by a user; and (e) being part of an advanced context menu.

Embodiment 134 is the method of any of embodiments 64 or 123 to 133, wherein the manner in which the menu items are adapted to be displayed includes one or both of: (a) emphasizing menu items that are one or more of: the selected menu item, positioned in a decision-making zone, or available to the user; and (b) deemphasizing menu items that are one or more of: not the selected menu item, positioned away from the decision-making zone, or unavailable to the user.

Embodiment 135 is the method of any of embodiments 64 or 123 to 134, wherein menu items are adapted to be emphasized by one or more of highlighting, bolding, making larger, underlining, or positioning on the UI display relative to other menu items.
Embodiment 136 is the method of any of embodiments 64 or 123 to 135, wherein menu items are adapted to be deemphasized by one or more of fading, making smaller, or positioning on the UI display relative to other menu items.

Embodiment 137 is the method of any of embodiments 64 or 123 to 136, wherein the decision-making zone is adapted to be displayed in a centrally located area.

Embodiment 138 is the method of any of embodiments 64 or 123 to 137, wherein the first and second menus are adapted to be displayed on a background, which is adapted to be displayed in a manner that contrasts with the first and second menus.

Embodiment 139 is the method of any of embodiments 64 or 123 to 138, wherein the second portion is adapted to be displayed across a smaller area than the first portion.

Embodiment 140 is the method of any of embodiments 64 or 123 to 139, wherein the first visual orientation is one or more of parallel, orthogonal, vertical, horizontal, and concentric to the second visual orientation.

Embodiment 141 is the method of any of embodiments 64 or 123 to 140, wherein each of the providing steps is performed by the processor by executing a computer application stored on a machine.

Embodiment 142 is the method of any of embodiments 64 or 123 to 141, wherein the computer application comprises an application for manipulating, designing, performing, reviewing, measuring, or analyzing an experiment.

Embodiment 143 is the method of any of embodiments 64 or 123 to 142, wherein the experiment comprises one or more assays.

Embodiment 144 is the method of any of embodiments 64 or 123 to 143, wherein the experiment comprises one or more electrochemiluminescence assays.

Embodiment 145 is the method of any of embodiments 64 or 123 to 144, further comprising providing a limiting command to limit the total number of menu items to be displayed based on at least one of the following criteria: (a) frequency with which a user has previously selected the menu item while logged into his/her account; (b) frequency with which at least two users have previously selected the menu item while logged into an account; (c) frequency with which a user has previously selected the menu item while logged into an account associated with multiple accounts; (d) frequency with which at least two users have previously selected the menu item while logged into one or more accounts associated with multiple accounts; (e) frequency with which any users have previously selected the menu item while logged into any account; and (f) frequency with which any users have previously selected the menu item while logged into any account associated with multiple accounts.

Embodiment 146 is the method of any of embodiments 64 or 123 to 145, wherein the multiple accounts of elements (c), (d), and (f) are accounts associated with a team and the users are team members of the one or more teams associated with the multiple accounts.
Embodiment 147 is the method of any of embodiments 64 or 123 to 146, further comprising providing an exclusion command to exclude menu items to be displayed based on at least one of the following criteria: (a) menu items designated as unavailable in a present module; (b) menu items designated as unavailable to a user; (c) menu items designated as unavailable to an aggregation of users; (d) menu items designated as unavailable to a particular machine storing the one or more copies of the computer application; and (e) menu items designated as unavailable to an aggregation of machines, each storing one or more copies of the computer application.

Embodiment 148 is the method of any of embodiments 64 or 123 to 147, wherein the frequency is determined over a defined time period.

Embodiment 149 is the method of any of embodiments 64 or 123 to 148, wherein the frequency is 50% or more.

Embodiment 150 is the method of any of embodiments 64 or 123 to 149, wherein the frequency is 80% or more.

Embodiment 151 is the method of any of embodiments 64 or 123 to 150, wherein the first and second menus are adapted to collectively display fewer than seven user-selectable menu items at any given point in time.

Embodiment 152 is the method of any of embodiments 64 or 123 to 151, wherein the background comprises pixels, wherein at least 75% of the pixels are monochromatic.

Embodiment 153 is the method of any of embodiments 64 or 123 to 152, wherein the background comprises pixels, wherein at least 75% of the pixels are black.

Embodiment 154 is the method of any of embodiments 64 or 123 to 153, further comprising providing a third command for a third menu of one or more user-selectable menu items to be displayed on a third portion of the UI display, wherein the third menu is adapted to be concurrently viewed with the first and second portions of the UI display.

Embodiment 155 is the method of any of embodiments 64 or 123 to 154, further comprising: providing a third command for a third menu of user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection from the second menu, wherein the one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels from the second menu is adapted to be displayed on the second portion and concurrently viewed with the first portion.

Embodiment 156 is the method of any of embodiments 64 or 123 to 155, wherein the second portion further comprises a submenu, wherein the submenu comprises one or more of a past-selected submenu item and a past-unselected submenu item selected among at least one lower hierarchical menu level of one or more of the first, second, and third menus.
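Embodiments 145 and 147 can be read together as two passes over a candidate menu: a limitation pass that keeps items whose historical selection frequency meets the threshold (e.g., the 50% or 80% figures of embodiments 149 and 150) and relegates the rest, and an exclusion pass that removes items designated unavailable. The sketch below is one hypothetical rendering of those passes; the data shapes are assumptions made for illustration.

    // Sketch of embodiments 145 and 147: a limitation pass keeps menu items
    // whose selection frequency meets the threshold; an exclusion pass removes
    // items designated unavailable. Data shapes are illustrative assumptions.
    interface MenuItemStats {
      label: string;
      timesShown: number;
      timesSelected: number;
    }

    function limitByFrequency(
      items: MenuItemStats[],
      threshold: number, // e.g., 0.5 or 0.8 per embodiments 149 and 150
    ): { exposed: string[]; advanced: string[] } {
      const exposed: string[] = [];
      const advanced: string[] = []; // relegated to the advanced context menu
      for (const it of items) {
        const freq = it.timesShown === 0 ? 0 : it.timesSelected / it.timesShown;
        (freq >= threshold ? exposed : advanced).push(it.label);
      }
      return { exposed, advanced };
    }

    // Exclusion table: items unavailable to this user, module, or machine
    // never appear at all.
    function excludeUnavailable(labels: string[], exclusionTable: Set<string>): string[] {
      return labels.filter((l) => !exclusionTable.has(l));
    }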
Embodiment 157 is a system for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the system comprising: at least one processor; a user input device; and a computer readable storage medium configured to store a computer application, wherein the at least one processor is configured to execute instructions of the computer application for providing a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display, and providing a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion.

Embodiment 158 is a non-transitory computer readable medium having computer instructions stored thereon that, when executed by a processor, cause the processor to carry out a method for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; and providing a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion.

Embodiment 159 is the method of any of embodiments 64 or 124 to 156, further comprising an advanced context menu, wherein the advanced context menu is adapted to be displayed in response to a selection of an advanced selector.

Embodiment 160 is the method of any of embodiments 1 to 35 and embodiment 159, wherein the first menu comprises one or more of Design Assay Method and Review Assay Method.

Embodiment 161 is the method of embodiment 160, wherein the second menu is a submenu of Design Assay Method and comprises one or more of Manual Assay Method and Automated Assay Method in response to a selection of Design Assay Method.

Embodiment 162 is the method of embodiment 161, wherein the third menu comprises one or more names of recent assay methods in response to a selection of Manual Assay Methods or Automated Assay Methods.

Embodiment 163 is the method of embodiment 162, wherein the third menu includes one or more user-selectable choices selected from names of recent Assay Methods.

Embodiment 164 is the method of embodiments 160 to 163, further comprising providing a fourth command for a fourth menu of user-selectable items to be displayed on the first portion of the UI display in response to a user's selection from the third menu, wherein the one or more of a past-selected and a past-unselected menu item of the hierarchical menu levels from the first, second, and third menus is adapted to be displayed on the second portion and concurrently viewed with the first portion.
Embodiment 165 is the method of embodiment 164, wherein the second portion includes one or more of Recent and Available. Embodiment 166 is the method of embodiment 164, wherein in response to a selection of a recent Assay Method, the second portion comprises one or more choices selected from Assay, Layout, Analysis Method, Protocol, and Confirm. Embodiment 167 is the method of embodiment 166, wherein the fourth menu is a submenu of Assay and comprises one or more of a first subsection and a second subsection, wherein the first subsection comprises a spot layout and list of associated assays for the selected Assay Method and the second subsection comprises a list of available assays for the selected Assay Method. Embodiment 168 is the method of embodiment 166, wherein the fourth menu is a submenu of Layout and comprises a plate layout. Embodiment 169 is the method of embodiment 166, wherein the fourth menu is a submenu of Analysis Method and comprises one or more of a first and a second subsection, wherein the first subsection comprises one or more of an assay of the selected Assay Method and its algorithm, and the second subsection comprises available algorithms when an algorithm in the first subsection is selected. Embodiment 170 is the method of embodiment 166, wherein the fourth menu is a submenu of Protocol. Embodiment 171 is the method of embodiment 166, wherein the fourth menu is a submenu of Confirm and comprises one or more of a first subsection and a second subsection, wherein the first subsection comprises one or more of a spot layout and an associated list of assays in the selected Assay Method and the second subsection comprises one or more of a unique Assay method name, a plate layout, and Confirm. Embodiment 172 is the method of embodiment 163, wherein the second portion comprises one or more of Recent and Available, and in response to a selection of Available, the third menu is a submenu of Available and comprises one or more of a first, a second, and a third subsection, wherein the first subsection comprises one or more of MSD Purchased, MSD Catalog, and user name, the second subsection comprises one or more of available types of assay method filtered by the selection in the first subsection, and the third subsection comprises one or more of available assay methods filtered by the selections in the first and second subsections. Embodiment 173 is the method of embodiment 172, wherein the second subsection comprises one or more assay method types selected from Custom, Immunogenicity, Pharmacokinetic, S-PLEX, U-PLEX, U-PLEX Development Pack, Utility, and V-PLEX. Embodiment 174 is the method of embodiment 170, wherein the second portion comprises a further submenu of Protocol, wherein the further submenu comprises one or more of Blocking, Capture, Detection, and Read Buffer. Embodiment 175 is the method of embodiment 174, wherein the third menu is a submenu of the Blocking, Capture, Detection, or Read Buffer submenu and comprises one or more user-selectable or user-nonselectable choices. Embodiment 176 is the method of embodiment 175, wherein one or more of the choices in the third menu is user-selectable, and the user-selectable choices are adapted to be editable. Embodiment 177 is the method of embodiment 174 or 175, wherein the third submenu is a submenu of the Blocking submenu and comprises one or more of Enable Blocking, Blocking Volume, and Blocking Incubation Duration.
Embodiment 178 is the method of embodiment 174 or 175, wherein the third submenu is a submenu of the Capture submenu and comprises Sample Incubation Duration. Embodiment 179 is the method of embodiment 174 or 175, wherein the third submenu is a submenu of the Detection submenu and comprises Detection Incubation Duration. Embodiment 180 is the method of embodiment 174 or 175, wherein the third submenu is a submenu of the Read Buffer submenu and comprises Read Buffer Incubation Duration. Embodiment 181 is the method of embodiment 171, wherein the unique Assay method name is adapted to be edited to a second unique Assay method name. Embodiment 182 is the method of any of embodiments 159-181, wherein in response to a selection of Review Assay Method, the second menu comprises one or more user-selectable choices selected from names of recent Assay Methods. Embodiment 183 is the method of embodiment 64, wherein the second portion comprises one or more of Recent and Available. Embodiment 184 is the method of embodiment 64, wherein in response to a selection of a recent Assay Method, the second portion comprises one or more of Assay Method and Definition, wherein the second menu is a submenu of Definition and comprises one or more of a first and a second subsection. Embodiment 185 is the method of embodiment 184, wherein the first subsection comprises one or more of an assay layout and one or more associated assays and the second subsection comprises one or more of an Assay Method name and a plate layout. Embodiment 186 is the method of embodiment 64, wherein the second menu is a submenu of Available and comprises one or more of a first, a second, and a third subsection. Embodiment 187 is the method of embodiment 186, wherein the first subsection comprises one or more of MSD Catalog and one or more user names, the second subsection comprises one or more assay method types filtered by a selection in the first subsection, and the third subsection comprises one or more available assay methods filtered by a selection in the first and second subsections. Embodiment 188 is the method of embodiment 187, wherein in response to a selection of an available assay method, the submenu comprises one or more of Assay Method and Definition, wherein the submenu for Definition comprises one or more of a first and a second subsection. Embodiment 189 is the method of embodiment 64, wherein the first menu of user-selectable choices includes one or more of the following choices: designed experiment; and view Recent run. Embodiment 190 is the method of embodiment 189, wherein in response to a selection of designed experiment, the second portion of the UI display is adapted to display one or more of the following choices: recent and available. Embodiment 191 is the method of any of embodiments 189-190, wherein in response to a selection of designed experiment, the first portion of the UI display is adapted to display one or more experiments. Embodiment 192 is the method of any of embodiments 189-191, wherein in response to a selection of available, the first portion of the UI display is adapted to display one or more of a first, a second, and a third sub-section, wherein the first sub-section includes a user name, the second sub-section includes a date, and the third sub-section includes one or more experiment names. Embodiment 193 is the method of any of embodiments 189-192, wherein in response to a selection of an experiment, the second portion includes one or more items selected from process, run, unload, and load components.
Embodiment 194 is the method of any of embodiments 189-193, wherein unload and load components are subsequent menus in response to a selection of the process item. Embodiment 195 is the method of any of embodiments 189-194, wherein the first portion of the UI display is adapted to display one or more of a first and a second sub-section, wherein the first sub-section includes one or more instructions for loading one or more components of an experiment and one or more choices to check all items, each with an associated checkbox adapted to allow selection of that item. Embodiment 196 is the method of any of embodiments 189-195, wherein the second sub-section includes a representation of the location for loading each component adapted to add a representation of and to highlight each component as an associated checkbox or the box for check all items is checked. Embodiment 197 is the method of any of embodiments 189-196, wherein in response to a selection of the check all items checkbox or a selection of all associated checkboxes, while the second sub-section includes a representation of the location for loading each component, the second portion of the UI display is adapted to output a Play Button. Embodiment 198 is the method of any of embodiments 189-197, wherein in response to a selection of the Play Button, a run function is adapted to be performed. Embodiment 199 is the method of any of embodiments 64 or 124-156, wherein the first menu comprises one or more of Design Experiment and Review Experiment. Embodiment 200 is the method of embodiment 199 wherein the second menu comprises one or more of New and From Existing Experiment in response to a selection of Design Experiment. Embodiment 201 is the method of any of embodiments 199-200 wherein the third menu comprises one or more subsections in response to a selection of New. Embodiment 202 is the method of any of embodiments 199-201, wherein the first subsection of the third menu comprises a first unique experiment name field and an experiment type field. Embodiment 203 is the method of any of embodiments 199-202, wherein the experiment name field is adapted to allow manual entry of a second unique name and the experiment type field is adapted to be edited when clicked. Embodiment 204 is the method of any of embodiments 199-203, wherein the second sub-section comprises Manual and Automation in response to a user clicking the experiment type. Embodiment 205 is the method of any of embodiments 199-204, wherein the second portion includes one or more of Design Experiment, Setup, Assay Method, Samples, and Confirm in response to a selection of a unique experiment name and experiment type. Embodiment 206 is the method of any of embodiments 199-205, wherein the second portion includes one or more of Design Experiment, Setup, Assay Method, Samples, and Confirm. Embodiment 207 is the method of any of embodiments 199-206 further comprising providing a fourth command for a fourth menu of user-selectable items to be displayed on the first portion of the UI display in response to a user's selection from the third menu, wherein the one or more of a past-selected and a past-unselected menu item of the hierarchical menu levels from the first, second, and third menus is adapted to be displayed on the second portion and concurrently viewed with the first portion. Embodiment 208 is the method of any of embodiments 199-207 wherein the second portion includes one or more of Recent and Available and both are subsequent menus of Assay Method.
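As a non-limiting sketch of the loading checklist of embodiments 195 through 198, the TypeScript below gates a Play Button on every component checkbox (or the check-all box) being checked. ComponentRow, checkAll, and playButtonVisible are hypothetical names.

    // Hypothetical checklist gating of embodiments 195-198.
    interface ComponentRow {
      name: string;
      checked: boolean;
    }

    // Selecting the check-all choice checks every associated checkbox.
    function checkAll(rows: ComponentRow[]): ComponentRow[] {
      return rows.map((r) => ({ ...r, checked: true }));
    }

    // The Play Button is output only once all components are checked.
    function playButtonVisible(rows: ComponentRow[]): boolean {
      return rows.length > 0 && rows.every((r) => r.checked);
    }

    const rows = checkAll([
      { name: 'plate', checked: false },
      { name: 'tips', checked: false },
    ]);
    console.log(playButtonVisible(rows)); // true, so a run may be started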
Embodiment 209 is the method of any of embodiments 199-208 wherein the fourth menu includes one or more Recent assay methods. Embodiment 210 is the method of any of embodiments 199-209 wherein the first portion comprises one or more of a first, a second, and a third sub-section of the fourth menu. Embodiment 211 is the method of any of embodiments 199-210 wherein the first sub-section includes one or more of MSD Purchased, MSD Catalog, and usernames. Embodiment 212 is the method of any of embodiments 199-211 wherein the second sub-section includes one or more Assay method types filtered by the highlighted item in the first sub-section. Embodiment 213 is the method of any of embodiments 199-212 wherein the third sub-section includes one or more available Assay Methods filtered by the highlighted item in the first and second sub-sections. Embodiment 214 is the method of any of embodiments 199-213 wherein the second portion further includes one or more of Manual and Import in response to a selection of an Assay Method, wherein Manual and Import are subsequent menus of Samples. Embodiment 215 is the method of any of embodiments 199-214 wherein the Manual choice is configured to allow entry of a number of samples. Embodiment 216 is the method of any of embodiments 199-215 wherein the Import choice is configured to allow entry of a document file path. Embodiment 217 is the method of any of embodiments 199-216 wherein in response to a selection of a number of samples or a document file path, the third menu is a subsequent menu of Confirm and includes one or more of Experiment name, total sample number, plate layout, assay method name, and Confirm. Embodiment 218 is the method of any of embodiments 199-217 wherein in response to a selection of From Existing Experiment, the third menu includes one or more of Recent and Available. Embodiment 219 is the method of any of embodiments 199-218 wherein in response to a selection of Recent, the third menu includes a list of recent Experiments. Embodiment 220 is the method of any of embodiments 199-219 wherein the third menu comprises a first, a second, and a third sub-section, wherein the first sub-section comprises one or more user names, the second sub-section comprises one or more Experiment dates filtered by highlighted user name, and the third sub-section comprises one or more names of existing Experiments filtered by highlighted user name and selected Experiment date, in response to a selection of Available. Embodiment 221 is the method of any of embodiments 199-220 wherein in response to a selection of a name of a Recent or Available Experiment, the second portion comprises one or more of Design Experiment, Setup, Assay Method, Samples, and Confirm. Embodiment 222 is the method of embodiment 199, wherein in response to a selection of Review Experiment, the second menu comprises one or more names of recent Experiments. Embodiment 223 is the method of embodiment 222, wherein the second portion comprises Recent and Available and the third menu is a subsequent menu of Recent. Embodiment 224 is the method of embodiment 222 or 223, wherein the second menu comprises one or more of a first, a second, and a third subsection, wherein the first sub-section comprises one or more user names, the second sub-section comprises one or more Experiment dates filtered by highlighted user name, and the third sub-section comprises one or more names of existing Experiments filtered by highlighted user name and selected Experiment date, in response to a selection of Available.
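As a non-limiting sketch of the cascading sub-sections of embodiment 220 (user names, then dates filtered by the highlighted user, then experiment names filtered by both), the following TypeScript is offered; ExperimentRecord, datesFor, and experimentsFor are hypothetical names.

    // Hypothetical cascading filters for the three sub-sections of embodiment 220.
    interface ExperimentRecord {
      userName: string;
      date: string; // e.g. an ISO date string
      experiment: string;
    }

    // Second sub-section: Experiment dates filtered by the highlighted user name.
    function datesFor(records: ExperimentRecord[], user: string): string[] {
      return Array.from(
        new Set(records.filter((r) => r.userName === user).map((r) => r.date))
      );
    }

    // Third sub-section: experiment names filtered by user name and selected date.
    function experimentsFor(records: ExperimentRecord[], user: string, date: string): string[] {
      return records
        .filter((r) => r.userName === user && r.date === date)
        .map((r) => r.experiment);
    }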
Embodiment 225 is the method of any of embodiments 222 to 224, wherein the second portion comprises one or more of Experiment, Plates, Samples, Calibrators, Controls, and Data Table in response to a user's selection of a recent Experiment or Available Experiment. Embodiment 226 is the method of any of embodiments 222 to 225 wherein the third menu of user-selectable choices is a subsequent menu of Plates and comprises one or more of Experiment name, total sample number, one or more plate representations, and an assay method name, wherein the one or more plate representations are adapted to be selected. Embodiment 227 is the method of any of embodiments 222 to 226 further comprising providing a fourth command for a fourth menu of user-selectable items to be displayed on the first portion of the UI display in response to a user's selection of one of the one or more plate representations, wherein the one or more of a past-selected and a past-unselected menu item of the hierarchical menu levels from the first, second, and third menus is adapted to be displayed on the second portion and concurrently viewed with the first portion. Embodiment 228 is the method of any of embodiments 222 to 227 wherein the second portion further comprises Heat Map and Data Table and the fourth menu is a subsequent menu of Heat Map and comprises one or more of a spot layout, a list of assays, a plate layout, and a graph, wherein the spots of the spot layout and wells of the plate layout are adapted to be highlighted by a user, wherein Heat Map and Data Table are subsequent menus of Plates. Embodiment 229 is the method of any of embodiments 222 to 228 wherein in response to a highlighting of a spot, the graph populates with data for the selected spot over the entire plate, or wherein in response to a highlighting of a spot and a well, the graph populates with data for the highlighted spot in the highlighted well. Embodiment 230 is the method of any of embodiments 222 to 229 wherein in response to a selection of one of the one or more plate representations, the third menu is a Table of Data for the selected plate and comprises one or more columns selected from Plate, Sample, Assay, Well, Spot, Dilution, Conc., Conc. Unit, Signal, Adj. Signal, Mean, Adj. Signal Mean, CV, Calc. Conc., Calc. Conc. Mean, Calc. Conc. CV, % Recovery, and % Recovery Mean. Embodiment 231 is the method of any of embodiments 222 to 230 wherein the third menu is a subsequent menu of Samples and comprises one or more graphs for Sample data, and the second portion comprises one or more of Graph and Table. Embodiment 232 is the method of any of embodiments 222 to 231 wherein the third menu is a subsequent menu of Calibrators and comprises one or more graphs for Calibrator data. Embodiment 233 is the method of any of embodiments 222 to 232, wherein the third menu is a subsequent menu of Controls and comprises one or more graphs for Control data. Embodiment 234 is the method of any of embodiments 222 to 233 wherein the third menu is a subsequent menu of Data Table and comprises one or more tables for data for samples, calibrators (if any), and controls (if any).
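As a non-limiting sketch of the graph population recited in embodiment 229, the TypeScript below returns plate-wide data when only a spot is highlighted and narrows to a single well when a well is also highlighted. Reading and graphData are hypothetical names.

    // Hypothetical data selection for the Heat Map graph of embodiment 229.
    interface Reading {
      well: string;  // e.g. "A1"
      spot: number;  // spot index within the well
      signal: number;
    }

    // With only a spot highlighted, graphs that spot over the entire plate;
    // with a spot and a well highlighted, graphs that spot in that well.
    function graphData(readings: Reading[], spot: number, well?: string): Reading[] {
      return readings.filter(
        (r) => r.spot === spot && (well === undefined || r.well === well)
      );
    }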
Embodiment 235 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; and providing, by the at least one processor, an advanced context menu adapted to be displayed in response to a selection of an advanced selector, wherein the first portion is adapted to display more than 50% of the available menu items from the menu currently displayed in the first portion based on one or more of: (1) the most frequently used available menu items; (2) the importance to the outcome or user; (3) choices customarily made by the user; or (4) choices customarily made in an industry, further wherein the advanced context menu is adapted to display the remaining available items from that menu. Embodiment 236 is the method of embodiment 235, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the second portion is adapted to display at least one menu item from among one or more previously navigated menu levels and subsequent menu levels to provide a visual representation of: (1) a user's previous traversal of the menu hierarchy; and (2) future items that can be subsequently selected, further wherein the first portion is an active portion of the UI display that is adapted to be consistently displayed within the same area of the UI display to optimize a user's focus while interacting with the UI display.
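As a non-limiting sketch of embodiment 235's split between the first portion (more than 50% of available items, here ranked by frequency of use) and the advanced context menu (the remainder), the following TypeScript is offered; RankedItem and splitForAdvancedMenu are hypothetical names.

    // Hypothetical frequency-based split of embodiment 235.
    interface RankedItem {
      label: string;
      useCount: number; // past-use frequency, one of the recited ranking bases
    }

    function splitForAdvancedMenu(items: RankedItem[]): {
      firstPortion: RankedItem[];
      advanced: RankedItem[];
    } {
      const ranked = [...items].sort((a, b) => b.useCount - a.useCount);
      const cut = Math.floor(items.length / 2) + 1; // strictly more than half
      return { firstPortion: ranked.slice(0, cut), advanced: ranked.slice(cut) };
    }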
Embodiment 237 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; providing, by the at least one processor, a permissions command, wherein the permissions command is adapted to manage one or more of a user's and a team's levels of access, security, or control, wherein the levels of access are adapted to be assigned based on one or more of a role, user, team, account, instrument, equipment, or device; and wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the second portion is adapted to display at least one menu item from among one or more previously navigated menu levels and subsequent menu levels to provide a visual representation of: (1) a user's previous traversal of the menu hierarchy; and (2) future items that can be subsequently selected, further wherein the first portion includes between two and five sub-sections of one or more user-selectable menu items, wherein these menu items are divided among these sub-sections and the sub-sections are adapted to be displayed to create an association among the menu items from each of the respective sub-sections.
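As a non-limiting sketch of the permissions command of embodiment 237, the TypeScript below resolves an access level from grants keyed by role, user, team, account, instrument, equipment, or device. AccessLevel, Grant, and resolveAccess are hypothetical names.

    // Hypothetical permissions model for embodiment 237.
    type AccessLevel = 'none' | 'read' | 'write' | 'admin';
    type SubjectKind =
      'role' | 'user' | 'team' | 'account' | 'instrument' | 'equipment' | 'device';

    interface Grant {
      kind: SubjectKind;
      subjectId: string;
      level: AccessLevel;
    }

    const order: AccessLevel[] = ['none', 'read', 'write', 'admin'];

    // Resolves the highest level granted to any of the caller's subjects.
    function resolveAccess(
      grants: Grant[],
      subjects: { kind: SubjectKind; id: string }[]
    ): AccessLevel {
      let best: AccessLevel = 'none';
      for (const g of grants) {
        if (
          subjects.some((s) => s.kind === g.kind && s.id === g.subjectId) &&
          order.indexOf(g.level) > order.indexOf(best)
        ) {
          best = g.level;
        }
      }
      return best;
    }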
Embodiment 238 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; and providing, by the at least one processor, an output in response to a received response, wherein the output is adapted to be transmitted to a device communicatively connected to the processor, directing the device to perform a physical movement or undergo a physical transformation, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the at least one processor is adapted to receive benchmark inputs from one or more users, accounts, or teams, wherein an aggregation of the benchmark inputs is adapted to collaboratively solve one or more problems, either sequentially or in parallel, further wherein each of the benchmark inputs is adapted to be based on one or more of: (a) a module; (b) a problem or sub-problem to be solved; (c) a device; (d) a physical location; (e) a tool; (f) an instrument; or (g) equipment, further wherein the at least one processor is adapted to notify the one or more users, accounts, or teams, of the results derived from one or more of the received benchmark inputs. Embodiment 239 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; and providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the second portion is adapted to display at least one menu item from one or more of previously navigated and subsequent menu levels, wherein the items among a single menu level are adapted to be displayed in a linear fashion and the previously navigated and subsequent menu levels are adapted to be displayed in a nested fashion, further wherein the first portion comprises no more than a single menu of items at a given point in time, further wherein the first portion includes between two and five sub-sections of one or more user-selectable items, wherein these menu items are divided among these sub-sections and the sub-sections are adapted to be displayed to create an association among the items from each of the respective sub-sections.
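As a non-limiting sketch of the output step of embodiment 238, the TypeScript below translates a received response into a command for a communicatively connected device. DeviceCommand, Transport, and actOnResponse are hypothetical names, and the movement parameters are illustrative only.

    // Hypothetical device-output path of embodiment 238.
    interface DeviceCommand {
      deviceId: string;
      action: 'move' | 'transform';
      parameters: Record<string, number>;
    }

    interface Transport {
      send(command: DeviceCommand): Promise<void>; // e.g. a serial or network link
    }

    // On an accepted response, direct the device to perform a physical movement.
    async function actOnResponse(
      response: { accepted: boolean },
      deviceId: string,
      transport: Transport
    ): Promise<void> {
      if (response.accepted) {
        await transport.send({ deviceId, action: 'move', parameters: { axis: 1, mm: 25 } });
      }
    }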
Embodiment 240 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; providing, by the at least one processor, an advanced context menu adapted to be displayed in response to a selection of an advanced selector, wherein the first portion is adapted to display more than 50% of the available menu items from the menu currently displayed in the first portion based on one or more of: (1) the most frequently used available menu items; (2) the importance to the outcome or user; (3) choices customarily made by the user; or (4) choices customarily made in an industry, further wherein the advanced context menu is adapted to display the remaining available items from that menu; providing, by the at least one processor, a dialog box adapted to be displayed on the foreground of the UI display to prompt a user for additional information or notify the user of an error, wherein the background of the dialog box is further adapted to match the background of the first and second portions of the UI display, further wherein one or more of text, graphics, photos, and videos displayed in the background of the first and second portions of the UI display are adapted to be displayed out of focus when the dialog box is being displayed on the foreground of the UI display; providing, by the at least one processor, an output in response to a received response, wherein the output is adapted to be transmitted to a device communicatively connected to the processor, directing the device to perform a physical movement or undergo a physical transformation; providing, by the at least one processor, a permissions command, wherein the permissions command is adapted to manage one or more of a user's and a team's levels of access, security, or control, wherein the levels of access are adapted to be assigned based on one or more of a role, user, team, account, instrument, equipment, or device; providing, by the at least one processor, a progress indicator adapted to be displayed on the UI display, wherein the progress indicator comprises a series of flickering pixels to indicate that the at least one processor is processing a received response; and providing, by the at least one processor, an advanced context menu, wherein the advanced context menu is adapted to be divided into a plurality of portions including one or more of the following: a top portion comprising items related to the currently active menu; a middle portion comprising items related to particular modules available to a user; and a bottom portion comprising global functions comprising one or more of login/logout functionality, user manuals and help, EULA information, and privacy policy information, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the second portion is adapted to display at least one menu item from among one or more previously navigated menu levels and subsequent menu levels to provide a visual representation of: (1) a user's previous traversal of the menu hierarchy; and (2) future items that can be subsequently selected, further wherein the first portion is an active portion of the UI display that is adapted to be consistently displayed within the same area of the UI display to optimize a user's focus while interacting with the UI display, further wherein the second portion further comprises an indicator bar, further wherein one or more of the past-selected menu items are adapted to be visually aligned with the indicator bar to designate which menu items were previously selected throughout a user's traversal of the menu hierarchy, further wherein the second portion further comprises an indicator bar comprising a status indicator adapted to display one or more color-coded states, the states comprising red to indicate an error state and blue to indicate a non-error state, further wherein the second portion is adapted to display at least one menu item from one or more of previously navigated and subsequent menu levels, wherein the items among a single menu level are adapted to be displayed in a linear fashion and the previously navigated and subsequent menu levels are adapted to be displayed in a nested fashion, further wherein the first portion comprises no more than a single menu of items at a given point in time, further wherein the first portion includes between two and five sub-sections of one or more user-selectable items, wherein the single menu of items is divided among these sub-sections and the sub-sections are adapted to be displayed to create an association among the items from each of the respective sub-sections, further wherein the first portion is adapted to display visually represented items to aid in a user's decision-making processes, wherein the visually represented items include one or more of: a video, a picture, a graph, a table, a chart, and a graphical representation of an object, further wherein the at least one processor is adapted to receive benchmark inputs from one or more users, accounts, or teams, wherein an aggregation of the benchmark inputs is adapted to collaboratively solve one or more problems, either sequentially or in parallel, further wherein each of the benchmark inputs is adapted to be based on one or more of: (a) a module; (b) a problem or sub-problem to be solved; (c) a device; (d) a physical location; (e) a tool; (f) an instrument; or (g) equipment, further wherein the at least one processor is adapted to notify the one or more users, accounts, or teams, of the results derived from one or more of the received benchmark inputs, further wherein the at least one processor is adapted to provide an interface between and enable communications among: (a) users, accounts, or teams; and (b) non-human machines, wherein the interface and communications allow the one or more users, accounts, teams, and non-human machines to collaboratively solve one or more problems or sub-problems and notify one or more of each of the results derived from their collaboration.
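As a non-limiting sketch of embodiment 240's indicator bar, the TypeScript below aligns past-selected items with the bar and colors its status indicator red in an error state and blue otherwise. TrailEntry, statusColor, and renderIndicatorBar are hypothetical names.

    // Hypothetical indicator bar of embodiment 240.
    interface TrailEntry {
      label: string;
      selected: boolean; // past-selected items align with the indicator bar
    }

    type UiState = { error: boolean };

    // Red designates an error state; blue designates a non-error state.
    function statusColor(state: UiState): 'red' | 'blue' {
      return state.error ? 'red' : 'blue';
    }

    function renderIndicatorBar(trail: TrailEntry[], state: UiState): string[] {
      const bar = trail.filter((t) => t.selected).map((t) => `| ${t.label}`);
      bar.push(`[status: ${statusColor(state)}]`);
      return bar;
    }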
Embodiment 241 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; and providing, by the at least one processor, an advanced context menu adapted to be displayed in response to a selection of an advanced selector, wherein the first portion is adapted to display more than 50% of the available menu items from the menu currently displayed in the first portion based on one or more of: (1) the most frequently used available menu items; (2) the importance to the outcome or user; (3) choices customarily made by the user; or (4) choices customarily made in an industry, further wherein the advanced context menu is adapted to display the remaining available items from that menu, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the second portion is adapted to display at least one menu item from among one or more previously navigated menu levels and subsequent menu levels to provide a visual representation of: (1) a user's previous traversal of the menu hierarchy; and (2) future items that can be subsequently selected, further wherein the first portion is an active portion of the UI display that is adapted to be consistently displayed within the same area of the UI display to optimize a user's focus while interacting with the UI display.
Embodiment 242 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; and providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the second portion is adapted to display at least one menu item from among one or more previously navigated menu levels and subsequent menu levels to provide a visual representation of: (1) a user's previous traversal of the menu hierarchy; and (2) future items that can be subsequently selected, further wherein the second portion is adapted to display at least one menu item from one or more of previously navigated and subsequent menu levels, wherein the items among a single menu level are adapted to be displayed in a linear fashion and the previously navigated and subsequent menu levels are adapted to be displayed in a nested fashion, further wherein the first portion comprises no more than a single menu of items at a given point in time. Embodiment 243 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; providing, by the at least one processor, a permissions command, wherein the permissions command is adapted to manage one or more of a user's and a team's levels of access, security, or control, wherein the levels of access are adapted to be assigned based on one or more of a role, user, team, account, instrument, equipment, or device; and wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the second portion is adapted to display at least one menu item from among one or more previously navigated menu levels and subsequent menu levels to provide a visual representation of: (1) a user's previous traversal of the menu hierarchy; and (2) future items that can be subsequently selected, further wherein the first portion includes between two and five sub-sections of one or more user-selectable items, wherein these menu items are divided among these sub-sections and the sub-sections are adapted to be displayed to create an association among the items from each of the respective sub-sections.
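As a non-limiting sketch of embodiment 242's presentation, the TypeScript below lays out the items of one level linearly while nesting successive levels. Level and renderNested are hypothetical names.

    // Hypothetical linear-within-level, nested-across-levels rendering (embodiment 242).
    interface Level {
      items: string[];
    }

    function renderNested(levels: Level[]): string {
      return levels
        .map((level, depth) => ' '.repeat(2 * depth) + level.items.join(' | '))
        .join('\n');
    }

    // renderNested([{ items: ['Design', 'Review'] }, { items: ['New', 'From Existing'] }])
    // produces:
    // Design | Review
    //   New | From Existing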
Embodiment 244 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; and providing, by the at least one processor, an output in response to a received response, wherein the output is adapted to be transmitted to a device communicatively connected to the processor, directing the device to perform a physical movement or undergo a physical transformation, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the second portion is adapted to display at least one menu item from among one or more previously navigated menu levels and subsequent menu levels to provide a visual representation of: (1) a user's previous traversal of the menu hierarchy; and (2) future items that can be subsequently selected, further wherein the at least one processor is adapted to receive benchmark inputs from one or more users, accounts, or teams, wherein an aggregation of the benchmark inputs is adapted to collaboratively solve one or more problems, either sequentially or in parallel, further wherein each of the benchmark inputs is adapted to be based on one or more of: (a) a module; (b) a problem or sub-problem to be solved; (c) a device; (d) a physical location; (e) a tool; (f) an instrument; or (g) equipment.
Embodiment 245 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; providing, by the at least one processor, an advanced context menu adapted to be displayed in response to a selection of an advanced selector, wherein the first portion is adapted to display more than 50% of the available menu items from the menu currently displayed in the first portion based on one or more of: (1) the most frequently used available menu items; (2) the importance to the outcome or user; (3) choices customarily made by the user; or (4) choices customarily made in an industry, further wherein the advanced context menu is adapted to display the remaining available items from that menu; and providing, by the at least one processor, an advanced context menu, wherein the advanced context menu is adapted to be divided into a plurality of portions including one or more of the following: a top portion comprising items related to the currently active menu; a middle portion comprising items related to particular modules available to a user; and a bottom portion comprising global functions comprising one or more of login/logout functionality, user manuals and help, EULA information, and privacy policy information, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the first portion is adapted to display visually represented items to aid in a user's decision-making processes, wherein the visually represented items include one or more of: a video, a picture, a graph, a table, a chart, and a graphical representation of an object.
Embodiment 246 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; providing, by the at least one processor, an advanced context menu adapted to be displayed in response to a selection of an advanced selector, wherein the first portion is adapted to display more than 50% of the available menu items from the menu currently displayed in the first portion based on one or more of: (1) the most frequently used available menu items; (2) the importance to the outcome or user; (3) choices customarily made by the user; or (4) choices customarily made in an industry, further wherein the advanced context menu is adapted to display the remaining available items from that menu; and providing, by the at least one processor, a dialog box adapted to be displayed on the foreground of the UI display to prompt a user for additional information or notify the user of an error, wherein the background of the dialog box is further adapted to match the background of the first and second portions of the UI display, further wherein one or more of text, graphics, photos, and videos displayed in the background of the first and second portions of the UI display are adapted to be displayed out of focus when the dialog box is being displayed on the foreground of the UI display, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the second portion is adapted to display at least one menu item from one or more of previously navigated and subsequent menu levels, wherein the items among a single menu level are adapted to be displayed in a linear fashion and the previously navigated and subsequent menu levels are adapted to be displayed in a nested fashion.
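As a non-limiting sketch of the dialog box of embodiment 246, the TypeScript below (against the standard DOM API) matches the dialog's background to the portions behind it and renders the background content out of focus while the dialog is in the foreground. showDialog and closeDialog are hypothetical names, and the blur radius is illustrative.

    // Hypothetical foreground dialog of embodiment 246.
    function showDialog(dialog: HTMLElement, background: HTMLElement, sharedColor: string): void {
      dialog.style.background = sharedColor; // match the portions' background
      background.style.filter = 'blur(4px)'; // text/graphics displayed out of focus
      dialog.style.display = 'block';
    }

    function closeDialog(dialog: HTMLElement, background: HTMLElement): void {
      background.style.filter = '';
      dialog.style.display = 'none';
    }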
Embodiment 247 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; and providing, by the at least one processor, an advanced context menu adapted to be displayed in response to a selection of an advanced selector, wherein the first portion is adapted to display more than 50% of the available menu items from the menu currently displayed in the first portion based on one or more of: (1) the most frequently used available menu items; (2) the importance to the outcome or user; (3) choices customarily made by the user; or (4) choices customarily made in an industry, further wherein the advanced context menu is adapted to display the remaining available items from that menu, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the first portion comprises no more than a single menu of items at a given point in time, further wherein the at least one processor is adapted to provide an interface between and enable communications among: (a) users, accounts, or teams; and (b) non-human machines, wherein the interface and communications allow the one or more users, accounts, teams, and non-human machines to collaboratively solve one or more problems or sub-problems and notify one or more of each of the results derived from their collaboration. Embodiment 248 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; and providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the first portion is an active portion of the UI display that is adapted to be consistently displayed within the same area of the UI display to optimize a user's focus while interacting with the UI display, further wherein the second portion further comprises an indicator bar, further wherein one or more of the past-selected menu items are adapted to be visually aligned with the indicator bar to designate which menu items were previously selected throughout a user's traversal of the menu hierarchy, further wherein the first portion comprises no more than a single menu of items at a given point in time.
Embodiment 249 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; and providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the first portion is an active portion of the UI display that is adapted to be consistently displayed within the same area of the UI display to optimize a user's focus while interacting with the UI display, further wherein the at least one processor is adapted to receive benchmark inputs from one or more users, accounts, or teams, wherein an aggregation of the benchmark inputs is adapted to collaboratively solve one or more problems, either sequentially or in parallel, further wherein each of the benchmark inputs is adapted to be based on one or more of: (a) a module; (b) a problem or sub-problem to be solved; (c) a device; (d) a physical location; (e) a tool; (f) an instrument; or (g) equipment, further wherein the at least one processor is adapted to provide an interface between and enable communications among: (a) users, accounts, or teams; and (b) non-human machines, wherein the interface and communications allow the one or more users, accounts, teams, and non-human machines to collaboratively solve one or more problems or sub-problems and notify one or more of each of the results derived from their collaboration. Embodiment 250 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; and providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the first portion is an active portion of the UI display that is adapted to be consistently displayed within the same area of the UI display to optimize a user's focus while interacting with the UI display, further wherein the at least one processor is adapted to receive benchmark inputs from one or more users, accounts, or teams, wherein an aggregation of the benchmark inputs is adapted to collaboratively solve one or more problems, either sequentially or in parallel, further wherein each of the benchmark inputs is adapted to be based on one or more of: (a) a module; (b) a problem or sub-problem to be solved; (c) a device; (d) a physical location; (e) a tool; (f) an instrument; or (g) equipment, further wherein the at least one processor is adapted to notify the one or more users, accounts, or teams, of the results derived from one or more of the received benchmark inputs. Embodiment 251 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; and providing, by the at least one processor, an output in response to a received response, wherein the output is adapted to be transmitted to a device communicatively connected to the processor, directing the device to perform a physical movement or undergo a physical transformation, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the at least one processor is adapted to receive benchmark inputs from one or more users, accounts, or teams, wherein an aggregation of the benchmark inputs is adapted to collaboratively solve one or more problems, either sequentially or in parallel, further wherein each of the benchmark inputs is adapted to be based on one or more of: (a) a module; (b) a problem or sub-problem to be solved; (c) a device; (d) a physical location; (e) a tool; (f) an instrument; or (g) equipment, further wherein the at least one processor is adapted to notify the one or more users, accounts, or teams, of the results derived from one or more of the received benchmark inputs.
Embodiment 252 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; and providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the second portion is adapted to display at least one menu item from one or more of previously navigated and subsequent menu levels, wherein the items among a single menu level are adapted to be displayed in a linear fashion and the previously navigated and subsequent menu levels are adapted to be displayed in a nested fashion, further wherein the first portion comprises no more than a single menu of items at a given point in time, further wherein the first portion includes between two and five sub-sections of one or more user-selectable items, wherein the single menu of items is divided among these sub-sections and the sub-sections are adapted to be displayed to create an association among the items from each of the respective sub-sections. Embodiment 253 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; providing, by the at least one processor, an output in response to a received response, wherein the output is adapted to be transmitted to a device communicatively connected to the processor, directing the device to perform a physical movement or undergo a physical transformation; and providing, by the at least one processor, a permissions command, wherein the permissions command is adapted to manage one or more of a user's and a team's levels of access, security, or control, wherein the levels of access are adapted to be assigned based on one or more of a role, user, team, account, instrument, equipment, or device, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the second portion further comprises an indicator bar, further wherein one or more of the past-selected menu items are adapted to be visually aligned with the indicator bar to designate which menu items were previously selected throughout a user's traversal of the menu hierarchy.
Embodiment 254 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; and providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in first portion, further wherein the at least one processor is adapted to receive benchmark inputs from one or more users, accounts, or teams, wherein an aggregation of the benchmark inputs is adapted to collaboratively solve one or more problems, either sequentially or in parallel, further wherein each of the benchmark inputs is adapted to be based on one or more of: (a) a module; (b) a problem or sub-problem to be solved; (c) a device; (d) a physical location; (e) a tool; (f) an instrument; or (g) equipment, further wherein the at least one processor is adapted to notify the one or more users, accounts, or teams, of the results derived from one or more of the received benchmark inputs, further wherein the at least one processor is adapted to provide an interface between and enable communications among: (a) users, accounts, or teams; and (b) non-human machines, wherein the interface and communications allow the one or more users, accounts, teams, and non-human machines to collaboratively solve one or more problems or sub-problems and notify one or more of each of the results derived from their collaboration. Embodiment 255 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; and providing, by the at least one processor, a progress indicator adapted to be displayed on the UI display, wherein the progress indicator comprises a series of flickering pixels to indicate that the at least one processor is processing a received response, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in first portion. 
further wherein the second portion further comprises an indicator bar, further wherein one or more of the past-selected menu items are adapted to be visually aligned with the indicator bar to designate which menu items were previously selected throughout a user's traversal of the menu hierarchy, further wherein the second portion further comprises an indicator bar comprising a status indicator adapted to display one or more color-coded states, the states comprising red to indicate an error state and blue to indicate a non-error state. Embodiment 256 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; and providing, by the at least one processor, an advanced context menu adapted to be displayed in response to a selection of an advanced selector, wherein the first portion is adapted to display more than 50% of the available menu items from the menu currently displayed in the first portion based on one or more of: (1) the most frequently used available menu items; (2) the importance to the outcome or user; (3) choices customarily made by the user; or (4) choices customarily made in an industry, further wherein the advanced context menu is adapted to display the remaining available items from that menu, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the second portion is adapted to display at least one menu item from among one or more previously navigated menu levels and subsequent menu levels to provide a visual representation of: (1) a user's previous traversal of the menu hierarchy; and (2) future items that can be subsequently selected, further wherein the first portion is an active portion of the UI display that is adapted to be consistently displayed within the same area of the UI display to optimize a user's focus while interacting with the UI display, further wherein the first portion comprises no more than a single menu of items at a given point in time.
Embodiment 257 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; providing, by the at least one processor, an advanced context menu adapted to be displayed in response to a selection of an advanced selector, wherein the first portion is adapted to display more than 50% of the available menu items from the menu currently displayed in the first portion based on one or more of: (1) the most frequently used available menu items; (2) the importance to the outcome or user; (3) choices customarily made by the user; or (4) choices customarily made in an industry, further wherein the advanced context menu is adapted to display the remaining available items from that menu; and providing, by the at least one processor, a dialog box adapted to be displayed on the foreground of the UI display to prompt a user for additional information or notify the user of an error, wherein the background of the dialog box is further adapted to match the background of the first and second portions of the UI display, further wherein one or more of text, graphics, photos, and videos displayed in the background of the first and second portions of the UI display are adapted to be displayed out of focus when the dialog box is being displayed on the foreground of the UI display, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the second portion is adapted to display at least one menu item from among one or more previously navigated menu levels and subsequent menu levels to provide a visual representation of: (1) a user's previous traversal of the menu hierarchy; and (2) future items that can be subsequently selected, further wherein the first portion is an active portion of the UI display that is adapted to be consistently displayed within the same area of the UI display to optimize a user's focus while interacting with the UI display.
Embodiment 258 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; and providing, by the at least one processor, an advanced context menu adapted to be displayed in response to a selection of an advanced selector, wherein the first portion is adapted to display more than 50% of the available menu items from the menu currently displayed in the first portion based on one or more of: (1) the most frequently used available menu items; (2) the importance to the outcome or user; (3) choices customarily made by the user; or (4) choices customarily made in an industry, further wherein the advanced context menu is adapted to display the remaining available items from that menu, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the second portion is adapted to display at least one menu item from among one or more previously navigated menu levels and subsequent menu levels to provide a visual representation of: (1) a user's previous traversal of the menu hierarchy; and (2) future items that can be subsequently selected, further wherein the first portion is an active portion of the UI display that is adapted to be consistently displayed within the same area of the UI display to optimize a user's focus while interacting with the UI display, further wherein the at least one processor is adapted to receive benchmark inputs from one or more users, accounts, or teams, wherein an aggregation of the benchmark inputs is adapted to collaboratively solve one or more problems, either sequentially or in parallel, further wherein each of the benchmark inputs is adapted to be based on one or more of: (a) a module; (b) a problem or sub-problem to be solved; (c) a device; (d) a physical location; (e) a tool; (f) an instrument; or (g) equipment.
Embodiment 259 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; and providing, by the at least one processor, a dialog box adapted to be displayed on the foreground of the UI display to prompt a user for additional information or notify the user of an error, wherein the background of the dialog box is further adapted to match the background of the first and second portions of the UI display, further wherein one or more of text, graphics, photos, and videos displayed in the background of the first and second portions of the UI display are adapted to be displayed out of focus when the dialog box is being displayed on the foreground of the UI display, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the second portion is adapted to display at least one menu item from among one or more previously navigated menu levels and subsequent menu levels to provide a visual representation of: (1) a user's previous traversal of the menu hierarchy; and (2) future items that can be subsequently selected, further wherein the second portion is adapted to display at least one menu item from one or more of previously navigated and subsequent menu levels, wherein the items among a single menu level are adapted to be displayed in a linear fashion and the previously navigated and subsequent menu levels are adapted to be displayed in a nested fashion, further wherein the first portion comprises no more than a single menu of items at a given point in time, further wherein the first portion includes between two and five sub-sections of one or more user-selectable items, wherein the single menu of items is divided among these sub-sections and the sub-sections are adapted to be displayed to create an association among the items from each of the respective sub-sections.
Embodiment 260 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; and providing, by the at least one processor, a permissions command, wherein the permissions command is adapted to manage one or more of a user's and a team's levels of access, security, or control, wherein the levels of access are adapted to be assigned based on one or more of a role, user, team, account, instrument, equipment, or device, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the second portion is adapted to display at least one menu item from among one or more previously navigated menu levels and subsequent menu levels to provide a visual representation of: (1) a user's previous traversal of the menu hierarchy; and (2) future items that can be subsequently selected, further wherein the first portion comprises no more than a single menu of items at a given point in time, further wherein the at least one processor is adapted to receive benchmark inputs from one or more users, accounts, or teams, wherein an aggregation of the benchmark inputs is adapted to collaboratively solve one or more problems, either sequentially or in parallel, further wherein each of the benchmark inputs is adapted to be based on one or more of: (a) a module; (b) a problem or sub-problem to be solved; (c) a device; (d) a physical location; (e) a tool; (f) an instrument; or (g) equipment, further wherein the at least one processor is adapted to notify the one or more users, accounts, or teams, of the results derived from one or more of the received benchmark inputs.
Embodiment 261 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; providing, by the at least one processor, an output in response to a received response, wherein the output is adapted to be transmitted to a device communicatively connected to the processor directing the device to perform a physical movement or undergo a physical transformation; providing, by the at least one processor, a permissions command, wherein the permissions command is adapted to manage one or more of a user's and a team's levels of access, security, or control, wherein the levels of access are adapted to be assigned based on one or more of a role, user, team, account, instrument, equipment, or device; and providing, by the at least one processor, an advanced context menu, wherein the advanced context menu is adapted to be divided into a plurality of portions including one or more of the following: a top portion comprising items related to the currently active menu; a middle portion comprising items related to particular modules available to a user; and a bottom portion comprising global functions comprising one or more of login/logout functionality, user manuals and help, EULA information, and privacy policy information, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the second portion is adapted to display at least one menu item from among one or more previously navigated menu levels and subsequent menu levels to provide a visual representation of: (1) a user's previous traversal of the menu hierarchy; and (2) future items that can be subsequently selected, further wherein the at least one processor is adapted to receive benchmark inputs from one or more users, accounts, or teams, wherein an aggregation of the benchmark inputs is adapted to collaboratively solve one or more problems, either sequentially or in parallel, further wherein each of the benchmark inputs is adapted to be based on one or more of: (a) a module; (b) a problem or sub-problem to be solved; (c) a device; (d) a physical location; (e) a tool; (f) an instrument; or (g) equipment.
Embodiment 262 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; providing, by the at least one processor, an advanced context menu adapted to be displayed in response to a selection of an advanced selector, wherein the first portion is adapted to display more than 50% of the available menu items from the menu currently displayed in the first portion based on one or more of: (1) the most frequently used available menu items; (2) the importance to the outcome or user; (3) choices customarily made by the user; or (4) choices customarily made in an industry, further wherein the advanced context menu is adapted to display the remaining available items from that menu; and providing, by the at least one processor, an advanced context menu, wherein the advanced context menu is adapted to be divided into a plurality of portions including one or more of the following: a top portion comprising items related to the currently active menu; a middle portion comprising items related to particular modules available to a user; and a bottom portion comprising global functions comprising one or more of login/logout functionality, user manuals and help, EULA information, and privacy policy information, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the second portion is adapted to display at least one menu item from among one or more previously navigated menu levels and subsequent menu levels to provide a visual representation of: (1) a user's previous traversal of the menu hierarchy; and (2) future items that can be subsequently selected, further wherein the first portion is an active portion of the UI display that is adapted to be consistently displayed within the same area of the UI display to optimize a user's focus while interacting with the UI display, further wherein the second portion is adapted to display at least one menu item from one or more of previously navigated and subsequent menu levels, wherein the items among a single menu level are adapted to be displayed in a linear fashion and the previously navigated and subsequent menu levels are adapted to be displayed in a nested fashion, further wherein the first portion is adapted to display visually represented items to aid in a user's decision-making processes, wherein the visually represented items include one or more of: a video, a picture, a graph, a table, a chart, and a graphical representation of an object.
Embodiment 263 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; providing, by the at least one processor, an advanced context menu adapted to be displayed in response to a selection of an advanced selector, wherein the first portion is adapted to display more than 50% of the available menu items from the menu currently displayed in the first portion based on one or more of: (1) the most frequently used available menu items; (2) the importance to the outcome or user; (3) choices customarily made by the user; or (4) choices customarily made in an industry, further wherein the advanced context menu is adapted to display the remaining available items from that menu; and providing, by the at least one processor, a dialog box adapted to be displayed on the foreground of the UI display to prompt a user for additional information or notify the user of an error, wherein the background of the dialog box is further adapted to match the background of the first and second portions of the UI display, further wherein one or more of text, graphics, photos, and videos displayed in the background of the first and second portions of the UI display are adapted to be displayed out of focus when the dialog box is being displayed on the foreground of the UI display, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the second portion is adapted to display at least one menu item from among one or more previously navigated menu levels and subsequent menu levels to provide a visual representation of: (1) a user's previous traversal of the menu hierarchy; and (2) future items that can be subsequently selected, further wherein the second portion is adapted to display at least one menu item from one or more of previously navigated and subsequent menu levels, wherein the items among a single menu level are adapted to be displayed in a linear fashion and the previously navigated and subsequent menu levels are adapted to be displayed in a nested fashion, further wherein the at least one processor is adapted to receive benchmark inputs from one or more users, accounts, or teams, wherein an aggregation of the benchmark inputs is adapted to collaboratively solve one or more problems, either sequentially or in parallel, further wherein each of the benchmark inputs is adapted to be based on one or more of: (a) a module; (b) a problem or sub-problem to be solved; (c) a device; (d) a physical location; (e) a tool; (f) an instrument; or (g) equipment, further wherein the at least one processor is adapted to notify the one or more users, accounts, or teams, of the results derived from one or more of the received benchmark inputs.
Embodiment 264 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; providing, by the at least one processor, an advanced context menu adapted to be displayed in response to a selection of an advanced selector, wherein the first portion is adapted to display more than 50% of the available menu items from the menu currently displayed in the first portion based on one or more of: (1) the most frequently used available menu items; (2) the importance to the outcome or user; (3) choices customarily made by the user; or (4) choices customarily made in an industry, further wherein the advanced context menu is adapted to display the remaining available items from that menu; and providing, by the at least one processor, a progress indicator adapted to be displayed on the UI display, wherein the progress indicator comprises a series of flickering pixels to indicate that the at least one processor is processing a received response, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the second portion is adapted to display at least one menu item from among one or more previously navigated menu levels and subsequent menu levels to provide a visual representation of: (1) a user's previous traversal of the menu hierarchy; and (2) future items that can be subsequently selected, further wherein the first portion comprises no more than a single menu of items at a given point in time, further wherein the first portion is adapted to display visually represented items to aid in a user's decision-making processes, wherein the visually represented items include one or more of: a video, a picture, a graph, a table, a chart, and a graphical representation of an object, further wherein the at least one processor is adapted to provide an interface between and enable communications among: (a) users, accounts, or teams; and (b) non-human machines, wherein the interface and communications allow the one or more users, accounts, teams, and non-human machines to collaboratively solve one or more problems or sub-problems and notify one or more of them of the results derived from their collaboration.
Embodiment 265 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; and providing, by the at least one processor, a permissions command, wherein the permissions command is adapted to manage one or more of a user's and a team's levels of access, security, or control, wherein the levels of access are adapted to be assigned based on one or more of a role, user, team, account, instrument, equipment, or device, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the first portion is an active portion of the UI display that is adapted to be consistently displayed within the same area of the UI display to optimize a user's focus while interacting with the UI display, further wherein the second portion further comprises an indicator bar, further wherein one or more of the past-selected menu items are adapted to be visually aligned with the indicator bar to designate which menu items were previously selected throughout a user's traversal of the menu hierarchy, further wherein the first portion comprises no more than a single menu of items at a given point in time, further wherein the first portion includes between two and five sub-sections of one or more user-selectable items, wherein the single menu of items is divided among these sub-sections and the sub-sections are adapted to be displayed to create an association among the items from each of the respective sub-sections, further wherein the at least one processor is adapted to receive benchmark inputs from one or more users, accounts, or teams, wherein an aggregation of the benchmark inputs is adapted to collaboratively solve one or more problems, either sequentially or in parallel, further wherein each of the benchmark inputs is adapted to be based on one or more of: (a) a module; (b) a problem or sub-problem to be solved; (c) a device; (d) a physical location; (e) a tool; (f) an instrument; or (g) equipment, further wherein the at least one processor is adapted to notify the one or more users, accounts, or teams, of the results derived from one or more of the received benchmark inputs.
Embodiment 266 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; and providing, by the at least one processor, an advanced context menu adapted to be displayed in response to a selection of an advanced selector, wherein the first portion is adapted to display more than 50% of the available menu items from the menu currently displayed in the first portion based on one or more of: (1) the most frequently used available menu items; (2) the importance to the outcome or user; (3) choices customarily made by the user; or (4) choices customarily made in an industry, further wherein the advanced context menu is adapted to display the remaining available items from that menu, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the first portion is an active portion of the UI display that is adapted to be consistently displayed within the same area of the UI display to optimize a user's focus while interacting with the UI display, further wherein the first portion comprises no more than a single menu of items at a given point in time, further wherein the first portion includes between two and five sub-sections of one or more user-selectable items, wherein the single menu of items is divided among these sub-sections and the sub-sections are adapted to be displayed to create an association among the items from each of the respective sub-sections, further wherein the first portion is adapted to display visually represented items to aid in a user's decision-making processes, wherein the visually represented items include one or more of: a video, a picture, a graph, a table, a chart, and a graphical representation of an object, further wherein the at least one processor is adapted to receive benchmark inputs from one or more users, accounts, or teams, wherein an aggregation of the benchmark inputs is adapted to collaboratively solve one or more problems, either sequentially or in parallel, further wherein each of the benchmark inputs is adapted to be based on one or more of: (a) a module; (b) a problem or sub-problem to be solved; (c) a device; (d) a physical location; (e) a tool; (f) an instrument; or (g) equipment, further wherein the at least one processor is adapted to provide an interface between and enable communications among: (a) users, accounts, or teams; and (b) non-human machines, wherein the interface and communications allow the one or more users, accounts, teams, and non-human machines to collaboratively solve one or more problems or sub-problems and notify one or more of them of the results derived from their collaboration.
Embodiment 267 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; and providing, by the at least one processor, an output in response to a received response, wherein the output is adapted to be transmitted to a device communicatively connected to the processor directing the device to perform a physical movement or undergo a physical transformation, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the second portion is adapted to display at least one menu item from among one or more previously navigated menu levels and subsequent menu levels to provide a visual representation of: (1) a user's previous traversal of the menu hierarchy; and (2) future items that can be subsequently selected, further wherein the first portion is an active portion of the UI display that is adapted to be consistently displayed within the same area of the UI display to optimize a user's focus while interacting with the UI display, further wherein the second portion further comprises an indicator bar comprising a status indicator adapted to display one or more color-coded states, the states comprising red to indicate an error state and blue to indicate a non-error state, further wherein the second portion is adapted to display at least one menu item from one or more of previously navigated and subsequent menu levels, wherein the items among a single menu level are adapted to be displayed in a linear fashion and the previously navigated and subsequent menu levels are adapted to be displayed in a nested fashion, further wherein the at least one processor is adapted to receive benchmark inputs from one or more users, accounts, or teams, wherein an aggregation of the benchmark inputs is adapted to collaboratively solve one or more problems, either sequentially or in parallel, further wherein each of the benchmark inputs is adapted to be based on one or more of: (a) a module; (b) a problem or sub-problem to be solved; (c) a device; (d) a physical location; (e) a tool; (f) an instrument; or (g) equipment, further wherein the at least one processor is adapted to notify the one or more users, accounts, or teams, of the results derived from one or more of the received benchmark inputs.
Embodiment 268 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; providing, by the at least one processor, an output in response to a received response, wherein the output is adapted to be transmitted to a device communicatively connected to the processor directing the device to perform a physical movement or undergo a physical transformation; providing, by the at least one processor, a permissions command, wherein the permissions command is adapted to manage one or more of a user's and a team's levels of access, security, or control, wherein the levels of access are adapted to be assigned based on one or more of a role, user, team, account, instrument, equipment, or device; providing, by the at least one processor, a progress indicator adapted to be displayed on the UI display, wherein the progress indicator comprises a series of flickering pixels to indicate that the at least one processor is processing a received response; and providing, by the at least one processor, an advanced context menu, wherein the advanced context menu is adapted to be divided into a plurality of portions including one or more of the following: a top portion comprising items related to the currently active menu; a middle portion comprising items related to particular modules available to a user; and a bottom portion comprising global functions comprising one or more of login/logout functionality, user manuals and help, EULA information, and privacy policy information, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the second portion is adapted to display at least one menu item from among one or more previously navigated menu levels and subsequent menu levels to provide a visual representation of: (1) a user's previous traversal of the menu hierarchy; and (2) future items that can be subsequently selected, further wherein the at least one processor is adapted to receive benchmark inputs from one or more users, accounts, or teams, wherein an aggregation of the benchmark inputs is adapted to collaboratively solve one or more problems, either sequentially or in parallel, further wherein each of the benchmark inputs is adapted to be based on one or more of: (a) a module; (b) a problem or sub-problem to be solved; (c) a device; (d) a physical location; (e) a tool; (f) an instrument; or (g) equipment, further wherein the at least one processor is adapted to notify the one or more users, accounts, or teams, of the results derived from one or more of the received benchmark inputs, further wherein the at least one processor is adapted to provide an interface between and enable communications among: (a) users, accounts, or teams; and (b) non-human machines, wherein the interface and communications allow the one or more users, accounts, teams, and non-human machines to
collaboratively solve one or more problems or sub-problems and notify one or more of them of the results derived from their collaboration. Embodiment 269 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; providing, by the at least one processor, an advanced context menu adapted to be displayed in response to a selection of an advanced selector, wherein the first portion is adapted to display more than 50% of the available menu items from the menu currently displayed in the first portion based on one or more of: (1) the most frequently used available menu items; (2) the importance to the outcome or user; (3) choices customarily made by the user; or (4) choices customarily made in an industry, further wherein the advanced context menu is adapted to display the remaining available items from that menu; providing, by the at least one processor, a permissions command, wherein the permissions command is adapted to manage one or more of a user's and a team's levels of access, security, or control, wherein the levels of access are adapted to be assigned based on one or more of a role, user, team, account, instrument, equipment, or device; and providing, by the at least one processor, a progress indicator adapted to be displayed on the UI display, wherein the progress indicator comprises a series of flickering pixels to indicate that the at least one processor is processing a received response, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the second portion further comprises an indicator bar, further wherein one or more of the past-selected menu items are adapted to be visually aligned with the indicator bar to designate which menu items were previously selected throughout a user's traversal of the menu hierarchy, further wherein the second portion is adapted to display at least one menu item from one or more of previously navigated and subsequent menu levels, wherein the items among a single menu level are adapted to be displayed in a linear fashion and the previously navigated and subsequent menu levels are adapted to be displayed in a nested fashion, further wherein the first portion comprises no more than a single menu of items at a given point in time, further wherein the first portion includes between two and five sub-sections of one or more user-selectable items, wherein the single menu of items is divided among these sub-sections and the sub-sections are adapted to be displayed to create an association among the items from each of the respective sub-sections, further wherein the first portion is adapted to display visually represented items to aid in a user's decision-making processes, wherein the visually represented items include one or more of: a video, a picture, a graph, a table, a chart, and a graphical representation of an object.
Embodiment 270 is a method executed by at least one processor for navigating a path of hierarchical menu levels adapted for output to a graphical user interface (GUI), the method comprising: providing, by at least one processor, a first command for a first menu of one or more user-selectable menu items to be displayed on a first portion of a user interface (UI) display; providing, by the at least one processor, a second command for a second menu of one or more user-selectable menu items to be displayed on the first portion of the UI display in response to a user's selection; providing, by the at least one processor, an output in response to a received response, wherein the output is adapted to be transmitted to a device communicatively connected to the processor directing the device to perform a physical movement or undergo a physical transformation; and providing, by the at least one processor, a permissions command, wherein the permissions command is adapted to manage one or more of a user's and a team's levels of access, security, or control, wherein the levels of access are adapted to be assigned based on one or more of a role, user, team, account, instrument, equipment, or device, wherein the first menu is adapted to be displayed on a second portion of the UI display and comprises one or more of a past-selected menu item and a past-unselected menu item of the hierarchical menu levels and is adapted to be concurrently viewed with the second menu in the first portion, further wherein the first portion is an active portion of the UI display that is adapted to be consistently displayed within the same area of the UI display to optimize a user's focus while interacting with the UI display, further wherein the second portion further comprises an indicator bar, further wherein one or more of the past-selected menu items are adapted to be visually aligned with the indicator bar to designate which menu items were previously selected throughout a user's traversal of the menu hierarchy, further wherein the second portion further comprises an indicator bar comprising a status indicator adapted to display one or more color-coded states, the states comprising red to indicate an error state and blue to indicate a non-error state, further wherein the second portion is adapted to display at least one menu item from one or more of previously navigated and subsequent menu levels, wherein the items among a single menu level are adapted to be displayed in a linear fashion and the previously navigated and subsequent menu levels are adapted to be displayed in a nested fashion, further wherein the at least one processor is adapted to receive benchmark inputs from one or more users, accounts, or teams, wherein an aggregation of the benchmark inputs is adapted to collaboratively solve one or more problems, either sequentially or in parallel, further wherein each of the benchmark inputs is adapted to be based on one or more of: (a) a module; (b) a problem or sub-problem to be solved; (c) a device; (d) a physical location; (e) a tool; (f) an instrument; or (g) equipment, further wherein the at least one processor is adapted to notify the one or more users, accounts, or teams, of the results derived from one or more of the received benchmark inputs. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “includes” and/or “including,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The embodiments described above are illustrative examples and it should not be construed that the present invention is limited to these particular embodiments. It should be understood that various embodiments disclosed herein may be combined in different combinations than the combinations specifically presented in the description and accompanying drawings. It should also be understood that, depending on the example, certain acts or events of any of the processes or methods described herein may be performed in a different sequence, may be added, merged, or left out altogether (e.g., all described acts or events may not be necessary to carry out the methods or processes). In addition, while certain features of embodiments hereof are described as being performed by a single module or unit for purposes of clarity, it should be understood that the features and functions described herein may be performed by any combination of units or modules. Thus, various changes and modifications may be effected by one skilled in the art without departing from the spirit or scope of the invention as defined in the appended claims.
In the drawings, some components and/or operations can be separated into different blocks or combined into a single block when discussing some embodiments of the present technology. Moreover, while the technology is amenable to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and are described in detail below. The intention, however, is not to limit the technology to the particular embodiments described herein. On the contrary, the technology is intended to cover all modifications, equivalents, and alternatives falling within the scope of the technology as defined by the appended claims. DETAILED DESCRIPTION The disclosed technology relates to a system that can actively adapt the content and layout of a computer desktop based on a live multi-channel audio stream. The content and arrangement of the content of the desktop can change to track the audio stream. In one example, the live multi-channel audio stream is a live two-channel voice conversation over a telecommunications network between a customer and a customer service agent (“agent”) of a company. The company can be a network carrier, and the customer of the network carrier can be calling to speak about product or service support or information inquiries. The agent has a workstation at a contact center and operates a computer that presents the desktop on a display device to aid the agent in addressing the customer's inquiries. The desktop can include graphical controls in a layout with links to resources of information that are relevant to the conversation with the customer. As such, the desktop is a portal to resources that the agent can use to address the customer's inquiries. Unlike conventional systems, the disclosed system can dynamically adapt the desktop automatically based on the spontaneous dialogue during the live audio call. A system includes backend and frontend components. A backend search system monitors a live two-channel audio dialogue between, for example, a customer and an agent at a contact center. The backend system performs natural language analysis on the dialogue to extract keywords or other speech features (e.g., indication of a problem) useful for querying a solutions database. The backend search system dynamically adapts the desktop to the current speech features of the conversation to provide the agent with timely and relevant talking points and answers. Thus, the agent accesses the company's resources through the desktop while engaging in the spontaneous dialogue with the customer. Accordingly, the agent's judgment remains integral to engaging with the customer in a natural way while being more effective and consistent at addressing customer inquiries. The backend speech feature generator can be specifically trained for a contact center of a network carrier, and a search engine can weigh the speech features differentially. For example, customer speech features can bias a search more than agent speech features. The system then generates control signals based on search results to dynamically adapt the agent's desktop to the latest live dialogue so that information most likely to be useful to the agent in solving the customer's issues is presented. In some implementations, the backend search system can include a real-time sentiment tracker that detects a probability of a feeling or emotion expressed by the customer based on certain speech features (e.g., tone, speed, volume) of the live call.
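By way of a non-limiting illustration, the listing below sketches how a transcribed dialogue segment and simple prosodic measurements could be reduced to the kind of query features described above. It is a minimal Python sketch: the function names, stopword list, and weighting constants are assumptions made for this example and are not components recited by the disclosure.

    # Minimal sketch: turn a transcript segment plus prosodic measurements
    # into query-ready features with a sentiment estimate attached.
    from collections import Counter
    from dataclasses import dataclass

    STOPWORDS = {"the", "a", "an", "is", "my", "to", "and", "it", "on", "of"}

    @dataclass
    class QueryFeatures:
        keywords: list      # ranked content words from the dialogue
        sentiment: float    # 0.0 (negative) .. 1.0 (positive)

    def extract_keywords(transcript: str, top_n: int = 5) -> list:
        """Rank non-stopword tokens by frequency as crude query keywords."""
        tokens = [t.strip(".,!?").lower() for t in transcript.split()]
        counts = Counter(t for t in tokens if t and t not in STOPWORDS)
        return [word for word, _ in counts.most_common(top_n)]

    def estimate_sentiment(tone: float, speed: float, volume: float) -> float:
        """Toy stand-in for the sentiment tracker: combine normalized
        prosodic features (each 0..1) into a probability-like score."""
        raw = 0.5 * tone + 0.25 * (1.0 - speed) + 0.25 * (1.0 - volume)
        return max(0.0, min(1.0, raw))

    def build_query(transcript, tone, speed, volume):
        return QueryFeatures(extract_keywords(transcript),
                             estimate_sentiment(tone, speed, volume))

    if __name__ == "__main__":
        segment = "My phone drops calls after the update, and the battery drains fast."
        q = build_query(segment, tone=0.2, speed=0.8, volume=0.9)
        print(q.keywords)   # e.g. ['phone', 'drops', 'calls', ...]
        print(q.sentiment)  # a low score suggests a frustrated caller

In a production system, the keyword extraction and sentiment estimation would of course be performed by trained speech and language models rather than by these toy heuristics.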
A measure of the customer's sentiment can bias the search for results relevant to the content of the dialogue and the customer's sentiment. In some implementations, the backend search system includes a memo function that automatically transcribes and stores at least a portion of the live dialogue. That is, the dialogue is converted from speech to text and stored for future use. For example, the transcribed speech can be used to generate feedback signals that train a machine learning model of the search engine, which can improve the probability of identifying relevant information, though any personally identifiable information related to the customer is not stored. The frontend includes the agent-facing desktop. The frontend can plug into the backend search system to obtain resources relevant to the live call. Hence, the system can dynamically adapt the desktop to showcase resources that are timely to customer inquiries as they arise or develop during the live call. In addition to content (e.g., device specifications, service terms, customer history), the resources can include software tools. The structure of the desktop can also change automatically based on other inputs including inputs by the agent to the desktop and customer alerts. In one implementation, the dynamic desktop has a browser-like interface that includes ordered tabs with associated windows that include different resources. For example, a customer who initially calls a contact center may engage an interactive voice response (IVR) system. The customer's responses can include background or context that can be used to initially structure the tabs of the desktop, including the number of tabs, their order, and content. The windows of tabs with more relevant content are placed toward the front. The order, structure, and content can adapt to the spontaneous dialogue of the live call. For example, the desktop can automatically launch relevant software tools, notifications, or pages based on key data points obtained before a live call with the agent and then change to adapt as the live call continues. Examples of the key data points include a statistical characteristic of the customer (e.g., frequency of incoming calls), a historical characteristic of the customer (e.g., customer loyalty), an indication of the customer's device, and an indication of a service plan subscribed to by the customer. The desktop can also adapt to an agent's role or business unit and offer search capabilities biased by key data points. Further, the system can track multiple sessions and searches as part of a machine learning process to improve the system's ability to dynamically adapt the desktop with the most suitable content at the most suitable time. As a result, agents spend less time looking for content or tools that they would otherwise search for manually. Further, the desktop can control the number of tools and the amount of content presented to the agent, to avoid the routine mistake of manually opening so many tabs or so much content that the agent must navigate a crowded desktop. Agents can be more efficient on calls because the system feeds them the content and tools they need rather than requiring them to search through information. Accordingly, the dynamic desktop provides an experience or platform that ties together the agent's collection of resources into one cohesive experience. Various embodiments of the disclosed systems and methods are described.
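Before turning to those embodiments, the following listing gives a hedged illustration of the tab-reordering behavior described above: tabs carry relevance scores that are refreshed by each search cycle, and the most relevant windows are moved toward the front. The class, field, and method names are assumptions made for the example only, not elements recited by the disclosure.

    # Illustrative sketch of reordering browser-like tabs by relevance so
    # the most relevant windows sit toward the front as the call develops.
    from dataclasses import dataclass, field

    @dataclass
    class Tab:
        title: str
        relevance: float  # refreshed from each search cycle

    @dataclass
    class DynamicDesktop:
        tabs: list = field(default_factory=list)

        def update(self, scores: dict) -> None:
            """Apply fresh relevance scores and move the best tabs forward."""
            for tab in self.tabs:
                tab.relevance = scores.get(tab.title, tab.relevance)
            self.tabs.sort(key=lambda t: t.relevance, reverse=True)

    desktop = DynamicDesktop([Tab("Billing", 0.2),
                              Tab("Device specs", 0.5),
                              Tab("Service plan", 0.3)])
    # Mid-call, the dialogue turns to a handset problem; device content surfaces first.
    desktop.update({"Device specs": 0.9, "Billing": 0.1})
    print([t.title for t in desktop.tabs])  # ['Device specs', 'Service plan', 'Billing']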
The following description provides specific details for a thorough understanding and an enabling description of these embodiments. One skilled in the art will understand, however, that the invention can be practiced without many of these details. Additionally, some well-known structures or functions may not be shown or described in detail for the sake of brevity. The terminology used in the description presented below is intended to be interpreted in its broadest reasonable manner, even though it is being used in conjunction with a detailed description of certain specific embodiments of the invention. Although not required, embodiments are described below in the general context of computer-executable instructions, such as routines executed by a general-purpose data processing device, e.g., a networked server computer, mobile device, or personal computer. Those skilled in the relevant art will appreciate that the invention can be practiced with other communications, data processing, or computer system configurations, including: Internet appliances, handheld devices, wearable computers, all manner of cellular or mobile phones, multi-processor systems, microprocessor-based or programmable consumer electronics, set-top boxes, network PCs, mini-computers, mainframe computers, media players and the like. Indeed, the terms “computer,” “server,” and the like are generally used interchangeably herein, and refer to any of the above devices and systems, as well as any data processor. While aspects of the disclosed embodiments, such as certain functions, can be performed exclusively or primarily on a single device, some embodiments can also be practiced in distributed environments where functions or modules are shared among disparate processing devices, which are linked through a communications network, such as a Local Area Network (LAN), Wide Area Network (WAN), or the Internet. In a distributed computing environment, program modules can be located in both local and remote memory storage devices. Aspects of the invention can be stored or distributed on tangible computer-readable media, including magnetically or optically readable computer discs, hard-wired or preprogrammed chips (e.g., EEPROM semiconductor chips), nanotechnology memory, biological memory, or other data storage media. In some embodiments, computer-implemented instructions, data structures, screen displays, and other data under aspects of the invention can be distributed over the Internet or over other networks (including wireless networks), on a propagated signal on a propagation medium (e.g., an electromagnetic wave, a sound wave, etc.) over a period of time, or they can be provided on any analog or digital network (packet switched, circuit switched, or other scheme). The term “live,” in the context of a computer desktop that adapts to a live audio stream of a telephone conversation, refers to adapting the computer desktop based on an ongoing audio dialogue. As such, the audio dialogue is created, communicated, processed, and used to adapt the computer desktop without perceptible latency or delay to the agent viewing the computer desktop, which is oftentimes referred to as occurring in “real-time” or “near-real time,” or with a tolerable delay where the desktop adapts to include resources that are useful to the agent during the conversation because the resources are based on a recent segment of the telephone conversation that continues changing.
The terms “computer desktop,” “desktop,” and “computer data display” can refer to the working area of a computer display regarded as a representation of a notional desktop and containing icons or graphical controls that represent items such as files. As used herein, a desktop can include a user interface (UI) or windows that provide working areas for an agent. For example, a computer desktop can be embodied like a web browser with portions that contain certain content and that organize content into different tabs. “Dynamic desktop” and simply “desktop” are used interchangeably in this description when referring to a desktop that can dynamically adapt to an ongoing multi-channel speech audio signal as it spontaneously develops. FIG. 2 is a block diagram that illustrates a system that can dynamically adapt an agent's desktop based on a live audio conversation with a customer. The system 200 includes backend components to automate searches and frontend components to configure the agent's desktop. The components can include a combination of hardware and/or software at a common location or distributed and administered by different entities. As shown, multi-channel speech audio 202 is fed into a speech feature generator 204. Multiple speech audio channels can be logically divided into separate individual audio streams. For example, a live two-way call between a customer and agent can be streamed to the speech feature generator 204 as two-channel speech: a first channel includes customer speech and a second channel includes agent speech. Likewise, a three-way call can be fed to the speech feature generator 204 in a separate channel for each of the three participants. In some embodiments, a channel includes a separate physical transmission medium of an audio connection for each participant of a call. The speech feature generator 204 converts analog speech signals of the one or more channels into digital speech audio information that can be processed to create or extract speech features. As such, the speech feature generator 204 enables recognition and translation of spoken language into features of speech such as words, phrases, patterns, tone, amplitude, etc. For example, the speech feature generator 204 can include an automatic speech recognition (ASR) function or speech-to-text (STT) function that transcribes speech into text. In some embodiments, the speech feature generator 204 can implement a machine learning process to train a feature model that outputs speech features based on a training set of text and vocabulary for a particular type of contact center. For example, in the context of a telecommunications contact center, the training set can include text and vocabulary that is specific to telecommunications services, products, or information. As such, the speech feature generator 204 can analyze human voices of prior calls received by similar contact centers to fine-tune the feature model implemented by the speech feature generator 204. In some embodiments, the speech feature generator 204 can label the speech features based on the content and/or source of the speech to facilitate subsequent processing. For example, the labeled speech features of a customer can be weighted more than agent speech features when training a search model or when searching for relevant information that addresses a customer's inquiry. Moreover, the speech feature generator 204 can be trained to process the customer audio channel to be speaker-independent and trained to process an agent audio channel to be speaker-dependent.
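The differential weighting described above can be pictured with a short sketch. In the listing below, each feature arrives labeled with its source channel, and customer-channel features accumulate more search weight than agent-channel features; the specific weights and the helper name are illustrative assumptions, not values taken from the disclosure.

    # Sketch of channel-labeled feature weighting: customer mentions bias
    # the query more heavily than agent mentions.
    CHANNEL_WEIGHTS = {"customer": 1.0, "agent": 0.4}

    def weight_features(labeled_features):
        """labeled_features: iterable of (channel, keyword) pairs from the
        speech feature generator. Returns {keyword: accumulated weight}."""
        weighted = {}
        for channel, keyword in labeled_features:
            weighted[keyword] = weighted.get(keyword, 0.0) + CHANNEL_WEIGHTS.get(channel, 0.5)
        return weighted

    features = [("customer", "dropped calls"),
                ("agent", "coverage map"),
                ("customer", "coverage map")]
    print(weight_features(features))
    # {'dropped calls': 1.0, 'coverage map': 1.4} — customer mentions dominate.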
As a result, the speech feature generator204can find relevant information for any customer inquiry but is customized for a particular agent. A speech feature collector206can receive the output of the speech feature generator204, which includes speech features of the multi-channel speech audio202. In some implementations, the speech feature generator204can be a third-party service such as GOOGLE voice-to-text or AMAZON transcribe, which can transcribe customer service calls and generate metadata to create a searchable archive of speech features that are fed to the speech feature collector206. In the context of a telecommunications call center, the collected speech features can include keywords or phrases related to a communications service, mobile phone device problems, device specifications, service terms, customer history, etc. The speech features of the speech feature collector206are fed to a search and analytics engine208in a query for search results. For example, the speech features of a live audio call between a customer and agent can be fed to the search and analytics engine208in real-time or near-real time while the audio call is ongoing. Examples of sources of search results include databases for knowledge base (KB) articles210, user profiles212, and tasks or actions214. Among other things, the KB articles210database can store complex structured and unstructured information used by the contact center. The user profiles212database can store specific data of particular customers including service plans, mobile device specifications, preferences, and customer business histories. The user profiles212database can also include data about agents, including their expertise, work history or experience, and associated business unit. The tasks or actions214database can include tasks or actions that can be taken by the agent or customer to address an inquiry. The search and analytics engine208can search for and identify search results including items in the databases that match speech features (e.g., phrases, keywords, characters) or labels that were extracted or created based on the live conversation between the agent and customer. The search engine can include a learning algorithm that finds patterns in training data indicative of input parameters corresponding to target results. The output of the training process is a machine learning search model that can predict search results. In one implementation, the input parameters216can include feedback obtained from inputs by the agent to the agent's desktop. Examples of the input parameters include clicks or other interactions on the agent's desktop in response to search results, events that occurred on the agent's desktop, results that were selected or utilized to address a customer's inquiry, or any other analytic process or operation. The results output by the search and analytics engine208are sent to the speech feature collector206. Hence, a cycle of submitting speech features and returning results from the search and analytics engine208can occur periodically during a live audio call between a customer and agent. For example, the speech features can be fed to the search and analytics engine208and results can be returned every 0.01, 0.2, or 3 seconds. In some implementations, the cycle of inputting speech features and outputting results by the search and analytics engine208is continuous while the live conversation is ongoing. A results processor218manages the results generated by the search and analytics engine208to deliver suitable content or tools to the agent's desktop.
For example, the results processor218can include a table that maps results such as content, tools, or other features to locations of the dynamic desktop, which is being viewed by the agent during the live call with the customer. In some implementations, the results processor218creates control signals based on the search results. The control signals can control how the system dynamically adapts the agent desktop220. The agent desktop220can be structured based on a new live call and dynamically adapt to the live call as it spontaneously develops. For example, a new call that is received at the agent's telephone can cause the agent's desktop to establish a communications socket with the results processor218to stream results by calling functions of a desktop-facing application programming interface (API)222. The agent desktop220can also use the communications socket to provide feedback of input parameters to the search and analytics engine208through the results processor218and/or the speech feature collector206. In some implementations, the resources that are available to the agent desktop220can be further filtered based on whether they are associated with a measure that exceeds a predetermined threshold. For example, the results can be associated with relevancy scores, and the agent desktop220may only show content that has a relevancy score greater than a threshold (e.g., 95) and only launch tools that have a relevancy score that is greater than another threshold (e.g., 80). The content and tools that do not exceed their respective thresholds can be queued and readily available but not shown or launched, respectively. As such, the agent desktop220can avoid being crowded by less relevant content or tools. FIG.3is a block diagram that illustrates a platform300implemented by a system to dynamically adapt a dynamic desktop (also referred to simply as a “desktop”) for a customer service agent. As shown, the platform300has a lowermost layer of functional modules or engines302that includes a search module304-1, an analytics module304-2, a rules engine304-3, a live speech learning module304-4, a sentiment analysis module304-5, and an events queue304-6. The modules or engines302can be implemented with any combination of software (e.g., executable instructions or computer code) and hardware (e.g., at least a memory and processor). Accordingly, in some examples, a module or engine is a processor-implemented module or set of code and represents a computing device having a processor that is at least temporarily configured and/or programmed by executable instructions stored in memory to perform one or more of the particular functions that are described herein. The search module304-1can be embodied as a search engine that searches for and identifies items in a database that correspond to keywords or characters indicated by a user during an ongoing dialogue. The analytics module304-2can be embodied as an engine that processes inputs (e.g., speech, computer interactions) and outputs the discovery, interpretation, and communication of meaningful patterns. It can also entail applying data patterns towards effective decision making. In other words, the analytics module304-2can be understood as the connection between data and effective decision making. The rules engine304-3performs logic-based determinations regarding how to adapt a dynamic desktop, the content items, software tools, notifications, and their placement on the dynamic desktop.
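Such threshold-based determinations can be sketched in Python as follows, using the example thresholds of 95 for content and 80 for tools; the dictionary-based result format is an assumption made for illustration only.

def route_results(results, content_threshold=95, tool_threshold=80):
    """Split scored results into content to show, tools to launch, and a
    queue of below-threshold items that stay readily available."""
    shown, launched, queued = [], [], []
    for item in results:
        score = item["relevancy"]
        if item["kind"] == "content" and score > content_threshold:
            shown.append(item)
        elif item["kind"] == "tool" and score > tool_threshold:
            launched.append(item)
        else:
            queued.append(item)  # not displayed, but ready if relevance rises
    return shown, launched, queued

# Only the article is shown and only the billing tool is launched; the
# below-threshold FAQ is queued rather than crowding the desktop.
results = [
    {"kind": "content", "name": "battery article", "relevancy": 97},
    {"kind": "content", "name": "roaming FAQ", "relevancy": 60},
    {"kind": "tool", "name": "billing tool", "relevancy": 85},
]
shown, launched, queued = route_results(results)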
For example, the rules engine304-3can determine whether content items or software tools are shown or launched, respectively, on the dynamic desktop depending on whether their relevance exceeds one or more thresholds. The speech learning module304-4implements a learning algorithm to improve the speech learning capability of the platform300. The sentiment analysis module304-5can similarly implement a learning algorithm to improve the sentiment analysis of the platform300. The events queue304-6performs a queueing function for events that are identified in the audio stream, which can be used to adapt the dynamic desktop. The functional resources306rely on the modules or engines302to perform functions including tool access308, live search310, and suggestions312. Examples of the tool access308include menus, notifications, and apps that can be launched or embedded on a dynamic desktop to adapt to a live audio conversation between a customer and an agent. Hence, the tool access308can rely on the speech learning module304-4, sentiment analysis module304-5, and other modules or engines to identify suitable tools. Likewise, the live search310can find content from a variety of databases, and the suggestions312can find suggestions responsive to the live audio dialogue between the customer and agent. The uppermost layer of the platform300represents the dynamic desktop314that is presented on the agent's computer. The dynamic desktop314is embodied like a web browser with portions that contain certain content and that organizes content into different tabs. That is, the dynamic desktop314includes multiple tabs and associated display areas (e.g., windows) that are ordered such that any content items of the frontmost window are more relevant to the most recently analyzed portion of the audio conversation compared to any content items of remaining windows hidden behind the frontmost window. As shown, the dynamic desktop314includes tabs316-1through316-4, wherein tab316-1is the frontmost tab that displays its content items while the windows associated with tabs316-2through316-4can include content items of progressively less relevance. Hence, the tab316-4contains the least relevant content among all the tabs316-1through316-4. FIG.4is a flow diagram that illustrates a process for dynamically adapting a computer desktop (“desktop” or “dynamic desktop”) for a customer service agent to a live audio dialogue with a customer. The process400can be performed by a contact center system (“system”) to dynamically adapt any user interface (UI) in real-time based on a live audio dialogue. In402, the system receives a customer-initiated connection request for a contact center. The connection request can include an indication of an inquiry from a customer. For example, when calling the system, the customer can provide preliminary inputs to an interactive voice response (IVR) system. The system can collect contextual or other information about the customer from the customer and/or retrieve it from a customer database. The system can generate a relevancy measure based on the preliminary inputs. In404, the system initializes a desktop for a computer of the agent of the contact center based on the relevancy measure. In particular, the system causes display of the desktop on a computer in which the agent is logged in. In406, the system establishes an audio connection (e.g., live telephone call) between the customer and the agent.
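Steps 402 through 406 can be pictured with the following illustrative skeleton; the keyword-overlap relevancy measure and the placeholder channel objects are assumptions, since the process leaves the exact metric and transport unspecified.

def handle_connection_request(request, customer_db):
    """Derive a relevancy measure from preliminary IVR inputs plus stored
    customer data, initialize the agent desktop from it, and open a
    two-channel audio connection (steps 402-406)."""
    profile = customer_db.get(request["customer_id"], {})
    ivr_terms = set(request["ivr_inputs"])
    # Toy relevancy measure: terms matching the customer's known topics
    # score higher than unmatched terms.
    measure = {t: (2 if t in profile.get("topics", []) else 1) for t in ivr_terms}
    desktop = {"tabs": sorted(measure, key=measure.get, reverse=True)}
    channels = {"customer": object(), "agent": object()}  # placeholder sockets
    return desktop, channels

customer_db = {"c1": {"topics": ["billing"]}}
desktop, channels = handle_connection_request(
    {"customer_id": "c1", "ivr_inputs": ["billing", "upgrade"]}, customer_db)
# desktop["tabs"] lists "billing" first because it matched the profile.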
In one example, a two-channel audio connection for a live audio dialogue includes a first channel for customer speech and a second channel for agent speech. A speech feature analyzer (e.g., natural language analyzer) can process segments of the live audio dialogue in accordance with a speech feature model to output speech features such as keywords. In one example, speech features can indicate substance and meaning of the segment of the live audio dialogue. In some implementations, a third-party service provider provides the speech feature analyzer. In408, the system generates a search query based on the speech features (e.g., keywords). In one example, the system can predict a customer's inquiry of the live audio dialogue based on the multiple keywords, create a search term that is indicative of the inquiry, and add the search term to the search query. In some implementations, the speech features can be labeled for the search query. For example, the system can label speech features as telecommunications terms, which are weighted more heavily than a speech feature that is not labeled as a telecommunications term. As such, the search query (and its results) is biased for telecommunications terms. In another example, the speech features are labeled as either customer speech or agent speech. The customer speech can be weighted more heavily than agent speech. As such, the search query (and its results) is biased for customer speech. In some implementations, the output of the speech feature analyzer includes an indication of the customer's sentiment, which can be used to bias the search query and as a feedback signal to improve the performance of the speech feature analyzer. In410, the system obtains search results by querying one or more databases with the search query. The search results can include content items that are each relevant to the substance and meaning of the live audio dialogue. In addition (or as an alternative) to the content items, the search results can include a software tool, software application, or notification that is relevant to the live audio dialogue. In some implementations, the search results are weighted based on demographic and historical information about the customer and real-time actions performed by the agent on the desktop while engaged in the audio dialogue with the customer. In some implementations, the search results are ranked based on a statistical or historical characteristic of the customer, an indication of a customer device, or a service plan subscribed to by the customer. The system can cause display on the desktop of any of the multiple content items that exceed a first threshold and cause any of the multiple software tools that exceed a second threshold to launch on the desktop. In one example, the search results are weighted based on a consumer alert associated with the customer's mobile phone, where a type or model of the customer's mobile phone was indicated in the keywords. In412, the system generates one or more control signals based on the search results. The control signals are configured to control the content, and the placement of the content (or other resources), on the dynamic desktop during the live audio dialogue. For example, the control signals can cause the desktop to display only content items with a relevancy score that exceeds a threshold. In414, the system causes an application programming interface (API) to configure the desktop during the live audio dialogue based on the one or more control signals.
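Returning to the weighting described for step 408, the short Python sketch below shows how labels can bias a query toward customer speech and telecommunications terms; the term list, multipliers, and feature format are illustrative assumptions.

TELECOM_TERMS = {"battery", "roaming", "sim", "data plan"}  # illustrative only

def build_search_query(features):
    """Weight each speech feature: customer speech over agent speech, and
    labeled telecommunications terms over unlabeled terms."""
    weighted = []
    for feature in features:
        weight = 1.0
        if feature["speaker"] == "customer":
            weight *= 2.0   # bias the query toward customer speech
        if feature["text"] in TELECOM_TERMS:
            weight *= 1.5   # bias the query toward telecommunications terms
        weighted.append((feature["text"], weight))
    return weighted

query = build_search_query([
    {"text": "battery", "speaker": "customer"},
    {"text": "warranty", "speaker": "agent"},
])
# query == [("battery", 3.0), ("warranty", 1.0)]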
In one example, the desktop includes multiple tabs and associated windows that are ordered such that any content items of the frontmost window are more relevant to the segment of the live audio dialogue compared to any content items of any other tabs. For example,FIG.5Aillustrates a screen view of a dynamic desktop500-A for a customer service agent. As shown, the dynamic desktop500-A includes four tabs502-1through502-4with content items in each respective window area. For example, the frontmost tab502-1has content items including “account details,” “billing,” etc. The content items of the window associated with tab502-2are hidden behind the content items of the tab502-1. The content items of the window associated with tab502-3are hidden behind the content items of the tab502-2, and the content items of the window associated with tab502-4are hidden behind the content items of the tab502-3. The tabs502-1through502-4can be ordered in accordance with the relevance of their content items. For example, the frontmost tab502-1includes the most relevant content items while the backmost tab502-4contains the least relevant content items. The dynamic desktop500-A displays a suggestions window504of a software tool that overlays the window of the frontmost tab502-1. The content of the suggestions window504is adapted to the customer's speech. For example, in the illustrated example, the customer speech includes “my phone's battery is dying too quickly.” In response to that speech, the suggestions window504slides up from the bottom of the dynamic desktop500-A and displays relevant selectable content items including articles regarding battery exchange procedures, a memo that contains the transcript of the caller's previous call, and information regarding battery life troubleshooting. The content items can include associated tags508that can be selected by the agent to indicate the usefulness or relevance of the content, which can be used later to train the suggestions engine. In416, the system generates additional control signals based on search results of subsequent segments of the live audio dialogue. The system can periodically or continuously query the database(s) for speech features that are extracted from respective segments of the live audio dialogue. For example, the system can continuously query the database(s) for speech features that are extracted continuously from the live audio dialogue as it develops spontaneously. As such, the system can continuously generate control signals based on search results that are continuously collected. In418, the system dynamically adapts the desktop in accordance with the additional control signals. For example, the desktop can adapt to the live audio dialogue by adding or removing tabs, rearranging the order of the tabs, or changing the content of the tabs. In one example, the system can move content of the frontmost tab behind content of another tab, replace one tab without changing the frontmost tab, or adapt content of the tabs. For example,FIG.5Billustrates another screen view of the dynamic desktop500-B for the customer service agent. As shown, the tabs502-1,502-4, and502-2have been reordered, and the tab502-3has been replaced by tab502-6. Moreover, tab502-4is selected for display of its content on a display device. The system can include a combination of various additional features. For example, the system can collect input parameters indicative of an interaction by the agent with the desktop.
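One hedged sketch of how such input parameters might bias later searches appears below; the update rule, learning rate, and item keys are assumptions, as the description does not specify how the bias is computed.

def update_weights(weights, interactions, rate=0.1):
    """Nudge per-item search weights from agent interactions: items the
    agent clicked or used gain weight, ignored items lose a little."""
    for item, used in interactions.items():
        delta = rate if used else -rate / 2
        weights[item] = max(0.0, weights.get(item, 1.0) + delta)
    return weights

weights = {"battery article": 1.0, "roaming FAQ": 1.0}
update_weights(weights, {"battery article": True, "roaming FAQ": False})
# weights == {"battery article": 1.1, "roaming FAQ": 0.95}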
The system can generate a feedback signal based on the input parameters to update the search engine that outputs search results based on speech features. Hence, the search engine is biased based on the feedback signal. In another example, the system can generate a memo of the audio dialogue between the customer and the agent and generate a feedback signal based on content of the memo. The search results that are generated in real-time during the audio dialogue between the customer and the agent can be weighted based on the content of the memo. CONCLUSION FIG.6is a block diagram illustrating an example of a processing system600in which at least some operations described herein can be implemented. The processing system600represents a system that can run any of the methods/algorithms described herein. For example, system200or any of its components can include or be part of a processing system600. The processing system600can include one or more processing devices, which can be coupled to each other via a network or multiple networks. A network can be referred to as a communication network or telecommunications network. In the illustrated implementation, the processing system600includes one or more processors602, memory604, a communication device606, and one or more input/output (I/O) devices608, all coupled to each other through an interconnect610. The interconnect610can be or include one or more conductive traces, buses, point-to-point connections, controllers, adapters and/or other conventional connection devices. Each of the processor(s)602can be or include, for example, one or more general-purpose programmable microprocessors or microprocessor cores, microcontrollers, application specific integrated circuits (ASICs), programmable gate arrays, or the like, or a combination of such devices. The processor(s)602control the overall operation of the processing system600. Memory604can be or include one or more physical storage devices, which can be in the form of random-access memory (RAM), read-only memory (ROM) (which can be erasable and programmable), flash memory, miniature hard disk drive, or other suitable type of storage device, or a combination of such devices. Memory604can store data and instructions that configure the processor(s)602to execute operations in accordance with the techniques described above. The communication device606can be or include, for example, an Ethernet adapter, cable modem, Wi-Fi adapter, cellular transceiver, Bluetooth transceiver, or the like, or a combination thereof. Depending on the specific nature and purpose of the processing system600, the I/O devices608can include devices such as a display (which can be a touch screen display), audio speaker, keyboard, mouse or other pointing device, microphone, camera, etc. While processes or blocks are presented in a given order, alternative implementations can perform routines having steps, or employ systems having blocks, in a different order, and some processes or blocks can be deleted, moved, added, subdivided, combined, and/or modified to provide alternatives or sub-combinations, or can be replicated (e.g., performed multiple times). Each of these processes or blocks can be implemented in a variety of different ways. In addition, while processes or blocks are at times shown as being performed in series, these processes or blocks can instead be performed in parallel, or can be performed at different times.
When a process or step is “based on” a value or a computation, the process or step should be interpreted as based at least on that value or that computation. Software or firmware to implement the techniques introduced here can be stored on a machine-readable storage medium and can be executed by one or more general-purpose or special-purpose programmable microprocessors. A “machine-readable medium”, as the term is used herein, includes any mechanism that can store information in a form accessible by a machine (a machine can be, for example, a computer, network device, cellular phone, personal digital assistant (PDA), manufacturing tool, any device with one or more processors, etc.). For example, a machine-accessible medium includes recordable/non-recordable media (e.g., read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices), etc. Note that any and all of the implementations described above can be combined with each other, except to the extent that it can be stated otherwise above, or to the extent that any such implementations might be mutually exclusive in function and/or structure. Although the invention has been described with reference to specific implementations, it will be recognized that the invention is not limited to the implementations described but can be practiced with modification and alteration within the spirit and scope of the disclosed implementations. Accordingly, the specification and drawings are to be regarded in an illustrative sense rather than a restrictive sense. Physical and functional components (e.g., devices, engines, modules, and data repositories) associated with processing system600can be implemented as circuitry, firmware, software, other executable instructions, or any combination thereof. For example, the functional components can be implemented in the form of special-purpose circuitry, in the form of one or more appropriately programmed processors, a single board chip, a field programmable gate array, a general-purpose computing device configured by executable instructions, a virtual machine configured by executable instructions, a cloud computing environment configured by executable instructions, or any combination thereof. For example, the functional components described can be implemented as instructions on a tangible storage memory capable of being executed by a processor or other integrated circuit chip. The tangible storage memory can be computer-readable data storage. The tangible storage memory can be volatile or non-volatile memory. In some implementations, the volatile memory can be considered “non-transitory” in the sense that it is not a transitory signal. Memory space and storage described in the figures can be implemented with the tangible storage memory as well, including volatile or non-volatile memory. Each of the functional components can operate individually and independently of other functional components. Some or all of the functional components can be executed on the same host device or on separate devices. The separate devices can be coupled through one or more communication channels (e.g., wireless or wired channel) to coordinate their operations. Some or all of the functional components can be combined as one component. A single functional component can be divided into sub-components, each sub-component performing separate method steps or a method step of the single component. 
In some implementations, at least some of the functional components share access to a memory space. For example, one functional component can access data accessed by or transformed by another functional component. The functional components can be considered “coupled” to one another if they share a physical connection or a virtual connection, directly or indirectly, allowing data accessed or modified by one functional component to be accessed in another functional component. In some implementations, at least some of the functional components can be upgraded or modified remotely (e.g., by reconfiguring executable instructions that implement a portion of the functional components). Other arrays, systems, and devices described above can include additional, fewer, or different functional components for various applications. Aspects of the disclosed implementations can be described in terms of algorithms and symbolic representations of operations on data bits stored in memory. These algorithmic descriptions and symbolic representations generally include a sequence of operations leading to a desired result. The operations require physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electric or magnetic signals that are capable of being stored, transferred, combined, compared, and otherwise manipulated. Customarily, and for convenience, these signals are referred to as bits, values, elements, symbols, characters, terms, numbers, or the like. These and similar terms are associated with physical quantities and are merely convenient labels applied to these quantities. Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense; that is to say, in the sense of “including, but not limited to.” As used herein, the terms “connected,” “coupled,” or any variant thereof mean any connection or coupling, either direct or indirect, between two or more elements; the coupling or connection between the elements can be physical, logical, or a combination thereof. Additionally, the words “herein,” “above,” “below,” and words of similar import, when used in this application, shall refer to this application as a whole and not to any particular portions of this application. Where the context permits, words in the above Detailed Description using the singular or plural number can also include the plural or singular number, respectively. The word “or,” in reference to a list of two or more items, covers all of the following interpretations of the word: any of the items in the list, all of the items in the list, and any combination of the items in the list. The above detailed description of embodiments of the system is not intended to be exhaustive or to limit the system to the precise form disclosed above. While specific embodiments of, and examples for, the system are described above for illustrative purposes, various equivalent modifications are possible within the scope of the system, as those skilled in the relevant art will recognize. For example, some network elements are described herein as performing certain functions. Those functions could be performed by other elements in the same or differing networks, which could reduce the number of network elements.
Alternatively or additionally, network elements performing those functions could be replaced by two or more elements to perform portions of those functions. In addition, while processes, message/data flows, or blocks are presented in a given order, alternative embodiments can perform routines having steps, or employ systems having blocks, in a different order, and some processes or blocks can be deleted, moved, added, subdivided, combined, and/or modified to provide alternatives or subcombinations. Each of these processes, message/data flows, or blocks can be implemented in a variety of different ways. Also, while processes or blocks are at times shown as being performed in series, these processes or blocks can instead be performed in parallel, or can be performed at different times. Further, any specific numbers noted herein are only examples; alternative implementations can employ differing values or ranges. Those skilled in the art will also appreciate that the actual implementation of a database can take a variety of forms, and the term “database” is used herein in the generic sense to refer to any data structure that allows data to be stored and accessed, such as tables, linked lists, arrays, etc. The teachings of the methods and system provided herein can be applied to other systems, not necessarily the system described above. The elements and acts of the various implementations described above can be combined to provide further embodiments. Any patents and applications and other references noted above, including any that may be listed in accompanying filing papers, are incorporated herein by reference. Aspects of the technology can be modified, if necessary, to employ the systems, functions, and concepts of the various references described above to provide yet further implementations of the technology. These and other changes can be made to the invention in light of the above Detailed Description. While the above description describes certain implementations of the technology, and describes the best mode contemplated, no matter how detailed the above appears in text, the invention can be practiced in many ways. Details of the system can vary considerably in its implementation details, while still being encompassed by the technology disclosed herein. As noted above, particular terminology used when describing certain features or aspects of the technology should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the technology with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the invention to the specific implementations disclosed in the specification, unless the above Detailed Description section explicitly defines such terms. Accordingly, the actual scope of the invention encompasses not only the disclosed embodiments, but also all equivalent ways of practicing or implementing the invention under the claims. While certain aspects of the technology are presented below in certain claim forms, the inventors contemplate the various aspects of the technology in any number of claim forms. For example, while only one aspect of the invention is recited as embodied in a computer-readable medium, other aspects can likewise be embodied in a computer-readable medium.
Accordingly, the inventors reserve the right to add additional claims after filing the application to pursue such additional claim forms for other aspects of the technology. | 45,390 |
11861147 | DETAILED DESCRIPTION Wire-transfer applications that process wire transfer requests may be large, complex applications that may require authorization credentials to perform certain operations. One example of such an application is the Money Transfer System (MTS) application. Issues may arise as wire-transfer requests are received by, processed by, or transmitted from a wire-transfer application. Many users of the MTS application may not have the authorization credentials required to interact with the wire-transfer application and resolve such issues. For example, an error may arise in an authentication process for a wire-transfer request. The authentication process may fail, and the wire-transfer request may be “trapped” (e.g., remain pending) rather than becoming approved or denied. A user may identify that the wire-transfer request is trapped, but may not have the authorization credentials needed to cancel the wire-transfer request or restart the authentication process. As a result, the trapped request may remain pending indefinitely, consuming valuable computing resources. Errors, trapped requests, application failures, and other problems associated with the wire-transfer application can waste significant computing resources (e.g., memory usage and processing power), especially when they remain unresolved for long periods of time. These problems can also cascade, causing downstream operations of the wire-transfer application to also experience issues. For example, the wire-transfer application may not process subsequent wire-transfer requests as efficiently or may fail altogether. In addition to the above problems, in some cases, wire-transfer requests may be transmitted to the wire-transfer application at certain times of day via communication channels. Each communication channel can connect a service, such as a wire-initiating application, to the wire-transfer application. A user of a wire-initiating application can start up a communication channel between the wire-initiating application and the wire-transfer application. Wire-transfer requests can then be transmitted between the wire-initiating application and the wire-transfer application. But, the user may not have the authorization credentials required to start up all communication channels for all services at the same time (e.g., substantially contemporaneously). Instead, the user may have to individually log into each service and establish the communication channel. As there can be many communication channels (e.g., dozens), starting up each communication channel individually every day may be time-consuming and may consume unnecessary computing resources. Similarly, the user may shut down each communication channel individually at the end of the day. Functions required for end-of-day processing may also be initiated individually. Some examples of the present disclosure overcome one or more of the abovementioned problems by allowing a user to initiate wire-transfer application functionalities by providing the user with a graphical user interface that is supported by a corresponding execution service with elevated privileges in the computing environment. For example, the graphical user interface can be customized to display functions of the wire-transfer application to the user. Functions that should not be initiated by the user, even via the graphical user interface, are not displayed in the graphical user interface.
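As a purely illustrative sketch of this display-side restriction, the snippet below filters the options before rendering; the function names and allowlist are hypothetical, and in practice the permitted set would come from server-side configuration rather than client code.

# Hypothetical allowlist of functions the GUI is permitted to display.
DISPLAYABLE_FUNCTIONS = {
    "recycle_trapped_requests",
    "cancel_requests",
    "start_all_channels",
    "stop_all_channels",
}

def options_for_gui(all_functions):
    """Return only the functions that may be displayed. Filtering happens
    before rendering, so disallowed functions are never shown at all,
    rather than shown but disabled."""
    return sorted(f for f in all_functions if f in DISPLAYABLE_FUNCTIONS)

options = options_for_gui(["cancel_requests", "edit_wire_amount", "start_all_channels"])
# options == ["cancel_requests", "start_all_channels"]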
The user may not be authorized to interact with the wire-transfer application outside of the graphical user interface. Examples of functions that can be initiated via the graphical user interface can include recycling trapped wire-transfer requests, canceling wire-transfer requests, or simultaneous start-up or shut-down of communication channels for the wire-transfer application. Selecting a function via the graphical user interface can cause the execution service, which has elevated privileges or the authorization credentials required to perform the displayed functions, to interact with the wire-transfer application to initiate the function. In this way, issues relating to processing wire-transfer requests can be mitigated or avoided in the wire-transfer application without giving unauthorized users unnecessary privileges. This can result in improved performance of the wire-transfer application and reduction of wasteful consumption of bandwidth and computing resources, while also maintaining the security integrity of the computer system. Further, operations that would otherwise involve multiple, repeated initiations by the user (e.g., shutting down or starting up the communication channels) can be initiated with a single selection via the graphical user interface. In one particular example, a user may interact with a graphical user interface on a client device. The graphical user interface can display options of functions that can be performed by the wire-transfer application in a secure computing environment. The displayed options can represent functions that the user may initiate without having the authorization credentials to do so in the secure computing environment. For example, the displayed functions may only be performed using an administrative ID, which the user may not have for security reasons. The administrative ID can have authorization credentials for many or all functions of the wire-transfer application, including the functions displayed as options on the graphical user interface. The displayed options may be functions that can be performed to improve the functioning of the wire-transfer application. But, because the user does not have the administrative ID, the user would normally be unable to initiate the functions. And, allowing all users of the wire-transfer application to use the administrative ID, which can also provide access to more valuable functions in the wire-transfer application, may introduce security risks. The graphical user interface can therefore be used to allow a user without the administrative ID (e.g., a limited-privilege user) to initiate certain functions for the wire-transfer application. The user can select an option on the graphical user interface for a particular function. For example, the user can select an option corresponding to canceling a group of wire-transfer requests. Because the user does not have the authorization credentials or privileges (such as the administrative ID) needed to initiate the functions, the client device cannot directly send a command directing the wire-transfer application to perform the function. Instead, selecting the option can cause a separate execution service, which is authorized to perform the function, to send the command on behalf of the user. For example, the execution service may access the wire-transfer application using the administrative ID. When the client device transmits the selected option, a server in the secure computing environment can generate a text file indicating the selected function.
The text file may specify the group of wire-transfer requests along with instructions for cancelling the group of wire-transfer requests by the wire-transfer application. The text file can be saved into a folder that is monitored by the execution service. The execution service may have the authorization credentials required to cause the wire-transfer application to perform the functions displayed in the graphical user interface. When the execution service detects the text file in the folder, the execution service can generate and issue a command to the wire-transfer application to cause the wire-transfer application to cancel the group of wire-transfer requests. A unique identifier for the user can then be logged in association with the function to create an audit trail. Thus, security of the wire-transfer application can be maintained while allowing users to initiate certain functions to prevent or mitigate issues in the wire-transfer application. These illustrative examples are given to introduce the reader to the general subject matter discussed herein and are not intended to limit the scope of the disclosed concepts. The following sections describe various additional features and examples with reference to the drawings in which like numerals indicate like elements, and directional descriptions are used to describe the illustrative aspects, but, like the illustrative aspects, should not be used to limit the present disclosure. FIG.1is a block diagram of an example of a computing environment100for enabling user initiation of functionalities for a wire-transfer application102according to some aspects of the present disclosure. The computing environment100can include the wire-transfer application102, a client device103, a server105, an execution service132, and one or more wire-initiating applications108a-bthat can communicate via a network110. The network110can be a public data network, a private data network, or some combination thereof. A data network may include one or more of a variety of different types of networks, including a wireless network, a wired network, or a combination of a wired and a wireless network. Examples of suitable networks include the Internet, a personal area network, a local area network (LAN), a wide area network (WAN), or a wireless local area network (WLAN). Examples of the client device103can include desktop computers, servers, mobile phones (e.g., cellular phones), PDAs, tablet computers, netbooks, laptop computers, hand-held specialized readers, and wearable devices such as smart watches. The wire-transfer application102can process and perform wire transfers between computing systems (e.g., transfers of money between one or more entities' accounts hosted on the computer systems). For example, the wire-transfer application102can transmit or receive wire-transfer requests112a-bfrom one or more wire-initiating applications108a-b. The wire-transfer requests112a-bcan be transmitted to or received from the wire-transfer application102via communication channels114a-b, also referred to as “lines.” The communication channels114a-bcan be socket connections or message queue (MQ) connections between the wire-transfer application102and the wire-initiating applications108a-b. The wire-transfer application102may process wire-transfer requests112a-bto perform wire transfers between computing devices. Some operations performed by the wire-transfer application102, such as a set of functions116, may require selective authorization to perform.
That is, a user118may not have authorization to initiate the set of functions116or to interact with the wire-transfer application102at all. For example, the user118may not have permissions in the computing environment100to start up or shut down multiple communication channels114a-bconcurrently. More specifically, the communication channels114a-bmay only be active at certain times of day. For example, communication channels114a-bmay be started up at the beginning of the day and shut down at the end of the day to prevent wire-transfer requests112a-bfrom being processed overnight. Typically, each communication channel114may be individually started up or shut down manually by the user118. But, individually starting up or shutting down dozens of communication channels114a-bmay be time-consuming and, because of overlapping data (e.g., headers) in the requests, may waste bandwidth and computing resources in repeatedly transmitting and processing the same or similar data. So, it may be desirable for the user118to be able to start up or shut down some or all communication channels114a-bat the same time (e.g., substantially contemporaneously). Normally, though, the user118may not have the authorization credentials required to start up or shut down all communication channels114a-bat the same time for the wire-transfer application102. As another example, a first wire-transfer request112atransferred to the wire-transfer application102may become “trapped” (e.g., experience errors that prevent the wire-transfer request112from being processed by the wire-transfer application102). The user118may determine that the first wire-transfer request112ais trapped, but may not have the authorization credentials required to cause the wire-transfer application102to address the issue. A trapped wire-transfer request112that remains unaddressed may indicate a larger issue, or may cause additional issues in the computing environment100. Similarly, the user118may wish to cancel a second wire-transfer request112bafter initiation by the second wire-initiating application108b. But, the user may not have the authorization credentials required to cancel the second wire-transfer request112b. In order to enable user initiation of the set of functions116for the wire-transfer application102, a graphical user interface104can be presented to the user118on the client device103. The graphical user interface104can display the set of functions116as options120(e.g., menu items, buttons, check boxes, etc.). The functions116may include functions that the user118would otherwise be unauthorized to initiate in the computing environment100. The user118may be authorized by an entity, such as a business, that manages the computing environment100to interact with the graphical user interface104to initiate the set of functions116. But, the wire-transfer application102may be configured to prevent most users from directly initiating the set of functions116to maintain security for the computing environment100. Thus, the graphical user interface104can allow the user118to indirectly initiate the set of functions116without jeopardizing the security of the computing environment100. The user118may interact with the graphical user interface104to make a selection122for a selected functionality124, such as to start up the communication channels114a-b. When the user118makes the selection122, the client device103can transmit the selection122to a server105in the computing environment100.
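The hand-off summarized earlier, in which the server records the selection as a text file in a monitored folder and a privileged execution service acts on it, might be sketched as follows. The folder path, JSON file format, and polling approach are all assumptions made for illustration.

import json
import time
from pathlib import Path

DROP_FOLDER = Path("/var/wiretransfer/requests")  # hypothetical location

def write_selection(user_id, functionality, details):
    """Server side: record the user's selection as a text file in the
    folder that the privileged execution service monitors."""
    DROP_FOLDER.mkdir(parents=True, exist_ok=True)
    payload = {"user": user_id, "functionality": functionality, "details": details}
    path = DROP_FOLDER / f"{functionality}-{time.time_ns()}.txt"
    path.write_text(json.dumps(payload))
    return path

def watch_and_execute(issue_command, audit_log, poll_s=1.0, once=False):
    """Execution-service side: detect new text files, issue the mapped
    command under elevated credentials, and log who initiated it."""
    seen = set()
    while True:
        for path in sorted(DROP_FOLDER.glob("*.txt")):
            if path in seen:
                continue
            seen.add(path)
            entry = json.loads(path.read_text())
            issue_command(entry["functionality"], entry["details"])
            # Audit trail: log the initiating user's unique identifier
            # with the functionality that was executed.
            audit_log.append({"user": entry["user"],
                              "functionality": entry["functionality"]})
        if once:
            return
        time.sleep(poll_s)

Because only the execution service ever touches the wire-transfer application, the user's own credentials never need to be widened.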
The server105may not have authorization in the computing environment100to transmit the appropriate commands to the wire-transfer application102(e.g., for security reasons, the server105may also be a limited-privilege user in the computing environment100). So, when the server105receives the selection122from the client device103, the server105can interact with an execution service132that can initiate the selected functionality124. The server105may interact with the execution service132via a file, messaging, a data structure stored in memory, or any other suitable means. For example, the server105can generate a text file126. The text file126can include data identifying the selected functionality124, such as a name128of the selected functionality124. For example, the text file126may be named “Start of Day Processing.” The server105can then store the text file126in a predefined storage location130that can be monitored by the execution service132. The execution service132can automatically detect the presence of the text file126. In some examples, the execution service132may detect the text file126based on the name128. Alternatively, rather than a text file126stored in a folder or a database, the server105can generate a data structure stored in RAM that is accessible to the execution service132. The data structure can include the data identifying the selected functionality124. The execution service132can automatically issue a command134to the wire-transfer application102based on the selected functionality124. For example, the execution service132can issue the command134based on the presence or content of the text file126, to cause the wire-transfer application102to execute the selected functionality124. In some such examples, the execution service132can extract the content from the text file126, analyze the content, and, based on the content, generate and transmit the command134. Different content may be mapped to different commands using a predefined lookup table or other techniques, or the content of the text file126may itself be at least part of the command134. The execution service132can have the required authorization to cause the wire-transfer application102to execute the selected functionality124. For example, the execution service132may have elevated privileges (e.g., administrative privileges) in the computing environment100that allow the execution service132to issue commands134to the wire-transfer application102and thereby initiate the selected functionality124. Additionally or alternatively, the execution service132may have authentication information (e.g., a username and password) that confers the requisite permissions on the execution service132. The wire-transfer application102may receive the command134and execute the corresponding functionality. For instance, the wire-transfer application102may establish connections with the wire-initiating applications108a-bvia the communication channels114a-b. This can allow wire-transfer requests112a-bto be transferred between the wire-initiating applications108a-band the wire-transfer application102. In some examples, the user118may determine that the first wire-transfer request112ahas been trapped in a particular function executed by the wire-transfer application102. For example, the first wire-transfer request112amay be trapped in a function that identifies a recipient of the wire transfer. The function may be malfunctioning and may be unable to identify the recipient, which can prevent the requested wire transfer from being performed.
To address the issue, the user118can interact with the graphical user interface104to view functions that can be recycled (e.g., restarted). For example, data can be extracted from a configuration file136for the wire-transfer application102. The data can indicate which functions (e.g., the set of functions116) can be restarted by the wire-transfer application102in response to commands sent by the execution service132. The graphical user interface104can be customized to include options120that correspond to the set of functions116via the configuration file136. The user118may select the option120that corresponds to the selected functionality124in which the first wire-transfer request112ais trapped. The resulting text file126stored in the predefined storage location130may indicate the selected functionality124, and that the first wire-transfer request112ashould be recycled. The command134transmitted by the execution service132can cause the wire-transfer application102to stop the selected functionality124and then to restart the selected functionality124. Alternatively, the command134can cause the wire-transfer application102to stop the first wire-transfer request112afrom being processed by the selected functionality124and then to restart the first wire-transfer request112ain the selected functionality124. Recycling the selected functionality124or the first wire-transfer request112amay allow the first wire-transfer request112ato be properly processed by the wire-transfer application102. Similarly, the user118may decide to cancel the second wire-transfer request112binitiated by the second wire-initiating application108b. The user118can interact with the graphical user interface104to select the option120that corresponds to the selected functionality124for the second wire-transfer request112b. The resulting text file126detected by the execution service132can indicate the selected functionality124to cancel the second wire-transfer request112b. Then, the execution service132can transmit the command134to the wire-transfer application102. In response, the wire-transfer application102can stop the second wire-transfer request112bfrom being processed. In yet another example, the selected functionality124may involve shutting down each of the communication channels114a-bat the same time. For example, the text file126may include a bulk entry specifying each communication channel114to shut down. The communication channels114a-bmay be shut down to perform end-of-day processing of wire-transfer requests112a-b. Such end-of-day processing may also be included in the selected functionality124. For example, after the wire-transfer application102shuts down the communication channels114a-b, the wire-transfer application102can generate a summary file138. The summary file138can include wire-transfer data140for all wire-transfer requests112a-btransmitted via the communication channels114a-bsince the communication channels114a-bwere started up. Typically, the user118may not have the required authorization to generate the summary file138for all wire-transfer requests112a-bat the same time. But, the graphical user interface104can enable user initiation of this selected functionality124. The summary file138generated for all wire-transfer requests112a-bsimultaneously can be submitted to the wire-transfer application102for further processing of the wire-transfer requests112a-b. Additionally, all functions initiated by the user118via the graphical user interface104may be recorded. 
For example, each time the command134is issued by the execution service132, a log142can be generated. The log142can indicate the set of selected functionalities144that have been selected by the user118via the graphical user interface104. The log142can also include a unique identifier146indicating the user118that selected each functionality. In this way, certain users118without authorization credentials can be allowed to initiate certain functions for the wire-transfer application102with an audit trail. AlthoughFIG.1depicts a certain number and arrangement of components, this is for illustrative purposes and intended to be non-limiting. Other examples may include more components, fewer components, different components, or a different arrangement of the components shown inFIG.1. And, although the examples described herein involve a wire-transfer application, similar techniques can be applied to other types of applications. FIG.2is a block diagram of an example of a computing environment200for enabling user initiation of functionalities for a wire-transfer application according to some aspects of the present disclosure. The computing environment200depicted inFIG.2includes one or more processing devices202communicatively coupled to a memory204. The processing devices202can include one processor or multiple processors. Non-limiting examples of the processing device202include a Field-Programmable Gate Array (FPGA), an application-specific integrated circuit (ASIC), a microprocessor, etc. The processing devices202can execute instructions206stored in the memory204to perform operations. In some examples, the instructions206can include processor-specific instructions generated by a compiler or an interpreter from code written in any suitable computer-programming language, such as C, C++, C#, etc. The memory204can include one memory or multiple memories. The memory204can be non-volatile and may include any type of memory that retains stored information when powered off. Non-limiting examples of the memory204include electrically erasable and programmable read-only memory (EEPROM), flash memory, or any other type of non-volatile memory. At least some of the memory can include a non-transitory computer-readable medium from which the processing devices202can read instructions206. A computer-readable medium can include electronic, optical, magnetic, or other storage devices capable of providing the processing device with computer-readable instructions or other program code. Examples of a computer-readable medium include magnetic disk(s), memory chip(s), ROM, RAM, an ASIC, a configured processor, optical storage, or any other non-transitory medium from which a computer processor can read the instructions206. In some examples, the processing devices202can detect a selection122by a user118of an option120in a graphical user interface104. The option120can be to initiate a selected functionality124of a wire-transfer application102in a computing environment100. The user118may not be authorized in the computing environment100to interact with the wire-transfer application102outside of the graphical user interface104. In response to detecting the selection122, the processing devices202can generate a text file126that includes data208identifying the selected functionality124to be executed by the wire-transfer application102. The processing devices202can store the text file126in a predefined storage location130that is monitored by an execution service132. 
The execution service132can be executed by the processing devices202to automatically detect a presence of the text file126in the predefined storage location130. In response to detecting the text file126, the execution service132can automatically issue a command134to the wire-transfer application102for causing the wire-transfer application102to execute the selected functionality124. FIG.3is a flowchart of a process for enabling user initiation of functionalities for a wire-transfer application according to some aspects of the present disclosure.FIG.3is described with reference to components inFIGS.1-2. Other examples can include more steps, fewer steps, different steps, or a different order of the steps than is depicted inFIG.3. At block302, the processing devices202can detect a selection122by a user118of an option120in a graphical user interface104. The option120can be to initiate a selected functionality124of a wire-transfer application102in a computing environment100. The user118may not be authorized in the computing environment100to interact with the wire-transfer application102outside of the graphical user interface104. The graphical user interface104can be customized to include options120that correspond to functions that can be initiated for the wire-transfer application102. For example, the user118may interact with the graphical user interface104to select wire-transfer requests112a-bthat can be canceled or recycled. The processing devices202can access a configuration file136to identify a set of functions116that can be executed to recycle or cancel wire-transfer requests112a-b. The user118can then select an option120that corresponds to a selected functionality124that can recycle the first wire-transfer request112a. At block304, in response to detecting the selection122, the processing devices202can generate a text file126that includes data208identifying the selected functionality124to be executed by the wire-transfer application102. The text file126can include instructions for recycling the first wire-transfer request112a. At block306, the processing devices202can store the text file126in a predefined storage location130that is monitored by an execution service132. For example, the predefined storage location130may be a database or a folder into which the text file can be stored. At block308, the processing devices202can execute the execution service132to automatically detect a presence of the text file126in the predefined storage location130. In some examples, the execution service132may detect and automatically process any new text file126in the predefined storage location130. In other examples, the execution service132may monitor the predefined storage location130for particular types of text files. For example, the execution service132may detect and process a text file126that has a name128that corresponds to a function from the set of functions116, such as “recycling”. After detection, the execution service132can identify the selected functionality124based on the data208in the text file126. The execution service132may determine that the text file126initiates a recycling process for the first wire-transfer request112a. At block310, in response to detecting the text file126, the processing devices202can execute the execution service132to automatically issue a command134to the wire-transfer application102for causing the wire-transfer application102to execute the selected functionality124.
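One way to picture the predefined lookup table mentioned earlier is the sketch below; the table entries and command strings are invented for illustration and do not reflect any actual MTS command set.

# Hypothetical mapping from functionality names found in a text file to
# commands issued to the wire-transfer application.
COMMAND_TABLE = {
    "recycle": lambda d: f"RECYCLE REQUEST {d['request_id']}",
    "cancel": lambda d: f"CANCEL REQUEST {d['request_id']}",
    "start_all_channels": lambda d: "START CHANNELS ALL",
}

def command_for(functionality, details):
    """Translate a detected text-file entry into the command that the
    execution service issues on the user's behalf."""
    try:
        return COMMAND_TABLE[functionality](details)
    except KeyError:
        raise ValueError(f"no command mapped for {functionality!r}")

command = command_for("recycle", {"request_id": "112a"})
# command == "RECYCLE REQUEST 112a"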
For example, the command 134 can cause the wire-transfer application 102 to execute the selected functionality 124 by stopping the first wire-transfer request 112a and then restarting the first wire-transfer request 112a. In some examples, the selected functionality 124 can involve stopping and restarting a thread or program that is processing the first wire-transfer request 112a. Recycling the first wire-transfer request 112a may enable the first wire-transfer request 112a to be successfully processed by the wire-transfer application 102. In some examples, the selected functionality 124 may be recorded in a log 142 in association with a unique identifier 146 for the user 118.

The foregoing description of certain examples, including illustrated examples, has been presented only for the purpose of illustration and description and is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Numerous modifications, adaptations, and uses thereof will be apparent to those skilled in the art without departing from the scope of the disclosure.
11861148

DETAILED DESCRIPTION OF ONE OR MORE EMBODIMENTS

The disclosure is particularly applicable to a web-based client server architecture deep search system and method for the financial industry, and it is in this context that the disclosure will be described. It will be appreciated, however, that the system and method in accordance with the invention has much greater utility since it can be used for searching in other industries or with other types of pieces of content (such as the legal industry and legal documents, the medical industry and medical documents, etc.), the system can be implemented using other computer system architectures, and the system is not limited to any particular computer architecture. For illustration purposes, the deep search system and method implemented in the financial industry is now described in more detail.

The system and method may be used to perform a textual search across a collection of documents in one or more electronic data sources, in the financial domain, over time, guided by concepts and scenarios pre-defined by financial experts. The system includes a context extraction engine that will a) recognize semantically defined unique and recurring scenarios within the textual material, consisting of a partial or whole sentence or multiple sentences, b) analyze and classify each scenario based on tense-recognizing linguistic rules and natural language processing techniques, c) analyze sentiment and subjectivity to determine if the scenario is objective or subjective, and d) determine the polarity and strength of sentiment relative to the company releasing the textual information and the likely impact on its stock price or the price of its other securities. The sentiment, the subjectivity, the polarity and strength of the sentiment, and the impact of the information may be stored as metadata associated with each piece of content. Based on this metadata, the system enables sophisticated searching within and across pieces of content, such as documents, SEC or other regulatory filings, transcripts of investor calls and presentations, videos, blogs, posts and the like, to find the specific information that the user is looking for. The system also scores companies in real time on a continuous scale from negative to neutral to positive, and enables a user to rank and screen companies to generate new investment ideas and make better investment decisions. Now, an example of an implementation of the search system is described in more detail.

FIG. 1 illustrates an example of an implementation of a search system 20 for efficiently conducting contextual and sentiment-aware deep search within a piece of content, such as a document, a piece of text, a blog, a posting and the like. The system may be implemented as a client/server type architecture as shown in FIG. 1, but may also be implemented using other architectures, such as cloud computing, a software as a service model, a mainframe/terminal model, a stand-alone computer model, a plurality of lines of code on a computer readable medium that can be loaded onto a computer system, a plurality of lines of code downloadable to a computer and the like, which are within the scope of the disclosure. The system 20 may be one or more computing devices 22 (such as computing devices 22a, 22b, . . . , 22n) that connect to, communicate with and/or exchange data over a link 24 to a search system 26, and that interact with each other to provide the contextual and sentiment-aware deep search within a piece of content.
Each computing device may be a processing unit based device with sufficient processing power, memory/storage and connectivity/communications capabilities to connect to and interact with the system 26. For example, each computing device 22 may be an Apple iPhone or iPad product, a Blackberry or Nokia product, a mobile product that executes the Android operating system, a personal computer, a tablet computer, a laptop computer and the like, and the system is not limited to operate with any particular computing device. The link 24 may be any wired or wireless communications link that allows the one or more computing devices and the system 26 to communicate with each other. In one example, the link may be a combination of wireless digital data networks that connect to the computing devices and the Internet. The search system 26 may be implemented as one or more server computers (all located at one geographic location or in disparate locations) that execute a plurality of lines of computer code to implement the functions and operations of the search system as described below in more detail. Alternatively, the search system 26 may be implemented as a hardware unit in which the functions and operations of the back end system are programmed into a hardware system. In one implementation, the one or more server computers may use 4-core Intel® processors, run the Linux operating system, and execute Java, Ruby, regular expressions, Flex 4.0, SQL, etc.

In the implementation shown in FIG. 1, each computing device 22 may further comprise a display 30a and a browser application 30b so that the display 30a can display web pages generated by the search system 26 and the user can fill in forms to provide search queries and the like to the search system 26. The browser application 30b may be a plurality of lines of computer code executed by a processing unit of the computing device. Each computing device 22 may also have the usual components of a computing device such as one or more processing units, memory, permanent storage, wireless/wired communication circuitry, an operating system, etc.

In the implementation shown in FIG. 1, the search system 26 may further comprise a web server 40 (that may be software based or hardware based) that allows each computing device to connect to and interact with the search system 26, such as by sending web pages to and receiving information from the computing devices, and a typical operating system 42 that is executed by one or more processing units that are part of the search system implementation. The search system 26 may further comprise a content extraction unit/engine 44, a linguistic analysis and word/phrase tagging unit 45, a sentiment analyzer 46, a search engine 47, and a store 48 that may be implemented as a software based or hardware based database and that may store the pieces of content associated with the system, the metadata generated by the search system for each piece of content, user preferences and the like. The content extraction engine/unit 44 may recognize semantically defined scenarios within the textual material, consisting of a partial or whole sentence or multiple sentences. The linguistic unit 45 analyzes and classifies each scenario based on linguistic rules and natural language processing techniques to determine subjectivity, as described below.
The sentiment analyzer 46 analyzes sentiment and subjectivity to determine if the scenario is objective or subjective, and determines the polarity and strength of sentiment of the sentence, paragraph or appropriate part of the piece of content relative to the company releasing the textual information and the likely impact on its stock price or the price of its other securities. The search engine 47 can perform searches based on the metadata, generate content to be displayed on the user interface of the system, and generate the reports of the system that are described below in more detail. In one implementation, the search engine may be the SOLR search engine, which is an open source enterprise search platform from the Apache Lucene project (additional information about SOLR can be found at http://lucene.apache.org/solr/ which is incorporated herein by reference). The store 48 also contains an archive of "raw" (unprocessed and untagged) pieces of content and of tagged pieces of content. The user interface of the search system (implemented as a user interface unit/portion) allows a user to conduct topical and sentiment filter based deep searches as described below in more detail.

FIG. 2 illustrates an overview of the deep search process 50. In the process, the search system receives feeds, which may be real-time, of pieces of content (52) such as financial documents, including 10K, 10Q or other SEC filings, or investor conference call transcripts, in the financial example. The content extractor unit of the system cleans the incoming pieces of content and normalizes the pieces of content (54). The content extractor unit of the system also extracts zones (particular sections of a document such as header, body, exhibits, MDA, and Footnotes in SEC filing documents) and sentences so that unique, meaningful information is separated from recurring or other boilerplate information during natural language processing. Often financial filings contain a large portion of recurring text that is repeated from the prior quarter, and this is typically less interesting to investors than new statements.

In the content extractor unit and linguistic unit of the system, using thousands of structured concepts and scenarios defined through careful expert analysis, semantic tags are assigned by linguistic and machine learning processes trained by domain experts (56). The linguistic unit also discerns the topic of the content using special linguistic rules, which is different from traditional search engines where a search is performed using words and phrases without contextual understanding of the text. For example, the linguistic analysis unit tags sentences based on their tense, to determine whether they talk about something that happened in the past, is continuing, or is expected to happen in the future. This is accomplished through a combination of linguistic analysis and domain-based language models that understand, for example, that a noun phrase like "deferred expenses" implies something about the future. In the system described here, the custom linguistic rules specifically designed for the financial domain provide highly specialized and accurate context.
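To make the tense tagging concrete: the following is a minimal sketch of how a modal-verb pattern over part-of-speech-tagged text could flag forward-looking statements, anticipating the @V-MODAL@-style macros shown later in this description. The regular expression and function name are illustrative simplifications, not the system's actual rule set.

```python
import re

# A simplified version of the modal-verb idea: a statement is treated as
# forward-looking if a modal verb (tagged MD) governs a base-form verb (VB),
# optionally with an adverb (RB) in between, e.g. "might/MD otherwise/RB become/VB".
MODAL_THEN_BASE = re.compile(
    r"\b(?:could|may|might|must|shall|should|will|wo|would)/MD"
    r"(?:\s+\S+/RB)?"   # optional intervening adverb
    r"\s+\S+/VB\b")

def is_forward_looking(pos_tagged_sentence: str) -> bool:
    return bool(MODAL_THEN_BASE.search(pos_tagged_sentence))

print(is_forward_looking(
    "The/DT latter/JJ action/NN would/MD cause/VB some/DT delay/NN"))  # True
```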
The sentiment analyzer unit of the search system then analyzes each piece of text for subjectivity, performs textual scenario matching, filters the subjective sentences, and assigns appropriate polarity based on supervised training rules, by deciding if the particular sentence or paragraph is favorable or unfavorable to the price of the asset in the case of the financial industry example (58, 60). Examples of the polarities (negative, neutral and/or positive scenarios) are shown in FIG. 2. The sentence or paragraph extracted from the piece of content may be marked with the topic tags, polarity tags, index markers, sentiment values, etc., and stored in the store 48 that is coupled to the context search engine, the sentiment engine and the linguistic components.

Traditional sentiment analysis is focused on the document level, helping users to find whole documents that in the aggregate have a positive or negative tone, as opposed to the sentence or paragraph level where the topic of interest is located. For example, the document level sentiment scores may be computed based on the sentence level scores as a net sentiment percentage of the total possible count: the number of positive statements minus the number of negative statements, divided by the total number of statements, may be used to determine the sentiment score of the document (a minimal sketch of this computation follows this passage), although other methods may be used to determine the sentiment score for the document. In the system described here, the sentiment tags and the topic tags at the sentence, sub-sentence and/or paragraph level provide the user with granular search capabilities and let them find the relevant text that can explain or help predict price changes for a given asset. The search system may then store the final results of all the tagged information in the store 48 associated with the search system.

The system presents a user interface to the user (see FIG. 3 for example), in which the user interface may provide a reading/browsing/searching user interface 62 described below in more detail, a heat map user interface 64 described below in more detail, and an aggregated sentiment user interface 66 described below in more detail. Thus, the user interface presents the subjective categories related to financial concepts (in the financial industry example being used for illustration purposes) along with the sentiment categories. The user interface program controls the context search engine by directing the sentiment and contextual topic analyzing subsystems to extract relevant information and return the results back to the user's machine. The information extraction based on the user's preferences may be performed at periodic intervals as new files show up at the data sources.

During a search operation, the search agent reviews the text obtained from one or more information sources and identifies the document or documents relevant to the query. Then it performs the context and sentiment extraction at the sentence, paragraph, or other appropriate granular level to find the text portions that match the stated request, highlights the sentiment appropriately, filters or underlines the sentences that match the topic(s), and brings back the result in an easy to read format to the user. The users may be given the choice to quickly drill down to the specific portions, find out the sentiment level with matching topics, and retrieve relevant text that enables them to make better investment decisions in the financial industry example being used for illustration purposes.
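The net-sentiment computation described above is simple enough to state directly in code. This is a sketch of the stated formula only; the label names and function signature are illustrative.

```python
def document_sentiment_score(sentence_labels) -> float:
    """Net sentiment as described above: (positives - negatives) / total.
    `sentence_labels` is an iterable of "positive" / "negative" / "neutral"
    labels, one per statement in the document."""
    labels = list(sentence_labels)
    if not labels:
        return 0.0
    pos = labels.count("positive")
    neg = labels.count("negative")
    return (pos - neg) / len(labels)

# A filing with 3 positive, 1 negative and 4 neutral statements scores
# (3 - 1) / 8 = +0.25 on the continuous negative-to-positive scale.
print(document_sentiment_score(
    ["positive", "positive", "positive", "negative"] + ["neutral"] * 4))
```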
FIG. 4 illustrates more details of the deep search process 50 in the financial domain. The process shown in FIG. 4 may be performed for each document/piece of content. In FIG. 4, some of the processes are the same as those shown in FIG. 2 above. Thus, in the financial domain, the retrieving/downloading process 52 may involve the daily or intra-day download, or other periodic retrieval, of financial documents, such as 10K and 10Q documents from the SEC, which are processed by the system. Once the financial documents are retrieved, the system performs a data cleansing process 62 in which the system, among other things, removes extra tags, removes styles, removes extra HTML code, and reformats the financial document as HTML without tags. In addition, for example for SEC packages of documents, the system may extract the HTML and text documents from the SEC package and append them into one HTML document.

In more detail, the document is received as an HTML formatted document and plain text documents. In order to identify sentences of text in the documents, the system determines what chunks of text are useful statements, where a sentence starts and ends, and how HTML may alter the document. In particular, to determine what text chunks are real statements that state something about a matter of affairs, such as "ComEd has no remaining costs to be recognized related to the rate relief commitment as of Sep. 30, 2010", as compared to text chunks that are titles, page footers and headers, such as "Table of Contents" or "(Dollars in millions, except per share data, unless otherwise noted)", the content extracting unit uses a combination of sentence features, such as HTML tags, end-of-sentence punctuation signs, and length thresholds of sentences (in number of words and characters), to separate useful content from the extraneous content.

To determine where a sentence begins and ends, the content extraction unit splits sentences at punctuation signs, but takes abbreviations and acronyms into account, such as Mr., Inc., and U.S. If a document is HTML, sentences can usually be expected to occur entirely within one pair of enclosing tags, such as begin and end of paragraph: <p> . . . </p>. There may be multiple sentences within one paragraph, but sentences are not usually split over multiple paragraphs. However, if a sentence is split over a page break, or if the document is plain text without any HTML formatting, the system concatenates chunks of text to reconstruct the paragraphs in the text by using some heuristics based on the spacing of the text and the occurrence of page footer and header clues, so as not to erroneously concatenate text that does not belong together, such as the end of a paragraph and a following section title. When the particular document is split into sentences, each sentence is saved as plain text under TxtData/ and the document is saved as HTML with each sentence embedded within <span> tags, which are used by the search system to highlight sentences when the sentences are displayed to the user.

Once the extraneous content in the document is removed, the content extraction unit extracts the key sentences/portions in the piece of content (64) (such as the Management's Discussion and Analysis (MDA) portions of an SEC filing). An SEC filing contains different sections, such as a document header, document body, and exhibits section. Within the body and exhibits, there are subsections, such as the Management's Discussion and Analysis (MD&A) and the Notes to the Financial Statements.
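A rough sketch of the abbreviation-aware sentence segmentation and the <span> embedding just described follows. The abbreviation list, the acronym heuristic, and the length threshold are hypothetical stand-ins for the combination of sentence features the content extraction unit actually uses.

```python
import re

# Illustrative abbreviation list; the real system would know many more.
ABBREVIATIONS = {"mr.", "mrs.", "inc.", "u.s.", "sep.", "no."}

def split_sentences(text: str) -> list:
    """Split at end-of-sentence punctuation, but do not split after known
    abbreviations and acronyms such as "Mr.", "Inc.", and "U.S."."""
    sentences, start = [], 0
    for match in re.finditer(r"[.!?]", text):
        end = match.end()
        words = text[start:end].split()
        last = words[-1].lower() if words else ""
        # Skip periods inside abbreviations ("Inc.") or acronyms in
        # progress ("U.S" seen at the first dot of "U.S.").
        if last in ABBREVIATIONS or re.fullmatch(r"(?:[a-z]\.)+[a-z]?", last):
            continue
        chunk = text[start:end].strip()
        # Length threshold: very short chunks are treated as titles/footers.
        if len(chunk.split()) >= 4:
            sentences.append(chunk)
        start = end
    return sentences

def embed_spans(sentences: list) -> str:
    """Wrap each sentence in a <span> so the UI can highlight it later."""
    return " ".join(f'<span id="s{i}">{s}</span>'
                    for i, s in enumerate(sentences))

# Keeps the real statement, drops the "Table of Contents." header chunk.
print(split_sentences("Mr. Smith joined Acme Inc. in May. Table of Contents."))
```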
The locations of these sections are identified by a combination of regular expression patterns, some information about the size and order of sections in the document, and some excluding patterns that disqualify matching patterns that occur in the wrong context, such as in the table of contents. The system thus extracts these key portions of the document.

The content extraction unit may also extract recurring/boilerplate sentences in the content (66) (such as sentences that are the same as in prior documents for each asset in an SEC filing). As companies file on a quarterly basis, typically some of the text they submit is repeated from earlier reports. The content extraction unit identifies the recurring statements and indicates that they are "less interesting" than the new statements by coloring the recurring statements grey in the user interface when shown to the user and by storing them in the store 48 with an indication that they are recurring statements. Recurring statements are identified by comparing each statement in the current filing to all statements in the previous filing of the company (through the use of the store 48), and the comparison is performed on normalized statements, where some stop words and whitespace characters are ignored. Thus, the system also extracts these recurring portions of the document and stores them in the store 48. In one implementation, information about all filings that are currently in the system for a company (in the financial example) is stored in a FORM_TBL table in the store (that may be implemented using MySql), and the recurring sentences are tagged in the files in TxtData/. As in the following steps, each file is read from TxtData/, modified, and written back to TxtData/.

Once the various sentences have been extracted from the document, sentiment, topic, and recurring/boilerplate classification and tagging (68) are performed in order to tag and classify each sentence in the document, including tags for sentiment, topics, tense, tone, etc. Using a topic taxonomy that is specific to the industry or field to which the documents pertain, the search system identifies which topics are present in the sentences (such as Revenue, Cash flow, Risks, etc., for the financial industry). The search system may also perform part-of-speech tagging using a linguistic tagger to identify the parts of speech of the words in the sentences (nouns, verbs, etc.), and the results may be saved under PosTagged/. The system may also identify sentences that are forward looking (containing present and future tense, plans, intentions, . . . ), where part-of-speech tags in combination with industry knowledge based taxonomies are used for disambiguation (forward looking statements in SEC filings). Boilerplate sentences that typically occur in all filings (such as those explaining what "forward looking statements" mean) may be similarly recognized and tagged for removal.

The range of topics for a particular industry is selected since some topics are of particular interest to financial analysts, such as Sales, Orders and Backlog, Same Store Sales or Net Interest Income. To tag the topics for a particular industry, like the financial industry, the system provides key topic search queries that have been predesigned by financial experts and that identify statements in the text that contain references to the topics.
For example, the Orders and Backlog topic may correspond to the following example search query:

([orders] or [sales order] or [services order] or FOLLOW(5, [order], cancellation) or [order rate] or [commercial order] or [delivery order] or [order amounts] or [order activity] or backlog or [task order] or [signings] or [order value] or NEAR(5, [order], customer) or [customer order] or NEAR(5, [order], delay) or NEAR(5, [order], cancellation) or FOLLOW(5, time, [order]) or [change order] or [order volumes] or [order volume] or [ordering patterns] or [order is taken] or [order size] or FOLLOW(5, [order], shipped) or FOLLOW(5, return, [order]) or [product order] or FOLLOW(5, convert, [order]) or [subscription order] or [order growth] or FOLLOW(5, completion, [order]) or [average order] or [order exists] or [new order] or [order book] or [firm order] or bookings) and not ([auction rate securities] or [court] or [courts] or [court's] or [obligations] or [commitments] or [in order to])

This query contains the boolean operators or, and, and not that combine different search terms into one query. Words or phrases enclosed in square brackets are literal matches; e.g., [orders] matches the word "orders" (irrespective of character case). Words without square brackets are stemmed before matching; e.g., customer matches any inflected form of "customer": "customer, customers, customer's". The special functions FOLLOW and NEAR indicate a set of words that have to occur within a window of predefined size, allowing for stray words within the window that do not match any of the words in the query; e.g., FOLLOW(5, [order], cancellation) indicates that the word "cancellation" may occur at a maximum distance of 5 words from the word "order", in the given order: "order" before "cancellation". The function NEAR works as FOLLOW but the order of the words within the clause is free.

An example of the results of tagging sentences for boilerplate, forward looking statements and topic may be:

The document title/filing: 0001193125-10-241317.txt
Company: Google Inc
Form type: 10Q
Filed on: 20101029

Sentence: On an ongoing basis, we evaluate our estimates, including those related to the accounts receivable and sales allowances, fair values of financial instruments, intangible assets and goodwill, useful lives of intangible assets and property and equipment, fair values of stock-based awards, income taxes, and contingent liabilities, among others.
Sentence id: 112773
Is boiler: yes
Zone: footnotes, document body
Forward looking sentence: yes
Topics: Accounting Policies; Working Capital; Revenue; Capex & Depreciation; Capital Liquidity; Profit & Costs

Sentence: A discount factor was applied over these estimated cash flows of our ARS, which is calculated based on the interpolated forward swap curve adjusted by up to 1,700 basis points to reflect the current market conditions for instruments with similar credit quality at the date of the valuation and further adjusted by up to 400 basis points to reflect a discount for the liquidity risk associated with these investments due to the lack of an active market.
Sentence id: 243505
Is boiler: yes
Zone: footnotes, document body
Fls: yes
Topics: Cash Flow; Accounting Policies; Derivatives and Hedging; Revenue; Capital Liquidity; Risks

Sentence: For all acquisitions completed during the nine months ended Sep. 30, 2010, patents and developed technology have a weighted-average useful life of 4.1 years, customer relationships have a weighted-average useful life of 3.3 years and tradenames and other have a weighted-average useful life of 4.0 years.
Sentence id: 384406
Is boiler: no
Zone: footnotes, document body
Fls: no
Topics: Revenue
As described above, the linguistic unit also discerns the topic of the content using special linguistic rules. The linguistic rules may be, for example:

@MACRO@ @V-MODAL@ (could|may|might|must|shall|should|will|wo|would)/MD

Or the following macros:

@MACRO@ @@ ( ) // Left/start edge of expression
@MACRO@ @ @ ( ) // Right/end edge of expression
@MACRO@ \w [\a-\z\A-\Z\_\0-\9] // A word character
@MACRO@ @VB@ (\w+/VB) // Head verb base form

The natural language processing may include, for example, adding linguistic knowledge to the statements by using a part-of-speech tagger or syntactic parser. An example of a statement with part-of-speech tags is:

The/DT latter/JJ action/NN would/MD cause/VB some/DT delay/NN in/IN the/DT effectiveness/NN of/IN rates/NNS that/WDT might/MD otherwise/RB become/VB effective/JJ in/IN June/NNP 2011/CD ./.

This linguistic annotation is used in a consequent step that assigns tense to the statement. That is, the system identifies whether the statement is forward-looking, referring to a future event. The system defines macros for some frequently occurring constructs, e.g., a macro for modal and auxiliary verbs, examples of which are described above for the linguistic rules. The macros are regular expressions containing information on the words and the part-of-speech tags of the words in a statement. The macros can be used in rules, such as in the following rule:

@@ @ADVP@? @V-MODAL@ @ADVP@? @VB@ @ADVP@? @@

For fast matching, the regular expressions are compiled into Finite State Automata using finite-state algebra. The search system may also use a syntactic parser, e.g., a dependency parser. For example, the dependency parse of the sentence "The company has available a $750 million bank credit facility that expires in December 2010." looks like this:

det(company-2, The-1)
nsubj(has-3, company-2)
dep(has-3, available-4)
det(facility-11, a-5)
num(facility-11, $-6)
number($-6, 750-7)
number($-6, million-8)
nn(facility-11, bank-9)
nn(facility-11, credit-10)
dep(available-4, facility-11)
nsubj(expires-13, that-12)
rcmod(facility-11, expires-13)
prep(expires-13, in-14)
pobj(in-14, December-15)
num(December-15, 2010-16)

Each dependency consists of a relation (e.g., det = determiner) between a head word (e.g., company) and its dependent (e.g., The). Each word token has an ID number attached to it (e.g., company-2) by which it is possible to uniquely identify that word occurrence; this is necessary if the same word occurs multiple times in the sentence, in different syntactic positions. Rules can be expressed using dependencies.
For instance, the sentence above is classified as forward-looking because the dependency prep(expires-13, in-14) matches the rule:

prep(({V}(expire|expires|expiring)@ID@),(at|before|in|on|within)@ID@) // expires on

Example of results:

The/DT latter/JJ action/NN would/MD cause/VB some/DT delay/NN in/IN the/DT effectiveness/NN of/IN rates/NNS that/WDT might/MD otherwise/RB become/VB effective/JJ in/IN June/NNP 2011/CD ./.

The rule above determines that the statement "The latter action would cause some delay in the effectiveness of rates that might otherwise become effective in June 2011." is forward-looking, because it contains the constructs would/MD cause/VB and might/MD otherwise/RB become/VB.

The sentiment analyzer unit may classify sentences in a document/piece of content as objective (= neutral sentiment) vs. subjective (= positive or negative sentiment). Sentiment extraction involves three steps:

1. Feature extraction: Terms, phrases, or co-occurring words that are judged to be relevant from the point of view of sentiment classification are selected by a domain expert according to the approaches described above. Another alternative is using n-grams or a combination of features.

2. Objective vs. subjective classification: Supervised machine learning is utilized to learn to distinguish between objective and subjective statements based on the features of step 1. The machine learning techniques can be linear regression, Support Vector Machines, decision trees, or artificial neural networks, to name a few.

3. Positive vs. negative classification: If the statement in step 2 is classified as subjective, then a further classifier classifies the statement as positive or negative or neutral, based on pattern matching against a large database of positive, negative and neutral textual features built by financial domain experts.

The open source Support Vector Machine algorithm (LibSVM) is trained based on the annotators' results by letting it figure out the key features that affect the predictions the most. This part of the algorithm is based on the open source implementation. The features and the guidelines that drive the annotations described earlier determine the effectiveness of the classification results, and thus distinguish our sentiment prediction from other approaches that use an SVM or other machine learning techniques. In some embodiments of the system, steps 2 and 3 above may be combined into one single machine learning step.

Consider, for instance, the following rule-based approaches to feature extraction for sentiment:

a. A rule is expressed as an accurate search query with Boolean logic, as described above:

FEATURE_OUR_PROJECTION: FOLLOW(3, [our] or [its] or company or management, estimate or estimation or target or forecast or forecasted or [projected] or [projection] or [we project] or [company projects] or [management projects] or [we estimate] or [company estimates] or [management estimates]) and not (FOLLOW(3, sales or selling or marketing, expense or expenditure or cost)) and not (FOLLOW(3, require, us, to, estimate) or FOLLOW(3, estimate, that, have, realized) or FOLLOW(3, we, review, our)) and not (FOLLOW(3, tax, rate) or FOLLOW(3, fair, value) or FOLLOW(3, ongoing or going, basis) or FOLLOW(3, continually or continuously, evaluates) or FOLLOW(3, useful, life) or FOLLOW(3, in, making, its) or FOLLOW(3, realizable, value) or FOLLOW(3, discounted, cash, flow))
b. A rule is expressed as a regular expression taking into account both the surface forms of words and potentially their part-of-speech tags, as described above:

FEATURE_REVENUE_VOLUMES_REVENUE_GROWTH: @@ (revenue|sales|royalty|business) @WORD6@ (growth|grow|expansion|expand|increase|increasing|enhancement|improvement|improving|improve) @@

where the macro @WORD6@ corresponds to a sequence of stray words, minimum zero and maximum six words:

@MACRO@ @WORD6@ @WORD@? @WORD@? @WORD@? @WORD@? @WORD@? @WORD@?

c. A rule is expressed as one or multiple dependency relations between words.

d. If a full syntactic parser is not available or not a feasible solution (e.g., due to heavy computational requirements), shallow parses can be produced using a set of cascaded rules that are applied on a part-of-speech tagged sentence. Shallow parses are parses of chunks or phrases within a sentence, not necessarily covering the syntax of the entire sentence. Starting with the following part-of-speech tagged sentence:

The/DT company/NN has/VBZ available/JJ a/DT $/$ 750/CD million/CD bank/NN credit/NN facility/NN that/WDT expires/VBZ in/IN December/NNP 2010/CD ./.

the following shallow parse is obtained:

<NP>The company</NP> <VP>has</VP> available/JJ <NP>a $750 million bank credit facility</NP> <NP>that</NP> <VP>expires</VP> <PP>in December 2010</PP> ./.

which contains the chunks:

DP 62 0 <DP>The</DP>
DP 62 1 <DP>a</DP>
TIME-NP 78 2 num(December, 2010) <TIME-NP>December 2010</TIME-NP>
NUM 85 3 number(million, 750) <NUM>$750 million</NUM>
NOM 87 4 nn(facility, bank credit) <NOM>bank credit facility</NOM>
NOM 101 5 num(bank credit facility, 750 million) <NOM>$750 million bank credit facility</NOM>
NP 108 6 det($750 million bank credit facility|a) <NP>a $750 million bank credit facility</NP>
NP 109 7 det(company, The) <NP>The company</NP>
PP 119 8 pobj(in, December) <PP>in December 2010</PP>
VP 148 9 <VP>has</VP>
VP 148 10 <VP>expires</VP>
NP 150 11 <NP>that</NP>

To extract sentiment topic features, dependency rules can be written that operate on the dependencies discovered by the shallow parser.

As a result of the processes above, the store 48 has a plurality of sentences for each document, with each sentence or other portion of the document having one or more topic tags and sentiments associated with it. This combined data is then used to perform the deep searches and generate the user interfaces that are described below.

Once the tagging and classification has been completed, the sentences, the sentiments of the sentences, and other asset specific information for qualitative, trend and heat map analysis may be loaded into a database (70) so that, for example, the heat map as shown in FIG. 7 may be generated by the system. In addition, the sentences and key paragraphs may be loaded into a SOLR database (72) during indexing so that the search engine can perform its deep searches based on the portions of the documents, the topic tags and the sentiments. In more detail, XML is created that corresponds to the SOLR entries, both on the sentence level (TopicXml) and the filing level (FullFilingXml). In addition, the data is posted to SOLR, which makes the filing appear in the web application. For historical reasons, the sentence level info is stored in SolrTopic, and the filing level info is stored in SolrMda.
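Since the FOLLOW and NEAR window operators appear in both the topic queries and the sentiment feature rules above, a minimal interpretation of their window semantics is sketched below. It matches single literal tokens only and ignores the stemming, bracket, and stray-word-counting conventions of the full query language, so it is an approximation for illustration rather than the system's actual rule engine.

```python
def _window_match(tokens, first, second, max_distance, ordered):
    """Return True if `first` and `second` occur within `max_distance`
    tokens of each other, optionally requiring `first` before `second`."""
    positions_a = [i for i, t in enumerate(tokens) if t == first]
    positions_b = [i for i, t in enumerate(tokens) if t == second]
    for a in positions_a:
        for b in positions_b:
            if ordered and not a < b:
                continue
            if abs(b - a) <= max_distance:
                return True
    return False

def follow(tokens, max_distance, first, second):
    """FOLLOW(n, w1, w2): w2 occurs at most n words after w1, in that
    order, allowing stray words in between."""
    return _window_match(tokens, first, second, max_distance, ordered=True)

def near(tokens, max_distance, first, second):
    """NEAR(n, w1, w2): like FOLLOW but the order of the words is free."""
    return _window_match(tokens, first, second, max_distance, ordered=False)

sentence = "the customer chose to cancel the existing order".split()
print(near(sentence, 5, "order", "cancel"))    # True: within the window
print(follow(sentence, 5, "order", "cancel"))  # False: wrong order
```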
In addition, the system also highlights sentiments in the sentences of the document (74) for later viewing as described below, in which the sentiments are "tagged" to the sentences and some HTML is added to reflect the highlighting.

FIGS. 5A and 5B illustrate an example of a search user interface 80 for the deep search system, for an initial query and a list of results, respectively, in the financial industry. The user interface has a find documents portion 82 that allows the user to specify search criteria for the search by sectors, industries, portfolios, stocks, market capitalization ranges, date ranges, keywords, saved queries and the like. The user interface also has a viewing portion 84 that in FIG. 5A has a form to enter specific search criteria and in FIG. 5B has a list of responsive documents based on a search. The user interface 80 may also have a topics portion 86 that lists the topics associated with the particular subject matter area, the financial industry documents in this example. When the system is used in a different subject matter area, such as the law, the system would list a different set of topics. The topics portion can be used at any time during the viewing of documents to change the topics of interest, and any time that the user changes the topics, the viewing portion 84 is dynamically updated.

In FIG. 5B, if the user selects one of the returned documents shown in the viewing portion 84, the user interface displays the extracted sentences and the sentiment for each extracted sentence as shown in FIG. 6. As shown in FIG. 6, the different sentiments of the sentences are shown in different colors (red for negative sentiments 90, green 92 for positive sentiments, and plain black/uncolored for neutral sentiment statements). Specific cue words used in determining sentiment, topic or tense may also be underlined or otherwise marked (although none are shown in the example). In addition, since the system extracts recurring sentences, the user interface may also show recurring sentences as grayed out (although none are shown in FIG. 6 since the filter to hide boiler sentences has been activated).

The search system viewing as shown in FIG. 6 allows the user to rapidly review a long document, such as a new SEC filing, and quickly see sentences in the document that are positive, negative or neutral. For the financial world, this deep search engine allows the user to quickly review company SEC filings and determine the effect of the sentences in the filing on the stock price of the company, for example, or to screen a large number of filings for new negative or positive statements on a given topic of interest, where the topic of interest could be "all new negative forward looking statements made by large-cap retail companies in the most recent quarter about their sales in Asia" (which would be done by appropriate selection of filters and searches within the example system).

FIG. 7 illustrates an example of a sentiment heat map user interface 100 of the deep search system.
Due to the documents having sentences tagged with sentiment, the sentiment heat map is able to calculate and show the sentiment by industry (such as oil 101, integrated circuits (chips) 102 and the beer industry 104) or other criteria. The colors show the level of positive, negative or neutral outlook for the companies in the industry, and the size of a rectangle in any one industry corresponds to the market value of the company in the industry, in which a larger rectangle indicates a larger market value company. For example, in the oil industry, the larger market value companies have negative sentiments, but a smaller company has a positive sentiment, as shown by the smaller green rectangle 1011. In the sentiment heat map, the user can click on any rectangle, see the company name and then select that company to see its documents and the sentiments of the sentences in the documents for the company that led the system to calculate a given sentiment score and display it on the heat map. The user can alter selections such as the recurring, topic and tense filters, which are dynamically reflected in the heat map display, with a new sentiment number calculated for all the companies shown.

FIG. 8 illustrates an example of a search results user interface of the deep search system, where the viewing interface allows the user to compare documents side by side, which is made possible by the deep search system and processes described above. The processes of the deep search system and method described above can be used to generate reports that help the user quickly review a "cliff notes" summary of a document or a range of documents, because the system as described above can identify relevant sentences within a large document, or many documents, based on the user's custom criteria (e.g., topic, tense, tone, recurring, keyword search, industries, market caps, etc.) and create the summary of the document or of a range of documents. Thus, using the system, a user can skim-read through a pre-highlighted document or multiple documents, focusing on what he had pre-defined as important (and what the system thus highlighted), as opposed to having to read everything. The deep search system may further have a report generator unit that generates reports, such as those shown in FIGS. 9A-10B below, based on the processes above.

FIGS. 9A and 9B illustrate portions of a document highlighted, which is made possible by the deep search system. In particular, the document is an SEC filing for a company, and the system has modified the document to make it easier for a user to quickly review. Using the content extraction, linguistic/tagging process and the sentiment determination process described above, the system highlights different sentences of the document. For example, the system highlights one or more sentence(s) 110 in yellow that match a user's indication of interest in the sentence based on various criteria but are recurring sentences, as shown by the grayed out text; highlights one or more sentence(s) 112 in blue that are identified by the system but are recurring sentences, as shown by the grayed out text; highlights one or more sentence(s) in yellow that match a user's indication of interest in the sentence based on various criteria and are not recurring sentences; and highlights one or more sentence(s) in blue that are identified by the system and are not recurring sentences. In the example in FIGS. 9A and 9B, the user search term was for "new forward looking statements about revenue", i.e., incorporating both topic and tense into the same query, while filtering (or graying) out recurring statements (i.e., those that were simply repeated from the prior filing).
The value of this is to help the user quickly skim-read through a pre-highlighted document, focusing on what he had pre-defined as important (and what the system thus highlighted), as opposed to having to read everything.

FIGS. 10A and 10B illustrate an example of a multi-document summary that is made possible by the deep search system. Traditional search engines return full documents that had something potentially relevant in them, and a user has to spend endless amounts of time clicking through those documents to see if there is something potentially useful. However, using the processes of the deep search system, the user can have a custom report generated as shown in FIGS. 10A and 10B, which is a user-defined summary of what a selection of companies said about a topic of interest to the user. The example in FIGS. 10A and 10B shows what chip makers said about inventory during the past quarter, an indicator of the business cycle in that industry. As shown, the portions of the documents for each company are shown side by side 120-124 with the sentences of interest (based on the user's expressed interest) highlighted, so that the user can quickly review the documents of the three companies in the same industry.

In an implementation of the deep search system, the content extraction processes may include a rule parsing algorithm that emulates key functions such as NEAR, PHRASE, FOLLOW, FUZZY, EXACT, DICTIONARY, etc., with the rules being expressed as XML and interpreted by our rule process execution engine, wherein the rules are applied to extract the topic features for each sentence. In the same implementation, the linguistic process uses an open source finite state machine, regular expression optimizers and Perl-style regular expression generators. In the same implementation, the sentiment analyzer process uses a combination of linguistic rules and machine learning techniques such as SVM (Support Vector Machine) and Neural Network models. In the sentiment analysis, the system is seeded with the topic features based on topic extracting rules and linguistic features based on shallow and some deep parsing algorithms. Then, the machine learning algorithm selects the appropriate features based on human annotated sentences.

While the foregoing has been with reference to a particular embodiment of the invention, it will be appreciated by those skilled in the art that changes in this embodiment may be made without departing from the principles and spirit of the disclosure, the scope of which is defined by the appended claims.
11861149

DETAILED DESCRIPTION

To make the objectives, technical solutions, and advantages of the present disclosure clearer, the following further describes the present disclosure in detail with reference to the accompanying drawings. The described embodiments are not to be construed as a limitation to the present disclosure. All other embodiments obtained by a person of ordinary skill in the art without creative efforts shall fall within the protection scope of the present disclosure.

In the following descriptions, the term "some embodiments" describes a subset of all possible embodiments. However, it may be understood that "some embodiments" may be the same subset or different subsets of all the possible embodiments, and may be combined with each other without conflict.

At present, a user who browses an interface containing a character unreadable for the user needs to open translation software, and manually enter the character unreadable for the user into the translation software for translation. For an interface containing a large quantity of unreadable characters, efficiency of translating the characters in the interface is low. Some embodiments provide an interface information processing method and apparatus, a storage medium, and a device, with which efficiency of translating a character in a display interface can be improved.

FIG. 1 is a schematic structural diagram of an interface information processing system applicable to an interface information processing method according to some embodiments. As shown in FIG. 1, the interface information processing system may include a server 10 and a user terminal cluster. The user terminal cluster may include one or more user terminals. The quantity of the user terminals is not limited herein. As shown in FIG. 1, the user terminal cluster may include a user terminal 100a, a user terminal 100b, a user terminal 100c, . . . , and a user terminal 100n. As shown in FIG. 1, the user terminal 100a, the user terminal 100b, the user terminal 100c, . . . , and the user terminal 100n each may establish a network connection to the server 10, such that each user terminal may perform data interaction with the server 10 through the network connection. Each user terminal in the user terminal cluster may include an intelligent terminal with an interface information processing function, for example, a smartphone, a tablet computer, a notebook computer, a desktop computer, a wearable device, a smart home, a head-mounted device, or an in-vehicle terminal. It is to be understood that a target application (that is, an application client) may be installed in each user terminal in the user terminal cluster shown in FIG. 1, and when run in each user terminal, the application client may perform data interaction with the server 10 shown in FIG. 1.

As shown in FIG. 1, the server 10 may be an independent physical server, or may be a server cluster or distributed system including a plurality of physical servers, or may be a cloud server providing basic cloud computing services such as cloud service, a cloud database, cloud computing, a cloud function, cloud storage, network service, cloud communication, middleware service, domain name service, security service, a content delivery network (CDN), or a big data and artificial intelligence platform. For ease of understanding, in some embodiments, one of the plurality of user terminals shown in FIG. 1 may be selected as a target user terminal.
The target user terminal may include an intelligent terminal with the interface information processing function, for example, a smartphone, a tablet computer, a notebook computer, a desktop computer, or a smart television. For example, for ease of understanding, in some embodiments, the user terminal 100a shown in FIG. 1 may be determined as the target user terminal. After a user enables a floating permission of a floating translation component for displaying in a first display interface, the floating translation component may be displayed on the first display interface. The first display interface may be a display interface including a character of a first language type in the user terminal 100a. The character of the first language type may be a non-native character for the user, for example, English, Japanese, or Korean, that is, a character other than a native character for the user. When the user performs a trigger operation on the floating translation component in the first display interface, a trigger progress is displayed in the floating translation component. The trigger progress is associated with trigger duration. The trigger duration is the duration of the trigger operation performed by the user on the floating translation component, for example, the duration of a tap operation performed by the user on the floating translation component. When the user terminal 100a detects that the trigger progress in the floating translation component satisfies a full-screen translation start progress, the first display interface may be sent to the server 10.

After receiving the first display interface sent by the user terminal 100a, the server 10 may translate the character of the first language type in the first display interface to obtain a character of a second language type, and generate a second display interface according to the character of the second language type. The character of the second language type may be a native character for the user. The user terminal 100a may obtain a language type displayed in the user terminal 100a by default as the second language type. The user terminal 100a sends the first display interface and the second language type to the server 10, such that the server 10 translates the character of the first language type in the first display interface into the character of the second language type. After receiving the second display interface sent by the server, the user terminal 100a may switch the first display interface to the second display interface. In this way, the character in the display interface can be translated quickly, and efficiency of translating the character in the display interface can be improved. In some embodiments, the floating translation component may be a component that floats over the display interface of the application client and that is configured to translate the character on the display interface.

FIG. 2 is a schematic diagram of an application scenario of interface information processing according to some embodiments. As shown in FIG. 2, the first display interface may be a display interface including a character of an English character type. When a target user 20b browses an interface in a user terminal, if the target user 20b determines that a first display interface 20c displayed on the user terminal 20a needs to be translated, the target user 20b may perform a trigger operation on a floating translation component in the first display interface 20c. That is, the target user 20b may perform a tap operation on the first display interface 20c.
After receiving the tap operation of the target user 20b, the user terminal 20a may obtain the trigger duration of the tap operation performed by the target user 20b on the floating translation component. The trigger duration is the duration of the trigger operation performed by the target user 20b on the floating translation component. A trigger progress is displayed in the floating translation component according to the trigger duration. The trigger progress in the floating translation component is displayed in a display interface 20d. The user terminal 20a may detect the trigger progress in the floating translation component in real time. When the user terminal 20a detects that the trigger progress in the floating translation component satisfies a full-screen translation start progress, that is, when the trigger progress in the floating translation component is displayed to be 100% in the display interface 20e of the user terminal 20a, the currently displayed first display interface is sent to a server 20f.

After receiving the first display interface sent by the user terminal 20a, the server 20f may translate an English character in the first display interface to generate a second display interface. When translating the English character in the first display interface, the server 20f may directly perform default translation on the first display interface to translate the English character in the first display interface into a default character, for example, translate the English character in the first display interface into a Chinese character so as to obtain the Chinese character corresponding to the English character in the first display interface, and generate, according to the Chinese character, the second display interface including a character of the second language type. Certainly, the server 20f may also translate the English character in the first display interface into a native character for the target user 20b, and generate a second display interface including a character of the second language type. After generating the second display interface including the character of the second language type, the server 20f may send the second display interface to the user terminal 20a. The user terminal 20a displays the second display interface, that is, may output a display interface 20g including the second display interface, such that the target user 20b can read, in translated form, the content of the first display interface that included the English characters. In this way, full-screen translation may be performed on the first display interface including the English character, which can improve the efficiency of translating the character in the display interface.

FIG. 3 is a schematic flowchart of an interface information processing method according to some embodiments. The interface information processing method may be performed by a computer device. The computer device may be a server (for example, the server 10 in FIG. 1), or a user terminal (for example, any user terminal in the user terminal cluster in FIG. 1), or a system including a server and a user terminal. This is not limited in this application. As shown in FIG. 3, the interface information processing method may include operations S101 to S102.

S101: Display, in response to a trigger operation on a floating translation component in a first display interface, a trigger progress in the floating translation component.
In some embodiments, if there is a language barrier to a character in a display interface displayed in the computer device when a target user browses an interface, the trigger operation may be performed on the floating translation component on the display interface in the computer device. The computer device may translate the character in the display interface, and output a display interface corresponding to a character obtained through translation, such that the target user browses the display interface without language barriers.

In some embodiments, when the target user performs the trigger operation on the floating translation component in the first display interface, a trigger progress is displayed in the floating translation component. The trigger progress is associated with trigger duration. The first display interface includes a character of a first language type unreadable for the target user. For example, when the native language of the target user is Chinese, and there is a language barrier to another language such as English, Japanese, or Korean, if the target user browses the first display interface including an English character, the trigger operation may be performed on the floating translation component in the first display interface, and the computer device may display the trigger progress in the floating translation component in response to the trigger operation of the target user.

The floating translation component may float on a screen of the computer device, that is, be displayed over the display interface of the computer device. The target user may adjust a display position of the floating translation component to move it to a proper position, so as to avoid interfering with the target user's browsing of the display interface. When the target user performs the trigger operation on the floating translation component, a shape of the floating translation component may be enlarged, and the trigger progress may be displayed in the floating translation component. In this way, the target user easily notices that the floating translation component is being started.

When the computer device displays the trigger progress in the floating translation component in response to the trigger operation on the floating translation component, the computer device may obtain the trigger duration of the trigger operation performed by the target user on the floating translation component, that is, the duration of the trigger operation performed by the target user on the floating translation component, and determine the trigger progress according to the trigger duration. For example, when the trigger progress is 100%, a trigger duration of 4 seconds is needed. When the trigger duration is 1 second, the trigger progress is displayed to be 25% in the floating translation component. When the trigger duration is 3 seconds, the trigger progress is displayed to be 75% in the floating translation component.

Displaying the trigger progress in the floating translation component can prompt the target user that the floating translation component is being started for full-screen translation. If the target user intends to start the floating translation component for full-screen translation, the trigger operation on the floating translation component may be continued; or if the target user does not intend to start the floating translation component for full-screen translation, the trigger operation on the floating translation component is stopped.
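The mapping from trigger duration to trigger progress in the 4-second example above amounts to a clamped linear ratio. The following is a minimal sketch of that arithmetic only; the constant and function names are illustrative, not part of the disclosure.

```python
FULL_SCREEN_START_SECONDS = 4.0  # duration needed for 100%, per the example

def trigger_progress(trigger_duration_seconds: float) -> float:
    """Map trigger duration to the progress shown in the floating
    translation component: 1 s -> 25%, 3 s -> 75%, >= 4 s -> 100%."""
    ratio = trigger_duration_seconds / FULL_SCREEN_START_SECONDS
    return min(max(ratio, 0.0), 1.0) * 100.0

def satisfies_start_progress(trigger_duration_seconds: float) -> bool:
    """Full-screen translation starts only once the progress reaches 100%,
    so a brief mistaken touch never triggers translation."""
    return trigger_progress(trigger_duration_seconds) >= 100.0

print(trigger_progress(1.0), trigger_progress(3.0))            # 25.0 75.0
print(satisfies_start_progress(2.0), satisfies_start_progress(4.0))  # False True
```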
Under this mechanism, the floating translation component is started for full-screen translation of the display interface only when the target user determines that the floating translation component needs to be started and the trigger duration for the floating translation component reaches target duration. Therefore, the floating translation component can be prevented from being started by a mistaken touch when the user does not need full-screen translation of the display interface; such a mistaken start would waste browsing time of the target user and further affect experience of the target user.

In some embodiments, the trigger operation performed by the target user on the floating translation component in the first display interface may be a tap operation. When intending to start the floating translation component, the target user may keep tapping the floating translation component. The computer device may display the trigger progress in the floating translation component, that is, display a tap progress, in response to the tap operation performed by the target user on the floating translation component, so as to prompt the target user that the floating translation component is currently being started for full-screen translation. When the trigger progress in the floating translation component does not reach a full-screen translation start progress, if the target user does not intend to start the floating translation component for full-screen translation, the tap operation on the floating translation component may be stopped, and the computer device may not start the floating translation component for full-screen translation; or if the target user intends to start the floating translation component, the tap operation on the floating translation component may be continued until the trigger progress reaches the full-screen translation start progress. The full-screen translation start progress is a start determining condition of the floating translation component. That is, the floating translation component may be started for full-screen translation only when the trigger progress in the floating translation component satisfies the full-screen translation start progress.

In some embodiments, the trigger operation performed by the target user on the floating translation component in the first display interface may be a voice wake-up operation. When intending to start the floating translation component, the target user may speak a target statement to wake up the floating translation component. The computer device may display the trigger progress in the floating translation component, that is, display a voice wake-up progress, in response to the voice wake-up operation performed by the target user on the floating translation component, so as to prompt the target user that the floating translation component is currently being started for full-screen translation.
When the trigger progress in the floating translation component does not reach a full-screen translation start progress, if the target user does not intend to start the floating translation component for full-screen translation, the floating translation component may be tapped to cancel its start for full-screen translation, and the computer device may not start the floating translation component for full-screen translation; or if the target user intends to start the floating translation component, the floating translation component may be started for full-screen translation when the trigger progress of the floating translation component reaches a full-screen translation start progress.

In some embodiments, the computer device may further obtain a configured permission corresponding to the floating translation component, and detect a floating permission of the floating translation component according to the configured permission to obtain a floating permission detection result. If the floating permission detection result indicates that the floating translation component has the floating permission, the floating translation component is displayed in the first display interface. The floating translation component may be always displayed in the display interface browsed by the target user, such that the target user may start the floating translation component for full-screen translation of the display interface when encountering a language barrier. This effectively solves the problem of a language barrier occurring when the target user browses the display interface, and can improve interface browsing experience of the user.

In some embodiments, when displaying the floating translation component in the first display interface, the computer device may obtain the configured permission of the floating translation component, and detect the floating permission of the floating translation component according to the configured permission to obtain the floating permission detection result of the floating translation component. The configured permission of the floating translation component may be configured when the target user enables a translation service of the floating translation component for the first time. That is, when the target user enables the translation service of the floating translation component for the first time, running of the floating translation component requires authorization of the target user, and the target user may configure a relevant permission of the floating translation component according to information that is indicated by the floating translation component and that is about enabling the relevant permission, to obtain the configured permission of the floating translation component. If the computer device detects that the floating permission detection result indicates that the floating translation component has the floating permission, the floating translation component is displayed in the first display interface.

In some embodiments, if the computer device detects that the floating permission detection result indicates that the floating translation component does not have the floating permission, a permission editing window for the floating translation component is output. The floating translation component is displayed in the first display interface in response to an enabling operation on a first floating authorization control in the permission editing window.
In some embodiments, when the computer device detects that the floating permission detection result of the floating translation component indicates that the floating translation component does not have the floating permission, the permission editing window for the floating translation component is output. The permission editing window includes the first floating authorization control for enabling the floating permission of the floating translation component. The target user may tap the first floating authorization control in the permission editing window to enable the floating permission of the floating translation component. The computer device may display the floating translation component in the first display interface in response to the enabling operation performed by the target user on the first floating authorization control. If the target user does not perform the enabling operation on the first floating authorization control, prompt information is output, so as to prompt the target user that a full-screen translation service provided by the floating translation component is available only when the floating permission of the floating translation component is enabled.

In some embodiments, the target user may set a target display interface for displaying the floating translation component in the computer device. After obtaining a setting instruction of the target user for the target display interface, the computer device may display the floating translation component on only the target display interface in response to the setting instruction of the target user. The target display interface may be a display interface including a language type other than the native language of the target user, and is specified by the target user. Alternatively, the computer device may directly obtain a default character type currently displayed to the target user, and determine, according to the default character type, the target display interface in which the floating translation component may be displayed, that is, determine a display interface including a character type other than the default character type as the target display interface in which the floating translation component may be displayed.

FIG. 4 is a diagram of an application scenario of the floating translation component according to some embodiments. As shown in FIG. 4, the floating translation component may be in an acceleration assistant for game acceleration of a foreign-server game. The acceleration assistant may monitor an entire running process of the foreign-server game in real time, and perform game acceleration on the foreign-server game. Therefore, the floating translation component may be set in the acceleration assistant to easily resolve a language barrier occurring when the target user enjoys the foreign-server game. This solves the problem that the target user intends to play the foreign-server game but cannot read a foreign language, breaks the language barrier, and enables more users to experience more high-quality and interesting foreign-server games without barriers. In addition, the floating translation component may perform full-screen translation on a display interface in the foreign-server game, which can improve efficiency of translating a character in the display interface and save time for the target user. Moreover, setting the floating translation component in the acceleration assistant of the foreign-server game can avoid interference of the floating translation component when a national-server game is played.
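The floating-permission detection flow described above can be summarized in a minimal sketch; the ConfiguredPermission record and the two print placeholders are assumptions standing in for platform-specific UI operations, not names from the source:

from dataclasses import dataclass

@dataclass
class ConfiguredPermission:
    floating: bool    # may the component float over display interfaces?
    screenshot: bool  # may the component take screenshots?

def show_floating_translation_component(configured):
    # Detect the floating permission according to the configured permission.
    if configured.floating:
        print("display the floating translation component in the first display interface")
    else:
        # No floating permission: output the permission editing window that
        # contains the first floating authorization control described above.
        print("output the permission editing window with the floating authorization control")

show_floating_translation_component(ConfiguredPermission(floating=True, screenshot=True))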
FIG. 5 is a schematic diagram of enabling the configured permission of the floating translation component according to some embodiments. As shown in FIG. 5, the computer device may set relevant information about the floating translation component in a details page of the floating translation component. As shown in FIG. 5, the target user performs an enabling operation on a component authorization control for the floating translation component in an authorization interface 50a output by the computer device to enable the configured permission of the floating translation component, that is, switch an “Off” state of the component authorization control for the floating translation component to an “On” state. The target user may further set a character type after translation by the floating translation component in the authorization interface 50a, for example, set the character type after translation by the floating translation component to Chinese. Then, the floating translation component may translate any character type in the display interface into the Chinese character type. The target user may further set a default display position, a display size, and other information of the floating translation component in the authorization interface 50a. When the target user performs the enabling operation on the component authorization control for the floating translation component, the computer device may switch the authorization interface 50a to a display interface 50b in response to the enabling operation performed by the target user on the component authorization control in the authorization interface. The display interface 50b displays a method for using the floating translation component, for example, touching and holding the floating translation component for real-time translation, and information indicating the relevant permission of the floating translation component that needs authorization, for example, that the floating permission is used for displaying in the display interface and a screenshot taking permission is used for taking a screenshot of the display interface. The target user may tap a “To authorize” control in the display interface 50b, and the computer device may switch the display interface 50b to a display interface 50c in response to a tapping operation performed by the target user on the “To authorize” control. As shown in FIG. 5, the display interface 50c includes a floating authorization control and a screenshot taking authorization control for the floating translation component. The target user may set an on/off status corresponding to the floating translation component to the “On” state, and set an on/off status corresponding to the screenshot taking authorization control to the “On” state. The computer device may enable the floating permission of the floating translation component in response to an enabling operation on the floating authorization control, the floating permission being used for indicating that the floating translation component has a permission to be displayed in the first display interface. The computer device enables the screenshot taking permission of the floating translation component in response to a trigger operation on the screenshot taking authorization control, the screenshot taking permission being used for indicating that the floating translation component has a permission to take a screenshot of the first display interface.
After the target user enables the floating permission and the screenshot taking permission of the floating translation component, and taps an “OK” button, the computer device may switch the display interface 50c to a display interface 50d in response to a determining operation of the target user, and enable the floating permission and the screenshot taking permission of the floating translation component.

S102: Switch the first display interface to the second display interface in a case that the trigger progress in the floating translation component satisfies the full-screen translation start progress.

In some embodiments, when detecting that the trigger progress in the floating translation component satisfies the full-screen translation start progress, the computer device performs full-screen translation on the character of the first language type in the first display interface to obtain the character of the second language type, generates the second display interface according to the character of the second language type, and switches the first display interface to the second display interface. The second display interface includes the character of the second language type. The character of the second language type is obtained by translating the character of the first language type. The full-screen translation start progress is a start determining condition of the floating translation component. That is, the floating translation component may be started for full-screen translation only when the trigger progress in the floating translation component satisfies the full-screen translation start progress.

In some embodiments, a specific manner in which the computer device switches the first display interface to the second display interface when detecting that the trigger progress in the floating translation component satisfies the full-screen translation start progress may include: displaying a full-screen scanning animation on the first display interface in the case that the trigger progress in the floating translation component satisfies the full-screen translation start progress, the full-screen scanning animation being used for indicating a state in which full-screen interface translation is currently being performed on the first display interface; and switching the first display interface to the second display interface in a case that the full-screen scanning animation ends.

In some embodiments, the computer device displays the full-screen scanning animation on the first display interface when detecting that the trigger progress in the floating translation component satisfies the full-screen translation start progress, the full-screen scanning animation being used for indicating the state in which full-screen interface translation is currently being performed on the first display interface. It takes a certain amount of time for the computer device to perform full-screen translation on the character of the first language type in the first display interface to generate the second display interface including the character of the second language type.
Therefore, displaying the full-screen scanning animation in the first display interface to prompt the target user of the state in which full-screen interface translation is currently being performed on the first display interface can prevent the target user from doubting whether the floating translation component has been started for full-screen translation and from performing a plurality of trigger operations on the floating translation component, and can improve experience of the target user in using the floating translation component.

In some embodiments, the second display interface includes a second interface image. A specific manner in which the computer device switches the first display interface to the second display interface when detecting that the trigger progress in the floating translation component satisfies the full-screen translation start progress may include: performing screenshot taking processing on the first display interface to obtain a first interface image in the case that the trigger progress in the floating translation component satisfies the full-screen translation start progress; translating the character of the first language type in the first interface image to obtain the second interface image, the second interface image including the character of the second language type; and displaying the second interface image overlaying the first display interface.

In some embodiments, when detecting that the trigger progress in the floating translation component satisfies the full-screen translation start progress, the computer device performs screenshot taking processing on the currently displayed first display interface to obtain the first interface image corresponding to the first display interface. Alternatively, the computer device may perform screen recording on the first display interface to obtain an interface video, and select a frame of image from the interface video as the first interface image corresponding to the first display interface. After obtaining the first interface image corresponding to the first display interface, the computer device may perform character extraction on the character of the first language type in the first interface image to obtain the character of the first language type. The computer device may invoke a language translation application program to translate the character of the first language type to obtain the character of the second language type. The second interface image including the character of the second language type is generated according to the character of the second language type. The second interface image is displayed overlaying the first display interface.

In some embodiments, before the computer device performs screenshot taking processing on the first display interface to obtain the first interface image corresponding to the first display interface, the computer device may further display the authorization interface for the floating translation component. The configured permission of the floating translation component is enabled in response to the enabling operation on the component authorization control in the authorization interface. The floating translation component is displayed in the first display interface based on the configured permission in a case that the authorization interface is exited.

In some embodiments, the computer device displays the authorization interface for the floating translation component.
The authorization interface includes a permission that needs to be used when the floating translation component performs the translation service. The floating translation component may perform the related service when the permission is enabled by the target user. The target user may perform the enabling operation on the component authorization control in the authorization interface, and the computer device may enable the configured permission of the floating translation component in response to the enabling operation on the component authorization control in the authorization interface. The floating translation component is displayed in the first display interface based on the configured permission in the case that the authorization interface is exited.

In some embodiments, the computer device may further obtain the configured permission corresponding to the floating translation component, and detect the screenshot taking permission of the floating translation component according to the configured permission to obtain a screenshot taking permission detection result. The operation of performing screenshot taking processing on the first display interface is executed in a case that the screenshot taking permission detection result indicates that the floating translation component has the screenshot taking permission.

In some embodiments, before performing screenshot taking processing on the first display interface to obtain the first interface image corresponding to the first display interface, the computer device may further obtain the configured permission of the floating translation component. The configured permission includes permission information about whether the floating translation component has the screenshot taking permission. For the manner of obtaining the configured permission, refer to the descriptions in operation S101; details are not elaborated herein. The computer device may detect the screenshot taking permission of the floating translation component according to the configured permission to obtain the screenshot taking permission detection result. The operation of performing screenshot taking processing on the first display interface to obtain the first interface image corresponding to the first display interface is performed in the case that the screenshot taking permission detection result of the floating translation component indicates that the floating translation component has the screenshot taking permission.

In some embodiments, if the computer device detects that the screenshot taking permission detection result of the floating translation component indicates that the floating translation component does not have the screenshot taking permission, a screenshot taking permission editing window is output. The screenshot taking permission editing window includes a second screenshot taking authorization control for enabling the screenshot taking permission of the floating translation component. The target user may tap the second screenshot taking authorization control, and set a status of the second screenshot taking authorization control to the “On” state. The computer device may enable the screenshot taking permission of the floating translation component in response to a trigger operation on the second screenshot taking authorization control in the screenshot taking permission editing window.
If the target user does not perform the trigger operation on the second screenshot taking authorization control in the screenshot taking permission editing window, prompt information is output, so as to prompt the target user that the floating translation component may provide the full-screen translation service for the display interface only when the screenshot taking permission of the floating translation component is enabled.

In some embodiments, the component authorization control includes a second floating authorization control and a first screenshot taking authorization control. The configured permission includes the floating permission and the screenshot taking permission. A specific manner in which the computer device enables the configured permission of the floating translation component in response to the enabling operation on the component authorization control in the authorization interface may include: enabling the floating permission of the floating translation component in response to a trigger operation on the second floating authorization control in the authorization interface, the floating permission being used for indicating that the floating translation component has the permission to be displayed in the first display interface; and enabling the screenshot taking permission of the floating translation component in response to a trigger operation on the first screenshot taking authorization control in the authorization interface, the screenshot taking permission being used for indicating that the floating translation component has the permission to take the screenshot of the first display interface.

In some embodiments, the authorization interface includes the second floating authorization control and the first screenshot taking authorization control. The target user may perform the trigger operation on the second floating authorization control and the first screenshot taking authorization control to enable the floating permission and the screenshot taking permission of the floating translation component. The floating permission of the floating translation component is used for indicating that the floating translation component has the permission to be displayed in the first display interface. The screenshot taking permission of the floating translation component is used for indicating that the floating translation component has the permission to take the screenshot of the first display interface. For example, the target user may tap switch buttons respectively corresponding to the second floating authorization control and the first screenshot taking authorization control, and set both the switch buttons to the “On” state. The computer device enables the floating permission of the floating translation component in response to the trigger operation performed by the target user on the second floating authorization control in the authorization interface. The computer device may further enable the screenshot taking permission of the floating translation component in response to the trigger operation on the first screenshot taking authorization control.

FIG. 6 is a schematic diagram of a method for obtaining the second interface image according to some embodiments.
As shown in FIG. 6, a specific manner in which the computer device translates the character of the first language type in the first interface image to obtain the second interface image including the character of the second language type may include operations S61 to S63.

S61: Gray the first interface image to obtain a grayed first interface image.

In some embodiments, after the computer device performs screenshot taking processing on the first display interface to obtain the first interface image when detecting that the trigger progress in the floating translation component satisfies the full-screen translation start progress, the first interface image may be grayed to obtain the grayed first interface image. Graying is a process of converting a color image into a gray image. Since the computer device directly performs screenshot taking processing on the first display interface to obtain the first interface image, the first interface image may be a color image. A color of each pixel in the color image is determined by three color components, and each color component has 256 possible values, so that a pixel may take any of more than 16 million (256*256*256) colors. The gray image may be a special color image whose three color components are the same, so that a pixel takes one of only 256 gray levels. Therefore, graying the first interface image can reduce a calculation amount for subsequent processing, and improve translation efficiency of full-screen translation of the display interface.

S62: Perform character extraction on the grayed first interface image to obtain the character of the first language type in the first interface image.

In some embodiments, the computer device performs character extraction on the grayed first interface image to obtain the character of the first language type in the first interface image. The computer device performs character extraction on the grayed first interface image, and screens the extracted candidate characters to remove non-verbal characters, for example, arrows, circles, boxes, or other non-verbal characters, to obtain the character of the first language type in the first interface image.

S63: Translate the character of the first language type into the character of the second language type, and generate the second interface image according to the character of the second language type.

In some embodiments, after obtaining the character of the first language type, the computer device may invoke the translation application program to translate the character of the first language type in the first interface image to obtain the character of the second language type, and generate the second interface image according to the character of the second language type.

In some embodiments, when translating the character of the first language type, the computer device may invoke the translation application to perform default translation on the character of the first language type to translate the character of the first language type into a default character (that is, the character of the second language type). For example, the computer device may perform default translation on an English character to obtain a Chinese character.
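The graying in operation S61 can be sketched as follows, assuming the Pillow imaging library as the image-processing backend (an assumption; the source does not name a library):

from PIL import Image  # Pillow; an assumed choice of imaging library

def gray_interface_image(color_image):
    # Pillow's "L" mode applies the standard luminance weighting
    # L = 0.299*R + 0.587*G + 0.114*B, collapsing the three color
    # components of each pixel into a single gray level and reducing
    # the data that the later character-extraction step must process.
    return color_image.convert("L")

# Example: a 4x4 pure-red stand-in for a screenshot grays to level 76.
demo = Image.new("RGB", (4, 4), (255, 0, 0))
assert set(gray_interface_image(demo).getdata()) == {76}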
In some embodiments, the computer device may further obtain a default display character type for the target user, that is, a character type displayed by default on a user terminal of the target user, and translate the character of the first language type into a character of the default display character type.

In some embodiments, a specific manner in which the computer device translates the character of the first language type into the character of the second language type and generates the second interface image according to the character of the second language type may include: performing semantic recognition on the character of the first language type to obtain character semantic information of the character of the first language type; translating the character of the first language type into the character of the second language type according to the character semantic information; obtaining character position information of the character of the first language type in the first interface image, and adding the character of the second language type to a transition interface image according to the character position information; and generating the second interface image including the character of the second language type according to the transition interface image, both the transition interface image and the second interface image having a same size as the first display interface.

In some embodiments, the computer device may perform semantic recognition on the character of the first language type to obtain the character semantic information of the character of the first language type, and translate the character of the first language type into the character of the second language type according to the character semantic information. Alternatively, the computer device may directly invoke the translation application program to translate the character of the first language type to obtain the character of the second language type. The computer device may further establish a two-dimensional coordinate system by using a lower-left corner of the first interface image as a coordinate origin, and obtain coordinate position information of the character of the first language type in the first interface image in the two-dimensional coordinate system, thereby obtaining the character position information of the character of the first language type in the first interface image. The computer device may add the character of the second language type to the transition interface image according to the character position information, and generate the second interface image including the character of the second language type according to the transition interface image. Both the transition interface image and the second interface image have the same size as the first display interface. The transition interface image is a transparent display interface image of the same size as the first display interface.

In some embodiments, a specific manner in which the computer device generates the second interface image including the character of the second language type according to the transition interface image may include: determining, in the transition interface image, a region indicated by the character position information, and determining the region as an addition region; and adding the character of the second language type to the addition region in the transition interface image, and generating the second interface image according to a transition interface image to which the character of the second language type has been added.
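A minimal sketch of this transition-interface-image step, again assuming Pillow; the placements list of (x, y, text) tuples is an assumed stand-in for the character position information, and Pillow's coordinate origin is the top-left corner, so real code would first convert from the lower-left origin used above:

from PIL import Image, ImageDraw  # Pillow; assumed imaging library

def build_transition_interface_image(size, placements):
    # Transparent image of the same size as the first display interface.
    transition = Image.new("RGBA", size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(transition)
    # Draw each character of the second language type at the position its
    # source character occupied in the first interface image. Rendering
    # Chinese output would need a CJK-capable font via ImageFont.truetype;
    # ASCII placeholders are used here so the sketch runs anywhere.
    for x, y, text in placements:
        draw.text((x, y), text, fill=(0, 0, 0, 255))
    return transition

image = build_transition_interface_image((800, 600), [(40, 50, "Start"), (40, 90, "Settings")])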
In some embodiments, after obtaining the character position information of the character of the first language type in the first interface image, the computer device may determine, in the transition interface image, the region indicated by the character position information as the addition region. The character of the second language type is added to the addition region in the transition interface image. That is, a position of the character of the first language type in the first interface image is the same as that of the character of the second language type in the transition interface image. For example, the lower-left corner of the first interface image may be determined as the coordinate origin to establish the two-dimensional coordinate system, to obtain the character position information of the character of the first language type. A two-dimensional coordinate system is established by using a lower-left corner of the transition interface image as a coordinate origin, a region indicated by the character position information is determined as an addition region in the transition interface image, and the character of the second language type is added to the addition region. The computer device may generate the second interface image according to the transition interface image to which the character of the second language type has been added. For example, a background color may be added to the transition interface image to which the character of the second language type has been added, to make the transition interface image a display interface image that has the same size as the first display interface and has the background color, thereby generating the second interface image. Alternatively, optimization processing such as denoising processing is performed on the transition interface image to which the character of the second language type has been added, to generate the second interface image.

In some embodiments, a specific manner in which the computer device generates the second interface image according to the transition interface image to which the character of the second language type has been added may include: performing pixel recognition on the first interface image to obtain a region color of a region in which the character of the first language type is located in the first interface image; and setting a background color of the transition interface image to which the character of the second language type has been added to the region color to obtain the second interface image.

In some embodiments, the computer device may perform pixel recognition on the first interface image to obtain the region color of the region in which the character of the first language type is located in the first interface image, and set the background color of the transition interface image to which the character of the second language type has been added to the region color to obtain the second interface image. After the computer device obtains the second interface image, since the second interface image is pixel byte data, if the second interface image needs to be displayed, the pixel byte data of the second interface image needs to be encapsulated, that is, the second interface image is converted into a bitmap object, and the second interface image converted into the bitmap object is output on a display screen.
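The pixel-recognition step above can be sketched by averaging the pixels of the character region, assuming Pillow and an assumed box rectangle (left, top, right, bottom) around the character region:

from PIL import Image  # Pillow; assumed imaging library

def region_color(first_interface_image, box):
    # Average the pixels of the character region and use the result as the
    # background color for the matching addition region of the transition
    # interface image. A real implementation might instead sample the most
    # frequent color; the mean is used here only to keep the sketch short.
    region = first_interface_image.crop(box).convert("RGB")
    pixels = list(region.getdata())
    return tuple(sum(channel) // len(pixels) for channel in zip(*pixels))

demo = Image.new("RGB", (100, 40), (200, 220, 240))
assert region_color(demo, (10, 10, 60, 30)) == (200, 220, 240)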
In some embodiments, the floating translation component may be in the game acceleration assistant. When the target user needs to use the full-screen interface translation function provided by the floating translation component, the target user needs to turn on an enabling button of the floating translation component in the game acceleration assistant to enable the floating translation component. The computer device may enable the floating translation component in response to an enabling operation performed by the target user on the floating translation component. The target user may set the floating translation component to be displayed in only a display interface of the foreign-server game and not displayed in an interface of a national-server game. In this way, interference of the floating translation component can be avoided when the target user plays the national-server game. When the target user starts the foreign-server game, the computer device may detect whether the floating translation component is enabled. If the floating translation component is not enabled, the prompt information may be output, so as to prompt the target user to enable the floating translation component to enjoy the full-screen translation function of the floating translation component. If the floating translation component is enabled, the computer device may detect whether the floating permission of the floating translation component is enabled. If the floating permission of the floating translation component is not enabled, the floating permission editing window may be output. The floating permission editing window includes the first floating authorization control for enabling the floating permission of the floating translation component, and the prompt information for prompting the target user that the full-screen translation function of the floating translation component is available only when the floating permission of the floating translation component is enabled. The target user may tap the first floating authorization control in the floating permission editing window to enable the floating permission of the floating translation component. The computer device may enable the floating permission of the floating translation component in response to a trigger operation performed by the target user on the first floating authorization control, and display the floating translation component in the display interface of the foreign-server game. After displaying the floating translation component in the display interface of the foreign-server game, the computer device may detect whether the floating translation component is displayed for the first time. If the floating translation component is displayed for the first time, a tutorial mask layer may be output. The tutorial mask layer displays a method for using the floating translation component, for example, touching and holding the floating translation component until the full-screen scanning animation appears. If the target user does not enable the floating permission of the floating translation component, the prompt information is output, so as to prompt the target user that the full-screen translation function of the floating translation component is available only when the floating permission of the floating translation component is enabled.
After displaying the floating translation component in the display interface of the foreign-server game, the computer device may detect whether the floating translation component has the screenshot taking permission for screenshot taking or screen recording when detecting that the trigger progress in the floating translation component satisfies the full-screen translation start progress. If detecting that the floating translation component has the screenshot taking permission, the computer device takes a screenshot of the display interface of the foreign-server game, translates the screenshot of the display interface of the foreign-server game, and outputs the translated display interface of the foreign-server game. If detecting that the floating translation component does not have the screenshot taking permission, the computer device displays the screenshot taking permission editing window. The screenshot taking permission editing window includes the second screenshot taking authorization control for enabling the screenshot taking permission of the floating translation component, and the prompt information for indicating that the full-screen translation function of the floating translation component is available only when the screenshot taking permission of the floating translation component is enabled. The target user may tap the second screenshot taking authorization control in the screenshot taking permission editing window, and set the second screenshot taking authorization control to the “On” state. The computer device may enable the screenshot taking permission of the floating translation component in response to an enabling operation performed by the target user on the second screenshot taking authorization control, and perform the operation of performing screenshot taking processing on the display interface of the foreign-server game, thereby performing full-screen translation on the display interface of the foreign-server game.

In some embodiments, the computer device may recognize a position of the character of the first language type in the first interface image to obtain character position information of the character of the first language type in the first interface image. The computer device may obtain a region color of a region in which the character of the first language type is located, and set a background color of a transition interface (for example, a transition mask layer) to the region color of the region in which the character of the first language type is located. The transition interface may be a rectangle. A size of the transition interface is a region size of the region in which the character of the first language type is located. Alternatively, a size of the transition interface is a size of a region extending by P pixels from an edge of the region in which the character of the first language type is located, P being a positive integer. For example, P may take a value of 1, 2, 3, and so on. The character of the second language type is added to the transition interface to generate the second interface image.

FIG. 7 is a schematic diagram of performing full-screen translation on a display interface by using the floating translation component according to some embodiments. As shown in FIG. 7, full-screen translation may be performed by using the floating translation component on a display interface including an English character. The floating translation component may be displayed in a display interface 70a including an English character.
The floating translation component may be displayed at a position such as a lower-right corner, a lower-left corner, or an upper-left corner of the display interface 70a, so as to avoid interfering with interface browsing of the target user. When the target user keeps tapping the floating translation component, the computer device may switch the display interface 70a to a display interface 70b. When the target user performs a trigger operation on the floating translation component, that is, taps a display identifier of the floating translation component, the computer device may enlarge the display identifier of the floating translation component, such that the target user notices that the floating translation component is currently being started. The computer device may further display a trigger progress in the floating translation component. The trigger progress is associated with a tapping operation of the target user. For example, it takes 4 seconds for the trigger progress to reach 100%. In this case, when the tapping operation of the target user lasts for 1 second, the computer device may display a trigger progress of 25% in the floating translation component. When detecting that the trigger progress in the floating translation component reaches 100%, the computer device may switch the display interface 70b to a display interface 70c to display that the trigger progress in the floating translation component has reached the full-screen translation start progress of 100%. After detecting that the trigger progress in the floating translation component reaches 100%, the computer device switches the display interface 70c to a display interface 70d, and outputs a full-screen translation scanning animation in the display interface 70d, so as to indicate a state in which full-screen interface translation is currently being performed on the first display interface. When outputting the full-screen translation scanning animation, the computer device may perform screenshot taking processing on the currently displayed display interface (that is, the first display interface) to obtain a first interface image, and translate the English character in the first interface image into a Chinese character corresponding to the English character to generate a second display interface including the Chinese character. After generating the second display interface including the Chinese character, the computer device may stop outputting the full-screen translation scanning animation, and switch the display interface 70d to a display interface 70e to display the Chinese character obtained by translating the English character in the first display interface.

FIG. 8 is a schematic diagram of obtaining a permission configuration of the floating translation component according to some embodiments. As shown in FIG. 8, before displaying the floating translation component in the first display interface, the computer device may detect whether the floating translation component is enabled 80a. If detecting that the floating translation component is not enabled, the computer device may output the prompt information 80i, so as to prompt the user to enable the floating translation component to use the full-screen translation function. If detecting that the floating translation component is enabled, the computer device detects whether the floating permission of the floating translation component is currently enabled 80b.
If detecting that the floating permission of the floating translation component has been enabled, the computer device displays the floating translation component in the first display interface 80c. When displaying the floating translation component in the first display interface, the computer device may determine whether the floating translation component appears for the first time 80d. If the floating translation component appears for the first time, a tutorial is output 80e, so as to indicate how to use the floating translation component. If detecting that the floating permission of the floating translation component is not enabled, the computer device displays the floating permission editing window 80f, for the target user to enable the floating permission of the floating translation component. The computer device may detect whether the screenshot taking permission of the floating translation component is currently enabled 80g. If it is detected that the screenshot taking permission of the floating translation component is not enabled, the screenshot taking permission editing window is displayed for the target user to enable the screenshot taking permission of the floating translation component.

In some embodiments, the trigger progress is displayed in the floating translation component in response to the trigger operation on the floating translation component in the first display interface, the first display interface including the character of the first language type, the trigger progress being associated with the trigger duration, and the trigger duration being the duration of the trigger operation on the floating translation component. In this way, displaying the trigger progress in the floating translation component can indicate the state in which the floating translation component is currently being started. When the target user does not need to start the floating translation component, the trigger operation on the floating translation component may be stopped. This can avoid the floating translation component being started by a mistaken touch. The first display interface is switched to the second display interface in the case that the trigger progress in the floating translation component satisfies the full-screen translation start progress, the second display interface including the character of the second language type, and the character of the second language type being obtained by translating the character of the first language type. Since full-screen translation may be performed on the character in the first display interface by triggering the floating translation component, efficiency of translating the character in the first display interface can be improved.

FIG. 9 is a schematic flowchart of an interface information processing method according to some embodiments. The interface information processing method may be performed by a computer device. The computer device may be a server (for example, the server 10 in FIG. 1), or a user terminal (for example, any user terminal in the user terminal cluster in FIG. 1), or a system including a server and a user terminal.
This is not limited in this application. As shown in FIG. 9, the interface information processing method may include operations S201 to S203.

S201: Display, in response to a trigger operation on a floating translation component in a first display interface, a trigger progress in the floating translation component.

S202: Switch the first display interface to a second display interface in a case that the trigger progress in the floating translation component satisfies a full-screen translation start progress.

For specific content of operations S201 and S202 in some embodiments, refer to the content of operations S101 and S102 in FIG. 3; details are not elaborated herein.

S203: Switch the second display interface to the first display interface in response to a touch operation on the second display interface or a trigger operation on an exit control in the second display interface, and resume display of the floating translation component in the first display interface.

In some embodiments, when intending to exit the second display interface, a target user may tap the exit button in the second display interface, or touch any position in the second display interface, to exit the second display interface. The computer device may switch the second display interface to the first display interface in response to the touch operation performed by the target user on the second display interface or the trigger operation on the exit control in the second display interface, and resume display of the floating translation component in the first display interface.

In some embodiments, the computer device may further obtain, in response to a dragging operation on the floating translation component, movement position information determined by the dragging operation, and update a display position of the floating translation component in the first display interface according to the movement position information. Current display position information of the floating translation component is obtained in response to detecting that the dragging operation ends, and distances between the display position information and N interface boundaries of the first display interface are obtained respectively, N being a positive integer greater than or equal to 3. An interface boundary corresponding to a minimum distance is obtained as a target interface boundary, and the floating translation component is displayed on the target interface boundary.

In some embodiments, the target user may drag the floating translation component in the first display interface to any position in the first display interface, so as to adjust the display position of the floating translation component and prevent display of the floating translation component from affecting browsing of the first display interface by the target user. The computer device may respond to the dragging operation performed by the target user on the floating translation component, obtain the movement position information determined by the dragging operation, and update the display position information of the floating translation component in the first display interface according to the movement position information. When detecting that the dragging operation performed by the target user on the floating translation component ends, the computer device obtains the current display position information of the floating translation component, and obtains the distances between the display position information and the N interface boundaries of the first display interface respectively. The interface boundary corresponding to the minimum distance is obtained as the target interface boundary, and the floating translation component is displayed on the target interface boundary.
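A minimal sketch of this snap-to-boundary behavior, assuming four interface boundaries and a coordinate origin at the upper-left corner (the function name and the example values are illustrative, not from the source):

def snap_to_nearest_boundary(x, y, width, height):
    # Distance from the component's release position to each boundary.
    distances = {
        "left": x,
        "right": width - x,
        "top": y,
        "bottom": height - y,
    }
    target_boundary = min(distances, key=distances.get)
    if target_boundary == "left":
        return (0, y)
    if target_boundary == "right":
        return (width, y)
    if target_boundary == "top":
        return (x, 0)
    return (x, height)

# Example: a component released near the right edge snaps to that edge.
assert snap_to_nearest_boundary(950, 400, 1000, 800) == (1000, 400)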
For example, after dragging the floating translation component to a target position (a real-time display position at which the target user releases the floating translation component, that is, a current display position) in the first display interface, the target user stops the dragging operation on the floating translation component, for example, releases the floating translation component. The computer device may detect whether the target position of the floating translation component belongs to a boundary position of the first display interface (that is, a boundary region of the first display interface). If the target position of the floating translation component does not belong to the boundary position of the first display interface, distances between the target position and interface boundaries of the first display interface are obtained respectively. For example, the first display interface includes a first interface boundary, a second interface boundary, a third interface boundary, and a fourth interface boundary. In this case, a first distance between the current target position of the floating translation component and the first interface boundary may be obtained. The computer device may obtain a second distance between the current target position of the floating translation component and the second interface boundary, a third distance between the current target position of the floating translation component and the third interface boundary, and a fourth distance between the current target position of the floating translation component and the fourth interface boundary. The computer device may obtain an interface boundary corresponding to a minimum distance among the first distance, the second distance, the third distance, and the fourth distance as the target interface boundary, and display the floating translation component on the target interface boundary.

FIG. 10 is a schematic diagram of resuming display of the floating translation component according to some embodiments. As shown in FIG. 10, the target user may tap an “Exit” button in the second display interface to exit the second display interface, and resume display of the floating translation component. When detecting that the trigger progress in the floating translation component satisfies the full-screen translation start progress, the computer device switches the first display interface to the second display interface, and the target user may understand and read the character of the second language type in the second display interface. When finishing reading and intending to exit the second display interface, the target user may tap the “Exit” button in the second display interface. The computer device may switch the second display interface 100a to the first display interface 100b in response to a trigger operation performed by the target user on the “Exit” button in the second display interface, and resume display of the floating translation component.
FIG. 10 is a schematic diagram of resuming display of the floating translation component according to some embodiments. As shown in FIG. 10, the target user may tap an "Exit" button in the second display interface to exit the second display interface and resume display of the floating translation component. When detecting that the trigger progress in the floating translation component satisfies the full-screen translation start progress, the computer device switches the first display interface to the second display interface, and the target user may read and understand the character of the second language type in the second display interface. When finishing reading and intending to exit the second display interface, the target user may tap the "Exit" button. The computer device may switch the second display interface 100a to the first display interface 100b in response to a trigger operation performed by the target user on the "Exit" button in the second display interface, and resume display of the floating translation component.

FIG. 11 is a schematic diagram of performing full-screen translation on a display interface according to some embodiments. As shown in FIG. 11, if encountering a display interface with a language barrier when browsing on a screen of a user terminal 110a, a target user 110b may start the floating translation component to perform full-screen translation on that display interface, so as to browse it without any language barrier. If the target user 110b browses a display interface including a character of a first language type (that is, an English character), the user terminal displays the floating translation component in the display interface including the English character upon detecting that the floating translation component has a floating permission. If detecting that the target user 110b taps the floating translation component in the display interface, the user terminal 110a may display a trigger progress in the floating translation component in response to the tapping operation of the target user 110b. The trigger progress is associated with a continuous tapping operation performed by the target user on the floating translation component.

When the computer device detects that the trigger progress in the floating translation component reaches the full-screen translation start progress of 100%, that is, when a display interface 110c is displayed, the user terminal 110a may determine the currently displayed display interface as a first display interface, and detect whether the floating translation component has a screenshot taking permission. If the floating translation component has the screenshot taking permission, a screenshot of the first display interface is taken to obtain a first interface image corresponding to the first display interface. After obtaining the first interface image 110e, the user terminal may send the first interface image 110e to a server 110d, together with a display interface translation request. The display interface translation request includes a target language type (that is, Chinese). The target language type is used for determining to translate the character of the first language type (that is, the English character) in the first display interface into a character of the target language type (that is, a Chinese character). If not receiving a target language type specified by the user terminal 110a, the server 110d may directly perform default translation on the character of the first language type in the first display interface to obtain a character of a second language type.

After receiving the first interface image 110e and the display interface translation request sent by the user terminal 110a, the server 110d may gray the first interface image 110e to obtain a grayed first interface image, thereby reducing the amount of data processed when translating the first interface image 110e. The server 110d may perform character extraction on the grayed first interface image to obtain the English character in the first interface image 110e, perform semantic recognition on the English character to obtain a semantic recognition result, and obtain the Chinese character corresponding to the English character in the first interface image 110e according to the semantic recognition result. The server 110d may obtain first character position information of the English character in the grayed first interface image, and determine the corresponding position in a transition interface image as second character position information.
The server 110d may add the Chinese character corresponding to the English character in the first interface image 110e to the region in which the second character position information is located, to generate a second interface image 110f. After generating the second interface image 110f, the server 110d may send the second interface image 110f to the user terminal 110a. After receiving the second interface image 110f, the user terminal 110a may display the second interface image 110f overlaying the first display interface, that is, display a display interface 110g. The display interface 110g includes an exit button for exiting the display interface 110g. After completing browsing the information in the display interface 110g, the target user 110b may tap the exit button in the display interface 110g to exit. When the target user 110b taps the exit button in the display interface 110g, that is, when a display interface 110h is displayed in the user terminal, the user terminal 110a may, in response to the tapping operation of the target user 110b, switch the display interface 110h to a display interface 110i to exit the display interface including the Chinese character, and resume display of the display interface including the English character and of the floating translation component.

In some embodiments, the trigger progress is displayed in the floating translation component in response to the trigger operation on the floating translation component in the first display interface, the first display interface including the character of the first language type, the trigger progress being associated with trigger duration, and the trigger duration being the duration of the trigger operation on the floating translation component. In this way, displaying the trigger progress in the floating translation component can indicate the state in which the floating translation component is currently being started. When the target user does not need to start the floating translation component, the trigger operation on the floating translation component may be stopped. This prevents the floating translation component from being started by a mistaken touch.

The first display interface is switched to the second display interface in the case that the trigger progress in the floating translation component satisfies the full-screen translation start progress, the second display interface including the character of the second language type, and the character of the second language type being obtained by translating the character of the first language type. When the trigger progress in the floating translation component satisfies the full-screen translation start progress, the floating translation component is started to perform full-screen translation on the character of the first language type in the first display interface. In this way, the floating translation component is started only when it is determined that the target user intends to start it. This avoids unnecessary interference of the floating translation component with the user, improves user experience, and avoids wasting hardware resources on mistaken taps of the floating translation component.
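The FIG. 11 round trip on the terminal side (check the screenshot taking permission, capture the first interface image, send it with the target language type, overlay the returned second interface image) might be sketched as follows. This is illustrative only; ui and server are hypothetical stand-ins, not an actual API of this system:

    def request_full_screen_translation(ui, server, target_language="zh"):
        # Take the screenshot only if the component holds the permission;
        # otherwise surface the permission editing window instead.
        if not ui.has_screenshot_permission():
            ui.open_screenshot_permission_editor()
            return
        first_interface_image = ui.capture_screenshot()
        # The request carries the image plus the target language type; the
        # server falls back to a default target language if none is given.
        response = server.translate_interface(
            image=first_interface_image,
            target_language=target_language,
        )
        # Overlay the returned second interface image on the first interface.
        ui.overlay(response.second_interface_image)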
In addition, after the floating translation component is started, a full-screen scanning animation may be displayed in the first display interface to indicate to the target user that full-screen interface translation is currently being performed on the first display interface. Full-screen translation is performed on the character of the first language type in the first display interface to generate the second display interface including the character of the second language type, and the first display interface is switched to the second display interface. This can improve the efficiency of translating the characters in the display interface. Moreover, the second display interface may be switched to the first display interface in response to the touch operation performed by the target user on the second display interface or the trigger operation on the exit control in the second display interface, and display of the floating translation component is resumed in the first display interface. In this way, the target user may continue to browse a subsequent display interface and use the full-screen translation function of the floating translation component again. In some embodiments, since full-screen translation may be performed directly on the characters in the first display interface by triggering the floating translation component, the efficiency of translating the characters in the display interface can be greatly improved.

FIG. 12 is a schematic structural diagram of an interface information processing apparatus according to some embodiments. The interface information processing apparatus may be a computer program (including program code) running in a computer device; for example, the interface information processing apparatus is application software. The interface information processing apparatus may be configured to perform the corresponding operations in the interface information processing method provided in some embodiments. As shown in FIG. 12, the interface information processing apparatus may include a trigger progress display module 11, an interface switching module 12, a detection module 13, a first component display module 14, an output module 15, a second component display module 16, a display resuming module 17, an update module 18, an obtaining module 19, and a third component display module 20.

The trigger progress display module 11 is configured to display, in response to a trigger operation on a floating translation component in a first display interface, a trigger progress in the floating translation component. The first display interface includes a character of a first language type. The trigger progress is associated with trigger duration. The trigger duration is the duration of the trigger operation on the floating translation component. The interface switching module 12 is configured to switch the first display interface to a second display interface in a case that the trigger progress in the floating translation component satisfies a full-screen translation start progress. The second display interface includes a character of a second language type. The character of the second language type is obtained by translating the character of the first language type.
The interface information processing apparatus further includes: the detection module 13, configured to obtain a configured permission corresponding to the floating translation component, and detect a floating permission of the floating translation component according to the configured permission to obtain a floating permission detection result; and the first component display module 14, configured to display the floating translation component in the first display interface in a case that the floating permission detection result indicates that the floating translation component has the floating permission.

The interface information processing apparatus further includes: the output module 15, configured to output a floating permission editing window for the floating translation component in a case that the floating permission detection result indicates that the floating translation component does not have the floating permission; and the second component display module 16, configured to display the floating translation component in the first display interface in response to an enabling operation on a first floating authorization control in the floating permission editing window.

The interface switching module 12 includes: a first display unit 1201, configured to display a full-screen scanning animation on the first display interface in the case that the trigger progress in the floating translation component satisfies the full-screen translation start progress, the full-screen scanning animation being used for indicating a state in which full-screen interface translation is currently being performed on the first display interface; and an interface switching unit 1202, configured to switch the first display interface to the second display interface in a case that the full-screen scanning animation ends.

The interface information processing apparatus further includes: the display resuming module 17, configured to switch the second display interface to the first display interface in response to a touch operation on the second display interface or a trigger operation on an exit control in the second display interface, and resume display of the floating translation component in the first display interface.

The interface information processing apparatus further includes: the update module 18, configured to obtain, in response to a dragging operation on the floating translation component, movement position information determined by the dragging operation, and update a display position of the floating translation component in the first display interface according to the movement position information; the obtaining module 19, configured to obtain current display position information of the floating translation component in response to detecting that the dragging operation ends, and obtain distances between the display position information and N interface boundaries of the first display interface respectively, N being a positive integer greater than or equal to 3; and the third component display module 20, configured to obtain an interface boundary corresponding to a minimum distance as a target interface boundary, and display the floating translation component on the target interface boundary.

The second display interface includes a second interface image.
The interface switching module 12 further includes: a screenshot taking processing unit 1203, configured to perform screenshot taking processing on the first display interface to obtain a first interface image in the case that the trigger progress in the floating translation component satisfies the full-screen translation start progress; a translation unit 1204, configured to translate the character of the first language type in the first interface image to obtain the second interface image including the character of the second language type; and a second display unit 1205, configured to display the second interface image overlaying the first display interface.

The interface switching module 12 further includes: a third display unit 1206, configured to display an authorization interface for the floating translation component; a first enabling unit 1207, configured to enable a configured permission of the floating translation component in response to an enabling operation on a component authorization control in the authorization interface; and a fourth display unit 1208, configured to display the floating translation component in the first display interface based on the configured permission in a case that the authorization interface is exited.

In some embodiments, the component authorization control includes a second floating authorization control and a first screenshot taking authorization control, and the configured permission includes a floating permission and a screenshot taking permission. The first enabling unit 1207 is configured to: enable the floating permission of the floating translation component in response to a trigger operation on the second floating authorization control in the authorization interface, the floating permission being used for indicating that the floating translation component has a permission to be displayed in the first display interface; and enable the screenshot taking permission of the floating translation component in response to a trigger operation on the first screenshot taking authorization control in the authorization interface, the screenshot taking permission being used for indicating that the floating translation component has a permission to take a screenshot of the first display interface.

The interface switching module 12 further includes: an obtaining unit 1209, configured to obtain a configured permission corresponding to the floating translation component; a detection unit 1210, configured to detect a screenshot taking permission of the floating translation component according to the configured permission to obtain a screenshot taking permission detection result; and an execution unit 1211, configured to execute, in a case that the screenshot taking permission detection result indicates that the floating translation component has the screenshot taking permission, the operation of performing screenshot taking processing on the first display interface.

The interface switching module 12 further includes: a second enabling unit 1212, configured to output a screenshot taking permission editing window in a case that the screenshot taking permission detection result indicates that the floating translation component does not have the screenshot taking permission, and enable the screenshot taking permission of the floating translation component in response to a trigger operation on a second screenshot taking authorization control in the screenshot taking permission editing window.
The translation unit 1204 is configured to: gray the first interface image to obtain a grayed first interface image; perform character extraction on the grayed first interface image to obtain the character of the first language type in the first interface image; and translate the character of the first language type into the character of the second language type, and generate the second interface image according to the character of the second language type.

The translation unit 1204 is further configured to: perform semantic recognition on the character of the first language type to obtain character semantic information of the character of the first language type, and translate the character of the first language type into the character of the second language type according to the character semantic information; and obtain character position information of the character of the first language type in the first interface image, add the character of the second language type to a transition interface image according to the character position information, and generate the second interface image including the character of the second language type according to the transition interface image, both the transition interface image and the second interface image having the same size as the first display interface.

The translation unit 1204 is further configured to: determine, in the transition interface image, a region indicated by the character position information as an addition region; add the character of the second language type to the addition region in the transition interface image; and generate the second interface image according to the transition interface image to which the character of the second language type has been added.

The translation unit 1204 is further configured to: perform pixel recognition on the first interface image to obtain a region color of the region in which the character of the first language type is located in the first interface image; and set a background color of the transition interface image to which the character of the second language type has been added to the region color, to obtain the second interface image.

For specific implementations of the trigger progress display module 11, the interface switching module 12, the detection module 13, the first component display module 14, the output module 15, the second component display module 16, the display resuming module 17, the update module 18, the obtaining module 19, and the third component display module 20, refer to the descriptions in the embodiment corresponding to FIG. 3 or FIG. 9; they are not elaborated herein.

According to some embodiments, each module in the interface information processing apparatus shown in FIG. 12 may exist separately or be combined into one or more units. Alternatively, a certain unit (or units) may be further split into multiple smaller function subunits, thereby implementing the same operations without affecting the technical effects of some embodiments. The modules are divided based on logical functions. In actual applications, the function of one module may be realized by multiple units, or the functions of multiple modules may be realized by one unit. In another embodiment of this application, the interface information processing apparatus may further include other units, and in actual applications these functions may be realized cooperatively by the other units or by multiple units together.
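As one way to picture what the translation unit 1204 does, here is a sketch of the gray, extract, translate, and composite pipeline using Pillow for the image steps. The ocr and translate arguments are stand-in callables for an OCR engine and a machine-translation service (not named by the patent), and rendering details such as fonts and text wrapping are omitted:

    from PIL import Image, ImageDraw

    def build_second_interface_image(first_image_path, ocr, translate):
        """ocr(gray_image) -> [(text, box)] and translate(text) -> text are
        assumed interfaces, not part of this system."""
        first = Image.open(first_image_path).convert("RGB")
        gray = first.convert("L")  # graying reduces the data processed downstream

        # The transition interface image has the same size as the first interface.
        transition = first.copy()
        draw = ImageDraw.Draw(transition)

        for text, box in ocr(gray):       # character extraction with positions
            translated = translate(text)  # e.g. English -> Chinese
            left, top, right, bottom = box
            # Sample the region color where the source characters sit, use it
            # as the background of the addition region, then add the translation.
            region_color = first.getpixel((left, top))
            draw.rectangle(box, fill=region_color)
            draw.text((left, top), translated, fill=(0, 0, 0))

        return transition  # the second interface image

Painting the addition region with the sampled region color before drawing the translated characters is what keeps the second interface image visually consistent with the first one, as the pixel-recognition step above describes.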
A person skilled in the art would understand that these "modules" and "units" could be implemented by hardware logic, by a processor or processors executing computer software code, or by a combination of both. The "modules" and "units" may also be implemented in software stored in a memory of a computer or a non-transitory computer-readable medium, where the instructions of each module and unit are executable by a processor to thereby cause the processor to perform the respective operations of the corresponding module and unit.

In some embodiments, the trigger progress is displayed in the floating translation component in response to the trigger operation on the floating translation component in the first display interface, the first display interface including the character of the first language type, the trigger progress being associated with the trigger duration, and the trigger duration being the duration of the trigger operation on the floating translation component. In this way, displaying the trigger progress in the floating translation component can indicate the state in which the floating translation component is currently being started. When a target user does not need to start the floating translation component, the trigger operation on the floating translation component may be stopped. This prevents the floating translation component from being started by a mistaken touch.

The first display interface is switched to the second display interface in the case that the trigger progress in the floating translation component satisfies the full-screen translation start progress, the second display interface including the character of the second language type, and the character of the second language type being obtained by translating the character of the first language type. When the trigger progress in the floating translation component satisfies the full-screen translation start progress, the floating translation component is started to perform full-screen translation on the character of the first language type in the first display interface. In this way, the floating translation component is started only when it is determined that the target user intends to start it, which avoids unnecessary interference of the floating translation component with the user and can improve user experience.

In addition, after the floating translation component is started, the full-screen scanning animation may be displayed in the first display interface to indicate to the target user that full-screen interface translation is currently being performed on the first display interface. Full-screen translation is performed on the character of the first language type in the first display interface to generate the second display interface including the character of the second language type, and the first display interface is switched to the second display interface. This can improve the efficiency of translating the characters in the display interface. Moreover, the second display interface may be switched to the first display interface in response to the touch operation performed by the target user on the second display interface or the trigger operation on the exit control in the second display interface, and display of the floating translation component is resumed in the first display interface. In this way, the target user may continue to browse a subsequent display interface and use the full-screen translation function of the floating translation component again.
In some embodiments, since full-screen translation may be performed directly on the characters in the first display interface by triggering the floating translation component, the efficiency of translating the characters in the display interface can be improved.

FIG. 13 is a schematic structural diagram of a computer device according to some embodiments. As shown in FIG. 13, the computer device 1000 may include a processor 1001, a network interface 1004, and a memory 1005. In addition, the computer device 1000 may further include a target user interface 1003 and at least one communication bus 1002. The communication bus 1002 is configured to implement connection and communication between these components. The target user interface 1003 may include a display and a keyboard. In some embodiments, the target user interface 1003 may further include a standard wired interface and a wireless interface. The network interface 1004 may include a standard wired interface and a standard wireless interface (such as a wireless fidelity (Wi-Fi) interface). The memory 1005 may be a high-speed random access memory (RAM), or may be a non-volatile memory, for example, at least one disk memory. Alternatively, the memory 1005 may be at least one storage apparatus located remotely from the processor 1001. As shown in FIG. 13, as a computer-readable storage medium, the memory 1005 may store an operating system, a network communication module, a target user interface module, and a device control application program.

In the computer device 1000 shown in FIG. 13, the network interface 1004 may provide a network communication function, and the target user interface 1003 is mainly configured to provide an input interface for a target user. The processor 1001 may be configured to invoke the device control application program stored in the memory 1005 to implement: displaying, in response to a trigger operation on a floating translation component in a first display interface, a trigger progress in the floating translation component, the first display interface including a character of a first language type, the trigger progress being associated with trigger duration, and the trigger duration being the duration of the trigger operation on the floating translation component; and switching the first display interface to a second display interface in a case that the trigger progress in the floating translation component satisfies a full-screen translation start progress, the second display interface including a character of a second language type, and the character of the second language type being obtained by translating the character of the first language type.

It is to be understood that the computer device 1000 described in some embodiments may execute the descriptions about the interface information processing method in the embodiment corresponding to FIG. 3, or may execute the descriptions about the interface information processing apparatus in the embodiment corresponding to FIG. 12. Elaborations are omitted herein.

In some embodiments, the trigger progress is displayed in the floating translation component in response to the trigger operation on the floating translation component in the first display interface, the first display interface including the character of the first language type, the trigger progress being associated with the trigger duration, and the trigger duration being the duration of the trigger operation on the floating translation component.
In this way, displaying the trigger progress in the floating translation component can indicate a state in which the floating translation component is currently being started. When the target user does not need to start the floating translation component, the trigger operation on the floating translation component may be stopped. This prevents the floating translation component from being started by a mistaken touch. The first display interface is switched to the second display interface in the case that the trigger progress in the floating translation component satisfies the full-screen translation start progress, the second display interface including the character of the second language type, and the character of the second language type being obtained by translating the character of the first language type. When the trigger progress in the floating translation component satisfies the full-screen translation start progress, the floating translation component is started to perform full-screen translation on the character of the first language type in the first display interface. In this way, the floating translation component is started only when it is determined that the target user intends to start it. This avoids unnecessary interference of the floating translation component with the user, improves user experience, and avoids wasting hardware resources on mistaken taps of the floating translation component.

In addition, after the floating translation component is started, a full-screen scanning animation may be displayed in the first display interface to indicate to the target user that full-screen interface translation is currently being performed on the first display interface. Full-screen translation is performed on the character of the first language type in the first display interface to generate the second display interface including the character of the second language type, and the first display interface is switched to the second display interface. This can improve the efficiency of translating the characters in the display interface. Moreover, the second display interface may be switched to the first display interface in response to a touch operation performed by the target user on the second display interface or a trigger operation on an exit control in the second display interface, and display of the floating translation component is resumed in the first display interface. In this way, the target user may continue to browse a subsequent display interface and use the full-screen translation function of the floating translation component again. In some embodiments, since full-screen translation may be performed directly on the characters in the first display interface by triggering the floating translation component, the efficiency of translating the characters in the display interface can be greatly improved.

In addition, some embodiments also provide a computer-readable storage medium. The computer-readable storage medium stores a computer program executed by the interface information processing apparatus mentioned above, and the computer program includes program instructions. The processor, when executing the program instructions, may execute the descriptions about the interface information processing method in the embodiment corresponding to FIG. 3. Therefore, elaborations are omitted herein. In addition, the description of the beneficial effects of the same method is not repeated herein.
For technical details that are not disclosed in the embodiment of the computer-readable storage medium involved in this application, refer to the descriptions in the method embodiments of this application. As an example, the program instructions may be deployed on one computing device for execution, or executed on multiple computing devices at the same place, or executed on multiple computing devices interconnected through a communication network at multiple places. The multiple computing devices interconnected through the communication network at multiple places may form a blockchain system.

In addition, some embodiments also provide a computer program product or computer program. The computer program product or computer program may include computer instructions, and the computer instructions may be stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium, and the processor may execute the computer instructions to enable the computer device to execute the descriptions about the interface information processing method in the embodiment corresponding to FIG. 3 or FIG. 9. Elaborations are omitted herein. In addition, the description of the beneficial effects of the same method is not repeated herein. For technical details that are not disclosed in the embodiment of the computer program product or computer program involved in this application, refer to the descriptions in the method embodiments of this application.

It is to be noted that, for brevity of description, each method embodiment is expressed as a series of actions. However, a person skilled in the art is to know that this application is not limited to the described action sequence, because some operations may be performed in another sequence or at the same time according to this application. Second, a person skilled in the art is also to know that all of the embodiments described in this specification are some embodiments of this application, and the involved actions and modules are not necessarily required by this application. The operations in the method in some embodiments may be adjusted in sequence, combined, and deleted according to actual needs, and the modules in the apparatus in some embodiments may be combined, divided, and deleted according to actual needs.

It can be understood by a person of ordinary skill in the art that all or some of the processes in the method in the foregoing embodiments may be completed by a computer program instructing related hardware. The computer program may be stored in a computer-readable storage medium, and when the program is executed, the processes in each method embodiment may be included. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a RAM, or the like.

The foregoing embodiments are used for describing, rather than limiting, the technical solutions of the disclosure. A person of ordinary skill in the art shall understand that although the disclosure has been described in detail with reference to the foregoing embodiments, modifications can still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements can be made to some technical features therein, provided that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the disclosure. | 105,365
11861150 | DETAILED DESCRIPTION

FIG. 1 illustrates the computer-networking environment 100 suitable for use in explaining example embodiments of the invention. The computer-networking environment 100 includes a network 101, such as a local area network (e.g., LAN), that interconnects a plurality of computer systems 110-1 through 110-N that each execute respective relation managers 150 (application 150-1 and process 150-2), view managers 178 (application 178-1 and process 178-2), connections managers 180 (application 180-1 and process 180-2) and IO file managers 182 (application 182-1 and process 182-2) under respective control of a plurality of users 108. The computer systems 110 may be any type of computerized device such as a personal computer, laptop, workstation, mainframe terminal, or the like. In this example, each computer system 110 generally includes an interconnection mechanism 111, such as a data bus, motherboard or other circuitry, that interconnects a memory 112, a processor 113, an input output interface 114 and a communications interface 115. A display 130, such as a computer monitor, and an input output mechanism 116 couple to the computer system 110 via the input output interface 114. The communications interface 115 allows communication with other computer systems 110-2 through 110-N over the network 101. The architecture of the computer system 110-1 is shown in FIG. 1 by way of example only. It is to be understood that the details of the example computer systems 110-2 through 110-N can be similar to those of computer system 110-1 but are not shown in FIG. 1 due to drawing space limitations.

The memory 112 within each computer system 110 may be any type of computer readable medium such as random access memory (RAM) or read only memory (ROM). The memory 112 may be fixed in or removable from the computer system 110, such as a floppy disk, magnetic disk, optical disk media (e.g., CD ROM) or the like. In one embodiment, the memory 112 is encoded with computer program logic (e.g., software code) that includes a relation manager application 150-1. When the processor 113 executes the relation manager application 150-1, the processor 113 produces a relation manager process 150-2 that executes as explained herein to produce a graphical user interface 132-1 (the example being produced by the relation manager 150 in computer 110-1) on the display 130 for viewing by the user 108. The relation manager application 150-1 and process 150-2 are collectively referred to herein as simply the relation manager 150. When referring to the relation manager 150, it can thus be a reference to the executing process 150-2, the application code 150-1, or both.

Each relation manager 150, in combination with the view manager 178, connections manager 180 and file manager 182, processes files and documents and produces a graphical user interface 132 that provides, to the user 108, visual knowledge representation, dynamically updated content, hosted conversations, and interpretation and management based in part on spatial relationships. To do so, the managers 150, 178, 180 and 182 include a workspace server 151, a news server 152, an exchange server 153 and a database server 154 that each produce, respectively, a Workspace View 300, a News View 305, a database view 310 and an exchange view 315 within the graphical user interface 132. The relation manager 150 adds discovered relations to IOs in a database 125 (the database 125 may already include relations that other people or software products added). The relation manager also provides the algorithms 155 to examine the network of relations to discover potentially relevant information.
The database 125 maintains a history of updates to the IOs to allow for reconstruction of a particular IO at a given time. The workspace server 151 produces the Workspace View 300, which in one configuration is a graphical user work area, such as a desktop, in which a user 108 is able to create and manipulate graphical information objects 320 that graphically represent information objects of interest to the user 108. The Workspace View 300 helps users 108 to create, collect, organize and understand information associated with each IO 320 and the relationships among the information represented by the IOs 320. In one configuration, a generic client application such as a web browser accesses such views from the respective servers 151 through 154, which may execute on the same or different computer systems 110. In another alternative configuration, a dedicated client application includes or provides the Workspace View, the News View, the database view, and the exchange view, and implements the news server, the database server, and the exchange server. It is to be understood that the system described herein may be distributed or centralized and may operate among many computer systems with many users 108.

Information Objects (IOs) are flexible data structures. IOs can include data files (which can themselves include other data files) as well as meta-data (for the display of IO contents and the use of IO functions on different IVs); can present themselves differently depending on the IV they are displayed on while including the same data and functionality on any IV; can be copied or transferred between the same or different types of IVs, and between the IVs of the same or different users; remain synchronized if copied to multiple IVs; can be synchronized with dynamic data sources other than source-IOs; can be moved from an IV to the computer desktop, file system, or 3rd party applications, thus converting an IO into a regular computer file that can be exchanged through conventional means such as email or file sharing; and can be moved from the computer desktop, file system or 3rd party applications to any IV. Information Views (IVs) can include graphical user interfaces that render IOs in different ways (an IV can be described as an optical lens that allows users to view information in different ways) and can enable access to all IO parameters and functions.

Using the system 110, a user can interact with an information object: the system 110 facilitates sharing the IO among a plurality of participants; provides controls for attaching multi-media data to the IO; displays the IO in one of a plurality of views; and, in response to interaction with one of the plurality of views by one of the participants, provides a communications path between at least two of the participants sharing the IO.

Directing attention briefly to FIG. 2, an example layout of an information object (IO) 320 is shown. An IO 320 in this example provides a visually compact, standardized, and abstract representation of information that displays, reminds of, and/or links to the original piece of information associated with that IO 320. Generally, an IO 320 is any graphical representation of an information object and may be considered the information object itself. The term "IO" as used herein can thus include anything from a simple icon that represents something such as a web page, a document, or a live information source, to a more complex representation of such things. Both the layout and the functionality of IOs 320 are highly modular in one configuration.
This means that visual components and computational features can be individually turned on and off, and that additional visual components and computational features can easily be integrated. Furthermore, the colors and fonts of all IO components are customizable. In one embodiment, IOs 320 are visually subdivided into six segments that can expand and contract depending on their contents.

The icon area 321 allows for the placement of graphical and/or textual material that can help users 108 to quickly visually locate IOs among many other IOs as well as to memorize and recall the data, information or knowledge associated with that IO 320. Users 108 can copy and paste pictures and text from various computer applications into the icon area. Users can also drag and drop pictures and text from web browsers and file managers into the icon area 321. Furthermore, users can directly draw and write into the icon area 321. The background color of the icon area 321 is also customizable, allowing users to visually group and highlight IOs. Typically, the icon area 321 includes an IO graphic such as an icon, picture or other graphical information that represents information associated with that IO. As an example, if an HTML document such as a web page is of interest to a user, that user can create an IO to represent that web page. The IO can include a picture from the web page or a reduced-size version of the web page as its graphic.

The IOs 320 include an information bar 322 that accommodates an interface and visual indicators for complementary IO functions. A small rectangular box that includes an icon, or in some cases a numerical counter, represents each available IO function. Three distinct colors visually indicate the status of each function: in one configuration, gray is used for inactive functions, green for active functions, and red for functions that require user attention. Every function can be individually turned on and off. The information bar 322 also allows for the customization and addition of functions specific to particular situations and work tasks. The following is a list of example functions and is not intended to limit the invention to information objects (e.g., IOs 320) including such functions. A reference function allows users to hyperlink or to attach related information. A location function is used to associate IOs 320 with a geographic location. The location algorithm indicates geographic locations for which cards (IOs) are available; in other words, if a card (IO) is associated with a geographic location, then the location algorithm will represent this card with an icon on a map. Multiple cards in similar geographic locations do not produce multiple icons but a single icon of enlarged size. A typical application is a newspaper that marks on a world map all locations for which news is available. A control function enables automatic IO updates. A comment function allows for the addition of comments and annotations. A vote function allows collaborating users to exchange their opinions about the relevance of IOs: every user is given one supporting vote for each card (IO); the icon color indicates the addition of recent and the presence of past votes; and the numerical counter next to the icon displays the total number of supporting votes. An access log function provides users with a detailed record about the IO history.
The log function provides users with a detailed record indicating the dates, times and names of all users that previously viewed, copied or modified the card (IO). The icon color indicates recent and past log entries, and the counter next to the icon displays the number of log entries. This functionality allows users to review the evolution, collaborative use, and authorship history of cards (IOs). A personal note function is used to announce and send IOs 320 to specific users. A priority function allows users to categorize and highlight IOs.

The date/time bar 324 displays the date and time of the most recent IO modification. The relation manager 150 can use the date/time bar indication to reconstruct the chronology of contributions from different users and sources. The author bar 325 displays the name of the user who last modified the IO 320 or the information source the IO 320 has been copied from. The author bar 325 provides an indication of the IO editor and is used to compare contributions from different users and sources. The heading bar 326 allows users 108 to complement IOs 320 with a brief description or some keywords, such as a title. The heading bar 326 effectively complements the icon area 321 by introducing language as an additional means for the abstract representation of data, information and knowledge. While the icon area 321 is particularly useful for the visual navigation of large IO arrangements, the heading bar 326 is focused on supporting the quick and easy recollection of IO meanings and contents.

The creation and use of IOs 320 can fundamentally shift the way analysts and decision-makers comprehend and think about information during sense-making activities. The creation of IOs 320 engages users 108 in a process of converting, standardizing, abstracting, and associating information, while the use of IOs 320 fosters a strong focus on relating information uninfluenced by the format and location of the information. Humans like to think of data, information and knowledge as objects that they collect, compare, and organize. The conversion of data and information (as well as the externalization of knowledge) into virtual and physical objects accommodates this way of thinking. Dealing with virtual and physical objects such as files on a computer desktop or documents on a table effectively increases a human's ability to deal with complex sense-making tasks. A standardized IO size and layout is convenient for collecting, comparing, and organizing IOs 320. Configurations disclosed herein are based in part on the observation that the benefits of standardized information objects are present in various everyday objects. For example, index cards, credit cards, business cards, slides, photographs and postcards are usually of equal size and layout so they can conveniently be stored, accessed, and processed. The abstract representation of data, information and knowledge with IOs 320 engages a human's visual recognition in ways that decrease information access time and allow for the processing of large amounts of information. Furthermore, the process of creating IOs 320 requires users to circumscribe the contents associated with IOs 320 in a visually and mentally fast-accessible and easily comprehensible format, thus encouraging a more careful analysis and understanding of the contents associated with IOs. The concept and use of abstract visual and textual reminders is also present in various everyday objects. For example, desktop icons and thumbnail views allow users to easily locate and organize computer files.
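The segmented layout and tri-color function indicators described above suggest a simple data model. The sketch below is illustrative only; the class and field names are assumptions, not taken from the patent, and it assumes standard-library Python:

    from dataclasses import dataclass, field
    from datetime import datetime
    from enum import Enum
    from typing import List, Optional

    class FunctionStatus(Enum):
        INACTIVE = "gray"          # function is turned off
        ACTIVE = "green"           # function is turned on
        NEEDS_ATTENTION = "red"    # function requires user attention

    @dataclass
    class IOFunction:
        name: str                  # e.g. "vote", "comment", "access_log"
        status: FunctionStatus = FunctionStatus.INACTIVE
        counter: int = 0           # e.g. total votes or log entries

    @dataclass
    class InformationObject:
        icon_area: Optional[bytes] = None   # picture, drawing, or pasted text (321)
        information_bar: List[IOFunction] = field(default_factory=list)   # 322
        modified_at: datetime = field(default_factory=datetime.now)       # 324
        author: str = ""           # last editor or originating source (325)
        heading: str = ""          # brief description or keywords (326)
        source: Optional[str] = None  # link to the original piece of information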
Military ribbons use abstract visual representations to provide service, mission and award specific information on a small clothing area. Traffic signs depend on abstract visual representations that are easy to spot and understand by pedestrians and car drivers. In one configuration, an IO 320 does not include information per se but only serves as a reminder of the presence of a particular piece of data, information or knowledge. The separation between IOs 320 and the content associated with IOs 320 allows for the compact visualization and organization of large amounts of information. An IO 320 may be viewed as a meaningfully labeled hyperlink to a piece of content available in a remote location. Users 108 can easily arrange and rearrange IOs 320 in the workspace 300. Users 108 benefit from this process by developing a good understanding of the IO contents and the relations among IOs 320 (e.g., context). The use of IO arrangements also benefits collaborative sense-making tasks. People of different backgrounds, interests and foci have their unique ways of relating information. The collaborative development of IO arrangements can help people to determine intersecting views as well as to develop a shared understanding of a particular information space. Information objects are thus each represented as respective IOs 320 in this configuration. The IO attribute indicators shown in FIG. 2 can indicate an IO state that can include such things as ownership of the IO, a geographic location associated with the IO, a time of creation of information associated with the IO, and other information.

Depending upon the configuration, each IO 320 can further include features such as an instant messaging capability allowing users in a distributed system to share access to an IO in order to exchange comments or messages between themselves concerning the information associated with the IO. In one configuration, the system displays, within the IO perimeter, at least one messaging icon operable by a user to send a message to an IO associated with at least one other user, thus enabling collaboration. In one configuration, clicking on the messaging icon allows users to add a time/author-stamped message to an IO 320 that all users 108-2 through 108-N can see if they have a copy of this IO 320. For any user 108, the icon lights red if the IO 320 includes new messages that the user has not yet looked at, green if it includes messages that the user previously looked at, and gray if the IO includes no messages. The counter next to the icon displays the number of messages.

It is to be understood that a reference to an IO 320 refers to an implementation of an information object and, as will be explained shortly, algorithms disclosed herein can analyze information about IOs, such as spatial relationships between IOs; such algorithms and processing can be applied to other representations of information objects besides IOs, such as desktop icons, physical locations of physical items (e.g., items on a store shelf), and so forth.

Returning attention back to the graphical user interface 132 in FIG. 1, the Workspace View 300 presents users with an empty canvas for the creation and grouping of IOs 320. In one configuration, the Workspace View 300 can be the desktop of a computer system 110 provided by an operating system, such as the Windows Desktop provided by the Windows family of operating systems made by Microsoft Corporation of Redmond, Wash., USA (Windows is a registered trademark of Microsoft Corporation).
The Workspace View 300 is designed to support individual and collaborative sense-making tasks such as information analysis, planning, decision-making, brainstorming, and problem-solving. The Workspace View 300 functionality will provide users 108 with the means to efficiently create, collect, organize, analyze, and discuss information. In one configuration, IOs 320 may be created manually or semi-automatically. The manual creation of IOs 320 requires the user 108 to position an empty IO 320 on the Workspace View 300 and complement it with a picture, a heading, and a reference to an information source by simply dragging and dropping pictures, texts, hyperlinks, files, or file folders onto IOs 320. The semi-automatic creation of IOs 320 allows users to drag and drop pictures, text, hyperlinks, URLs, files, and file folders directly onto the Workspace View 300. This action will cause the workspace server 151 to create a new IO 320 with the content of the dropped item linked or attached to the IO 320 and with a picture and a heading added to the icon area and the heading bar (to be explained) of the IO 320. Furthermore, users will be able to create cards (IOs) from within various computer applications (after installing a plugin) by clicking a button.

Users 108 may also copy IOs 320 from the News View 305, the database view 310 or the exchange view 315. Furthermore, a user 108 can copy IOs 320 from the Workspace View 300 to the computer desktop or file system (and vice versa), thus converting IOs 320 into regular computer files. This functionality has a variety of applications, such as exchanging IOs 320 by email or converting IOs 320 for use with other software applications. Automated IO creation is provided as well: a user can, for example, specify a file system path, URL or database identity, and the system can traverse the records, documents, web pages, and files within the specified path, database or domain and convert each record, file or page into an IO for use in the system.

Below are steps a user 108 can perform to create a card. The relation manager 150 creates and displays a new card in the workspace view. The user 108 may have performed a drag and drop of some content onto the workspace view 300, or may have selected a pull-down menu in the workspace view to create a new card 320. In response, the relation manager 150 creates a new card record 160 in the database to correspond to the new card in the workspace view 300. The relation manager 150 associates an information source identified by the first user to the new card 320 created in the workspace view. In the case of a drag and drop, the information source can be identified via the information dragged into the workspace view 300. If the user created a new card from a pull-down menu selection, the workspace view 300 can prompt the user to specify a specific information source (that can be hyperlinked or embedded) such as a file system path and filename, a URL or some other indicator. The relation manager 150 stores card attributes and an identity of the information source in the new card record 160 in the card database. The relation manager 150 stores a card creation time in the new card record. The relation manager 150 stores a card position in the new card record 160. The card position indicates positional information such as a corner pixel location (or a hierarchical location such as a specific window, directory, or desktop) as well as a window size of the new card 320 within the workspace view 300 of the first user 108.
The relation manager 150 stores an identity of the first user in the new card record 160. The user for a new card becomes the author of that card 320. Once a card 320 is created, pictures and text can be dragged and dropped onto the icon area, and text can be dragged and dropped onto the heading bar of the card, to cause those areas to change automatically. In this manner, a user 108 can create a card by simply dragging a hyperlink, file or other digital material onto the workspace view 300: a new card 320 is created where the hyperlink or file is dropped, a picture is added automatically (by searching the hyperlink destination or the file contents for an appropriate graphic), the heading is added automatically (usually the web site or file name), and a link or embedded attachment is automatically created from the file or the web site referenced by the hyperlink.

In one embodiment the relation manager 150 allows a user to easily arrange, compare, and evaluate IOs 320, thus ensuring that users 108 will not be distracted by different information formats but focus on information contents. Each individual user 108 can determine the particular arrangement or spatial (vertical, horizontal, overlapping, proximal) layout of the graphical IOs on his or her Workspace View that might indicate meaning to that user 108. For example, a user 108 may group IOs that represent related information sources in relatively close proximity to one another, thus defining tight spatial relationships between those IOs. The system is able to analyze the spatial relationships that exist between IOs 320 in each user's Workspace View 300 in order to identify other information objects (e.g., other IOs 320) that may be of interest to that user 108. Based on this analysis, the relation manager 150 can identify other information sources by showing other IOs 320 that might be of interest to that user. Embodiments disclosed herein are based in part on the observation that the spatial arrangement of a first set of objects, such as IOs 320 in the Workspace View 300, can be used to identify relationships between those objects and can further be used to identify other objects (such as other IOs) that may be of importance to the user who created the initial spatial relation between the first set of objects in the Workspace View 300.

The Workspace View 300 introduces several options for the grouping of IOs 320. One option is to increase the size of one IO 320 so as to accommodate several other IOs 320 inside its boundaries; moving such an IO 320 will drag along all IOs 320 within its boundaries. A second option is to overlap IOs 320; moving an IO 320 inside a cluster of overlapping IOs 320 will drag along the entire cluster of IOs 320. A third option is to use multiple workspaces for the grouping of IOs 320. A fourth option is to link IOs 320 to an entire IO arrangement; a mouse-click onto such an IO 320 will then open another Workspace View 300 and display the IO arrangement. Further details on the contents and layout of IOs 320 will be explained shortly.

The news server 152 produces (i.e., provides information feeds for) the News View 305 to allow users 108 to collect IOs 320 for addition to the Workspace View 300. IOs 320 in the News View 305 represent information from a variety of other information sources. Feed examples include news, alerts, announcements, motion detections on security cameras, emails, IMs, SMSs, sensors, real time search results, and custom content. An analogy for the News View 305 is a "news stand" in which recent or periodic information is available.
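Stepping back to the card-creation steps above, the card record 160 can be read as a small persistent structure holding the source, creation time, position, size, and author. A sketch under the assumption of a generic document store; db, workspace_view, and user are hypothetical stand-ins, and insert(table, doc) is an assumed API rather than one named by the patent:

    import uuid
    from datetime import datetime, timezone

    def create_card_record(db, workspace_view, user, information_source,
                           position, size):
        """Persist a card record when a new card is created in the workspace view."""
        record = {
            "card_id": str(uuid.uuid4()),
            "source": information_source,   # hyperlinked or embedded content
            "created_at": datetime.now(timezone.utc),  # card creation time
            "position": position,           # e.g. corner pixel location in the view
            "size": size,                   # window size of the card
            "author": user["id"],           # the creating user becomes the author
            "workspace": workspace_view["id"],
        }
        db.insert("cards", record)          # assumed insert(table, doc) API
        return record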
In one configuration, the News View 305 displays the contents of IOs 320 from news servers 152 that may be distributed through the network 101 within one or more computer systems 110. IOs 320 in the News View 305 can be organized by time and information sources (e.g., by topic or content area) as well as by geographic locations associated with the information represented by each IO 320, and can be filtered for user-specific keywords. Users 108 are able to copy IOs 320 between the Workspace View 300 and the News View 305. IOs 320 copied from the News View 305 to the Workspace View 300 may be static and not change in content, or such IOs 320 may dynamically adapt to modifications in content so that they are updated as the information source associated with an IO 320 produces new information. In one configuration, the system can display IOs in various arrangements within the News View 305. In particular, the News View 305 can display a timeline arrangement of IOs 320. The timeline arrangement provides a visualization that represents and organizes knowledge using a grid comprising a timeline categorization for IOs on the horizontal axis and a subject categorization on the vertical axis. In an alternative configuration, the news server 152 can display a map arrangement that provides a visualization that represents and organizes knowledge using a world map or a floor plan comprising a geographical categorization for IOs. As an example, a News View can organize and arrange IOs in an ordered list or as a table with rows and columns of IOs organized horizontally by time and vertically by information sources. In such a configuration, new IOs for newly discovered information can be inserted on the left, pushing existing IOs to the right. The time scale can be irregular and optimized to display the largest number of IOs possible. The rows of IOs in the News View can automatically expand and contract depending on the number of IOs to be shown in each row. In this manner, the system disclosed herein provides a novel process for information retrieval and processing based on information collection support. In one example, the News View 305 displays incoming information in IOs 320 from user-selected information sources. The users 108 can copy relevant information from the News View 305 to the Workspace View 300 or can create new information directly in the Workspace View 300 by creating new IOs. The user(s) 108 can study, organize, and categorize the information on the Workspace View 300. In one configuration, if the Workspace View is connected to a server, then all content and established relations are made accessible to this server. A user can use two Workspace Views if he or she wishes to separate information that is public from information that is private. Other users 108 view the information on the news, database, or exchange views 305, 310, and 315 and can also copy relevant information from the news, database, and exchange views 305, 310, and 315 to their personal Workspace Views 300. The system can track and analyze this movement of information from source to destination, and can use this IO usage and movement data, based on user identifiers, locations of IOs, arrangements of IOs, contents of IOs, times of movement and copying, and other such information, in processing algorithms to infer or identify relationships between IOs and to suggest other IOs that reference other information that may be of interest to a user.
The News View 305 provides users 108 with the technology to effectively monitor and visualize additions and modifications from information sources such as web sites, databases, security cameras, alarm systems, sensors, news feeds, and so forth. Every information item is displayed as an IO 320. In one configuration, the News View 305 can organize and arrange IOs 320 in an ordered list or as a table with rows and columns of IOs 320 organized horizontally by time and vertically by information sources, such as by geographic origination or relation of the news story associated with the IO 320. In another configuration, the News View has several visualization options, such as displaying the IO arrangement in a table or displaying the IO arrangement on a geographic map. In one configuration, new IOs 320 for newly discovered information can be inserted on the left in an appropriate row, pushing existing IOs 320 to the right. The time scale can be irregular and optimized to display the largest number of IOs 320 possible. The rows of IOs 320 in the News View 305 can automatically expand and contract depending on the number of IOs 320 to be shown. The News View 305 can include an adaptor for receiving news feeds, and the user interface organizes information chronologically in a subject-time matrix. A News View 305 row can combine multiple data streams so that a row includes a group of data streams. A News View row can also display other data formats (email, search results, sensor data gathered through a file transfer protocol (FTP)) in addition to displaying RSS streams. The News View 305 offers a variety of tools for the exploration and collaborative use of information. As an example, a user 108 can copy IOs 320 from the News View 305 to the Workspace View 300, thus allowing users to easily collect, compare, and organize new information. IOs 320 copied from the News View 305 to the Workspace View 300 may be static or dynamic, as noted above. A static IO 320 is an exact copy of the IO on the News View 305. In one configuration, a dynamic IO 320 continuously updates itself to reflect the most recent (left-most) IO on the News View row it was copied from. FIG. 9, described below, shows another example layout and additional IO features. In this manner, the system disclosed herein provides a novel process for exchanging information and for asynchronous, decentralized, remote, and cross-organizational collaboration. In particular, users 108 use their individual Workspace Views 300 to organize and analyze information. The exchange server 153 produces the exchange view 315 to display cards (IOs) 320 that are concurrently displayed in one or more workspace views 300 of at least one other user 108. The exchange view 315 can display cards (IOs) 320 in prioritized order, or alternatively can display cards 320 the way they are arranged on another user's Workspace. This allows users 108 to view each other's Workspaces 300 as they appear to those other users 108. The exchange view 315 displays (in prioritized or chronological order, in one configuration) the IOs 320 created by all collaborating users 108 (or those with the appropriate access permissions or security levels). The exchange server 153 manages the display and exchange of card (IO) information between exchange views 315 and can include filter features to allow a user 108 to see only cards (IOs) from selected users 108 (as opposed to all cards (IOs) in all workspaces 300 of all users 108). The users 108 can copy relevant IOs 320 from their exchange views 315 to their Workspace Views 300.
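The irregular, optimized time scale described above can be illustrated with a short sketch: time buckets that contain no IOs are simply skipped when assigning display columns, so the row shows as many IOs as possible. The bucket size and naming below are assumptions of the sketch.

```python
# A minimal sketch of the "irregular time scale": time segments that contain
# no IOs are removed so the row shows the largest number of IOs possible.
from collections import defaultdict

def compress_timeline(items, bucket_seconds=3600):
    """Map each (item_id, timestamp) to a column index, skipping empty buckets."""
    buckets = defaultdict(list)
    for item_id, ts in items:
        buckets[ts // bucket_seconds].append(item_id)
    columns = {}
    for column, bucket in enumerate(sorted(buckets, reverse=True)):
        for item_id in buckets[bucket]:   # newest bucket becomes column 0 (left)
            columns[item_id] = column
    return columns

feed = [("io1", 7_200), ("io2", 7_900), ("io3", 90_000)]  # a long empty gap
print(compress_timeline(feed))  # {'io3': 0, 'io1': 1, 'io2': 1}; gap removed
```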
Users 108 can take "ownership" of IOs 320 copied from their exchange views 315, and these IOs 320 are then modifiable by the user 108. In one configuration, when a user takes ownership of an IO 320, the IO 320 is no longer synchronized with its counterpart IO on the IO author's Workspace View. In another configuration, once an IO 320 is owned exclusively by a user 108, the IO 320 can no longer be used for collaboration such as "instant messaging," IO commenting, or "IO voting." FIG. 3 shows a diagram 360 of an IO 320 as viewed in several IVs. The diagram 360 also shows how IOs present themselves differently on different IVs and can be copied between IVs. The following are examples of the various IVs (not intended to be limiting). The News View 362 displays the contents of RSS streams (and other dynamic information sources) as IOs arranged in a table, organized by time in the horizontal direction and by information sources, subject, or category in the vertical direction. New IOs are inserted on the left, pushing existing IOs to the right. The time scale is irregular and optimized to display the largest number of IOs possible (meaning time segments without IOs are automatically removed). The rows automatically expand and contract depending on the number of news items (making all rows display similar numbers of IOs). Users can add RSS files by simply dropping an IO or an RSS URL onto a row in the News View. A News View row can include multiple RSS streams (multiple streams are automatically combined). The News View can be used to display a wide variety of information such as newspaper news, emails, instant messages, motion activity on web cameras, eBay and Craigslist postings, and company-internal announcements and alerts. The News View can also be used as an RSS feed generator. For example, a user could create an empty News View that all users can view and add IOs to. Users can copy News View rows or individual IOs to other IVs. A copy of a News View row onto a Workspace View results in a single IO that dynamically updates itself and that displays the most recent item in the associated RSS feeds. The primary purpose of the News View is to help users monitor large numbers of RSS feeds in a quick and easily comprehensible format. The Workspace View 364 displays IOs in a game-card-like format. The visual components of the IOs are customizable and expandable. Users can modify IO parameters in a so-called IO editor (see Illustrations 3 and 4). For example, users can add files to IOs, view and modify ratings, or discuss the IO contents with other users through the built-in instant messaging system or shared white board. If a user moves a regular file or a URL from a computer file system or a web browser onto the Workspace View, then an IO is automatically created with the file attached or the URL hyperlinked. The Workspace View is designed to allow users to easily collect, organize, and compare information in different formats and locations. For example, a Workspace View may be used for web shopping, allowing users to quickly drag and drop items of interest from different web sites (such as Amazon, eBay, and Craigslist) onto the Workspace View for the subsequent comparison of options and prices. Since IO components can be dynamic (meaning that they can automatically retrieve content updates from dynamic information sources), the user can also use a collection of IOs to monitor changes to prices and bids. The Map View 366 is a geographic map display that represents IOs as location items.
A location item presents itself as one or more symbols (of choice) on the map. The location item allows access to all IO functionality available in other IVs. If the geographic location associated with a location item is dynamic, then the symbol automatically updates its location on the map. If a location item is copied to a Workspace View, then it presents itself as an IO with the geographic location accessible and modifiable through the IO Editor. The Map View is primarily designed for military use but has a wide range of commercial applications as well. For example, a user could create a public Map View with hiking paths. Other users could copy hiking paths of interest from this public Map View to their personal Map Views or other IVs, discuss hiking paths through the IO instant messaging feature, or rate the quality of hiking paths through the IO rating system. The Project View 368 is a time planning and calendar display (similar to MS Project) that allows users to represent IOs as time bars. IOs in this IV also maintain all the contents and functionalities of IOs displayed on other IVs. Moving an IO from the Project View to the Workspace View creates an IO that displays the time frame with a time bar on the IO or a time frame in the calendar tool of the IO Editor. The Project View is primarily designed for people who need to compare and modify time-related IO parameters. For example, the Project View may be used to review and modify IOs that represent tasks scheduled for execution during particular time frames, resources that are only available during particular time frames, or people who are only available during particular time frames. The Graph View 370 displays IOs as line or bar graphs. This particular IV is primarily designed for comparing statistical data associated with IOs such as stock quotes, bids, and sensor data. The List View 372 is a spreadsheet-like display (similar to the Microsoft® Office Excel spreadsheet program) that allows users to represent IO parameters in a table format. The List View is primarily designed for people who need to compare and modify particular IO parameters such as, for example, the cost of sales items or the specifications of resources referenced by individual IOs. Now referring to FIGS. 4A and 4B, screen shots show News Views that display news from multiple different newspapers (FIG. 4A) and postings by different users (FIG. 4B). FIG. 5A shows a diagram of IOs being exchanged between the IVs of different users. FIG. 5B shows a diagram of IOs being exchanged between Private, Visible, and Shared Views. There are various ways in which IOs can be exchanged between the IVs of the same and of different users, such as: by copying IOs from the personal IV of another user (users can make their personal IVs accessible to other users; an IV can be Shared 510 (read/write), Visible 512 (read), or Private 514); by sending IOs to a particular user or group of users through the integrated instant messaging system or a commercial instant messaging system; by sending IOs to a particular user or group of users through email; by making IOs available through shared file systems; by retrieving IOs from the database (the database automatically collects and organizes IOs from the IVs of different users, and a variety of data visualization and access tools allow for the easy retrieval of IOs from the database, for example, in conjunction with an EWall Agent System); and by retrieving IOs from a Discovery View.
The Discovery View presents individual users with a custom selection of potentially relevant IOs and, in one embodiment, consolidates the Exchange View (315) and the Database View (310). In this embodiment, the Discovery View not only displays related information in the IO database 125 but can also display related information found by other web services (e.g., related products for one selected product provided by online sellers or search engines). The Database Views (Discovery Views) 310 of individual users 108 display (in prioritized order) cards (IOs) 320 available in the database (as records 160) that are related or relevant to the cards (IOs) 320 on a user's workspace view 300, based on the relations discovered via application of the relation processing algorithms 155. The relation manager 150 is able to analyze the spatial arrangement of the cards 320 as provided by the user 108 in the workspace view 300 in order to deduce or infer relations between the information sources associated with the cards 320. In response, the database server 154 can provide other cards, based on matching card records 160 maintained in the card database 125, that may be of interest to that user based on the analysis of the spatial arrangement of the cards 320 on that user's workspace view 300. The database server 154 is thus capable of inventing or deriving additional relations 170 based on the analysis of previous relations. For example, if card A is related to card B and card B is related to card C, then the database server 154 can decide to relate cards A and C because the indirect relation suggests a partial correlation. In this manner, the system disclosed herein provides a novel process for organizational knowledge management based on information merging. Users 108 use their individual workspace views 300 (or operating system desktops and file systems) to organize and analyze information. Additions to the workspace views of all users are collected in the card database 125. The database items, including the relations 170 and the card records 160, are combined into one coherent network of cards using the relation processing algorithms 155. The number of relations 170 between individual cards 320 may differ and can indicate the "strength" or "importance" of a relation 170 (or its unimportance or weakness). Relations 170 in the database 125 may be complementary or conflicting, speculative or concrete. The activities and contents on the individual users' Workspace Views are monitored and recorded by the Recognition Functions. The Recognition Functions maintain a record of all active and erased cards, including a list of all current and past owners and users of individual cards. The Recognition Functions record and combine all explicit relations among cards established by the users as well as all implicit relations established by the Algorithms. The Recognition Functions keep track of all interactions that take place among users. An interaction is registered if a user copies a card, views the content associated with a card, or adds a comment, vote, or notification to a card. The relation manager 150 analyzes card (IO) records 160 and card relations 170 within the card (IO) database 125.
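Where the description above notes that an indirect relation (card A to card B, card B to card C) suggests a partial correlation, the following Python sketch shows one plausible way such derived relations could be computed; the weighting scheme and names are assumptions of this sketch, not features recited by the disclosure.

```python
# Sketch of deriving indirect relations: if A relates to B and B relates to C,
# a weaker speculative relation A-C can be added (names are illustrative).
from itertools import combinations

def derive_indirect(relations):
    """Given direct relations as (a, b, weight) triples, propose A-C links."""
    neighbors = {}
    for a, b, w in relations:
        neighbors.setdefault(a, {})[b] = w
        neighbors.setdefault(b, {})[a] = w
    derived = []
    for mid, adj in neighbors.items():
        for a, c in combinations(sorted(adj), 2):
            if c not in neighbors.get(a, {}):          # not already related
                weight = min(adj[a], adj[c]) * 0.5     # partial correlation
                derived.append((a, c, weight))
    return derived

direct = [("cardA", "cardB", 1.0), ("cardB", "cardC", 0.8)]
print(derive_indirect(direct))   # [('cardA', 'cardC', 0.4)]
```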
The results of this analysis can produce an indication of other card records 160 in the card database 125 that correspond to other cards 320 that are not currently displayed in the workspace view 300 of the user 108, but that might be of interest to that user operating that workspace view 300. A transformation algorithm 155 can, for example, compare the combination of spatial relationships 170 that identify closely placed cards 320 in the workspace view 300. For those cards 320 that are close to each other, the transformation algorithm 155 can compare the content of those cards (i.e., can compare the information sources referenced by these cards) to identify a common subject. Alternatively, the transformation algorithm 155 can compare the creation times of closely spaced cards. In another alternative, the transformation algorithm 155 can compare the user identities of users 108 who have accessed these cards. Using these metrics (e.g., a common creation time window, a common subject, or common users or use patterns), the transformation algorithm 155 can identify, in order of relevance, a set of resultant cards not already placed in that user's workspace 300. These can be shown to the user in the database view 310. Additionally, the current arrangement of exchange view cards 320 can be reordered to reflect the newly discovered relations of the group of cards, so that cards of other users that might be of importance to the user (and that are already in the exchange view) are moved toward the front of the line or list in the exchange view to be more noticeable to that user. In one configuration, cards 320 displayed on views not only hint at relevant information but also inform about people (the authors of the cards) with relevant knowledge on a particular topic. This can foster the creation of teams or communities of interest and help to build knowledge-based relationships between people based on the information objects those people spatially organize in particular ways. The relation manager 150 applies at least one interpretation algorithm 155 between card (IO) records 160 in the card database 125 to create relations 170 between card records 160 in the card database 125. The relation manager 150 applies a spatial relation algorithm 155 that discovers spatial relations 170 between the position information of at least two card records 160 in the card database 125 that are associated with cards 320 displayed on the graphical user interface 132 in the first set of cards. The relation manager 150 identifies implicit relations between cards in the first set of cards displayed on the graphical user interface based on the proximity of cards on the graphical user interface. Thus, how close or far apart cards are can determine a strong or weak relationship between cards 320. The relation manager 150 identifies implicit relations between cards in the first set of cards displayed on the graphical user interface based on the horizontal alignment of cards 320 on the graphical user interface 132. Horizontal alignment can be suggestive of a list of items, and transformation algorithms 155 can perform analysis of cards 320 arranged horizontally to suggest other cards that might be successor cards in the list. The relation manager 150 identifies implicit relations between cards in the first set of cards displayed on the graphical user interface based on the vertical alignment of cards on the graphical user interface. Likewise, vertical arrangements of cards 320 might also be suggestive of a card list.
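One plausible reading of the transformation algorithm described above is sketched below: each candidate card is scored against the cards already on the user's workspace using common subject keywords, overlapping user access, and a shared creation-time window, and candidates are shown in descending score order. The field names, weights, and time window are assumptions of this sketch.

```python
# Hedged sketch of scoring candidate cards against a user's workspace using
# the metrics mentioned above: common subject keywords, a shared creation-time
# window, and overlapping user access histories. Field names are illustrative.
def relevance_score(candidate, workspace_cards, time_window=3600):
    score = 0.0
    for card in workspace_cards:
        score += len(candidate["keywords"] & card["keywords"])          # subject
        score += len(candidate["users"] & card["users"])                # users
        if abs(candidate["created"] - card["created"]) <= time_window:  # time
            score += 1.0
    return score

workspace = [
    {"keywords": {"lavender", "aroma"}, "users": {"u1"}, "created": 1000},
]
candidates = [
    {"id": "c1", "keywords": {"aroma", "oil"}, "users": {"u1", "u2"}, "created": 1500},
    {"id": "c2", "keywords": {"stocks"}, "users": {"u3"}, "created": 99999},
]
ranked = sorted(candidates, key=lambda c: relevance_score(c, workspace), reverse=True)
print([c["id"] for c in ranked])   # ['c1', 'c2']; c1 is shown first in the view
```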
The relation manager 150 identifies implicit relations between cards in the first set of cards displayed on the graphical user interface based on at least some overlap of cards on the graphical user interface. Overlapping cards can suggest a grouping relationship. It is to be understood that the spatial relations noted above are not limited to vertical, horizontal, and overlap relations. Other spatial relations can exist and be analyzed as well, such as relative size relations between cards, or exclusive (i.e., alone) or particular placement (e.g., in a certain corner, in the center, etc.) of cards 320 in certain regions within the graphical user interface 132. Additionally, spatial processing algorithms can recognize card clusters that define groups of cards in close proximity to one another, creating a cluster that is distinct from other clusters of other cards 320. The relation manager can detect and analyze other relations as well besides spatial relations. As an example, context relations concerning use, access (e.g., copying), or other operations on cards can be detected that are based on the time of card creation or modification, which users access a card, and the relation of content between cards that have similar or related subject matter. The relation manager 150 applies at least one context relation interpretation algorithm 155 to discover contextual relations between card records 160 in the card database 125. The contextual relations indicate commonality between at least two card records 160 based on one or more relationship criteria. In one configuration, the relation manager 150 can detect and analyze the chronological context of the at least two card records (e.g., the sequence of creation or modification of card information). In another configuration, the relation manager 150 can detect and analyze the collaborative context established by at least two users 108 of at least two cards 320 respectively associated with the card records 160. An example of collaborative context is when two or more users share two (or more) cards, such as by providing comments to those cards, by using instant messaging to discuss information associated with the cards, or via other common access (e.g., copying to a workspace) to the cards by those two or more users. In another configuration, the relation manager 150 can detect and analyze similarities of content identified by the at least two card records using keyword, subject, or title matching, for example. In this manner, using contextual relations, the relation manager 150 can determine, for example, that if users A and B use the same card, then other cards used by users A and B are more likely to become related. Card content can include content displayed on the card, parameters associated with the card, information content hyperlinked to the card, or metadata about the card (e.g., creation time, time of last access, time of last copy, user access history, etc.) that might not be readily visible from viewing the cards in a view. In one configuration, the database contents are evaluated based on the relations between cards. The relation manager 150 can thus establish relationships among cards 320 based on a comparison of content and the spatial history of card usage. Content relations are established based on a comparison of card contents such as headings, contents of hyperlinked files, and image descriptions, whereas context relations are established based on the spatial location of cards, the collaborative use of cards, or the virtual organization of cards (e.g., the hierarchical organization of files in computer directories).
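The contextual-relation criteria just described can be illustrated with a small sketch: two cards become related when they share users (collaborative context), were created close together in time (chronological context), or cover the same subject (content context). The thresholds and field names below are assumptions of the sketch.

```python
# Sketch of the contextual-relation idea: more matching criteria between two
# cards suggests a stronger relation. Thresholds and fields are assumptions.
def contextual_relation(card_a, card_b, time_window=600):
    reasons = []
    if card_a["users"] & card_b["users"]:
        reasons.append("collaborative")
    if abs(card_a["created"] - card_b["created"]) <= time_window:
        reasons.append("chronological")
    if card_a["subject"] == card_b["subject"]:
        reasons.append("content")
    return reasons   # more reasons -> a stronger relation

a = {"users": {"u1", "u2"}, "created": 100, "subject": "travel"}
b = {"users": {"u2"}, "created": 400, "subject": "travel"}
print(contextual_relation(a, b))   # ['collaborative', 'chronological', 'content']
```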
For example, two relations between two cards can be valued more than one relation between two cards. This is because two relations between two cards mean that two different algorithms both established a relation between these two cards, although such relations may have been established for different reasons. Four distinct types of Database Algorithms examine the networked contents of databases by defining a starting point, a search path, an end point, and the order in which the search results are returned. The first type of Database Algorithms defines the Start Point of a database search. The Start Point refers to a node (IO) in a database from which directly and indirectly related information is retrieved. A Start Point may be defined through a conventional database query that contains text or graphical information. This means that a Start Point might reflect a node that contains similarities with information that a particular user is currently working with, focusing on, or searching for. A Start Point can also be the most recently active node in the database, such as the most recently added, retrieved, traversed, or modified node. Furthermore, a Start Point might be defined through so-called "Stimuli" or "Focal Points" that are placed and dynamically displaced by the Database Algorithms. The Start Point continuously changes its location, and multiple Start Points may exist simultaneously. In the case of multiple Start Points, the search progresses in parallel from multiple locations and in multiple directions. Every search returns the node referenced by the Start Point as well as a selection of directly and indirectly connected nodes. The selection of related nodes is defined by the Search Path. By default, the second node returned is the adjacent node with the most relations. The same procedure applies for all subsequent nodes unless specified differently by the Search Path. The second type of Database Algorithms defines the Search Path. The Search Path determines how the network is traversed. Every search initiates at a Start Point and proceeds by examining adjacent nodes. By default, adjacent nodes with many relations are examined first. Previously examined nodes are ignored. The Database Algorithms offer a variety of options for the examination of less heavily connected nodes. For example, the Curiosity Algorithm is used to increase the probability of examining weaker branches. In addition, the Persistence Algorithm determines how many nodes associated with weaker branches will be considered. Other Database Algorithms dynamically define and modify the Search Path based on the relevance of currently explored nodes. The relevance of nodes is determined based on the card contents or the card authors. For example, a card may not contain relevant content, but its author may be known for having created cards that contain relevant content. Thus, the author is considered a specialist in this particular area, and the cards created by the author are given preference. The third type of Database Algorithms defines the End Point of a Search Path. The End Point may be determined through an analysis of the card contents, the distance from the Start Point, or the time passed since the start of an examination. In other words, the End Point represents either a satisfying search result or a state of decreasing patience or tiredness. For example, the Patience Algorithm terminates a search after the examination of a certain number of nodes.
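The following sketch illustrates, under stated assumptions, the flavor of the Search Path behavior described above: a bounded walk that prefers heavily connected neighbors, occasionally promotes a weaker branch (in the spirit of the Curiosity Algorithm), and stops after a fixed number of nodes (in the spirit of the Patience Algorithm). It is an interpretation for illustration only, not the patented algorithms themselves.

```python
# Illustrative sketch of the traversal behavior described above: start at a
# node, prefer heavily connected neighbors, occasionally explore weaker
# branches ("curiosity"), and stop after a fixed number of nodes ("patience").
import random

def explore(graph, start, patience=5, curiosity=0.2, rng=random.Random(0)):
    visited, frontier, result = {start}, [start], []
    while frontier and len(result) < patience:
        node = frontier.pop(0)
        result.append(node)
        neighbors = [n for n in graph.get(node, []) if n not in visited]
        # By default examine well-connected neighbors first...
        neighbors.sort(key=lambda n: len(graph.get(n, [])), reverse=True)
        # ...but with some probability promote a weak branch to the front.
        if len(neighbors) > 1 and rng.random() < curiosity:
            neighbors.insert(0, neighbors.pop())
        visited.update(neighbors)
        frontier.extend(neighbors)
    return result

g = {"A": ["B", "C"], "B": ["A", "C", "D"], "C": ["A", "B"],
     "D": ["B", "E"], "E": ["D"]}
print(explore(g, "A"))   # a bounded walk outward from the Start Point "A"
```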
The fourth type of Database Algorithms orders the search results. By default, the search result is a string of nodes that reflects the sequence in which the nodes were examined. Other criteria for organizing the search results include the distance of nodes from the Start Point, the number of relations associated with individual nodes, or card properties such as card sizes, modification dates, authors, contents, notifications, comments, votes, accesses, font types, font styles, font sizes, pictures, and locations. The Unity Algorithm sorts cards (IOs) by the number of current and past users of individual cards. A card user is defined as somebody who created, copied, took ownership of, reviewed the content of, commented on, or voted for a particular card. The Unity Algorithm assumes that the activity associated with a card is a meaningful way to determine the importance and popularity of the card. The Weight Algorithm sorts cards by the number of relations associated with individual cards. For example, card A may be related twice with card B and once with card C. Thus, card A is associated with three relations, card B with two relations, and card C with one relation. The Weight Algorithm assumes that the number of links associated with a card is a meaningful way to determine the importance and popularity of the card. The Relevance Algorithm is the same as the Weight Algorithm yet only considers relations that connect with cards the user is using. For example, a card on a user's Exchange View may be related to four cards, only one of which the user is using on his Workspace View. Thus, the number of associated relations for this card is assumed to be one rather than four. The Relevance Algorithm is particularly useful to spot cards that have some relevance to cards the user is already using. The Group Algorithm fosters the information exchange among users that have had frequent interactions in the past. The Group Algorithm considers an interaction to occur when a user copies a card from another user, takes ownership over another user's card, or adds a comment or a vote to another user's card. The Group Algorithm orders cards based on the number of interactions that a particular user has had with the individual card authors. FIG. 6A is a diagram of an exemplary GUI Organization including graphical user interface 610 for the management of IVs in accordance with one example embodiment disclosed herein. FIG. 6B is a more detailed diagram of the exemplary GUI Organization of FIG. 6A. In one embodiment, the GUI Organization includes three columns. The left column 620 includes displays 622 and 624 provided by the View Manager 178 and the Communication Manager 180, respectively. The View Manager 178 displays all available IVs, allows users to open individual IVs, and enables users to modify the access rights of individual IVs. The View Manager 178 also displays representations of servers, databases, agent systems (e.g., the EWall Agent System), and users in the collaborative environment. The Communication Manager 180 allows users to communicate with other users, exchange IOs with other users, access the IVs of other users, and import IOs from databases and agent systems, as well as to control the export of IOs to databases and agent systems. The middle column 630 is used to display one or more IVs 632a-632n. The right column 640 includes the IO Editors 640a-640n. IO Editors 640a-640n are tools that allow users to view and modify IO parameters and operate IO functions. The information displayed in IO Editors always refers to the currently selected IO or group of IOs.
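The Weight and Relevance orderings described above can be sketched directly from their definitions: the code below counts all relations per card for Weight, and only relations touching cards the user is already using for Relevance, matching the card A/B/C example given above. The tuple-based relation format is an assumption of the sketch.

```python
# Sketch of the ordering step: Weight counts all relations per card, while
# Relevance counts only relations that connect to cards the user already uses.
from collections import Counter

def order_cards(relations, cards, in_use):
    weight = Counter()
    relevance = Counter()
    for a, b in relations:
        weight[a] += 1
        weight[b] += 1
        if b in in_use:
            relevance[a] += 1
        if a in in_use:
            relevance[b] += 1
    # Sort primarily by relevance, then by overall weight.
    return sorted(cards, key=lambda c: (relevance[c], weight[c]), reverse=True)

rels = [("cardA", "cardB"), ("cardA", "cardB"), ("cardA", "cardC")]
print(order_cards(rels, ["cardA", "cardB", "cardC"], in_use={"cardC"}))
```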
FIG. 6C is a diagram of an alternate exemplary GUI 660 in accordance with one example embodiment disclosed herein. Graphical user interface 660 for displaying an information object includes a discussion subject 602, a message indicator 603, a threaded discussion window 607, a list of participants linked to the information object 608, a collaborative rating 605 including a ranking of importance, a buddy list 601, a discussion heading 604, a list of attachments 606, and meta-data and shared applications 609. In other embodiments, the graphical user interface for displaying information objects includes a geographic map, category keywords, and auction information. In other embodiments, the attachments include thumbnails of multimedia data. FIG. 7 is a diagram of a GUI 700 showing several features of the system 110 available to the user, including control 710, which when clicked sends or receives an IM message about a view. This is a means for enhancing a hosted conversation, for example in a chat room. The GUI 700 further includes control 712, which is clicked to send or receive an IM message about an IO, for example in a chat room. The following controls can also be used: control 714 allows a database to collect information from the corresponding window; control 716 disallows a database from collecting information from the corresponding window; control 720 sends an IO to a user or a group of users; control 724 disallows a user from viewing details of the corresponding view; control 726 allows a user to view/modify the corresponding view; control 728 sends or receives an IM message or IO to or from a user, for example in a chat room; and control 730 sends an IM message or IO to a group of users, for example in a chat room. Indicator 718 indicates available resources in a database (cards, visualizations, statistics, etc.), and indicator 722 indicates a view of another user that is open for viewing or modification. GUI 700 allows users to import and export IOs from and to IM systems, other users' views, shared views, e-mail tools, databases, and agent systems. FIG. 8 is a diagram of IO File Management 800, which allows an IO to display the content of different types of files 802, including content from: a web site 810, flash applications 812, movies 814, pictures, text from office documents 816, PDF documents 818, and other documents 820. In operation, the IO file manager 182 processes attachments and hyperlinked web content from the different file types 802 and remote sources 806, including remote RSS files 822, remote objects 824, and remote files 826, to create and dynamically update IO contents. A user 108 can create an IO by simply dragging a hyperlink, file, or other digital material onto the Workspace View 300. A new IO is created where the hyperlink or file is dropped, a picture is added automatically (by searching the hyperlink destination or the file contents for an appropriate graphic), the heading is added automatically (typically the web site or file name), and a link or embedded attachment is automatically created from the file or from the web site referenced by the hyperlink. FIG. 9 is a diagram of IO components expanding on the basic IO components described above in FIG. 2. IOs include default and custom parameters and functions. IO parameters and functions are best explained with IOs rendered on Workspace Views. IOs can display a large number of IO parameters and functions without the need for the user to access the IO Editor. The IO layout can be customized to display only the IO parameters and functions the user wants to see.
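As a minimal sketch of the automatic picture and heading extraction described above, the following standard-library Python code pulls a page's title and first image source from fetched HTML; a production filter would handle many more cases, and the class name here is illustrative.

```python
# A minimal sketch, using only the standard library, of how dropping a URL
# could yield an IO heading and picture: parse the page's <title> and first
# <img> tag. A production filter would be far more robust.
from html.parser import HTMLParser

class IOMetadataParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.heading = ""
        self.picture = None
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "img" and self.picture is None:
            self.picture = dict(attrs).get("src")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.heading += data

parser = IOMetadataParser()
parser.feed("<html><title>Lavender Oil</title><img src='oil.jpg'></html>")
print(parser.heading, parser.picture)   # heading and picture for the new IO
```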
The following list provides a few examples of IO parameters and functions as they would appear in different sections (Bars) of an IO on a Workspace View. The Picture/Text bar 902 allows the insertion and display of text, pictures, PDF files, active Flash applications (which can run inside the Picture/Text Bar), Office documents, and additional applications, file types, and multi-media data. The Function Bar 910 displays icons that serve as status indicators and that link to particular IO Editors. A Bubble Icon 912 opens an IO Editor window that allows users to exchange comments, share a white board, contribute to a forum, or insert notes. The "bubble" icon 912 is rendered white if no user has added any information, gray if one or more users have added information, and red if one or more users have added information that the viewer has not yet looked at. An optional calendar tool allows users to compare their availabilities and schedule a "chat" session ahead of time. A Flag Icon 914 links to a map display in the IO Editor, allowing users to view and modify geographic locations associated with an IO. A Person Icon 916 indicates whether a particular IO is used by other users and links to an IO Editor that allows the IO owner to view the list of IO users as well as to assign access restrictions to IO contents. A File Icon 918 allows users to add and access (any number of) attachments and hyperlinks to an IO and allows for contents to be partially or fully encrypted. An Arrow Icon 920 allows users to add, modify, and control dynamic information sources (e.g., to control automatic updates to IO contents retrieved from dynamic information sources such as email accounts, RSS feeds, sensors, motion detectors, security cameras, or web sites). Users can activate one or more Reliability/Ratings/Votes/Importance/Priority Bars (collectively referred to as bars 924) for the individual and collaborative evaluation of information. IOs can be evaluated on a scale from 1 to 5 (represented by 5 boxes). Filled boxes represent the average evaluations of all IO users, while the arrow symbol indicates the individual rating of the IO viewer. The Trade bar 930 and the Value/Value Change bar 928 comprise a trading feature that allows users to trade IO contents or items referenced by IOs. For example, an IO could represent a stock (market item) and allow users to monitor, buy, and sell stocks; an item on eBay or Craigslist and allow users to monitor and manage bids; or an item on Amazon and display its current purchase price. Users associate IOs with one or more categories using a Category bar 932, which allows for faster IO searches on IVs and databases. Users associate IOs with a time or time frame using the Time Frame/Schedule bar 934, displayed on IOs, for example, as a point or as a bar on a time scale. The time scale is the same for all IOs and can be set in the preference menu, e.g., left side of the bar = -12 hours, middle = now, right = +12 hours. The point or bar on the time scale moves from left to right in accordance with real time. Graphic display 940 shows various display options for icons on the various function bars. In one embodiment, the system 110 provides a hosted conversation among a group of participants using the integrated instant messaging system facilities, which provide a communication system linking an IO to a group of participants. Each system component (IO, IV, Database, Agent System) and participant is linked to the instant messaging system (also known as a chat room).
The IO can include a link to a document, links to the group of participants, meta-data related to the conversation, and the communication system which links the IO to the group of participants. The process of initiating a hosted conversation begins when a user starts a discussion or a collaborative project by first creating a new IO and then adding data and meta-data and linking participants to the IO. For example, to communicate about the contents of an IO, the user would click the bubble icon 912 on this IO in the user interface on the client system to "chat" (about this IO) with any other participant who maintains a copy of this IO. Clicking on the "bubble" icon 912 associated with a participant (or group of participants) listed in a contact list and linked by the connection manager 180 would initiate a "chat" session with those participants. The hosted conversation includes a list of participants linked to the at least one information object and an IV to visualize the at least one information object. Clicking on the "bubble" icon 912 associated with the IV in the View Manager would initiate a "chat" session (about this IV) with every other participant who has access to this IV. The integrated instant messaging system is also used to exchange IOs between users. An IO can simply be dragged and dropped onto a user icon, which subsequently sends a copy of this IO to the intended recipient. The instant messaging functionality can be complemented with any other IO parameter and function such as, for example, a function that allows users to assign ratings to databases, agent systems, users, or IVs. One important benefit of this system is that it allows users to stay engaged in a large number of discussions about particular subjects represented by different users, IOs, or IVs. Advantageously, the user can organize and monitor a large number of IOs where every IO represents a different issue that may be discussed with a different group of participants. Thus, system 110 provides a graphical interface for displaying an information object which includes displaying a threaded discussion window, a list of participants linked to the information object, a collaborative rating including a ranking of importance, a buddy list, a discussion heading, and a list of attachments. FIG. 10 is a diagram of IO operations including IO creation. IOs can be created manually or automatically. The automatic creation converts a non-IO object or file into an IO if a user imports a non-IO file from any application into an IV. For example, if a user drags the URL of an item for sale on the Amazon web site onto the Workspace View, the resulting IO would display an image of the item in the Picture Bar, the name of the item in the Heading Bar, the cost of the item in the Value Bar, and hyperlinks to the Amazon web site where the item is advertised. Filters include the rules for, and manage, the conversion from non-IO files to IOs. Generally, IOs can be manually created, such as by creating a blank IO and then specifying an information source to which that IO refers. The look of the IO can be complemented with a picture, explanatory text, a heading, and other visual attributes by simply dragging and dropping pictures and text fragments onto the IO. Alternatively, in another mode, IOs can be semi-automatically created in a Workspace View or on a desktop by dragging an information object such as a hyperlink, document, file, or URL onto the Workspace View, in which case a new IO is automatically formed and associated with the hyperlinked data, file, or document.
The IO picture and the IO heading are added automatically if an appropriate picture and heading can be extracted from the dragged information object, or if an appropriate picture can be found in a database that matches the file name or some of the content of the dragged information object. In yet another mode, an automated IO creation process can traverse a set of records or files, such as a database or file system or all web pages within a certain URL, and can perform automatic creation of IOs from each database record, web page, or file. As an example, entire databases or file systems, or all documents below or within a certain URL or domain on the Internet, can automatically be converted into IOs using the semi-automatic creation of IOs but in a non-manual way that does not require a user to drag and drop each file, document, or URL onto a view. The following Workspace View examples detail several exemplary user interactions with IOs and IVs. A user can drag a URL from a web browser onto a Workspace View, which will create an IO that hyperlinks to that URL and automatically extracts (from the URL destination) a picture and a heading that are displayed in the IO's picture and heading bars. The user can drag an RSS URL onto a Workspace View, which will create an IO that visualizes the most recent item in the RSS file(s) and that continuously updates itself. The user can drag a regular document (file) onto a Workspace View, which will create an IO with the document attached. Dragging a document onto an IO also attaches the document. The user can import an IO from another user into the Workspace View, which will create an IO copy that is synchronized with the original IO. The user can drag an IO from the Workspace View to the computer desktop, which will create an IO file. The user can drag an IO from a News View row to the Workspace View, which will create an IO copy that is synchronized with the original IO. The user can drag a News View row to the Workspace View, which will create an IO that visualizes the most recent item in the RSS file(s) and that continuously updates itself. In another embodiment, a user can drag a buddy icon from an instant messaging system onto an IO, or drag an IO onto the buddy icon, to initiate a conversation or to link the "buddy" to the IO. In this manner, links are formed between the information object and one of the plurality of participants by selecting the information object and selecting the participant from a displayed list of participants, and the system 110 associates participants and information objects by indicating a connection between an icon from an instant messaging system and the information object. The following News View examples detail several exemplary user interactions with IOs and IVs. The user can drag one or more RSS URLs onto a News View row, which adds and displays the content of the RSS URLs on the News View row as a sequence of IOs. The user can drag and drop a Workspace View ID onto a News View row, which will display IO additions and modifications on that Workspace. The user can drag and drop an IO onto a News View row, which will add that IO to the IO sequence. The user can drag an IO from a News View row to the computer desktop, which will create an IO file. The user can drag a News View row to the computer desktop, which will create an IO file. FIG. 11 diagrams the overall process of interacting with an information object. In step 1110, the system provides the information object (IO), for example on a GUI display as described above.
A new IO can be created from scratch or by using a filter, or an existing IO can be shared, copied, transferred, etc., as described above. Next, in step 1120, the IO is shared among a plurality of participants. In step 1130, multi-media data can optionally be attached to the information object. In step 1140, the information object is displayed in one of a plurality of views. Finally, in step 1150, in response to interaction with one of the plurality of views by one of the participants, a communications path between at least two of the participants sharing the information object is provided by the connection manager 180. FIG. 12 is a flow chart of processing steps showing additional details, in conjunction with flow chart 1100 of FIG. 11, that system 110 performs to allow interactions with an information object in accordance with embodiments disclosed herein. In one embodiment, step 1210 provides the information object by filtering a web page associated with a URL to create the information object, by automatically extracting meta-data including at least one of a heading, a picture, and a link to the URL. In step 1220, the information object is shared among a plurality of participants by transferring the information object between different information views. In step 1230, the multi-media data is linked to the information object. In step 1240, an automated participant is provided. The automated participant interacts with the information object. In some embodiments, there is no distinction between a user and an automated participant (e.g., a participant based on a computational system). In step 1250, information is retrieved based on user interaction with the information object, including spatially arranging the information object relative to other information objects. In step 1252, interaction with the information object includes attaching at least one document to the information object. FIG. 13 is a flow chart of processing steps that system 110 performs to support a hosted conversation among a group of participants, including interactions with an information object, in accordance with embodiments disclosed herein. In step 1310, a link to at least one information object is stored to host a conversation among a group of participants. The information object includes at least one link to a document; a plurality of links to the group of participants; meta-data related to the conversation; and a communication system, provided by connection manager 180, linking the at least one information object to the group of participants. In step 1320, a user interface on the client system is provided. The user interface includes an icon representing the at least one information object; a contact list; a list of participants linked to the at least one information object; and an information view to visualize the at least one information object. In step 1330, an Information View is displayed that includes a News View having an adaptor for receiving news feeds, and the user interface organizes information chronologically in a subject-time matrix. The user interface further includes a zoomable, graphical timeline including automatic compression of the subject-time matrix, and in one embodiment the news feed is a syndication feed. While configurations of the system and method have been particularly shown and described with reference to configurations thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention.
As an example, the order of processing steps in the flow charts is not limited to the order shown herein. Accordingly, the present invention is not limited by the example configurations provided above.
11861151 | DETAILED DESCRIPTION Configuration First, an overall configuration of information providing system 1 according to an embodiment of the present invention will be described. Information providing system 1 is a system which enables a user to visually grasp the relationship between the smell of an object and expressions of the smell, and which provides information for grasping what kind of smell the user himself/herself prefers. As shown in FIG. 1, information providing system 1 includes information processing device 10, display device 20, sensor device 40, first tray 50a, second tray 50b, and plural samples 60. Information processing device 10, display device 20, and sensor device 40 are connected to each other so as to communicate. Information processing device 10 is an example of an information processing device according to the present invention, and is a device for central control in information providing system 1. Display device 20 is an example of a display device for displaying information to a user. Sensor device 40 is an example of an operation device that receives an input operation of a user. Display device 20 and sensor device 40 constitute user interface device 30, which handles information provision to the user and instructions from the user. Each of plural samples 60 is an object that emits a smell that stimulates the user's sense of smell. For example, sample 60 may be a natural product having a smell (e.g., a plant itself, such as lavender) or an artificial product containing that smell (e.g., a volatile liquid from which the smell of lavender has been extracted, a sheet impregnated with the liquid, or the like). In this embodiment, as sample 60, a cylindrical small bottle with a lid containing a liquid called an aroma oil, which contains a smell extracted from a natural product emitting the smell, is used. Each sample 60 is provided with a seal or tag bearing the name or ID of the sample (e.g., the plant name "Lavender" or the number "1"). The user can visually identify each sample by referring to this name or ID. First tray 50a and second tray 50b function as operating tools for switching the display mode of display device 20. Both first tray 50a and second tray 50b have a size and a shape that allow sample 60 to be placed thereon. Both first tray 50a and second tray 50b also have a size and shape that allow them to be stacked on each other. FIG. 2 is a side view illustrating a structure of information providing system 1 and, more specifically, the positional relationship of display device 20, sensor device 40, first tray 50a, and sample 60 when viewed from the horizontal direction. Display device 20 has, for example, a thin rectangular plate shape, the upper surface of which is a horizontal display surface. Sensor device 40 is incorporated in a portion of display device 20. First tray 50a and second tray 50b are provided with tag 51a (first tray tag) and tag 51b (second tray tag), respectively, each of which is a storage medium storing identification information (called a tray ID) for identifying the tray. As illustrated in FIG. 2, if first tray 50a is placed on the display surface of display device 20, in the area in which sensor device 40 can sense the tray (that is, a user interface surface corresponding to a predetermined position of user interface device 30, hereafter referred to as the "sensing surface"), sensor device 40 reads the tray ID stored in tag 51a of first tray 50a using, for example, a short-distance wireless communication standard called NFC (Near Field Communication) and identifies that first tray 50a has been placed.
In this case, the display mode of display device 20 is switched to a first display mode called a "single-sample display mode." The single-sample display mode is a display mode based on the assumption that only one sample 60 is placed on the sensing surface. In addition, second tray 50b may be stacked on first tray 50a. If first tray 50a and second tray 50b are placed on the sensing surface, sensor device 40 reads the two tray IDs stored in tag 51a of first tray 50a and tag 51b of second tray 50b, thereby identifying that first tray 50a and second tray 50b have been placed. In this case, the display mode of display device 20 is switched to a second display mode called a "multi-sample display mode." The multi-sample display mode is a display mode based on the assumption that plural samples 60 are placed on the sensing surface. Sample 60 is provided with tag 61 (object tag), which is a storage medium storing identification information (referred to as a sample ID) for identifying each sample. When sample 60 is placed on the sensing surface, sensor device 40 identifies the placed sample 60 by reading the sample ID stored in tag 61 of sample 60. Display device 20 also functions as a so-called touch screen, and detects a touched position on the display surface if, for example, a user touches the display surface with his/her finger or a predetermined device. The position touched by the user is expressed as X and Y coordinates in a two-dimensional coordinate plane with a certain position of the display surface as the origin. FIG. 3 shows an example of a hardware configuration of information processing device 10. Information processing device 10 is a computer that includes CPU (Central Processing Unit) 101, ROM (Read Only Memory) 102, RAM (Random Access Memory) 103, auxiliary storage device 104, and communication IF (Interface) 105. CPU 101 is a processor that performs various operations. ROM 102 is, for example, a non-volatile memory that stores a program and data used for starting information processing device 10. RAM 103 is a volatile memory that functions as a work area while CPU 101 executes a program. Auxiliary storage device 104 is a nonvolatile storage device such as an HDD (Hard Disk Drive) or an SSD (Solid State Drive), and stores programs and data used in information processing device 10. CPU 101 executes these programs to implement the functions described later, and also executes the operations described later. Communication IF 105 is an interface for performing communication according to a predetermined communication standard. The communication standard may be a standard of wired communication or a standard of wireless communication. In addition to the configuration illustrated in FIG. 3, information processing device 10 may include other elements such as a display device (for example, a liquid crystal display) or an input device (for example, a keyboard). Auxiliary storage device 104 stores an expression database (hereinafter, "database" is abbreviated as DB) as shown in FIG. 4. In this expression DB, a sample ID and one or more expressions of the smell stimulated by the sample 60 corresponding to that sample ID (that is, one or more character strings expressing the smell of sample 60) are recorded in correspondence with each other. That is, an expression is used as a means for communicating the smell of sample 60 to others when the user smells the smell. An expression may use any part of speech, such as a noun or adjective, and ranges from direct to indirect expressions of the smell.
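The tray-driven mode switching described above reduces to a small decision rule, sketched below under the assumption of illustrative tray ID strings; the actual IDs stored in tags 51a and 51b are not specified here.

```python
# Sketch of the mode switch driven by which tray tags the sensor reads;
# the tray IDs and mode names are illustrative.
SINGLE_SAMPLE = "single-sample display mode"
MULTI_SAMPLE = "multi-sample display mode"

def select_display_mode(read_tray_ids):
    """Return the display mode implied by the tray IDs read over NFC."""
    if {"tray-50a", "tray-50b"} <= set(read_tray_ids):
        return MULTI_SAMPLE      # both trays stacked on the sensing surface
    if "tray-50a" in read_tray_ids:
        return SINGLE_SAMPLE     # only the first tray is present
    return None                  # no tray detected; keep the current mode

print(select_display_mode(["tray-50a"]))              # single-sample
print(select_display_mode(["tray-50a", "tray-50b"]))  # multi-sample
```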
Here, a direct expression refers to an expression commonly used to recall an idea of a smell, for example, "sweet" or "fruity," while an indirect expression refers to an expression that, compared with the direct expression described above, is not commonly used to recall a smell, for example, "spring," "morning," or "walking." An indirect expression is a secondary expression that is recalled from a direct expression, and may be an expression that represents a smell more abstractly than the direct expression does. Further, the expression DB includes information on an appearance for displaying each expression. The appearance includes, for example, a position where the expression is displayed, a color with which the expression is displayed, a size of the displayed expression, a font of the expression, a modification for displaying the expression, a time at which the expression is displayed, a duration for which the expression is displayed, a time period during which the expression is displayed, a motion of the expression (including a spatial or temporal change in the expression), a language used for the expression, and the like. This appearance changes depending on the relationship between sample 60 and the expression. The relationship between sample 60 and an expression is, for example, the intensity or amount of the smell represented by that expression in sample 60 (more strictly, of the component contained in the smell of the sample), the degree of abstraction of that expression with respect to the smell, and the like. For example, for a sample 60 having a strong sweet smell, the expression "sweet" is displayed near sample 60, in a large font or a conspicuous color, or with a motion in which the expression vibrates. These are examples of an appearance that varies depending on the relationship between sample 60 and the expression. In addition, for example, for a sample having a strong smell of "sweet" and a weak smell of "fruity," the expression "sweet" is displayed near the sample and the expression "fruity" is displayed far from the sample. This is another example of an appearance which changes in accordance with the relationship between sample 60 and the expression. Further, in the case where a direct expression and an indirect expression are associated with a certain sample, the distance between sample 60 and each expression changes in accordance with the degree of abstraction of the expression; for example, a direct expression is displayed near the sample and an indirect expression is displayed far from the sample. This is yet another example of an appearance which changes in accordance with the relationship between sample 60 and the expression. In short, an expression is displayed with an appearance such that the content of the relationship between sample 60 and the expression (specifically, the strength or weakness of the relationship and the manner of the relationship) can be visually recognized.
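One way to realize the appearance rules just described (a stronger relationship places the expression closer to the sample and in larger type) is sketched below; the distance and font-size scales are arbitrary assumptions of the sketch, since the disclosure leaves the particular appearance to the system designer.

```python
# Hedged sketch of deriving an expression's appearance from the strength of
# its relationship to a sample: stronger relations draw the expression closer
# to the sample and in a larger font. The numeric scales are assumptions.
def appearance_for(expression, strength, sample_pos):
    """`strength` is assumed normalized to 0.0..1.0 for the given sample."""
    x, y = sample_pos
    distance = 40 + (1.0 - strength) * 200      # weak relation -> far away
    return {
        "text": expression,
        "position": (x + distance, y),
        "font_size": int(10 + strength * 20),   # strong relation -> big type
        "motion": "flashing" if strength > 0.8 else "static",
    }

for expr, s in [("sweet", 0.9), ("fruity", 0.3)]:
    print(appearance_for(expr, s, (500, 300)))
```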
In FIG. 4, for sample 60 having the sample ID "ID001," the record has the first expression "sweet," the second expression "fresh," and the third expression "flower." Among these expressions, the appearance of the first expression "sweet" is such that the position (X and Y coordinates) at which the expression is displayed is (X1, Y1), the color of the characters is "red", the font of the characters is "gothic", the size of the characters is "25 points", and the motion is "flashing." It is of note that the appearance shown in FIG. 4 is merely an example, and this appearance is arbitrarily determined by the system designer.

FIG. 5 is a block diagram showing an example of the functional configuration of information processing device 10. Information processing device 10 includes the functions of sample identifying means 11, switching means 12, expression DB storage means 13, and display control means 14. If a sample 60 selected by the user is placed on the sensing surface of sensor device 40, sample identifying means 11 specifies which sample 60 is placed on the sensing surface based on the sample ID stored in the tag 61 of sample 60 and read by sensor device 40. If first tray 50a or second tray 50b is placed on the sensing surface of sensor device 40, switching means 12 switches between the single-sample display mode and the multi-sample display mode on the basis of the tray ID stored in tag 51a of first tray 50a or tag 51b of second tray 50b and read by sensor device 40.

In the single-sample display mode and the multi-sample display mode, display control means 14 controls display device 20 to display a group of expressions relating to the olfactory sense stimulated by the sample 60 specified by sample identifying means 11 in the display region of user interface device 30 corresponding to the periphery of the position where sample 60 is placed, i.e., the sensing surface. At this time, display control means 14 displays each expression with the appearance corresponding to the sample 60 specified by sample identifying means 11 in the expression DB (FIG. 4) stored in expression DB storage means 13. The display area corresponding to the periphery of the position where sample 60 is placed is, for example, a circular area having a radius of 30 cm or less centered on the sensing surface on which sample 60 is placed. In the single-sample display mode, if the user selects one of the expressions in the displayed expression group, display control means 14 displays a relationship image, which is an image indicating the relationship between the selected expression and other samples 60 corresponding to the sense of smell associated with the expression. In addition, in the multi-sample display mode, display control means 14 controls to display, for each of the plural samples 60 identified by sample identifying means 11, a group of expressions relating to the sense of smell stimulated by each of the samples 60, and to display any expression common to plural samples 60 among the group of expressions so as to be distinguishable from the expressions not common to plural samples 60. The display control by display control means 14 will be described in detail later with reference to FIGS. 7 to 13.

Operation

Next, the operation of the present embodiment will be described with reference to the flowchart shown in FIG. 6. FIG. 7 is a plan view showing the user interface surface of user interface device 30 viewed from above. Plural samples 60 (nine samples 60a to 60i are shown in FIG. 7) are placed at predetermined positions on the user interface surface.
Which sample 60 is to be placed at which position is determined in advance, and an image indicating the position where each sample 60 is to be placed (e.g., an image indicating the same ID as the ID of that sample 60) is displayed on the user interface surface (i.e., the display surface). The user places each of samples 60a-60i at the position indicated by these images. It is assumed that the correspondence between each sample 60 and the position where that sample 60 is placed is stored in advance in expression DB storage means 13.

If first tray 50a is placed on the sensing surface SA as shown in FIG. 7, switching means 12 of information processing device 10 determines (in step S11) that first tray 50a is placed on the basis of the reading result of the tray ID by sensor device 40, and switches (in step S12) the display mode to the single-sample display mode. Although the position, shape, and size of the sensing surface of sensor device 40 are arbitrarily determined, the user knows in advance where the sensing surface is located in user interface device 30, or is informed of its location by means of a display, voice guidance, or the like.

The user may smell any of the samples 60a-60i and place the sample 60 whose smell he or she prefers on first tray 50a on the sensing surface SA. Sample identifying means 11 of information processing device 10 identifies (in step S13) which sample 60 is placed on the sensing surface, based on the reading result (in step S11) of the sample ID by sensor device 40. Display control means 14 searches (in step S14) the expression DB for the expressions corresponding to the sample ID, using the identified sample ID as a key. In step S15, display control means 14 determines whether the display mode is the single-sample display mode or the multi-sample display mode, and determines the appearance corresponding to each of the retrieved expressions with reference to the expression DB. Then, display control means 14 controls display device 20 to display (in step S16), in the display area around the sensing surface SA, the expressions retrieved in step S14 with the appearances determined in step S15.

For example, if sample 60a is placed on first tray 50a, as shown in FIG. 8, expressions of the smell of sample 60a are displayed within a fan shape of an arbitrary size centered on the sensing surface, i.e., the position where sample 60a is placed. The appearance of each expression at this time is an appearance corresponding to the relationship between the expression and sample 60a. The user can learn how to express the smell of sample 60a while watching these expressions. At the same time, the user can also recognize the relationship between the smell of sample 60a and each expression with reference to the appearance of each expression. For example, in the example of FIG. 8, the user can recognize that the smell of sample 60a is typically a smell expressed as "sweet" or "relaxation", that it also has a component of the smell expressed as "flower" or "fruit", and further that an abstract event such as "spring" is associated with the smell.

Further, if there is any expression of the smell which the user is interested in within the displayed expression group, the user selects the expression by performing an operation of touching it. If such a touch operation is performed, display control means 14 identifies the expression selected by the user based on the position at which the touch operation was performed and the display position of each expression.
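By way of illustration, identifying the selected expression from a touch can be pictured as comparing the touched X and Y coordinates against the bounds of each displayed expression. The following sketch assumes axis-aligned rectangular bounds, which is an assumption of the illustration rather than a detail of the embodiment.

    def hit_test(touch_x, touch_y, displayed_expressions):
        # displayed_expressions: a list of (text, x, y, width, height) entries
        # describing where each expression is currently drawn.
        for text, x, y, w, h in displayed_expressions:
            if x <= touch_x <= x + w and y <= touch_y <= y + h:
                return text  # the expression whose selection is accepted
        return None  # the touch did not land on any expression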
When a touch selection is accepted in this way, the expression is surrounded by some graphic image, the background of the expression is displayed in a specific color, or the expression is highlighted, so that the user can know which expression's selection has been accepted. Then, display control means 14 searches the expression DB for the sample IDs associated with the selected expression. As a result of the search, if there is a sample ID associated with the selected expression, display device 20 is controlled to display a relationship image in step S16. The relationship image is, for example, an image showing the relationship between the expression selected by the user on the user interface surface and the other samples 60 corresponding to the retrieved sample IDs, and is an image corresponding to the positions where those samples 60 are placed. For example, as illustrated in FIG. 9, if the expression "sweet" is selected by the user, an annular image surrounding the expression "sweet" is displayed, and annular images surrounding the positions of the other samples (here, samples 60b, 60d, and 60g) represented by the expression "sweet" are displayed. By displaying such annular images (relationship images), the user can know that, in addition to sample 60a, samples 60b, 60d, and 60g have smells expressed as "sweet".

The appearance of the relationship image, such as its color, thickness, size, and motion, may correspond to the relationship between the expression selected by the user and the other samples represented by the expression. For example, if the relationship between the expression selected by the user and another sample represented by the expression is strong, the relationship image is displayed with a color, thickness, size, motion, or the like that makes it more conspicuous, and if the relationship is weak, the opposite applies. If the user wants to know more about another expression, the user performs an operation of touching that expression, and the process described above is then performed for that expression.

Here, from the state of FIG. 9, it is assumed that the user removes sample 60a from the top of first tray 50a, smells another sample related to "sweet", and places, for example, sample 60b on first tray 50a on the sensing surface SA as a sample of the smell that he/she prefers. Sample identifying means 11 identifies (in step S13) that sample 60b is placed on the sensing surface based on the result (in step S11) of reading the sample ID by sensor device 40. Thus, as shown in FIG. 10, the expressions relating to the smell of sample 60b are displayed inside the fan-shaped figure centered on the sensing surface (step S16). The user can learn how to express the smell of sample 60b while viewing these expressions. Further, when the user performs an operation of touching "fruit" as an expression of the smell which the user is interested in within the displayed expression group, as shown in FIG. 11, an annular image surrounding the expression "fruit" is displayed, and annular images surrounding the positions of the other samples (here, samples 60g and 60i) represented by the expression "fruit" are displayed. The user can know that samples 60g and 60i, in addition to sample 60b, have smells expressed as "fruit". In FIG. 11, an annular image surrounding the position of sample 60a, which is also a sample represented by the expression "fruit", is not displayed.
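By way of illustration, the search for the other samples associated with a selected expression amounts to an inverted lookup over the expression DB. The sketch below assumes the dictionary layout sketched earlier and excludes the currently placed sample, mirroring FIG. 9; the function name is hypothetical.

    def samples_sharing(expression_db, selected_text, current_sample_id):
        # Return the IDs of all other samples whose recorded expressions
        # include the expression selected by the user.
        related = []
        for sample_id, expressions in expression_db.items():
            if sample_id == current_sample_id:
                continue  # skip the sample already on the tray
            if selected_text in expressions:
                related.append(sample_id)
        return related  # annular images are then drawn at these samples' positions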
Regarding the annular images of FIG. 11, in a series of operations, the samples 60 placed on the sensing surface in the past are stored by display control means 14, and a sample 60 placed in the past may not be displayed with an annular image, as illustrated in FIG. 11, or a sample 60 placed in the past may also be displayed with an annular image.

In addition, from the state of FIG. 11, the user removes sample 60b from the top of the first tray to smell the other samples associated with "fruit", and places, for example, sample 60g, as a sample of the smell that he/she prefers, on top of first tray 50a on the sensing surface SA. Sample identifying means 11 of information processing device 10 identifies (in step S13) that sample 60g is placed on the sensing surface, based on the reading result (in step S11) of the sample ID by sensor device 40. As a result, as shown in FIG. 12, the expressions relating to the smell of sample 60g are displayed inside the fan-shaped figure centered on the sensing surface in step S16. The user can learn how to express the smell of sample 60g while viewing these expressions. By going through the display processing in the single-sample display mode a plurality of times as described above, the user comes to know that samples 60a, 60b, and 60g are samples of the smells that he/she prefers.

Next, the user lays second tray 50b on first tray 50a on the sensing surface SA. Switching means 12 of information processing device 10 determines that first tray 50a and second tray 50b are placed on the basis of the reading result (in step S11) of the tray IDs by sensor device 40, and switches (in step S12) the display mode to the multi-sample display mode. Next, the user places samples 60a, 60b, and 60g, selected as samples of the smells that he/she prefers in the single-sample display mode, on first tray 50a and second tray 50b on the sensing surface SA. Sample identifying means 11 of information processing device 10 identifies (in step S13) that samples 60a, 60b, and 60g are placed on the sensing surface, based on the reading result (in step S11) of the sample IDs by sensor device 40. In step S14, display control means 14 searches the expression DB for the expressions corresponding to each sample ID, using the identified sample IDs of samples 60a, 60b, and 60g as keys. In step S15, display control means 14 determines whether the display mode is the single-sample display mode or the multi-sample display mode, and determines the appearance corresponding to each of the retrieved expressions with reference to the expression DB. In step S16, display control means 14 controls display device 20 to display the expressions retrieved in step S14 for each of samples 60a, 60b, and 60g in the display area around the sensing surface SA with the appearances determined in step S15.

Further, at this time, display control means 14 controls display device 20 to display any expression common to samples 60a, 60b, and 60g so as to be distinguishable from the expressions not common to them. For example, as illustrated in FIG. 13, if the expression "fruit" is an expression common to samples 60a, 60b, and 60g, a triple annular image surrounding the expression "fruit" is displayed. If the expressions "sweet", "flower" and "relax" are expressions common to samples 60a and 60b, a double annular image surrounding each of the expressions "sweet", "flower" and "relax" is displayed. The other expressions "fresh," "sea," "pink," "refreshment," "flower," "transparency," "sea," "vacation," "spring," "summer," and "grassland" are not common to any two or more of samples 60a, 60b, and 60g, and therefore the above-mentioned annular images are not displayed for them.
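By way of illustration, determining which expressions are common to two or more of the placed samples, and to how many, can be pictured as a counting problem over the per-sample expression sets; the n-fold annular image of FIG. 13 then corresponds to a count of n. The sketch below assumes the dictionary layout used earlier.

    from collections import Counter

    def ring_counts(expression_db, placed_sample_ids):
        # Count, for each expression, how many of the placed samples use it.
        counts = Counter()
        for sample_id in placed_sample_ids:
            counts.update(set(expression_db.get(sample_id, [])))
        # An expression shared by n >= 2 samples is surrounded by an n-fold
        # annular image; expressions unique to one sample receive none.
        return {text: n for text, n in counts.items() if n >= 2}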
By confirming the expressions common to the plural samples 60 selected in this way, the user can understand more clearly what kind of smell he/she prefers. According to the embodiment described above, the user can visually grasp the relationship between the smell of an object and the expressions of that smell. In addition, the user can visually understand what kind of smell he/she prefers.

MODIFICATION

The present invention is not limited to the embodiments described above. The embodiment described above may be modified as follows. Further, two or more of the following modifications may be implemented in combination.

First Modification

The present invention is applicable not only to the sense of smell but also to the sense of taste (e.g., wine, sake, spice, seasoning, etc.). That is, the present invention may be implemented by replacing the "smell" in the embodiment with "taste".

Second Modification

The appearance with which each expression is displayed may correspond to the relationship between the expression and the user who sees the display of the expression. The relationship between the expression and the user includes, for example, the degree of agreement between the smell expressed by the expression and the user's preference relating to smells, and a history of the user using the expression as an expression of a smell. For example, for a user who prefers a "sweet" smell, an example may be considered in which the expression "sweet" is displayed near the sample 60 placed on the sensing surface of sensor device 40, the expression "sweet" is displayed large or in a noticeable color, or the expression "sweet" is displayed so as to be noticeable by moving in a vibrating manner. In this case, the user's preferences relating to smells are collected in advance and stored in a database in auxiliary storage device 104 or the like, and display control means 14 refers to it. Further, for example, for a user who has frequently used the expression "sweet" in the past, there may be considered an example in which the expression "sweet" is displayed near the sample 60 placed on the sensing surface of sensor device 40, the expression "sweet" is displayed large or in a conspicuous color, or the expression "sweet" is displayed so as to be conspicuous by moving in a vibrating manner. In this case, the history of the expressions used by the user for smells is collected in advance and stored in a database in auxiliary storage device 104 or the like, and display control means 14 refers to it.

Third Modification

The appearance at the time of displaying each expression may correspond to an attribute of the smell expressed by the expression. Attributes of a smell include, for example, whether it is a top note, a middle note, or a last note, the intensity or weakness of the stimulus of the smell, and the attractiveness, peculiarity, or scarcity of the smell. The top note, middle note, and last note are, respectively, the smell perceived first in time, the smell perceived next, and the smell perceived after that, as the smell changes. For example, examples are conceivable in which expressions corresponding to the top note, the middle note, and the last note are displayed in order from near to far from the sample 60 placed on the sensing surface of sensor device 40, or in which the display is switched among expressions corresponding to the top note, the middle note, and the last note in chronological order. In addition, for example, expressions relating to a smell with a strong stimulus or a rare smell may be displayed in a specific color, font, or motion.
Fourth Modification

The appearance at the time of displaying each expression may correspond to an attribute of the expression. The attributes of an expression include, for example, an image evoked by the expression, the part of speech of the expression, the type of characters used for the expression (hiragana, katakana, kanji, alphabet, etc.), the number of characters or the number of words used in the expression, and the like. For example, an example in which the expression "sweet" is displayed in a warm color is conceivable.

Fifth Modification

When displaying the above-mentioned relationship image (see FIG. 9), display control means 14 may display the relationship image with an appearance according to the user's preference for the smell expressed by the expression selected by the user. For example, in a relationship image connecting the expression "sweet" selected by the user and the other samples represented by the expression, the appearance, such as color, thickness, size, and motion, is changed in accordance with the user's preference for a "sweet" smell. Specifically, in the case where the user prefers a "sweet" smell, an example is conceivable in which the relationship image is given an appearance, such as a color, thickness, size, or motion, that makes it more conspicuous. In this case, the user's preferences relating to smells are collected in advance and stored in a database in auxiliary storage device 104 or the like, and display control means 14 refers to it. Note that the relationship image is not limited to the image illustrated in FIG. 9, and any image may be used as long as it can be understood that the expression selected by the user and the samples represented by the expression are related to each other.

Sixth Modification

When displaying expressions for each of plural samples 60, display control means 14 may display expressions having good compatibility with each other in a display manner such that they can be identified by color, motion, or the like. For example, with respect to a sample 60 corresponding to the expression "sweet" and a sample 60 corresponding to the expression "marana", on the assumption that "sweet" and "marana" are compatible, the expressions "sweet" and "marana" are displayed in the same color, and expressions having no such relationship with each other are displayed in different colors. Similarly, display control means 14 may display expressions that are incompatible with each other in a display manner that can be identified by color, motion, or the like. For example, with respect to a sample 60 corresponding to the expression "sweet" and a sample 60 corresponding to the expression "bitter", on the assumption that "sweet" and "bitter" are incompatible with each other, an example may be considered in which the expressions "sweet" and "bitter" are both displayed with the same violent movement, and expressions having no incompatible relationship are displayed without movement. In this manner, display control means 14 may display expressions, among the expressions relating to a first object and the expressions relating to a second object, that have a predetermined relationship in a display manner distinguishable from expressions not having the predetermined relationship.
The predetermined relationship between a plurality of expressions referred to here may be, in addition to the relationship of good or poor compatibility as described above, for example, a relationship in which each of the plurality of expressions matches or does not match the preference of the user, or belongs or does not belong to a group of expressions of similar smells. In this case, the user's preferences relating to smells and the groups of expressions of similar smells are collected in advance and stored in a database in auxiliary storage device 104 or the like, and display control means 14 refers to them.

Seventh Modification

In the above embodiment, switching means 12 switches to the "single-sample display mode" if first tray 50a is placed on the sensing surface, and switches to the "multi-sample display mode" if first tray 50a and second tray 50b are placed on the sensing surface. The method of using trays as the operation tools for switching the display mode is not limited to the example of the above embodiment. For example, only one tray provided with a tag storing a tray ID may be prepared, and switching means 12 may switch to the "single-sample display mode" if the tray is not placed on the sensing surface, and may switch to the "multi-sample display mode" if the tray is placed on the sensing surface. Conversely, switching means 12 may switch to the "single-sample display mode" if the tray is placed on the sensing surface, and switch to the "multi-sample display mode" if the tray is not placed on the sensing surface. As described above, if a tray is placed at a predetermined position of user interface device 30, switching means 12 reads the tag provided on the tray, and switches between the first display mode called the single-sample display mode and the second display mode called the multi-sample display mode.

Eighth Modification

The display device and the operation device are not limited to display device 20 and sensor device 40 illustrated in FIG. For example, the display device may project an image onto a certain display surface. Further, the display device may display an image on a wall surface (including the case where an image is projected onto the wall surface or the wall itself is a display device). The display device may also be a display device that realizes so-called augmented reality. Specifically, if sample 60 is imaged by an imaging device of a smartphone, tablet, or glass-type wearable device, a corresponding expression group may be displayed around sample 60 in the captured image. The operation device may be a device that detects a user's operation using an image recognition technique.

Ninth Modification

In the embodiment, the expressions relating to sample 60 are displayed if sample 60 is placed on the sensing surface of sensor device 40, but the expressions relating to sample 60 may be displayed if, for example, the user opens the lid of a capped vial containing aroma oil corresponding to sample 60. Similarly, for example, a transparent cover may be put on a dish holding a natural object itself, and if the user removes the transparent cover, expressions relating to the smell of the natural object may be displayed. Such user actions can be detected, for example, using well-known image recognition techniques.

Tenth Modification

The display form of an expression may be a two-dimensional display or a three-dimensional display. The displayed "expression" is not limited to characters, and may be a color, an abstract image, an image of a person or scene, or the like.
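By way of illustration, the compatibility-based display of the sixth modification can be pictured as a lookup of expression pairs in compatibility tables, with a shared color assigned to compatible expressions and a distinguishing motion assigned to incompatible ones. The tables and styling choices below are placeholders invented for the illustration; how compatibility is actually determined is not specified here.

    # Hypothetical compatibility tables; real contents would come from a
    # curated database rather than from this illustration.
    COMPATIBLE = {frozenset(("sweet", "marana"))}
    INCOMPATIBLE = {frozenset(("sweet", "bitter"))}

    def styles_for_pair(expr_a, expr_b):
        pair = frozenset((expr_a, expr_b))
        if pair in COMPATIBLE:
            # Compatible expressions share the same color.
            return {expr_a: {"color": "green"}, expr_b: {"color": "green"}}
        if pair in INCOMPATIBLE:
            # Incompatible expressions share the same violent movement.
            return {expr_a: {"motion": "shaking"}, expr_b: {"motion": "shaking"}}
        return {expr_a: {}, expr_b: {}}  # no predetermined relationship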
The present invention may be provided as an information processing method including the steps of the processing performed in information processing device 10. The present invention may also be provided as a program executed in information processing device 10. Such a program can be provided in the form of being recorded on a recording medium such as an optical disk, or in the form of being downloaded to a computer via a network such as the Internet and being installed and made available. While the present invention has been described in detail above, it is apparent to those skilled in the art that the present invention is not limited to the embodiments described herein. The present invention may be practiced with modifications and variations without departing from the spirit and scope of the invention as defined by the appended claims. Accordingly, the description herein is for illustrative purposes and does not have any limiting meaning on the present invention.

DESCRIPTION OF REFERENCE NUMERALS

1 . . . information providing system, 20 . . . display device, 30 . . . user interface device, 40 . . . sensor device, 50a . . . first tray, 51a . . . tag, 50b . . . second tray, 51b . . . tag, 60, 60a-60i . . . sample, 61 . . . tag, 101 . . . CPU, 102 . . . ROM, 103 . . . RAM, 104 . . . auxiliary storage device, 105 . . . communication IF, 11 . . . sample identifying means, 12 . . . switching means, 13 . . . expression DB storage means, 14 . . . display control means.
Like reference symbols in the various drawings indicate like elements.

DETAILED DESCRIPTION

A videoconference connection and participation process can provide a level of access control to a videoconference. In some cases, this level of access control may be desirable when the information to be discussed during the videoconference is highly sensitive and confidential. A desired feature for a videoconferencing system can be the ability for one or more participants in the videoconference to project content from respective computing devices into the videoconference without requiring a multitude of complicated authentication steps for allowing the participant access to the videoconference using the respective computing device. For example, multiple users can be sitting in a conference room that includes a videoconferencing system. Each of the multiple users can have one or more computing devices with them in the conference room. The computing devices can include, but are not limited to, desktop computers, laptop computers, tablet computing devices, smartphones, and personal digital assistants. One or more features can be incorporated into videoconferencing systems and applications that can simplify the process of setting up, scheduling, connecting to, and providing/projecting content into a videoconference.

In some implementations, each computing device may not have information about a context for the videoconference. A user (e.g., a videoconference host) can follow a series of steps in order to establish a context for the videoconference. For example, establishing the context can include establishing, by a user using a computing device, a unique identifier (e.g., a Uniform Resource Locator (URL)) for the meeting. This user can be considered the host of the videoconference and can distribute the information about the videoconference to the other videoconference participants (share the unique identifier for the meeting with the other videoconference participants). An example of these implementations is described with reference to FIGS. 2A-F.

In some implementations, a computing device can establish a context for the videoconference based on a detected proximity of the computing device to the videoconferencing system. For example, a videoconferencing system can detect the approximate proximity of each respective computing device of the users sitting in a conference room that includes the videoconferencing system. For example, the videoconferencing system can detect the approximate proximity of each respective computing device using technologies such as Wi-Fi beacons or Bluetooth communication protocols. The videoconferencing system can confirm a physical co-location of a computing device of a user with the videoconferencing system based on the detected proximity (e.g., based on a threshold distance) of the computing device to the videoconferencing system. In some implementations, the basis of the authentication on the detected proximity can be that the computing device is within a threshold distance of the videoconferencing system. In some implementations, the basis of the authentication on the detected proximity can be that the computing device is within a communication range of a particular communication protocol. In some implementations, the basis of the authentication on the detected proximity can be that the computing device is in the same room as the videoconferencing system.
In some implementations, the computing device of a second user being within a threshold distance, or within a communication range of a particular communication protocol, of a computing device of a first user that is already authenticated to the videoconferencing system can result in the authentication of the second user to the videoconferencing system. This automatic authentication process can eliminate the need for a manual authentication process that may involve multiple steps and the need for a unique identifier to be established and provided by a host for the videoconference. In effect, any authenticated user can project content from a respective computing device into the videoconference. An example of these implementations and the authentication processes is described with reference to at least FIGS. 2A-D.

In some implementations, one or more users can participate in the videoconference from a location remote from the location of the videoconferencing system. For example, a participant can be located outside of the conference room that includes the videoconferencing system. The participant can be located in an office, at home, or in a hotel room, for example. In some cases, the remote user may be provided with information (e.g., a URL) for accessing the videoconference along with any additional authentication information (e.g., a password or pass code) that may be needed to authenticate the user for access to the videoconference. In some cases, the remote user can access an application running on a computing device that can provide an identifier for the videoconference that the user can select in order to remotely join (and project content into) the videoconference. An example of these implementations is described with reference to FIGS. 2A-F.

In some cases, one of the users may be the host of the videoconference. The host can perform the necessary steps for activating the videoconferencing system, and for providing information needed by the other users so that they can connect to and participate in the videoconference using a respective computing device. In some cases, the computing device of the host can project (provide) content to the videoconferencing system. In these cases, the computing device of the host can be considered a projecting device. The content can be displayed on a display device (e.g., a high definition television (HDTV)) included in the videoconferencing system. In addition, the content can include audio content that the videoconferencing system can process and output, for example, to one or more speakers included in the videoconferencing system. The content can then be viewed and heard by the multiple users located in the conference room. Once a user joins the videoconference, the computing device of the user can also project (provide) content to the videoconferencing system. FIGS. 2A-I describe setting up, joining, and participating in a videoconference by multiple users in more than one location. In other cases, any of the participants in a videoconference can use a computing device to project (provide) content to the videoconferencing system.

In some implementations, multiple users may gather in a conference room that includes an idle videoconferencing system. One or more of the multiple users may have with them a respective computing device. A user may want to use the display device included in the videoconferencing system to display information (content) from the computing device that the user would like to share with the other users located in the conference room.
However, the user does not want to activate a camera and a microphone included in the videoconferencing system (e.g., for security and confidentiality reasons). In these implementations, the user can access an application running on a computing device that can provide an identifier for the idle videoconferencing system. The user can select the idle videoconferencing system as a target display device in order to display content from the computing device on the display device of the videoconferencing system while deactivating the camera and the microphone included in the videoconferencing system. An example of these implementations is described with reference to FIGS. 3A-C.

FIG. 1A is a block diagram of an example system 100 that can be used for videoconferencing. The example system 100 includes a plurality of computing devices 102a-c (e.g., a laptop computer, a tablet computer, and a smartphone, respectively). An example computing device 102a (e.g., a laptop or notebook computer) can include one or more processors (e.g., client central processing unit (CPU) 104) and one or more memory devices (e.g., client memory 106). The computing device 102a can execute a client operating system (O/S) 108 and one or more client applications (e.g., a web browser application 110) that can display a user interface (UI) (e.g., web browser UI 112) on a display device 120 included in the computing device 102a. Though not shown in FIG. 1A, the computing devices can also include a desktop computing device.

The system 100 includes a computer system 130 that can include one or more computing devices (e.g., server 142a) and one or more computer-readable storage devices (e.g., database 142b). The server 142a can include one or more processors (e.g., server CPU 132) and one or more memory devices (e.g., server memory 134). The computing devices 102a-c can communicate with the computer system 130 (and the computer system 130 can communicate with the computing devices 102a-c) using a network 116. The server 142a can execute a server 136.

The system 100 includes a videoconferencing device 160. The computing devices 102a-c can interface to/communicate with the videoconferencing device 160 using the network 116. Similarly, the videoconferencing device 160 can interface to/communicate with the computing devices 102a-c. The videoconferencing device 160 can communicate with the computer system 130 using the network 116. Similarly, the computer system 130 can communicate with the videoconferencing device 160 using the network 116. The videoconferencing device 160 is included in a videoconferencing system 158 that can include a display device 162, a microphone 164, a camera 166, and a remote control 168.

In some implementations, the computing devices 102a-c can be laptop or desktop computers, smartphones, personal digital assistants, portable media players, tablet computers, or other appropriate computing devices that can communicate, using the network 116, with other computing devices or computer systems. In some implementations, the computing devices 102a-c can perform client-side operations, as discussed in further detail herein. Implementations and functions of the system 100 described herein with reference to computing device 102a may also be applied to computing device 102b and computing device 102c and other computing devices not shown in FIG. 1 that may also be included in the system 100. The computing device 102b includes a display area 124. The computing device 102c includes a display area 122.
In some implementations, the computer system 130 can represent more than one computing device working together to perform server-side operations. For example, though not shown in FIG. 1, the system 100 can include a computer system that includes multiple servers (computing devices) working together to perform server-side operations. In this example, a single proprietor can provide the multiple servers. In some cases, one or more of the multiple servers can provide other functionalities for the proprietor. In a non-limiting example, the computer system can also include a search server and a web crawler server.

In some implementations, the network 116 can be a public communications network (e.g., the Internet, cellular data network, dialup modems over a telephone network) or a private communications network (e.g., private LAN, leased lines). In some implementations, the computing devices 102a-c can communicate with the network 116 using one or more high-speed wired and/or wireless communications protocols (e.g., 802.11 variations, WiFi, Bluetooth, Transmission Control Protocol/Internet Protocol (TCP/IP), Ethernet, IEEE 802.3, etc.).

In some implementations, the web browser application 110 can include or be associated with one or more browser-based applications (e.g., browser-based application 128). The browser-based application 128 can be executed/interpreted by the web browser application 110. The browser-based application 128 executed by the web browser application 110 can include code written in a scripting language, such as JavaScript, VBScript, ActionScript, or other scripting languages. A browser-based application may be configured to perform a single task or multiple tasks for a user. In such an implementation, the browser-based application may be configured to be executed or interpreted by the web browser. This is in contrast with native applications (e.g., native application 144) that include machine-executable code and are configured to be executed directly via the operating system of the client device; a browser-based application may be incapable of execution or display without the aid of the web browser. Thus, browser-based applications can run inside a browser with a dedicated user interface, and can provide functionality and an experience that is richer and more interactive than a standalone website but less cumbersome and monolithic than a native application 144. Examples of browser-based applications include, but are not limited to, games, photo editors, and video players that can run inside the web browser application 110. The browser-based application 128 can provide a dedicated UI for display on the display device 120.

Browser-based applications can be "hosted applications" or "packaged applications." Hosted applications may include at least a portion of a web site that itself includes web pages, in addition to some metadata that may be especially pertinent to the browser-based application or to the user of the browser-based application, to allow the browser-based application to perform some particular functionality for the user. Packaged applications can be thought of as browser-based applications whose code is bundled, so that the user can download all of the content of the browser-based application for execution by the browser. A packaged browser-based application may not need to have network access to perform its functionality for the user, and rather may be executed successfully by the browser locally on the computing device without access to a network.
Packaged browser-based applications have the option of using Extension APIs, allowing packaged applications to change the way the browser behaves or looks.

In some implementations, the computing device 102a can run, or cause the operating system 108 to execute, the web browser application 110. The web browser application 110 can then provide, in the web browser UI 112, a plurality of panes or tabs 114a-c. The web browser UI 112 can be a visual area, usually rectangular, containing some kind of user interface. In a graphical user interface (GUI) used in the computing device 102a, the web browser UI 112 can be a two-dimensional object arranged on a plane of the GUI known as a desktop. The web browser UI 112 can include other graphical objects (e.g., a menu-bar, toolbars, controls, icons). The web browser UI 112 can display the graphical objects on the display device 120. A user of the computing device 102a can interact with the graphical objects to provide input to, or otherwise control the operation of, the web browser application 110.

The web browser UI 112 can include a working area in which a document, an image, folder contents, or other objects including information or data for the browser-based application 128 can be displayed. The working area can include one main object (e.g., a single web document interface) or multiple objects (e.g., more than one web document interface), where each object can be displayed in a separate window (or tab). Each tab can include a UI. In some applications, specifically web browser applications, multiple documents can be displayed in individual tabs 114a-c. The tabs 114a-c can be displayed one at a time, and are selectable using a tab-bar, which can reside above the contents of an individual window. That is, one selected tab (e.g., tab 114a) can be considered forward-facing (in the foreground). The tab 114a can display information or content to a user in the web browser UI 112, while the content of the other tabs 114b, 114c can be considered "hidden" (in the background).

A natively operating application 146 can be an application that is coded using only web technology (defined here as code that can be implemented directly by a web browser application), such as JavaScript, ActionScript, HTML, or CSS. For example, the computing device 102a can download and install the natively operating application 146 from a marketplace server using a web browser application (e.g., web browser application 110). The natively operating application 146 may operate using a runtime 148. The natively operating application 146 may be configured to be executed directly by the CPU 104 or by the O/S 108, using the runtime 148, for example. Because the natively operating application 146 is coded using web technologies, no compilation step is required.

In some implementations, the computing devices 102a-c can communicate directly with the videoconferencing device 160 using, for example, one or more high-speed wired and/or wireless communications protocols such as Bluetooth, Bluetooth Low Energy (Bluetooth LE), and WiFi. The videoconferencing device 160 can use the direct communication to identify one or more computing devices that are in proximity to the videoconferencing device 160. In these implementations, identifying the one or more computing devices that are in proximity to the videoconferencing device 160 includes determining that the one or more computing devices are within a communication range of the communication protocol.
In some implementations, the videoconferencing device 160 can use short-range communications to "listen" for broadcasts from short-range communication enabled computing devices (e.g., the computing devices 102a-c). For example, the short-range communication can use Bluetooth LE when transmitting and receiving broadcasts. The videoconferencing device is determined to be in proximity to the computing device when the computing device and the videoconferencing device are within the range of the short-range communication system (e.g., are within the range of Bluetooth LE).

In some implementations, the system 100 can use WiFi scans, WiFi signal strength information, or WiFi signature matching to determine the proximity of a WiFi-enabled computing device to the videoconferencing device 160. For example, the WiFi-enabled computing device can capture a signal strength of a WiFi signal received from the videoconferencing device 160. The captured strength of the signal can be indicative of a distance between the videoconferencing device 160 and the computing device and can be referred to as a received signal strength indicator (RSSI). A copresence application programming interface (API) 156 included in the discovery APIs on the server 142a can determine that the captured strength of the signal is within a range indicative of an acceptable proximity of the computing device to the videoconferencing device 160. For example, the range (e.g., threshold range) can be stored in the memory 134.

In another example, the copresence API 156 can use a set of captured signal strengths for sampled reference locations with respect to the videoconferencing device 160 to determine if a WiFi-enabled computing device is proximate to the videoconferencing device 160. The captured signal strengths for the sampled reference locations with respect to the videoconferencing device 160 can comprise a database of signal strength signatures for the location of the videoconferencing device 160. For example, the signal strength signatures can be stored in the database 142b. The WiFi-enabled computing device can capture a signal strength of a WiFi signal received from the videoconferencing device 160 (a signal strength signature for the location of the WiFi-enabled computing device). The copresence API 156 can compare the captured signal strength to the signal strength signatures stored in the database 142b to determine a closest match or matches. The copresence API 156 can use the determined closest match or matches to determine the proximity of the WiFi-enabled computing device to the videoconferencing device 160.

In some implementations, the system 100 can use an audio token to determine the proximity of a computing device to the videoconferencing device 160. The system can use the audio token in addition to (or as an alternative to) one or more of the short-range communications, the WiFi location tracking, and the WiFi signature matching when determining the proximity of a computing device to the videoconferencing device 160. For example, the videoconferencing device 160 can receive a digitized version of an audio token from the computer system 130 by way of the network 116. In some implementations, the database 142b can store the digitized version of the audio token. In some implementations, the memory 134 can store the digitized version of the audio token. The videoconferencing device 160 can send/emit the audio token using one or more speakers included in the videoconferencing system 158. Any (or all) of the computing devices 102a-c can receive/pick up the audio token.
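By way of illustration, both WiFi-based proximity tests described above reduce to comparisons over received signal strengths: a threshold check on a single RSSI, and a nearest-neighbor match against the stored signal strength signatures. The threshold value and the distance metric in the following sketch are assumptions of the illustration, not values used by the described system.

    def within_rssi_threshold(rssi_dbm, threshold_dbm=-60):
        # A stronger (less negative) RSSI than the stored threshold is taken
        # as indicating acceptable proximity to the videoconferencing device.
        return rssi_dbm >= threshold_dbm

    def closest_signature(observed, reference_signatures):
        # observed: {access_point_id: rssi} captured by the computing device.
        # reference_signatures: {location_label: {access_point_id: rssi}}
        # sampled at reference locations near the videoconferencing device.
        def distance(signature):
            shared = set(observed) & set(signature)
            if not shared:
                return float("inf")
            return sum((observed[k] - signature[k]) ** 2 for k in shared) / len(shared)
        return min(reference_signatures, key=lambda label: distance(reference_signatures[label]))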
Continuing the audio token example, the recipient computing device (e.g., computing device 102a) can receive/pick up the audio token and send a digitized version of the received audio token to the computer system 130 using the network 116. The copresence API 156 can verify/confirm that the audio token sent to the computer system 130 from the computing device (e.g., computing device 102a) was the actual audio token sent by the videoconferencing device 160. The audio token confirmation can be used along with/in addition to short-range communications, WiFi location tracking using an RSSI, or WiFi signature matching to determine and confirm the proximity of the computing device (e.g., computing device 102a) to the videoconferencing system (e.g., the videoconferencing device 160). Once proximity is determined, a user of the computing device (e.g., computing device 102a) can participate in/join a videoconference that is using and/or being conducted or hosted by the videoconferencing system (e.g., the videoconferencing device 160).

In some implementations, the copresence API 156 can use functions that yield an identifier for the videoconferencing device 160. A videoconference management API 172 included in the server 142a can use the identifier to communicate with the videoconferencing device 160 in order to query a status of the videoconferencing device 160 and obtain an indicator associated with the conference (e.g., a name for the conference) that the videoconferencing device 160 is being used in. In some implementations, the copresence API 156 can determine if the videoconferencing device 160 is already participating in or hosting a conference. In these cases, the copresence API 156, when confirming the proximity of the WiFi-enabled computing device to the videoconferencing device 160, can provide a user of the WiFi-enabled computing device with the indicator associated with the conference (e.g., a name for the conference) in the web browser UI 112.

A conferences API 154 can determine/discover one or more conferences that a user of a computing device is currently participating in and/or invited to participate in. For example, the computing device 102a can access the conferences API 154 included in the server 142a using the network 116. A user of the computing device 102a can interact with the conferences API 154 to schedule, set up, start, or join a conference (e.g., a videoconference). A calendar API 152 can provide information and data about one or more conferences that a user of a computing device (e.g., the computing device 102a) may have scheduled and included as calendar entries in a calendar associated with the user. The calendar API 152 can provide the user of the computing device (e.g., the computing device 102a) with an indicator associated with the conference (e.g., a name for the conference) for each of the one or more scheduled conferences in the web browser UI 112.

The server 142a can include a videoconferencing application 170 that includes a videoconference management API 172, a media session control 174, and a distributor 176. The videoconference management API 172 can provide a signaling interface for a videoconferencing platform or client running in a web browser application (e.g., the web browser application 110). The videoconference management API 172 can provide a signaling interface for a videoconferencing-enabled application 140 running on the computing device 102a. The videoconferencing client and/or the videoconferencing-enabled application 140 can communicate with and interface to the videoconferencing application 170.
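By way of illustration, the audio token confirmation described above can be pictured as a server-side comparison between the digitized token a computing device reports hearing and the token the server issued to the videoconferencing device. The token store and function names below are hypothetical, and a real system would presumably also bound a token's validity in time.

    import hmac

    ISSUED_TOKENS = {}  # videoconferencing device ID -> digitized token bytes

    def issue_token(device_id, token_bytes):
        # Record the token sent to the videoconferencing device for playback.
        ISSUED_TOKENS[device_id] = token_bytes

    def confirm_copresence(device_id, reported_token_bytes):
        # A computing device that heard the token and reported it correctly
        # is treated as co-located with the videoconferencing device.
        expected = ISSUED_TOKENS.get(device_id)
        return expected is not None and hmac.compare_digest(expected, reported_token_bytes)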
A media session control 174 can provide a media signaling channel between the videoconference management API 172 and the videoconferencing client. The media session control 174 can provide a media signaling channel between the videoconference management API 172 and the videoconferencing application 170. A distributor 176 can transform and distribute the media stream provided by a computing device (e.g., the computing device 102a) participating in a videoconference to the videoconferencing device 160 and to other computing devices that may also be participating in the videoconference, where the other computing devices can be located remote from the videoconferencing device 160 (e.g., the other computing devices are located outside of a room or designated area where the videoconferencing device 160 resides). For example, the distributor 176 can identify one or more elements of the media stream that can be sent to each computing device participating in a videoconference based on the bandwidth and capabilities of each computing device. A display device included in a remote computing device can display the video content of the media stream received by the remote computing device. An audio processor included in the remote computing device can receive the audio content of the media stream. The audio processor can provide the audio content to one or more speakers that are included in the remote computing device.

FIG. 1B is a block diagram showing an example flow of procedure calls, signals, and media in the example system 100 shown in FIG. 1A. For example, referring to FIG. 1B and FIG. 1A, a user of the computing device 102a requests information about videoconferencing. FIG. 1C shows an example of a pull-down menu 138 that can display indications of the information and data determined by the discovery APIs 150 and provided by the server 142a to the computing device 102a in response to the remote procedure calls 103. The computing device 102a can send/provide the information and data (content) displayed in the web browser UI 112 of a particular tab (e.g., tab 114a) to the videoconferencing device 160 for display on a display device 162 included in the videoconferencing system 158. The videoconferencing device 160 can display the content on the display device 162 during the videoconference.

In some implementations, a user can run the videoconferencing-enabled application 140 on the computing device 102a. The videoconferencing-enabled application 140 can display content on the display device 162 of the videoconferencing system 158 during the videoconference directly from one or more servers, without the content needing to be first downloaded to the computing device 102a and then encoded in real-time as a video stream.

In some implementations, a web browser application (e.g., the web browser application 110) can provide or "cast" a first tab or page of the web browser application (e.g., the tab 114a). In some implementations, the web browser application can include an extension that runs in the web browser application, where the extension provides a user interface (e.g., web browser UI 112) for initiating and controlling the casting of the first tab. In some implementations, the web browser application can provide a user interface (e.g., web browser UI 112) for initiating and controlling the casting of the first tab. In addition, for example, a Web Real-Time Communication (WebRTC) application program interface (API) can be used for browser-based real-time communications.
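By way of illustration, the per-participant adaptation performed by the distributor 176, described above, can be pictured as choosing, for each receiving device, the media stream variant that fits its reported bandwidth and capabilities. The variant table and selection rule below are invented for the illustration and are not the distributor's actual logic.

    # Hypothetical stream variants, listed from highest quality downward.
    VARIANTS = [
        {"name": "hd", "min_kbps": 2500, "needs_video": True},
        {"name": "sd", "min_kbps": 800, "needs_video": True},
        {"name": "audio_only", "min_kbps": 64, "needs_video": False},
    ]

    def select_variant(bandwidth_kbps, supports_video):
        # Pick the best variant the receiving device can sustain.
        for variant in VARIANTS:
            if bandwidth_kbps >= variant["min_kbps"] and (supports_video or not variant["needs_video"]):
                return variant["name"]
        return None  # the device cannot receive the stream at all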
A user of the computing device 102a can provide or "cast" the first tab (e.g., the tab 114a) for viewing on the display device 162 of the videoconferencing device 160 using the user interface (e.g., web browser UI 112) provided by the web browser application or the web browser application extension. In some implementations, a videoconferencing client interfaces with the web browser application 110. A user can select a videoconferencing icon 118 included in the web browser application 110 to project (provide) content requested by the videoconferencing-enabled application 140 to the videoconferencing system 158. The videoconferencing client makes/performs remote procedure calls 103 to the discovery APIs 150 by way of the network 116.

In some implementations, a videoconferencing client can be part of/included in an application (e.g., the videoconferencing-enabled application 140). The user of the computing device 102a can launch/run the application. A user interface for the application can include a videoconferencing icon that, when selected, can project (provide) content requested by the videoconferencing-enabled application 140 to the videoconferencing system 158. In some cases, the user of the computing device 102a and the other participants in a videoconference can see different views. For example, the other participants in the videoconference can view content requested by the videoconferencing-enabled application 140 on the display device 162 of the videoconferencing system 158 (e.g., a bar graph showing sales data) while the user of the computing device 102a can be viewing other content (e.g., speaker notes about the sales data).

Whether by a native application or by a web browser application, the discovery APIs 150 included in the server 142a can determine/discover one or more available videoconferencing options for a user. For example, the copresence API 156 can determine/discover videoconferencing systems proximate to the computing device 102a (e.g., the videoconferencing device 160). The calendar API 152 can identify one or more scheduled conferences included in a calendar associated with the user of the computing device 102a. The conferences API 154 can identify one or more conferences that the user of the computing device 102a may be currently participating in and/or that the user may be invited to participate in.

The computing device 102a can display the pull-down menu 138 on the display device 120 in response to receiving a selection of the videoconferencing icon 118. Menu entry 126a included in the pull-down menu 138 is indicative of a conference that the user of the computing device 102a may be currently participating in and/or invited to participate in. Menu entry 126b is indicative of a videoconferencing system that the copresence API 156 has determined is in proximity to the computing device 102a (e.g., in proximity to the videoconferencing device 160). Menu entry 126c is indicative of a calendar entry for a conference, where the calendar entry is in a calendar associated with the user of the computing device 102a. The example pull-down menu 138 can be displayed, as shown in FIG. 1A, in the tab 114a of the web browser application 110. In implementations where a videoconferencing client is part of/included in an application, a pull-down menu similar to the pull-down menu 138 can also be displayed in a user interface of the application. The menu entries 126a-c can be considered identifiers for respective meetings and/or videoconferences. The user can select one of the menu entries 126a-c in order to project (provide) content to the videoconferencing system 158.
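By way of illustration, the pull-down menu 138 aggregates three kinds of discovery results: conferences the user is participating in or invited to, videoconferencing systems in proximity, and calendar entries. In the sketch below each discovery API is stubbed out as a placeholder function returning invented names; the real APIs are server-side and are not specified at this level of detail.

    def conferences_api_lookup(user):
        # Placeholder for a remote procedure call to the conferences API.
        return ["Weekly sync"]

    def copresence_api_lookup(device):
        # Placeholder for a remote procedure call to the copresence API.
        return ["Conference room A"]

    def calendar_api_lookup(user):
        # Placeholder for a remote procedure call to the calendar API.
        return ["Design review at 3 pm"]

    def build_menu_entries(user, device):
        # Aggregate the three kinds of discovery results into entries
        # corresponding to menu entries 126a-c of the pull-down menu 138.
        entries = [("conference", name) for name in conferences_api_lookup(user)]
        entries += [("nearby_system", name) for name in copresence_api_lookup(device)]
        entries += [("calendar_entry", name) for name in calendar_api_lookup(user)]
        return entries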
For example, the user can select menu entry 126b in order to project (provide) content to the videoconferencing system 158. The computing device 102a makes one or more remote procedure calls 103 to the discovery APIs 150 in order to determine and authenticate the proximity of the computing device 102a to the videoconferencing device 160 as described with reference to FIG. 1A. As shown in FIG. 1C and referring to FIGS. 1A and 1B, the user can be running a browser-based application (e.g., the browser-based application 128) that displays a car image 180 (as an example image) in the tab 114a. By selecting the videoconferencing icon 118 while the car image 180 is displayed in the tab 114a, the computing device 102a, once authenticated to access the videoconferencing device 160 and participate in a videoconference hosted by/provided by the videoconferencing device 160, can present, provide, cast, or capture and mirror the information and data included in the tab 114a (e.g., the car image 180) to the videoconferencing device 160. The videoconferencing system 158 can display the information and data included in the tab 114a (e.g., the car image 180) on the display device 162. In some implementations, for example, a user can be running an application (e.g., the native application 144 or the natively operating application 146) that streams audio and video content to the display device 120 of the computing device 102a. The user can select a videoconferencing icon that can be displayed in a user interface of the application. The selection of the videoconferencing icon can launch a videoconference client that is included in or that is part of the application. The computing device 102a, once authenticated to access the videoconferencing device 160 and participate in a videoconference hosted by/provided by the videoconferencing device 160, can present, provide, cast, or capture and mirror the streaming audio and video content to the videoconferencing device 160. The videoconferencing system 158 can provide the video content to the display device 162 and can provide the audio content to one or more speakers included in the videoconferencing system 158. In some implementations, a user can choose to present, provide, cast, or capture and mirror the contents of the desktop displayed on the display device 120 included in the computing device 102a to the videoconferencing device 160. In some implementations, the application may allow the user to select a window from a plurality of windows displayed on the display device 120 of the computing device 102a to present, provide, cast, or capture and mirror to the videoconferencing device 160. Referring to FIG. 1B, for example, the computing device 102a can provide a signal 105 to and receive a signal 105 from the videoconference management API 172 using the network 116. The videoconference management API 172 can provide a signaling interface for the videoconference client. The videoconference management API 172 can provide a signal 107 to and receive a signal 107 from the media session control 174 in order to open/establish a media signaling channel between the videoconference management API 172 and the videoconference client. The media session control 174 can provide a signal 109 to and receive a signal 109 from the videoconferencing device 160 using the network 116 in order to open/establish a media signaling channel between the videoconference management API 172 and the videoconferencing device 160.
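The signaling flow just described can be summarized in the following illustrative sketch. The Channel abstraction and the message shapes are assumptions; only the ordering of signals 105, 107, and 109 follows the description above.

```typescript
// Hypothetical sketch of establishing the media signaling channel of
// FIG. 1B. The Channel interface and message contents are assumptions.

interface Channel {
  send(message: Record<string, unknown>): Promise<Record<string, unknown>>;
}

async function establishSession(
  clientToApi: Channel,     // signal 105: device 102a <-> management API 172
  apiToControl: Channel,    // signal 107: API 172 <-> media session control 174
  controlToDevice: Channel, // signal 109: control 174 <-> device 160
): Promise<void> {
  // The videoconference client asks the management API to open a session.
  const offer = await clientToApi.send({ type: "open-session" });
  // The management API hands the request to the media session control.
  const session = await apiToControl.send({ type: "create-channel", offer });
  // The media session control opens the channel to the videoconferencing
  // device 160 over the network 116.
  await controlToDevice.send({ type: "attach-endpoint", session });
}
```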
The computing device 102a can provide a media stream 117 that includes the information and data (content) for use by the videoconferencing device 160 to the distributor 176 included in the server 142a by way of the network 116. As described, the content can include image or streaming video data for display on the display device 162 and, in some cases, audio data for output on one or more speakers included in the videoconferencing system 158. The distributor 176 provides the media stream 117 that includes the image data (and in some cases the audio data) to the videoconferencing device 160 by way of the network 116. The videoconferencing device 160 provides the media stream 117 that includes the image data to the display device 162. In some cases, the videoconferencing device 160 provides the media stream 117 that includes the audio data to the one or more speakers included in the videoconferencing system 158. For example, the computing device 102a can provide videoconferencing control messages (commands and data) to the videoconferencing device 160 using the media signaling channel. The control messages can include, but are not limited to, render zoom, enable/disable muting, and pause/restart media stream. The render zoom control message can cause the videoconferencing device 160 to control the zooming of the image displayed on the display device 162 by controlling the content of a media stream 115 provided to the display device 162. The enable/disable muting control message can cause the videoconferencing device 160 to mute/unmute audio content of the videoconference by controlling a media stream 111 provided to one or more speakers and/or received from the microphone 164. In cases where the information and data (content) for use by the videoconferencing device 160 is streaming video and/or audio content, the pause/restart media stream message can cause the videoconferencing device 160 to pause and restart streaming content by pausing and restarting the media stream 115 provided to the display device 162 and by pausing and restarting the media stream 111 provided to one or more speakers included in the videoconferencing system 158. The camera 166 can provide video content 113 to the videoconferencing device 160. In some implementations, the network 116 can be a local area network (LAN). The computing device 102a, the server 142a, and the videoconferencing device 160 can communicate using the LAN. In these implementations, based on the use of the LAN, the quality of the media stream 117 may be greater than if one or more of the computing device 102a, the server 142a, and the videoconferencing device 160 were not connected to the LAN, but to another network (e.g., the Internet). In some implementations, the entries included in a pull-down menu displayed to a user of a computing device in response to selection of a videoconferencing icon by the user can be determined based on a sign-in status of a user. For example, a domain can be associated with a videoconference. The domain can be represented using a domain name that identifies a network resource or web site. If a user is signed into a first domain using a first user account, the entries included in the pull-down menu can be indicative of conferences associated with the first domain and of videoconferencing systems associated with the first domain. In addition, or in the alternative, the entries included in the pull-down menu can be indicative of calendar entries associated with the first user account on the first domain.
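Referring back to the videoconferencing control messages described above (render zoom, enable/disable muting, and pause/restart media stream), one hypothetical encoding of those messages is sketched below; the exact message shapes and field names are assumptions introduced for illustration.

```typescript
// Hypothetical encoding of the videoconferencing control messages sent
// over the media signaling channel. Message shapes are assumptions.

type ControlMessage =
  | { type: "render-zoom"; scale: number }   // zoom image on display 162
  | { type: "set-muting"; muted: boolean }   // mute/unmute audio content
  | { type: "media-stream"; action: "pause" | "restart" };

interface SignalingChannel {
  send(message: ControlMessage): void;
}

// Example: pause streaming content on the videoconferencing device 160.
function pausePresentation(channel: SignalingChannel): void {
  channel.send({ type: "media-stream", action: "pause" });
}
```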
If a user has multiple user accounts on the first domain, signing into a second user account on the first domain can result in displaying entries included in the pull-down menu that can be indicative of calendar entries associated with the second user account on the first domain. If a user signs into a second domain, the entries included in the pull-down menu will be indicative of conferences associated with the second domain and of videoconferencing systems associated with the second domain. In addition, or in the alternative, the entries included in the pull-down menu can be indicative of calendar entries associated with the user account on the second domain. In some implementations, a user may be signed into multiple user accounts at the same time that are on different domains. In some cases, when the user attempts to create/join a conference, the first account that matches the domain of the conference will be used to join the conference. In some cases, when the user attempts to create/join a conference, the user may be prompted to select which of the multiple accounts they would like to use when creating/joining a conference. In some cases, if the user has no accounts in a domain, a display device included in a computing device can display a user interface to confirm an account that should be used to create/join a conference. For example, a user may have a work user account and a personal user account. The user may select the work user account when creating/joining a conference for work. The user may select the personal user account when creating/joining a conference for personal use. In some cases, a user may not be a member of a domain (or may not be signed into a domain) associated with a videoconference. The user may want to present, provide, cast, or capture and mirror content on a computing device of the user to the videoconference. In these cases, in order for the user to be able to provide content to the videoconference, the user can initiate “knocking”. For example, a user computing device can send a request message to a host (or presenter) computing device (a computing device of a host of or a presenter in the videoconference). The host computing device can present, provide, cast, or capture and mirror content to the videoconference. The host computing device can receive the request message and display the request message to the host. The host can approve the request and the host computing device can send an approval message to the user computing device. Once the approval message is received, the user computing device can present, provide, cast, or capture and mirror content on the user computing device to the videoconference. FIG. 2A is a diagram that shows an example user interface (UI) 202 on a display device (e.g., the display device 162) included in a videoconferencing system (e.g., the videoconferencing system 158). Also displayed on the display device 162 is a current time 216. One or more users (participants) who would like to participate in a meeting can enter a conference room where the videoconferencing system 158 is located at approximately 3:55 pm and see the UI 202 displayed on the display device 162. The UI 202 provides indications of one or more meetings that are scheduled for the conference room that one or more of the users can choose to participate in (e.g., provide content to as described above with reference to FIGS. 1A, 1B, and 1C).
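Referring back to the “knocking” exchange described above, the request/approval flow can be sketched as follows. The transport and the message fields are assumptions; only the request-then-approve ordering follows the description.

```typescript
// Hypothetical sketch of "knocking": a user computing device requests
// permission from a host computing device before casting content.

interface KnockRequest {
  fromUserId: string;
  conferenceId: string;
}

interface KnockResponse {
  approved: boolean;
  accessCode?: string; // optional code/password, per the approval message
}

interface HostEndpoint {
  deliver(request: KnockRequest): Promise<KnockResponse>;
}

// The user computing device sends a request message to the host
// computing device; casting is only enabled once approval arrives.
async function knock(
  host: HostEndpoint,
  request: KnockRequest,
): Promise<boolean> {
  const response = await host.deliver(request); // host sees and approves
  return response.approved;
}
```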
FIG. 2B is a diagram that shows an example videoconference 200 that includes a first participant 210, a second participant 212, and a third participant 214. The videoconference 200 includes the use of the videoconferencing system 158. In the example videoconference 200, the first participant 210 and the second participant 212 are located proximate to the videoconferencing system 158 (e.g., the first participant 210 and the second participant 212 are sitting together in a conference room (e.g., conference room 1900-03L) where the videoconferencing system 158 is located). The third participant 214 is at a location remote from the conference room and the videoconferencing system 158. For example, the third participant 214 can be located in an office outside of the conference room but in the same building as the conference room, the third participant 214 can be located in another building, or the third participant 214 can be working from home. The first participant 210 can be a user of the computing device 102a as shown in FIGS. 1A, 1B, and 1C. The first participant 210 can be a host or initiator of the videoconference 200. Referring to FIG. 1C, the first participant 210 can select the menu entry 126b in order to provide content displayed on the display device 120 (e.g., the car image 180 displayed in the tab 114a) to the videoconferencing system 158. As described, the menu entry 126b is provided on the pull-down menu 138 based on the detected proximity of the videoconferencing system 158 to the computing device 102a. In response to the selection of the menu entry 126b, the videoconferencing system 158 can display the UI 202 on the display device 162. For example, meeting menu entry 220 indicates that a meeting is scheduled for the conference room (e.g., conference room 1900-03L) at 4:00 pm (in five minutes). The first participant 210 can select the meeting menu entry 220 using, for example, the remote control 168 included in the videoconferencing system 158. Based on the selection of the meeting menu entry 220, the content displayed on the display device 120 (e.g., the car image 180 displayed in the tab 114a) is provided (casted, captured and mirrored) to the videoconferencing device 160 and displayed on the display device 162 as videoconferencing content 218. The first participant 210 has now joined a meeting (e.g., Transportation-Types) scheduled for the conference room (e.g., conference room 1900-03L) and the use of the videoconferencing system 158. In some implementations, the first participant 210 may choose not to join a scheduled meeting for the conference room. The first participant 210 can select the meeting menu entry 222 using, for example, the remote control 168 included in the videoconferencing system 158. Based on the selection of the meeting menu entry 222, the first participant 210 can start a different or new meeting. The content displayed on the display device 120 (e.g., the car image 180 displayed in the tab 114a) is provided (casted, captured and mirrored) to the videoconferencing device 160 and displayed on the display device 162 as the videoconferencing content 218. The meeting menu entry 220 and the meeting menu entry 222 can be considered identifiers for respective meetings. The first participant 210 can select a cancel entry 224 to exit the UI 202. In some implementations, a display device (e.g., the display device 120) of a computing device (e.g., the computing device 102a) that is proximate to the videoconferencing system 158 can display a user interface similar to the UI 202. In some cases, the display device 162 may not also display the UI 202.
In some cases, the display device 162 may also display the UI 202. In these implementations, the user of the computing device (e.g., the first participant 210) can make meeting selections using the computing device without the need to interact with the videoconferencing system 158, which would necessitate locating and using the remote control 168. The display device 162 includes an indication of a videoconferencing system name 204. A videoconferencing system can have associated with it a videoconferencing system name that can be permanently assigned to/associated with the videoconferencing system. In some implementations, the videoconferencing system name can be a name for a conference room where the videoconferencing system is located. The display device 162 includes an indication of a videoconferencing system URL 206. In addition, or in the alternative, a videoconferencing system can have associated with it a videoconferencing system URL. For example, for users/participants in a videoconference that may not be signed into the domain that includes the videoconferencing system 158, the videoconferencing system URL 206 can provide a webpage that the user can access in order to be able to participate in the videoconference. The videoconferencing system URL 206 can provide a mechanism for the user to sign into the domain in order to provide content to the videoconference (allow access to the videoconferencing system 158). In some implementations, referring to the description of “knocking” above, the user may receive a code or password included in the approval message that, if entered into a UI provided by the webpage, allows the user to access and provide content to the videoconference. Users invited to a meeting (e.g., Transportation-Types) can view the content presented at the meeting by joining the meeting. In some implementations, a user can be signed into the same domain as the videoconferencing system. The user may have a calendar entry for the meeting. Selecting the calendar entry for the meeting can place the user as a participant in the meeting. As a participant, the user can view videoconferencing content. In some implementations, a user can be provided with the videoconferencing system URL 206 that can enable the user to view the content presented at the meeting. One or more additional authentication steps may be needed in order for a user to provide content to the videoconference (e.g., knocking). For example, the third participant 214 can view the videoconferencing content 218 on a display device 232 included in a computing device 230. The computing device 240 can be determined to be proximate to the videoconferencing system 158 using one or more of the systems and processes described herein. As shown in FIG. 2B, the second participant 212 can have displayed on a display device 242 included in a computing device 240 content different from the videoconferencing content 218. In some cases, though proximate to the videoconferencing system 158, the second participant 212 may not need to or want to provide content to the videoconference. The second participant 212 can be a participant in the videoconference based on the presence of the computing device 240 proximate to the videoconferencing system 158. FIG. 2C is a diagram that shows an example of a pop-up menu 250 that can display indications of the information and data determined by, referring to FIG. 1B, the discovery APIs 150 and provided by the server 142a to the computing device 240.
Menu entry 252a is indicative of a conference that the second participant 212 may be invited to participate in. Menu entry 252b is indicative of the videoconferencing system 158 that the copresence API 156 has determined is in proximity to the computing device 240. The menu entries 252a-b can be considered identifiers for respective meetings. The second participant 212 can present, provide, cast, or capture and mirror information and data being displayed on the display device 242 (e.g., a motorcycle image 256) to the videoconferencing device 160 based on the detected proximity of the computing device 240 to the videoconferencing system 158 (the videoconferencing device 160). In this example, the second participant 212 may not require pre-authorization to provide content to the videoconference 200. For example, the second participant 212 is running a videoconferencing-enabled application on the computing device 240. The second participant 212 can select a videoconferencing icon 254. The computing device 240 can display the pop-up menu 250 on the display device 242 in response to the selection of the videoconferencing icon 254. The second participant can select the menu entry 252b to provide the information and data being displayed on the display device 242 (e.g., a motorcycle image 256 (as an example image)) to the videoconferencing device 160 in source form. For example, a URL to the content can be provided/sent to the videoconferencing device 160 as well as to the other participants in the videoconference. In some cases, the information and data being displayed on the display device 242 can be different from the content provided/sent to the videoconferencing device 160 as well as to the other participants in the videoconference. For example, the content can be provided/sent to the videoconferencing device 160, as well as to the other participants in the videoconference, by a back-end server. FIG. 2D is a diagram that shows the example videoconference 200 where the second participant 212 provides videoconferencing content 260. The display device 162 can display the videoconferencing content 260 provided, casted, or captured and mirrored by the computing device 240 to the videoconferencing device 160. As shown in FIG. 2D, the display device 120 of the computing device 102a can continue to display the car image 180 in the tab 114a. The third participant 214 now can view the videoconferencing content 260 on the display device 232 of the computing device 230. In addition, since this does not kick the first participant 210 out of the videoconference, both content items (videoconferencing content 260 and videoconferencing content 218) can be presented in the videoconference. In some implementations, a videoconferencing system can include multiple display devices. In these implementations, the videoconferencing content 218 can be displayed on one display device while the videoconferencing content 260 can be displayed on another display device. In some implementations, a participant in the videoconference may select which videoconference content they would like to view on a display device included in a computing device. FIG. 2E shows an example of a pull-down menu 270 that can display indications of information and data determined by, referring to FIG. 1B, the discovery APIs 150 and provided by the server 142a to the computing device 230. Menu entry 272a and menu entry 272c are indicative of named devices (e.g., a TV located in a bedroom and a TV located in a recreation room) that are on the same network as the computing device 230.
In addition, menu entry 272b can be indicative of a conference that the third participant 214 has as an entry on a calendar for the third participant 214. Menu entry 272d is indicative of a videoconference that the third participant 214 has been authenticated/authorized to provide content to as described herein. For example, the menu entry 272d is indicative of the videoconference 200, where the third participant 214 can provide content to the videoconferencing device 160. The menu entry 272b and menu entry 272d can be considered identifiers for respective meetings. For example, the third participant 214 can be running a browser-based application that displays a bicycle image 274 in a tab 276 of a web browser application. The third participant 214 can select a videoconferencing icon 278. Based on the selection, a videoconference client can project (provide) content to the videoconferencing system 158 while the bicycle image 274 (as an example image) is displayed in the tab 276 of the web browser application. The computing device 230 can present, provide, cast, or capture and mirror the information and data included in the tab 276 (e.g., the bicycle image 274) to the videoconferencing device 160. The videoconferencing system 158 can display the information and data included in the tab 276 (e.g., the bicycle image 274) on the display device 162. FIG. 2F is a diagram that shows the example videoconference 200 where the third participant 214 provides videoconferencing content 280. The display device 162 can display the videoconferencing content 280 provided (casted, or captured and mirrored) by the computing device 230 to the videoconferencing device 160. As shown in FIG. 2F, the display device 120 of the computing device 102a can continue to display the car image 180 in the tab 114a. The display device 242 of the computing device 240 can continue to display the motorcycle image 256 in the videoconferencing-enabled application. FIG. 2G is a diagram that shows the content displayed on the display device 120 (e.g., the car image 180 displayed in the tab 114a) further including an alert window 290. The computing device 102a can display the alert window 290 when, for example, the first participant 210 receives an incoming email or message. As shown in FIG. 2G (and referring to FIG. 2B), the computing device 102a can display the alert window 290 while the computing device 102a is providing (casting, or capturing and mirroring) the information and data included in the tab 114a (e.g., the car image 180). In the example shown in FIG. 2G, the computing device 102a continues to provide the content of the tab 114a (e.g., the car image 180) to the videoconferencing device 160 without including the content of the alert window 290 as part of the content provided to the videoconferencing device 160. FIG. 2H is a diagram that shows the first participant 210, referring to FIG. 1A, navigating to/selecting the tab 114b of the web browser application 110. The first participant 210 can select another tab (e.g., tab 114b) in order to interact with another browser-based application or to view other information and data while the information and data (the contents) displayed in the tab 114a (e.g., the car image 180) remain displayed on the display device 162 included in the videoconferencing system 158. The first participant 210 changing the window or tab that they are currently viewing to view or interact with another tab of the web browser application may not change what is being projected/provided to the display device 162.
For example, the first participant 210 may access information and data related to the topic of the videoconference 200 (e.g., car model data 292) from a browser-based application 294 running/executing in the tab 114b of the web browser application 110. In the example shown in FIG. 2H, the computing device 102a continues to provide the content of the tab 114a (e.g., the car image 180) to the videoconferencing device 160 while the first participant 210 navigates to the tab 114b and runs the browser-based application. The first participant 210 can interact with content included in the tab 114a (e.g., they can provide input to and view output in the tab 114a using the computing device 102a), the interactions being displayed/mirrored on the display device 162 of the videoconferencing system 158. In addition, the first participant 210 can switch/navigate to the tab 114b without providing (casting, or capturing and mirroring) the information and data (the contents) of the tab 114b for viewing on the display device 162. The user can switch from the tab 114a to the tab 114b in the web browser application 110 while the contents of the tab 114a continue to be provided to the videoconferencing device 160 and displayed on the display device 162. The first participant 210 can interact with a browser-based application running/executing in the tab 114b of the web browser application 110 without the contents of the tab 114b and the interactions occurring within the tab 114b being displayed on the display device 162 of the videoconferencing system 158. This can allow the first participant 210 to access/interact with other content without the participants of the videoconference being able to view the other content. For example, the first participant 210 may display the car model data 292 in the tab 114b for a car whose image is the car image 180 displayed in the tab 114a of the web browser application 110. The first participant 210 can display the car model data 292 while the second participant 212 and the third participant 214 view the videoconferencing content 218. The first participant 210 can access the car model data 292, which may be considered confidential, for use during the videoconference without projecting/providing the car model data 292 to the videoconferencing device 160. Referring to FIGS. 1C and 2A, the computing device 102a can display the pull-down menu 138 on the display device 120 in response to receiving a selection of the videoconferencing icon 118. The menu entry 126b indicates that the videoconferencing system 158 is in proximity to the computing device 102a. The first participant 210 can select the menu entry 126b. In response to the selection of the menu entry 126b, the videoconferencing system 158 can display the UI 202 on the display device 162. The meeting menu entry 220 indicates that a meeting is scheduled for the conference room (e.g., conference room 1900-03L) at 4:00 pm (in five minutes). In some cases, the first participant 210 can select the meeting menu entry 222. For example, the first participant 210 can select the meeting menu entry 220 using the remote control 168 included in the videoconferencing system 158. FIG. 2I is a diagram that shows an example UI 282 displayed on the display device 162 that allows a user to enter a name for a new videoconference. For example, in response to the selection of the meeting menu entry 222 by the first participant 210, the display device 162 can display the UI 282. The first participant can enter a name for the new meeting in the input box 284. In some cases, the meeting name can include/specify a domain (e.g., meeting@domain, domain/meeting).
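The two example meeting-name forms noted above (meeting@domain and domain/meeting) could be parsed as in the following hypothetical sketch; the exact grammar of meeting names is an assumption and other forms are possible.

```typescript
// Hypothetical parser for meeting names that optionally specify a domain
// in either of the two example forms: "meeting@domain" or "domain/meeting".

interface MeetingName {
  meeting: string;
  domain?: string;
}

function parseMeetingName(input: string): MeetingName {
  const at = input.indexOf("@");
  if (at > 0) {
    // "meeting@domain" form
    return { meeting: input.slice(0, at), domain: input.slice(at + 1) };
  }
  const slash = input.indexOf("/");
  if (slash > 0) {
    // "domain/meeting" form
    return { domain: input.slice(0, slash), meeting: input.slice(slash + 1) };
  }
  return { meeting: input }; // no domain specified
}
```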
As described herein, users in proximity to the videoconferencing system 158 can automatically join and provide content to the videoconference. As described herein, other users can participate by way of one or more processes that include invitations and knocking. In some implementations as described herein, the first participant 210 can use a keyboard included on the remote control 168 to enter the meeting name in the input box 284. In other implementations as described herein, the display device (e.g., the display device 120) of a computing device (e.g., the computing device 102a) that is proximate to the videoconferencing system 158 can display a user interface similar to the UI 282. In some cases, the display device 162 may not also display the UI 282. In some cases, the display device 162 may also display the UI 282. In these implementations, the user of the computing device (e.g., the first participant 210) can enter a meeting name using a keyboard or other type of input device included in the computing device without the need to interact with the videoconferencing system 158, which would necessitate locating and using the remote control 168. The first participant 210 can select a cancel menu option 288 to exit the UI 282. The first participant 210 can select a cast menu option 286 to cast content on a computing device of the first participant 210 to the new videoconference. FIG. 3A is a diagram that shows an example videoconference 300 that includes a first participant 310, a second participant 312, and a third participant 314. In the example videoconference 300, the first participant 310, the second participant 312, and the third participant 314 are located proximate to a videoconferencing system 358 (e.g., the first participant 310, the second participant 312, and the third participant 314 are sitting together in a conference room (e.g., conference room 2100-08L) where the videoconferencing system 358 is located). For example, the first participant 310 can be a user of the computing device 102b as shown in FIGS. 1A, 1B, and 1C. The videoconferencing system 358 can be similar to the videoconferencing system 158. The first participant 310 can be a host or initiator of the videoconference 300. The first participant 310 can enter the conference room with the computing device 102b. FIG. 3B shows an example of a pull-down menu 338 that can display indications of the information and data determined by, referring to FIGS. 1A and 1B, the discovery APIs 150 and provided by the server 142a to the computing device 102b. In the example shown in FIG. 3B, the computing device 102b can run a web browser application. The computing device 102b can send/provide the information and data (content) displayed in a web browser UI of a particular tab (e.g., tab 314a) to a videoconferencing device 360 for display on a display device 362 included in the videoconferencing system 358. In some implementations, the system UI of the computing device 102b can send/provide the information and data (content) to the videoconferencing device 360. The computing device 102b can display the pull-down menu 338 in the display area 124 in response to receiving a selection of a videoconferencing icon 318. The menu entry 326a indicates that the videoconferencing system 358 is in proximity to the computing device 102b. For example, the menu entry 326a can be a name for the conference room that the videoconferencing system 358 resides in. The first participant 310 can select the menu entry 326a.
In cases where there is no meeting scheduled for the conference room and the videoconferencing system residing in the conference room, and where there is no meeting scheduled within a certain time period of the time when the menu entry 326a is selected, the selection of the menu entry 326a can automatically launch/start the videoconference 300 that makes use of the videoconferencing system 358. For example, at 6:00 pm the first participant 310, the second participant 312, and the third participant 314 are sitting together in a conference room (e.g., conference room 2100-08L) where the videoconferencing system 358 is located. The first participant 310 selects the menu entry 326a. A meeting was scheduled for the videoconferencing system 358 in the conference room (e.g., conference room 2100-08L) from 4:00 pm to 5:00 pm but no other meetings are scheduled for the remainder of the day. The selection of the menu entry 326a automatically starts/launches the videoconference 300. FIG. 3C is a diagram that shows an example UI 330 displayed in the display area 124 that allows a user to enter a name for a new videoconference. In some implementations, when the selection of the menu entry 326a automatically starts/launches the videoconference 300, the UI 330 can be displayed in the display area 124 to allow the first participant 310 to enter a name for the new videoconference in the input box 332. The first participant 310 can select a cast button 334 to start the videoconference on the videoconferencing system 358. As described herein, users in proximity to the videoconferencing system 358 can automatically join and provide content to the videoconference. As described herein, other users can participate by way of one or more processes that include invitations and knocking. In some implementations, as described with reference to FIGS. 3A-C, one or more individuals (e.g., the first participant 310, the second participant 312, and the third participant 314) can walk into (enter) a conference room that includes a videoconferencing system (e.g., the videoconferencing system 358) and easily start a videoconference on an idle videoconferencing system (e.g., the videoconferencing system 358). In these implementations, the videoconference can default to having no audio input (e.g., a microphone 364 is muted or not activated) and can default to disabling a camera 366 (e.g., the camera 366 is not activated). The display device 362 can display a microphone indicator 340 with a slash or line through the microphone indicator 340 indicating that the microphone is muted. The display device 362 can display a camera indicator 342 with a slash or line through the camera indicator 342 indicating that the camera is disabled. Muting the microphone 364 and disabling the camera 366 can allow the first participant 310 to just present, provide, cast, or capture and mirror the content of the information and data included in the display area 124. In these implementations, the display device 362 can be used to display/project content to the first participant 310, the second participant 312, and the third participant 314 located in the conference room without creating/allowing the possibility of being able to remotely activate the camera 366 and/or the microphone 364 for the purpose of eavesdropping. The one or more individuals can use a remote control 368 when starting the videoconference on an idle videoconferencing system. The videoconference can enable audio output. For example, the first participant 310 can share content that includes audio content that everyone in the videoconference can hear.
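Referring back to the automatic launch of a videoconference on an idle videoconferencing system, the scheduling check and the privacy defaults can be sketched as follows. The schedule representation and the 30-minute guard window are assumptions introduced only for illustration; the actual "certain time period" is not specified above.

```typescript
// Hypothetical sketch of the idle-room auto-start check and the
// default microphone/camera state described above.

interface ScheduledMeeting {
  startMs: number;
  endMs: number;
}

const GUARD_WINDOW_MS = 30 * 60 * 1000; // assumed "certain time period"

function canAutoStart(schedule: ScheduledMeeting[], nowMs: number): boolean {
  return schedule.every(
    (m) =>
      // no meeting currently in progress ...
      !(nowMs >= m.startMs && nowMs < m.endMs) &&
      // ... and none starting within the guard window
      !(m.startMs >= nowMs && m.startMs - nowMs < GUARD_WINDOW_MS),
  );
}

interface RoomState {
  microphoneMuted: boolean;
  cameraEnabled: boolean;
}

// New ad-hoc videoconferences default to a muted microphone 364 and a
// disabled camera 366, ruling out remote eavesdropping.
function defaultRoomState(): RoomState {
  return { microphoneMuted: true, cameraEnabled: false };
}
```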
Audio input from the videoconferencing system 358, however, may not be captured if, for example, the videoconference was started from another device (e.g., a device other than the computing device 102b). In some implementations, data deduplication can prevent (eliminate) duplicate copies of the same meeting from being displayed to a user in a pull-down menu of target conferences and/or devices. For example, referring to FIGS. 1C, 2A, and 2B, the first participant 210 can have the Transportation-Types meeting (the meeting indicated by meeting menu entry 220) on a calendar for the first participant 210. As such, referring to FIG. 1C, in addition to the menu entry 126b indicative of the proximity of the computing device 102a to the videoconferencing system 158, an additional menu entry for the Transportation-Types meeting could also be included in the pull-down menu 138. In this case, the menu entry 126b and the calendar entry for the Transportation-Types meeting are related to the same meeting. In this example, the menu entry 126b indicative of the proximity of the computing device 102a to the videoconferencing system 158 was included in the pull-down menu 138 and an entry for the Transportation-Types meeting was not included. In another example, the menu entry 126b indicative of the proximity of the computing device 102a to the videoconferencing system 158 may not be included in the pull-down menu 138 and an entry for the Transportation-Types meeting may be included. In one case, a user with a computing device may be proximate to a videoconferencing system and may also have a videoconference scheduled on a calendar for the user at the same time. In this case, since the user cannot be in two different places at the same time, a pull-down menu listing target meetings and videoconferencing devices can list the videoconferencing system higher in the list than the videoconference scheduled on the calendar for the user. For example, referring to FIGS. 1C, 2A, and 2B, the first participant 210 can have a weekly meeting with Katie scheduled on a calendar for the first participant 210 as indicated by menu entry 126c. The weekly meeting with Katie may be scheduled at the same time as the Transportation-Types meeting being held in the conference room 1900-03L. The menu entry 126b is indicative of the proximity of the computing device 102a to the videoconferencing system 158 located in the conference room (conference room 1900-03L) where the Transportation-Types meeting is being held. In this case, the menu entry 126b is placed higher in the list of target meetings and videoconferencing devices than the weekly meeting with Katie. FIG. 4 is a flowchart that illustrates a first method 400 of providing content to a videoconference. In some implementations, the systems described herein can implement the method 400. For example, the method 400 can be described referring to FIGS. 1A-C, 2A-I, and 3A-C. Content in an application executing on a computing device is displayed on a display device included in the computing device (block 402). For example, the user can be running a browser-based application (e.g., the browser-based application 128) that displays the car image 180 in the tab 114a. It is determined that the computing device is proximate to a videoconferencing system (block 404). For example, the copresence API 156 can determine/discover videoconferencing systems proximate to the computing device 102a. The videoconferencing systems in proximity to the computing device can be determined/discovered using one or more of the processes disclosed herein.
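Referring back to the deduplication and ordering of menu entries described above with reference to FIGS. 1C, 2A, and 2B, one hypothetical sketch of that logic follows; the entry types and the rank table are assumptions and other orderings are possible.

```typescript
// Hypothetical sketch: entries that resolve to the same meeting are
// collapsed, and a proximate videoconferencing system outranks a
// calendar entry scheduled for the same time.

interface TargetEntry {
  meetingId: string; // same meeting id => duplicate
  source: "proximate-system" | "active-conference" | "calendar";
  label: string;
}

const RANK: Record<TargetEntry["source"], number> = {
  "proximate-system": 0, // listed highest, per the example above
  "active-conference": 1,
  calendar: 2,
};

function dedupeAndOrder(entries: TargetEntry[]): TargetEntry[] {
  const best = new Map<string, TargetEntry>();
  for (const entry of entries) {
    const current = best.get(entry.meetingId);
    if (!current || RANK[entry.source] < RANK[current.source]) {
      best.set(entry.meetingId, entry); // keep the higher-ranked duplicate
    }
  }
  return [...best.values()].sort((a, b) => RANK[a.source] - RANK[b.source]);
}
```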
At least one identifier associated with a videoconference is displayed in a user interface on the display device (block 406). For example, the pull-down menu 138 can display the at least one identifier associated with a videoconference (e.g., menu entry 126b). The at least one identifier can be representative of information and data determined by the discovery APIs 150 and provided by the server 142a to the computing device 102a in response to the remote procedure calls 103. A selection of the at least one identifier is received (block 408). For example, the selection of the menu entry 126b can be received. The videoconference is initiated on the videoconferencing system in response to receiving the selection of the at least one identifier (block 410). Content is provided for display on a display device included in the videoconferencing system (block 412). For example, the content displayed on the display device 120 (e.g., the car image 180 displayed in the tab 114a) is provided (casted, captured and mirrored) to the videoconferencing device 160 and displayed on the display device 162 as videoconferencing content 218. FIG. 5 shows an example of a generic computer device 500 and a generic mobile computer device 550, which may be used with the techniques described here. Computing device 500 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Computing device 550 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, and other similar computing devices. The components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document. Computing device 500 includes a processor 502, memory 504, a storage device 506, a high-speed interface 508 connecting to memory 504 and high-speed expansion ports 510, and a low speed interface 512 connecting to low speed bus 514 and storage device 506. Each of the components 502, 504, 506, 508, 510, and 512, are interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate. The processor 502 can process instructions for execution within the computing device 500, including instructions stored in the memory 504 or on the storage device 506 to display graphical information for a GUI on an external input/output device, such as display 516 coupled to high speed interface 508. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 500 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system). The memory 504 stores information within the computing device 500. In one implementation, the memory 504 is a volatile memory unit or units. In another implementation, the memory 504 is a non-volatile memory unit or units. The memory 504 may also be another form of computer-readable medium, such as a magnetic or optical disk. The storage device 506 is capable of providing mass storage for the computing device 500.
In one implementation, the storage device 506 may be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. A computer program product can be tangibly embodied in an information carrier. The computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 504, the storage device 506, or memory on processor 502. The high speed controller 508 manages bandwidth-intensive operations for the computing device 500, while the low speed controller 512 manages lower bandwidth-intensive operations. Such allocation of functions is exemplary only. In one implementation, the high-speed controller 508 is coupled to memory 504, display 516 (e.g., through a graphics processor or accelerator), and to high-speed expansion ports 510, which may accept various expansion cards (not shown). In the implementation, low-speed controller 512 is coupled to storage device 506 and low-speed expansion port 514. The low-speed expansion port, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter. The computing device 500 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 520, or multiple times in a group of such servers. It may also be implemented as part of a rack server system 524. In addition, it may be implemented in a personal computer such as a laptop computer 522. Alternatively, components from computing device 500 may be combined with other components in a mobile device (not shown), such as device 550. Each of such devices may contain one or more of computing device 500, 550, and an entire system may be made up of multiple computing devices 500, 550 communicating with each other. Computing device 550 includes a processor 552, memory 564, an input/output device such as a display 554, a communication interface 566, and a transceiver 568, among other components. The device 550 may also be provided with a storage device, such as a microdrive or other device, to provide additional storage. Each of the components 550, 552, 564, 554, 566, and 568, are interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate. The processor 552 can execute instructions within the computing device 550, including instructions stored in the memory 564. The processor may be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor may provide, for example, for coordination of the other components of the device 550, such as control of user interfaces, applications run by device 550, and wireless communication by device 550. Processor 552 may communicate with a user through control interface 558 and display interface 556 coupled to a display 554. The display 554 may be, for example, a TFT LCD (Thin-Film-Transistor Liquid Crystal Display) or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology.
The display interface 556 may comprise appropriate circuitry for driving the display 554 to present graphical and other information to a user. The control interface 558 may receive commands from a user and convert them for submission to the processor 552. In addition, an external interface 562 may be provided in communication with processor 552, so as to enable near area communication of device 550 with other devices. External interface 562 may provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces may also be used. The memory 564 stores information within the computing device 550. The memory 564 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. Expansion memory 574 may also be provided and connected to device 550 through expansion interface 572, which may include, for example, a SIMM (Single In Line Memory Module) card interface. Such expansion memory 574 may provide extra storage space for device 550, or may also store applications or other information for device 550. Specifically, expansion memory 574 may include instructions to carry out or supplement the processes described above, and may include secure information also. Thus, for example, expansion memory 574 may be provided as a security module for device 550, and may be programmed with instructions that permit secure use of device 550. In addition, secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner. The memory may include, for example, flash memory and/or NVRAM memory, as discussed below. In one implementation, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 564, expansion memory 574, or memory on processor 552, that may be received, for example, over transceiver 568 or external interface 562. Device 550 may communicate wirelessly through communication interface 566, which may include digital signal processing circuitry where necessary. Communication interface 566 may provide for communications under various modes or protocols, such as GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others. Such communication may occur, for example, through radio-frequency transceiver 568. In addition, short-range communication may occur, such as using a Bluetooth, WiFi, or other such transceiver (not shown). In addition, GPS (Global Positioning System) receiver module 570 may provide additional navigation- and location-related wireless data to device 550, which may be used as appropriate by applications running on device 550. Device 550 may also communicate audibly using audio codec 560, which may receive spoken information from a user and convert it to usable digital information. Audio codec 560 may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of device 550. Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.) and may also include sound generated by applications operating on device 550. The computing device 550 may be implemented in a number of different forms, as shown in the figure.
For example, it may be implemented as a cellular telephone 580. It may also be implemented as part of a smart phone 582, personal digital assistant, or other similar mobile device. Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor. To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input. The systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), and the Internet. The computing system can include clients and servers. A client and server are generally remote from each other and can interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
In situations in which the systems and methods discussed herein collect personal information about users, or may make use of personal information, the users may be provided with an opportunity to control whether programs or features collect user information (e.g., information about a user's social network, social actions or activities, profession, a user's preferences, or a user's current location), or to control whether and/or how to receive content from the content server that may be more relevant to the user. In addition, certain data may be treated in one or more ways before it is stored or used, so that personally identifiable information is removed. For example, a user's identity may be treated so that no personally identifiable information can be determined for the user, or a user's geographic location may be generalized where location information is obtained (such as to a city, ZIP code, or state level), so that a particular location of a user cannot be determined. Thus, the user may have control over how information is collected about the user and used by a content server. A number of embodiments have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the specification. In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. In addition, other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Accordingly, other embodiments are within the scope of the following claims.
11861154
DETAILED DESCRIPTION Throughout the following discussion, numerous references will be made regarding servers, services, interfaces, databases, engines, modules, clients, peers, portals, platforms, or other systems formed from computing devices. It should be appreciated that the use of such terms is deemed to represent one or more computing devices having at least one processor (e.g., ASIC, FPGA, DSP, x86, ARM, ColdFire, GPU, multi-core processors, etc.) configured to execute software instructions stored on a computer readable tangible, non-transitory medium (e.g., hard drive, solid state drive, RAM, flash, ROM, etc.). For example, a server can include one or more computers operating as a web server, database server, or other type of computer server in a manner to fulfill described roles, responsibilities, or functions. One should further appreciate the disclosed computer-based algorithms, processes, methods, or other types of instruction sets can be embodied as a computer program product comprising a non-transitory, tangible computer readable media storing the instructions that cause a processor to execute the disclosed steps. The various servers, systems, databases, or interfaces can exchange data using standardized protocols or algorithms, possibly based on HTTP, HTTPS, AES, public-private key exchanges, web service APIs, known financial transaction protocols, or other electronic information exchanging methods. Data exchanges can be conducted over a packet-switched network, the Internet, LAN, WAN, VPN, cellular, or other type of packet switched network. The following discussion provides many example embodiments of the inventive subject matter. Although each embodiment represents a single combination of inventive elements, the inventive subject matter is considered to include all possible combinations of the disclosed elements. Thus if one embodiment comprises elements A, B, and C, and a second embodiment comprises elements B and D, then the inventive subject matter is also considered to include other remaining combinations of A, B, C, or D, even if not explicitly disclosed. A system of the inventive subject matter can allow a user to capture an image, audio, text or video data related to an object, and collaborate with a second user in editing content associated with the object. The two or more users can be located in close proximity to one another, or can be located remotely from one another, for example, 1, 5, 10, or even 1,000 or more miles away from one another. FIG. 1 is a schematic of one possible system of the inventive subject matter. System 100 can include an object recognition engine 110 that is communicatively coupled with a collaboration database 120, at least one sensor (e.g., 130 or 146), and an electronic device 140. The system 100 is configured to obtain sensor data 135 related to an object (e.g., data related to a photograph of an object) via an external sensor 130, or sensor data 148 via the electronic device's sensor 146. The various components of system 100 can be communicatively coupled with one or more other components such that the coupled components can exchange data with one another using one or more of the data exchange methods listed above, as well as data exchange connections such as NFC, wired or wireless data connections (e.g., USB, HDMI), WiFi, Bluetooth, data exchange connections internal to components and modules of computing devices, etc. The example of FIG. 1 is presented as shown for illustrative purposes and ease of understanding.
As such, the system 100 should not be interpreted as being limited to only the components shown in FIG. 1. For example, the system 100 can include a plurality of electronic devices 140, each having its respective system components, to be used by a corresponding plurality of users 150 in a collaboration environment. Similarly, the collaboration database 120 is illustrated with interface components 122, 124, 126 for purposes of simplicity, but can have any number of interface components across one or more databases 120, as allowed by available storage resources. Sensor data can be considered to be data representative of the conditions, environment or scene captured by the sensor. Examples of sensor data can include image data (e.g., captured by an image sensor such as in a camera), motion or impact data (e.g., from motion sensors, pressure sensors, gyroscopes, and accelerometers), audio data (e.g., captured by an audio sensor such as a microphone), text, document, application, computer image or other data (e.g., when the sensor is disposed within a computing device), video data (where the sensor comprises a video camera; video data can also include a series or sequence of image data viewable as video), temperature data (e.g., captured by a temperature sensor such as a thermometer), and combinations thereof. The sensor data can be in formats commonly used for the capture and use of data by each particular sensor or sensor type. For example, image data can be in formats such as RAW, JPG, BMP, etc.; audio data can be in formats such as MP3, WMA, WAV, M4P, etc.; video data can be in formats such as WMV, AVI, MPEG-4, etc. The electronic device 140 can comprise any commercially suitable device, including, for example, a mobile phone, a tablet, a phablet, a computer, game console, appliance, television, or other type of computing device. Preferred electronic devices can comprise an electronic device interface 142, through which an instantiated collaboration interface 144 can be presented to a user. As discussed herein, the electronic device interface 142 can be interpreted as referring to input and output interfaces of computing devices that allow a user to receive output from the computing device and enter input into the computing device. Examples of interfaces 142 can include a mouse, keyboard, touch screen, stylus, game controller, display screen, audio output (e.g., speakers, headphones, etc.), voice input, motion sensing input, force feedback devices, etc. The instantiated collaboration interface 144 can be constructed from collaboration interface data 112 obtained via the object recognition engine 110 as a function of the collaboration interface components 122, 124, 126. As used herein, the term "collaboration interface component" can be interpreted broadly to include, for example, content associated with an object, a video editor, image editor, text editor, game editor, audio editor, an Application Programming Interface (API), a Remote Procedure Call (RPC), a website hosting content or other functions, and components thereof (e.g., bars, buttons, sliders, menus, etc. to be included in an editor, that enable a user to use the editing and other collaboration functions of the editor). In embodiments, a collaboration interface component can be directly related to the identified object and/or content associated with the object. This collaboration interface component can be a separate component (such as an independent editor specific to the content or to the object) or an add-on to an existing editor component.
For example, if the object identified is a movie directed by a director having a distinctive visual style, an image editor or video editor that can be used to edit image content or video clips associated with the movie can be modified by collaboration interface components specifically related to the movie, such that the image or video editor included in the instantiated collaboration interface 144 can include edits associated with the director's visual style (e.g., a particular visual filter, distortion of color or shapes, ability to insert explosion effects into any aspect of the content, etc.). The instantiated collaboration interface 144 as presented to a user can include various editors having input components that can enable a user to edit, manipulate or otherwise interact with content associated with an object via the instantiated collaboration interface 144. The instantiated collaboration interfaces 144 and collaboration interface components will be discussed in further detail below. The content associated with an object can comprise the representation of the object in the sensor data itself (e.g., an image of the object, an audio recording of the object, a video recording of the object, or other representation of the object as captured by sensor data), content associated with the object stored in a database and retrieved based on the object characteristics (e.g., images, audio, video, text, interactive games, software applications, etc.), or the actual object itself or a copy thereof (where the actual object is a data file such as a word processing file, spreadsheet file, document file, etc.). Content can be in the form of image files, video files, word document files, audio files, web pages, spreadsheets, interactive games, text files, software applications, etc. As discussed above, content associated with an object can be stored as a collaboration interface component within collaboration database 120. In embodiments, the content can be stored in one or more separate databases or other non-transitory computer-readable media. In these embodiments, the collaboration interface component corresponding to the content within collaboration database 120 can be an information address, link or other type of information location pointer indicating an electronic or network location of the content as stored in the separate storage media. The content associated with the object can be of a different modality or content type from that of the sensor data. For example, audio data can be used to recognize a movie playing in the background of an environment where the user device is detecting audio. However, the content data associated with the object (movie) can include video clips, still images from the movie, interactive games, and other content that is not only audio data. In embodiments where the content comprises the object representation within the sensor data, the content can be of a data type extracted from the sensor data. For example, from a video clip having audio, the content data to be used can include songs in audio-only form, pieces of audio dialog separated from the corresponding visual aspect of the video clip, still images taken from the video clip, etc. Once the user is presented with instantiated collaboration interface 144 via electronic device 140, the user can cause an interaction with the content associated with an object (such as a desired edit or manipulation) via a collaboration command 147.
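Since the collaboration command 147 is the basic unit of interaction in this system, a concrete shape for it may aid understanding. The patent does not define a field set or wire format for collaboration commands, so the following Python sketch is purely illustrative and every field name in it is an assumption.

```python
# Illustrative sketch only: the patent does not specify a wire format for
# collaboration command 147, so every field name here is an assumption.
from dataclasses import dataclass, field
import time
import uuid

@dataclass
class CollaborationCommand:
    """One user-requested edit to content shown in a collaboration interface."""
    action: str          # e.g. "crop", "rotate", "comment", "delete"
    content_id: str      # which content item the edit targets
    params: dict = field(default_factory=dict)  # action-specific arguments
    user_id: str = "anonymous"                  # who issued the edit
    issued_at: float = field(default_factory=time.time)
    command_id: str = field(default_factory=lambda: uuid.uuid4().hex)

# Example: a command a video editor component might emit when a user clips
# a five-and-a-half-second segment out of a trailer.
cmd = CollaborationCommand(
    action="clip",
    content_id="movie-trailer-042",
    params={"start_s": 12.0, "end_s": 17.5},
    user_id="user-150",
)
print(cmd.command_id, cmd.action)
```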
The collaboration command 147 can be generated by the electronic device 140 via the instantiated collaboration interface 144, in response to user input corresponding to a content editing collaboration component presented in the instantiated collaboration interface 144. In embodiments, the change to the content represented by the collaboration command 147 can be executed by the instantiated collaboration interface 144 itself. In embodiments, the collaboration command 147 can be obtained from the collaboration interface 144 by an editor engine 170, which can incorporate a change to the content associated with the object in the instantiated collaboration interface 144 as a function of the collaboration command 147. The editor engine 170 can be communicatively coupled with a reconciliation engine 160, which can be configured to reconcile a conflict between two or more collaboration commands 147. Examples of contemplated collaboration commands 147 include commands to add, delete, modify, insert, review, proofread, crop, clip, speed up, highlight, enlarge, shrink, slow down, distort, skew, rotate, reverse, mirror, superimpose, comment upon, manipulate, replace, create, or otherwise edit content, and to add or modify a user comment associated with the content. The object recognition engine 110 can be configured to obtain sensor data 135, 148 from sensors 130, 146, where sensor data 135, 148 can be representative of one or more objects, symbols, or other features in an environment. Object recognition engine 110 can be configured to employ one or more object recognition techniques to recognize the objects in the environment depicted in the sensor data based on identified object characteristics. Object recognition by a system of the inventive subject matter can utilize techniques such as scale-invariant feature transform (SIFT), binary robust invariant scalable keypoints (BRISK), and other object recognition technologies. SIFT, BRISK and other suitable object recognition technologies are described in co-owned U.S. Pat. Nos. 7,016,532; 7,477,780; 7,680,324; 7,403,652; 7,565,008; 7,899,243; 7,881,529; and 7,899,252 to Boncyk et al., and U.S. Pat. No. 7,775,437 to Cohen et al., all of which are incorporated herein by reference in their entirety. For example, image data can be analyzed using one or more object recognition techniques to establish one or more image characteristics, which can then be used as an index into an object database storing known objects according to a priori generated image characteristics, as described in the above-referenced patents. Audio data can be analyzed to recognize words, sounds, music, rhythms, tempo, and other sonic features, which can then be used to recognize objects capable of generating the sounds or related to the sounds, and/or to recognize the source of the sound (e.g., a sound coming from a particular movie, etc.). Still further, accelerometer data can represent motion signatures that can be used to identify an activity (e.g., dance, driving, etc.) or a location through integration of acceleration data. Location data such as GPS data can be used, optionally in conjunction with image data, to identify a location or building. All possible object recognition techniques and corresponding object characteristics are contemplated.
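To make the recognition step above concrete, the sketch below uses OpenCV's freely available ORB detector as a stand-in for the SIFT/BRISK techniques named in the text (it is not the Boncyk et al. method incorporated by reference), and matches query descriptors against an assumed in-memory object database; the threshold values are arbitrary.

```python
# Hedged sketch: ORB keypoints stand in for the SIFT/BRISK characteristics
# named above (this is not the patented Boncyk et al. method), and a plain
# dict stands in for the object database of a-priori descriptors.
# Requires the opencv-python package.
import cv2

def image_characteristics(image_path):
    """Derive feature descriptors usable as an index into an object database."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)
    orb = cv2.ORB_create(nfeatures=500)
    _keypoints, descriptors = orb.detectAndCompute(img, None)
    return descriptors

def recognize(query_descriptors, known_objects, min_matches=25):
    """Return the known object whose stored descriptors best match the query."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    best_id, best_count = None, 0
    for object_id, stored_descriptors in known_objects.items():
        matches = matcher.match(query_descriptors, stored_descriptors)
        good = [m for m in matches if m.distance < 40]  # arbitrary threshold
        if len(good) > best_count:
            best_id, best_count = object_id, len(good)
    return best_id if best_count >= min_matches else None
```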
Object characteristics can be identified directly from sensor data 135 or indirectly from sensor data 135. An example of obtaining object characteristics directly from sensor data 135 can include applying optical character recognition (OCR) algorithms to image data in order to generate one or more words relating to the object. An example of indirectly obtaining the object characteristics includes using the sensor data 135, 148 to recognize a known object, then looking up additional information based on references to the known object. For example, the object recognition engine 110 can receive image data of a movie poster. The object recognition engine 110 can analyze the image data using object recognition techniques to derive image data characteristics, and can use the image data characteristics to identify the poster of the movie. In embodiments, additional information or content associated with the movie depicted in the poster can be retrieved by the object recognition engine 110 based on the identified movie poster, such as editable content or a multi-player game. In embodiments, the type of content retrieved can be based on the image data characteristics used to identify the poster of the movie. In embodiments, the additional information or content associated with the movie can be retrieved based on keywords associated with the movie, which can be obtained via database lookups using image data characteristics, or via techniques such as OCR. The object recognition engine 110 can be configured to use the derived object characteristics to give rise to one or more collaboration interfaces 144 that allow multiple entities to collaborate with respect to content associated with the recognized object. The object characteristics can be used to construct a query targeting collaboration database 120 storing interface components 122, 124, 126. Such components can be stored according to an indexing scheme based on relevant object characteristic modalities (e.g., image-based attributes, audio attributes, OCR text, indirect characteristics, etc.). In response to the query, collaboration database 120 can return relevant components that can be used to construct the collaboration interface 144. The collaboration interface 144 can be instantiated according to rules associated with one or more of the users, the computing devices associated with the users, the object being recognized, and the content itself. For example, one user can be deemed by the object recognition engine 110 to be responsible for initiating, hosting or otherwise "owning" the collaboration (such as based on the user having obtained the sensor data, or having initiated the collaboration and invited other users, etc.), and thus have editing privileges not provided to other "guest" users. As such, the instantiated collaboration interface 144 on the hosting user's device may include the full suite of editing functions, whereas the collaboration interfaces 144 for guest user devices can be instantiated with certain functions removed. In another example, where a collaboration interface 144 includes a video editor and the content is a video clip from a movie, the collaboration functions provided to users in the video editor can be restricted based on copyright and other intellectual property rights corresponding to the video clip and/or the movie. In yet another example, the collaboration functions of each component within the interface can be subject to censorship rules.
As such, the collaboration interface 144 can be instantiated such that certain functions in the video editor are unavailable to users for this content (e.g., limits on saving or sharing a generated edited clip, limits on the types of edits possible, etc.). In yet another example, the collaboration capabilities of components included in the instantiated collaboration interface 144 can be modified based on the capabilities of one or more of the user devices (e.g., networking capabilities, hardware/software limitations such as processing power, OS used and version, display capabilities, etc.). Continuing with the previous example of a movie poster, once the poster is recognized by the object recognition engine 110, the collaboration database 120 can be queried and can return interface components 122, 124, 126 that allow multiple users to collaborate with each other to place augmented reality versions of the movie characters in a setting, play a shared game related to the movie, comment on the movie in a review forum discussion, edit clips or still images from the movie, generate ringtones or sound-bites from dialogue in the movie, participate in a promotion associated with a product placed in the movie, or otherwise interact together with content that is made available as triggered by recognition of the poster. More than one of these collaboration options can be instantiated into a single collaboration interface 144, whereby the users can collaborate or interact with multiple content items associated with the movie simultaneously. In embodiments, the system 100 can include a construction engine 150, which can be configured to construct one or more complete or partial collaboration interfaces to be instantiated on electronic device 140. For example, various collaboration interface components (e.g., 122, 124, 126, etc.) obtained via the collaboration database 120 can be included in a single collaboration interface that is stored in a database for future instantiation. These collaboration interface components for pre-constructed collaboration interfaces can be gathered according to content themes, content types, common collaboration functions for particular content types, common associations between different content types of a same topic or theme, etc. These pre-constructed collaboration interfaces are high-level and, as such, can be considered generic templates for possible interfaces to be instantiated. Thus, upon instantiation, the collaboration interface can include the interface components of the pre-constructed interface as well as additional interface components that can be specific to the particular content being used (including the content itself), characteristics of the particular collaboration enabled by the individual instantiated collaboration interface, collaboration rules associated with the content and/or one or more of the collaborating users (or their devices), etc. For example, a pre-constructed interface can comprise a video editor and include a play button, a fast forward button, a rewind button, a stop button, a delete button, an upload interface, a comment button, a speed adjusting slider, a color adjusting slider, and a video display element configured to present a representation of a video. As it is a pre-constructed interface, this interface lacks an association with any content representative of a recognized sensed object.
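Before returning to how a pre-constructed interface is populated, a minimal sketch of the rule-driven restriction described above may help: the hosting user keeps the full editing suite, guest interfaces are instantiated with certain functions removed, and content-level rights rules strip functions for all users. The function names and rule shapes here are invented.

```python
# Assumption-laden sketch of rule-driven instantiation: invented function
# names and rule shapes; the host keeps the full suite while guests and
# rights-restricted content lose functions.
FULL_VIDEO_EDITOR = {"play", "clip", "insert_effect", "save", "share"}

CONTENT_RULES = {
    # e.g. the rights holder forbids redistribution of this clip
    "movie-clip-007": {"forbidden": {"share", "save"}},
}

def instantiate_editor(content_id, role):
    """Return the editor functions a given user's interface is built with."""
    functions = set(FULL_VIDEO_EDITOR)
    functions -= CONTENT_RULES.get(content_id, {}).get("forbidden", set())
    if role != "host":                  # guests get a reduced suite
        functions -= {"insert_effect", "clip"}
    return sorted(functions)

print(instantiate_editor("movie-clip-007", role="host"))   # ['clip', 'insert_effect', 'play']
print(instantiate_editor("movie-clip-007", role="guest"))  # ['play']
```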
Upon recognizing an object within received sensor data, the object recognition engine 110 can obtain the pre-constructed collaboration interface via construction engine 150 (or a database storing the constructed collaboration interfaces) and further instantiate it with one or more interface components from the collaboration database 120 corresponding to the content associated with the sensed object. Based on rules associated with one or more of the content, the collaborating users, and the respective users' devices, one or more of the components of the video editor can be modified or removed, such that the collaboration functions associated with those components are restricted or eliminated from use by one or more of the users. System 100 can also optionally be coupled with a reconciliation engine 160, which is configured to provide an indication of a conflict between two or more collaboration commands and is described in further detail below. While the sample system 100 illustrated in FIG. 1 depicts the object recognition engine 110, the collaboration database 120, the construction engine 150, the reconciliation engine 160 and the editor engine 170 as external to the electronic device 140, it is contemplated that, in embodiments, one or more of the object recognition engine 110, the collaboration database 120, the construction engine 150, the reconciliation engine 160 and the editor engine 170 can be integral to the electronic device 140 that instantiates the collaboration interface 144. In embodiments, the collaboration interface 144 can be instantiated by the object recognition engine 110 (or caused to be instantiated by the engine 110) externally from the electronic device 140, such as in a dedicated server or other computing device, and provided to the electronic device 140. In one aspect of these embodiments, the collaboration interface 144 can be instantiated externally from the electronic device 140 and then communicated to the electronic device 140 for instantiation within the device 140. In another aspect of these embodiments, the collaboration interface 144 can be instantiated externally from the device 140 and presented to the device 140 via a virtualized interface, such that the device 140 only serves to present the interface 144 to the user and receive user input related to the collaboration. In these embodiments, the computing device instantiating the collaboration interface 144 is communicatively coupled with the object recognition engine 110, electronic device 140 and any other user devices such that the collaboration interface 144 is presented to, and allows interaction from, the user via electronic device interface 142. FIG. 2 is a schematic showing the construction of a collaboration interface. FIG. 2 also illustrates example components of an instantiated collaboration interface. For purposes of this example, it should be presumed that user 210 has captured an image of a "for rent" sign via a camera-enabled first electronic device 215 and caused sensor (i.e., image) data to be transmitted to object recognition engine 205. It should also be presumed that user 220 has captured a video/audio representation of a cooking show via a video-camera-enabled second electronic device 225 and caused sensor (video and audio) data to be transmitted to object recognition engine 205, and has selected a text file saved on second electronic device 225 and caused text data (corresponding to "sensor data" for the purposes of the functions carried out by the object recognition engine) to be transmitted to object recognition engine 205.
The object recognition engine 205 of FIG. 2 can correspond to the object recognition engine 110 of FIG. 1. Likewise, the user devices 215, 225, having instantiated collaboration interfaces 230, 240 and device interfaces 250, 260, respectively, can each correspond to the electronic device 140, instantiated collaboration interface 144, and electronic device interface 142 of FIG. 1. Upon receiving sensor data via the first or second electronic devices (215, 225), receiving sensor data from an external sensor (such as sensor 130 in FIG. 1), or upon selection of sensor data stored in one of the electronic devices or in another storage location to be incorporated into one or more collaboration interfaces 230, 240, the object recognition engine 205 can be configured to identify a set of object characteristics from the sensor data by using one or more object recognition techniques, and to select a set of collaboration interface components having selection criteria satisfied by the object characteristics to be included within the instantiated collaboration interfaces 230, 240 on the first and second electronic devices (215, 225), respectively. As discussed above, collaboration interface components within the selected set can include the content associated with the identified object that will be the subject of the collaboration. One or both of the instantiated interfaces 230, 240 can also comprise a representation of the object, or other content, obtained by the object recognition engine 205. It is contemplated that the sensor data and/or content data associated with the identified object (e.g., image data, video/audio data, text data, other content associated with the sign, cooking show and/or text document) can be used to instantiate a single collaboration interface on one or more electronic devices (as shown in this example), or can be used to instantiate two or more collaboration interfaces on one or more electronic devices simultaneously or sequentially. As the object recognition engine 205 is communicatively coupled with first electronic device 215 and second electronic device 225, the object recognition engine 205 can cause the instantiation of the collaboration interfaces 230, 240 in the respective devices. The first collaboration interface 230 is instantiated in the first device 215, and presented to user 210 via a first device interface 250 of first electronic device 215. The second collaboration interface 240 is instantiated in the second device 225, and presented to user 220 via second device interface 260 of second electronic device 225. As previously mentioned, an instantiated collaboration interface can comprise various components, including, for example, a video editor, image editor, text editor, game editor, audio editor, bars (e.g., minimizer, maximizer, zoom in, zoom out, etc.), buttons, sliders, menus, any suitable graphical user interface element (GUI element), or any other suitable components. It is contemplated that all of the components of an instantiated collaboration interface can be obtained via one or more databases (such as one or more collaboration databases 120). In the example illustrated in FIG. 2, the first instantiated collaboration interface 230 and second instantiated collaboration interface 240 generally comprise similar interface components, selected based on having selection criteria satisfied by the object characteristics. For example, the object characteristics identified from the image data can satisfy selection criteria requiring that a component be suitable for editing an image.
Thus, the components identified for instantiation would be suitable for performing such edits. Each instantiated collaboration interface comprises some of the same components, for example, a tool bar (270, 280), minimizer and maximizer buttons, a close button, font adjusters, an image editor (274, 284), video editor (276, 286), and text editor (272, 282). However, the first instantiated collaboration interface 230 and second instantiated collaboration interface 240 can include differences, even within similar interface components. For example, image editor 274 and image editor 284 each comprise a website associated with the "for rent" sign captured by user 210, and various optional components such as a color palette drop-down menu, a website navigation menu, a slider bar that allows a user to adjust the size of the representation or other item of the editor, as well as any other component that can generally be found in Microsoft™ Paint, Microsoft™ Word, or other similar programs. The website associated with the "for rent" sign can be an apartment management company's website that provides images, prices, and specs associated with the apartment displaying the "for rent" sign. The differences between collaboration interfaces 230 and 240 can be generated based on user preferences, user access permissions, user collaboration hierarchy (e.g., the user initiating the collaboration, the user that 'owns' or is hosting the collaboration, etc.), user content ownership, user content access permissions, user profiles, user device capabilities, and other factors. FIG. 2 illustrates possible differences between the collaboration interfaces 230, 240. In this example, the image editor 274 is considered to comprise full image editing capabilities, while image editor 284 is considered to comprise partial editing capabilities (represented in FIG. 2 as the "grayed out" background of image editor 284). As such, image editor 274 can allow user 210 to perform the full suite of image editing functions, which can include adding a comment on the representation, modifying a font, text, color, size or other appearance of the representation, cropping the representation, rotating the representation, inserting audio comments, inserting video comments, and so forth. However, image editor 284 can be restricted to allow user 220 (or another user) to add only textual, audio or video comments. As another example, text editor 272 and text editor 282 also generally comprise the same components, including a digital representation of the document selected by user 220. However, user 220 is completely blocked from making any edits to the representation via text editor 282 (again represented by the "grayed out" background within text editor 282), while user 210 can utilize the various text editor components included in text editor 272 to edit the representation. The preceding description in relation to FIG. 2 describes content editing where the content to be edited can comprise a representation of the object (as depicted in the sensor data), content associated with the object that is retrieved by the object recognition engine (such as from a collaboration interface component from a collaboration database), or content included in a webpage or other item presented via an instantiated collaboration interface. Much of this description is also applicable where the content to be edited comprises the object itself. For example, a text document can be identified from a captured image depicting a computer screen having the text document visible within the computer screen.
In this example, the identified text document can be the object recognized by the object recognition engine, and upon recognition, can also be the content that is edited by collaborating users. It is also contemplated that content editing where the content comprises the object itself can be achieved utilizing, among other things, technology described in U.S. Pat. No. 7,564,469 to Cohen. As shown in FIG. 2, video editor 276 and video editor 286 comprise the same components, and can be utilized by users 210 and 220 (or other users having access to the first or second electronic devices 215, 225) to perform various edits. These components include a digital representation of the video captured by user 220, an audio file uploading component, a play button, a fast forward button, a rewind button, a slow button, a pause button, a close button, or any other suitable component. Suitable components can include those included in Movavi™ products and other video editing software. Once user 210 or user 220 has completed their collaboration (i.e., directed an electronic device to generate 0, 1, 5, 10, or even 50 or more collaboration commands via the instantiated collaboration interface), it is contemplated that one or more of the versions of the collaborated-upon, edited content (i.e., before and after any single edit, including the final version) can be saved to one or more of the electronic devices, sent to an email address, sent to other devices via MMS or text message, saved in a database communicatively coupled with the object recognition engine, modified further according to additional collaboration commands, uploaded to a website for sharing or publication (e.g., via social networking sites, video hosting sites, etc.), and/or deleted. The actions to be taken on edited content can be based on a user preference or default action, and can be based upon rules associated with the content or object restricting the use of the content pre- and post-modification. It is contemplated that one set of sensor data (e.g., video/audio data from a captured video/audio) can be used to instantiate a collaboration interface having collaboration components directed to different items of content. This can be automatic, or in response to a user request. In an embodiment illustrating one example of using one set of sensor data to instantiate a collaboration interface with collaboration components using different items of content, the collaboration components can be directed to items of content that are separated versions of a combined item of content. For example, a user may wish to have two interfaces shown side by side with respect to a single audio/video recording of a car speeding down a street. One component can be used to edit a video aspect (i.e., the captured moving imagery), and the other component can be used to edit an audio aspect (i.e., the captured sound). The user can enter a command or request, when the video content is selected (in embodiments where the content is retrieved, such as from the component database 120) or when capturing the video/audio data (e.g., in embodiments where the content to be used is the object representation within the captured sensor data), that an audio editor and a video editor be instantiated in two separate interfaces. Alternatively or additionally, the system can be configured to implement a default rule that, where received sensor data comprises two or more different types of data, a separate interface shall be provided for each type.
In an embodiment illustrating a second example of using one set of sensor data to instantiate a collaboration interface with collaboration components using different items of content, the object recognition engine 110 can recognize multiple objects represented in a single set of sensor data, and the components of the instantiated collaboration interface can individually apply to each recognized object. For example, if the sensor data is captured video data that includes a clip of a television show, and the clip of the television show depicts a character driving a particular car, the object recognition engine 110 can recognize the television show as an object and the car within the show as another object. As such, the collaboration components can be related to (and include) content and components associated with each of the TV show and the car. For example, the instantiated collaboration interface 144 can include a video editor allowing a user to edit clips of the TV show, and a modeling application that allows the user to build their own version of the car (including options, color, etc., such as for a prospective purchase). In embodiments, the items of content associated with individual collaboration components, such as the "for rent" sign, the cooking show clip and the text file of FIG. 2, can be linked, such that collaboration commands applying to individual collaboration components (e.g., the image editor, video editor, and text file editor) can be linked to one or more of the other collaboration components, whereby the editing effect of a collaboration command on one item of content has an associated effect on a second item of content. In one example, the video editor can allow for the insertion of the "for rent" sign representation into the video clip of the cooking show, such that in exterior shots of the cooking show, the host's house can now appear to be for rent. As the image of the "for rent" sign is edited in the image editor 274, the appearance of the sign in the video can be similarly changed by the video editor 276. Similarly, the image of the "for rent" sign or a clip of the video can be embedded into the text document, wherein changes to the image file performed with the image editor 274, or to the video file by using the video editor 276, can result in corresponding changes of the image or video clip embedded into the document file being edited in text editor 272. Likewise, the video can be edited by embedding comments at particular parts of the video using editor 276, with corresponding links or citations to the video added to the text document using text editor 272. As such, as the text editor 272 is used to edit the section of the document including the comment to be displayed in the video, the video editor 276 can update the text at the corresponding parts of the video clip to display the updated version of the comments.
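One way to realize the linked-component behaviour just described is a simple propagation (observer) pattern, sketched below; the patent does not prescribe any particular mechanism, and all names here are invented for illustration.

```python
# Invented-name sketch of linked content items: an edit applied to one item
# propagates to every item that embeds it, with a guard against cycles.
class LinkedContent:
    def __init__(self, name):
        self.name = name
        self.links = []  # other LinkedContent items embedding this one

    def link(self, other):
        self.links.append(other)

    def apply_edit(self, edit, seen=None):
        seen = seen if seen is not None else set()
        if self.name in seen:
            return  # already updated; avoids loops between mutual links
        seen.add(self.name)
        print(f"{self.name}: applying {edit!r}")
        for other in self.links:
            other.apply_edit(edit, seen)

sign_image = LinkedContent("for-rent sign image")
show_video = LinkedContent("cooking show clip")
text_doc = LinkedContent("text document")
sign_image.link(show_video)  # the sign is composited into the video
sign_image.link(text_doc)    # and embedded in the document
sign_image.apply_edit("recolor sign text to red")
```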
FIG. 3 is a flowchart illustrating a method 300 implementing functions and processes to enable the collaboration of one or more users according to the inventive subject matter. The method described in FIG. 3 can be implemented, for example, via the system illustrated in FIGS. 1 and 2. At step 301, the system 100 can provide access to an object recognition engine, such as engine 110, to one or more users, such as user 150. This access can be provided via a communicative coupling between an electronic device 140 accessible to a user and the other components of the system (e.g., engine 110, engines 150, 160, 170, and other components such as computing devices or modules that can handle any registration and login functions of a particular system). The access to the object recognition engine 110 can be provided via an application downloaded to the electronic device 140, via a website accessed through a browser on the electronic device 140, etc. Access to the object recognition engine 110 and other components of the system 100, and thereby to the collaboration functions enabled by the inventive subject matter, can be subscription- or payment-based, and can require user authentication, such as the entry of a user identifier and a password for the user. At step 302, the system 100 provides access to a collaboration database, such as collaboration database 120, storing collaboration interface components such as components 122, 124, 126. It is also contemplated that the system 100 can provide access to other database(s), including, for example, databases storing sensor data, digital representations of objects, content associated with objects, edited representations, edited content associated with objects, user uploaded electronic files, or any other data associated with a device or user. Additional databases can also include databases associated with registration/verification/login functions, databases associated with object recognition functions, etc. In embodiments, the access provided at step 302 can be provided simultaneously with the access to the object recognition engine 110 provided at step 301. In embodiments, the access to the various databases can be provided on an "as needed" basis, wherein access is only provided at the time that data or information from the various databases is needed to carry out functions and processes associated with the inventive subject matter and other functions of system 100. At step 303, the object recognition engine 110 can receive sensor data representative of one or more objects as detected by one or more sensors. The sensor can be a sensor 146 internal to a user's electronic device 140 or an external sensor 130. As discussed above, contemplated sensor data can include image data, text data, video data, audio data, etc. Examples of objects represented by sensor data can include billboard signs, television shows, movies, advertisements, documents, PDFs, spreadsheets, photographs, sculptures, geographic locations, images, music, YouTube™ clips, an X-ray image, a CT scan image, a fashion item, a combination thereof, or any other physical (real-world) or digital object that can be photographed, video recorded, audio recorded, downloaded, or otherwise captured by a sensor. At step 304, the object recognition engine 110 uses one or more object recognition techniques to identify (e.g., directly, indirectly, or both) a set of object characteristics (e.g., characteristics of or relating to the object, characteristics of or relating to the type of sensor data obtained, etc.) from the sensor data related to an object represented by the sensor data. The object characteristics can include one or more of a sound, a motion, a color, a size, a curvature or other characteristic of the object, a sensor data type identifier (e.g., identifying the sensor data as image data, video data, audio data, text data, etc., and including the identification of the sensor data as a combination thereof), a characteristic of the sensor data itself, and other types of data. At step 305, the object recognition engine 110 selects one or more collaboration interface components 122, 124, 126 from the collaboration database 120 based on one or more of the object characteristics, from which a collaboration interface can be instantiated. In embodiments, the collaboration interface components can be selected based on selection criteria satisfied by the object characteristics. In further embodiments, the collaboration interface components can be selected based on a mapping of one or more object characteristics to one or more collaboration interface components, wherein the mapping serves to satisfy the selection criteria. In other words, an interface component can be selected based on its suitability for use with the content, such as allowing a desired edit to be made to content related to an object. For example, where the object characteristic identifies the content related to the object as audio data, a selected collaboration interface component can comprise, among other things, a vocal reducer, a voiceover application, a noise reducer, a silencer, an amplifier, a normalizer, an equalizer, an echo slider, or a ringtone generating application. As another example, where the object characteristic comprises curvature data of an object, a selected collaboration interface component can allow a user to edit the curvature of the object. As discussed above, in embodiments, the collaboration interface components can include the content itself that is to be the subject of the collaboration. In embodiments, the content can be the representation of the object within the sensor data itself.
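The selection of step 305 can be pictured as predicate evaluation: each stored collaboration interface component carries a selection-criteria test applied to the identified object characteristics. The catalog entries and characteristic keys in this sketch are assumptions rather than anything specified by the patent.

```python
# Sketch of step 305 under assumed data shapes: each stored component
# carries a selection-criteria predicate evaluated against the object
# characteristics; catalog entries and keys are invented.
COMPONENT_CATALOG = [
    {"name": "audio_vocal_reducer",
     "criteria": lambda c: c.get("data_type") == "audio"},
    {"name": "video_clip_editor",
     "criteria": lambda c: c.get("data_type") == "video"},
    {"name": "curvature_editor",
     "criteria": lambda c: "curvature" in c},
    {"name": "comment_tool",
     "criteria": lambda c: True},  # applies to any content
]

def select_components(object_characteristics):
    """Return components whose selection criteria the characteristics satisfy."""
    return [comp["name"] for comp in COMPONENT_CATALOG
            if comp["criteria"](object_characteristics)]

print(select_components({"data_type": "audio", "tempo_bpm": 120}))
# -> ['audio_vocal_reducer', 'comment_tool']
```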
As described in step 306 and in the description above, the object recognition engine 110 can then cause the instantiation of a collaboration interface on an electronic device 140 from the set of collaboration interface components. At step 307, the object recognition engine 110 can initiate an action targeting one or more user electronic devices 140 prior to configuring the device(s) to generate a collaboration command as described in step 308. The initiated action can comprise a synchronizing action (e.g., to synchronize the devices participating in the collaborative environment), a network test action, an updating action (e.g., to update a device's software application to the most current version for use, to update content to a new version, etc.), a phone call (e.g., to technical support where the object comprises a broken or otherwise non-functioning wireless router, to alert a user that the collaboration is about to begin, etc.), a text message (e.g., to provide devices with copies of the sensor data originally used to select collaboration components for use in the collaboration, such as where a user captures an image of a television screen showing an American Idol™ contestant, etc.), a software process (e.g., a download of an Angry Birds™ mobile application where the object comprises an Angry Birds™ magazine advertisement or a QR code therein, etc.), or any other suitable action that can be useful to a collaboration with respect to an object. At step 308, the first electronic device 140 generates a first collaboration command (such as collaboration command 147 of FIG. 1) via the instantiated collaboration interface 144, corresponding to a requested edit to the content, such as an edit entered by the user via the device interface 142. The first collaboration command 147 can allow the user to view the requested edit performed via a collaboration interface component via an electronic device.
The generation of the collaboration command can be automatic upon an input of an edit by a user, or can require an action by a user or system of the inventive subject matter (such as a confirmation action or authorization action). In embodiments, the edit corresponding to the collaboration command 147 can be provided to an editor engine 170 and executed by the editor engine 170. For example, where a user inputs edit selections to delete 5 seconds of an audio clip via an audio or music editor component of an instantiated collaboration interface 144, the object recognition engine 110 can automatically generate a collaboration command to delete that portion of the audio clip in the instantiated collaboration interface(s). Alternatively, the system can, among other things, require that a second user approve the deletion via an instantiated collaboration interface prior to generating the command, or generate (or not generate) the command in accordance with a rule associated with the object. Contemplated rules associated with an object can be stored in a database communicatively coupled with an object recognition engine, and can include, for example, a rule that governs instantiation and operation of the instantiated collaboration interface, a copyright rule, a trademark rule, a patent rule, a contract rule, a language rule, an intellectual property rule, a privacy rule, a censorship rule, or a version control rule. One or more of the rules can be enforced via at least one of the object recognition engine and the instantiated collaboration interface. In this manner, a system can be used to enforce a right associated with an object (e.g., person, copyrighted work, trademark, etc.). For example, a user may wish to capture an image of an image of Catherine Zeta Jones, and utilize a system of the inventive subject matter to edit the image with a friend. While certain edits to a representation of the image of Catherine Zeta Jones may be allowable by a rule (e.g., as a fair use), other edits may be restricted by a copyright rule or moral rights rule. Thus, the system may allow the user to comment upon the image of Catherine Zeta Jones, while restricting a conversion of the image into grayscale.
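The rule enforcement described above can be pictured as a per-object authorization check run before a collaboration command is generated or executed. The sketch below assumes rules are stored as allowed/blocked action sets, mirroring the comment-allowed, grayscale-blocked example; the storage shape is an assumption.

```python
# Minimal sketch of per-object rule enforcement, assuming rules are stored
# as allowed/blocked action sets (the storage shape is an assumption).
OBJECT_RULES = {
    "celebrity-photo-001": {
        "allowed": {"comment"},
        "blocked": {"grayscale", "share"},  # e.g. copyright / moral-rights rule
    },
}

def authorize(action, object_id):
    """Decide whether a collaboration command's action may be generated."""
    rules = OBJECT_RULES.get(object_id)
    if rules is None:
        return True  # no rule on file for this object: permit the edit
    if action in rules["blocked"]:
        return False
    return action in rules["allowed"]

assert authorize("comment", "celebrity-photo-001")        # fair-use comment
assert not authorize("grayscale", "celebrity-photo-001")  # restricted edit
```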
In some embodiments of the inventive subject matter, the collaboration interface(s) instantiated in step 306 can also be instantiated on a second electronic device (a second electronic device 140) to allow a collaboration between two or more users, as described in step 309. As illustrated in FIG. 2, the same or similar collaboration interfaces can be instantiated on multiple user devices, allowing for collaboration by multiple users. The collaboration interfaces can differ from user to user, as discussed above with regard to user collaboration/access rules and permissions. At step 310, the second electronic device generates a second collaboration command via the collaboration interface instantiated on the second electronic device. In such embodiments, a conflict may arise between a first collaboration command and a second collaboration command. In embodiments, the object recognition engine 110 can initiate an action targeting the second electronic device in the same way that the first electronic device is targeted at step 307, prior to the execution of step 310. A reconciliation engine 160 that is communicatively coupled to the object recognition engine 110 can be configured to provide an indication of a conflict between two or more collaboration commands from two different users, as executed at step 311. A conflict can be detected by the reconciliation engine 160 when conflicting commands are received. Where such an indication is provided, it is contemplated that the two or more users can determine which command to generate or incorporate into an instantiated collaboration interface or representation. This determination can be made via a vote, via a suggested alternative command to one or more of the users, via a selection of a recommended command among the conflicting commands, etc. In embodiments, the determination can be single-handedly made by a user according to a hierarchy of users in the collaboration, whereby the option to determine the command to generate and implement can be decided by the single user of the highest hierarchy (e.g., hosting user, initiating user, sensor data gathering user, content owner, etc.). At step 312, the reconciliation engine 160 reconciles the conflict (i.e., performs revision control and conflict resolution). This reconciliation can be performed via a centralized revision control system (e.g., file locking, version merging, collaboration command prioritizing and synchronization, etc.), distributed revision control, integrated version control support, or any other suitable model or system of version control. In embodiments, the conflict can be reconciled by the reconciliation engine 160 without first providing the indication of the conflict of step 311. In these embodiments, the reconciliation engine 160 automatically resolves any detected conflicts according to reconciliation rules and/or criteria. A reconciliation performed by the reconciliation engine 160 of the inventive subject matter can be based at least in part on one or more of the sensor data received by the object recognition engine 110, a rule associated with an object captured by the sensor data, the object characteristics identified from the sensor data, a rule associated with the content associated with the captured object, and one or more collaboration interface components from which the collaboration interface is instantiated. For example, a rule governing a reconciliation can require that a command generated via the electronic device on which the collaboration interface is first instantiated has a higher priority than a command generated via second, third, or later devices. As another example, a rule governing a reconciliation can require that a command generated by the electronic device that is located nearest to a sensor that transmitted sensor data related to the object to the object recognition engine has the highest priority. As yet another example, a rule governing a reconciliation can require that a command correlating to an edit by the user who initiated the collaboration (e.g., first caused sensor data to be transmitted to the object recognition engine, etc.) has priority over all other commands.
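As a toy illustration of one of the priority rules named above (the initiating user's command wins), under assumed command shapes; a real reconciliation engine 160 could instead merge versions, lock files, or put the choice to a vote.

```python
# Toy sketch of one priority rule named above: when two commands collide,
# the initiating user's command survives. Command shapes are assumed.
def conflicts(a, b):
    """Two commands conflict when they target the same content item."""
    return a["content_id"] == b["content_id"] and a["action"] != b["action"]

def reconcile(a, b, initiator_id):
    if not conflicts(a, b):
        return [a, b]  # independent edits: keep both
    winner = a if a["user_id"] == initiator_id else b
    return [winner]    # the initiating user's edit takes priority

lisa = {"content_id": "pawn-e2", "action": "move-right", "user_id": "lisa"}
lindsey = {"content_id": "pawn-e2", "action": "move-left", "user_id": "lindsey"}
print(reconcile(lisa, lindsey, initiator_id="lisa"))  # Lisa's move survives
```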
An example of a reconciliation of a conflict between two collaboration commands is described in the following use case. Lisa is studying in her UNC dorm room and wants to play a game of chess with Lindsey, who is at a coffee shop in Boston. Lisa takes a picture of a chess board and a chess piece with her cell phone, and transmits the image data to an object recognition engine that is implemented on her cell phone as part of a mobile application. The object recognition engine identifies a set of object characteristics from the image data related to the chess board, and a second set of object characteristics from the image data related to the chess piece. The object recognition engine then selects a set of collaboration interface components having selection criteria satisfied by both sets of object characteristics, and a collaboration interface is instantiated on Lisa's cell phone from the selected set of collaboration interface components. In this example, the instantiated collaboration interface comprises representations of: a chess board; 1 white king piece; 1 black king piece; 1 white queen piece; 1 black queen piece; 2 white rook pieces; 2 black rook pieces; 2 white bishop pieces; 2 black bishop pieces; 2 white knight pieces; 2 black knight pieces; 8 white pawn pieces; and 8 black pawn pieces. Each representation of a piece is located in the appropriate starting position on the representation of the chess board. The instantiated collaboration interface also comprises two arrows, each of which can be used to select a chess piece and move it to a different position. Lisa inputs a request via the mobile application to invite Lindsey via a text message to her cell phone and an instant message to her Facebook™ and Gmail™ accounts. Lindsey accepts the request, and the instantiated collaboration interface is further instantiated on Lindsey's computer. Lisa uses one arrow to select a white pawn piece and moves it diagonally to the right. Lindsey simultaneously selects the same white pawn piece and moves it diagonally to the left. Thus, Lisa's cell phone shows the selected white pawn piece in one location while Lindsey's computer shows it in a different location. Each device receives an alert indicating the conflict, and the conflict is automatically reconciled according to a default rule that the first player controls the white pieces and the second player controls the black pieces. Thus, the white pawn represented on Lindsey's device is automatically moved to the position Lisa selected. Another example of a reconciliation is described in the following use case. Architect Valerie and interior designer Vanessa are working on an addition to Melissa's house in Connecticut. Valerie has created a blueprint of what the addition currently looks like, and Vanessa would like to remotely view the blueprint so that she can determine how much space she has to work with. Valerie transmits an electronic file comprising sensor data related to the blueprint to an object recognition engine of the inventive subject matter via her computer. Based in part on the received sensor data, the object recognition engine identifies object characteristics from the image and instantiates a collaboration interface on Valerie's computer, the interface comprising a three-dimensional model of the blueprint objects, a view adjuster, a drawing tool, an image uploading tool, a color palette, a zoom function, a crop tool, and an expansion tool. As described above, these components can be selected from a collaboration database communicatively coupled with the object recognition engine. If Valerie wants to remove a component of the instantiated collaboration interface, it is contemplated that she can simply click on a symbol related to the component. If Valerie wants to add a component to the instantiated collaboration interface, it is contemplated that she can be presented with optional components to select from and move to a desired location within the interface. Valerie, satisfied with the originally instantiated collaboration interface components, sends a request to the object recognition engine to instantiate the same collaboration interface on Vanessa's computer.
The object recognition engine sends a link to Vanessa's email address, from which Vanessa opens the instantiated collaboration interface. Valerie and Vanessa, viewing substantially identical collaboration interfaces, make various edits to the three-dimensional model representative of changes they wish to make to Melissa's addition. For example, Valerie utilizes the expansion tool to expand a game room of the three-dimensional model. Viewing this change, Vanessa utilizes an image uploading tool to add a representation of a sectional to the game room. Thus, using a system of the inventive subject matter, Valerie and Vanessa can collaborate with one another remotely to achieve the best architecture and design combination for Melissa. Other contemplated uses for systems of the inventive subject matter include, among other things, telemedicine, retail activities, design activities, training activities, or education. As used herein, and unless the context dictates otherwise, the term "coupled to" is intended to include both direct coupling (in which two elements that are coupled to each other contact each other) and indirect coupling (in which at least one additional element is located between the two elements). Therefore, the terms "coupled to" and "coupled with" are used synonymously. As used in the description herein and throughout the claims that follow, the meaning of "a," "an," and "the" includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise. The recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range. Unless otherwise indicated herein, each individual value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., "such as") provided with respect to certain embodiments herein is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention otherwise claimed. No language in the specification should be construed as indicating any non-claimed element essential to the practice of the invention. Groupings of alternative elements or embodiments of the invention disclosed herein are not to be construed as limitations. Each group member can be referred to and claimed individually or in any combination with other members of the group or other elements found herein. One or more members of a group can be included in, or deleted from, a group for reasons of convenience and/or patentability. When any such inclusion or deletion occurs, the specification is herein deemed to contain the group as modified, thus fulfilling the written description of all Markush groups used in the appended claims. It should be apparent to those skilled in the art that many more modifications besides those already described are possible without departing from the inventive concepts herein. The inventive subject matter, therefore, is not to be restricted except in the spirit of the appended claims. Moreover, in interpreting both the specification and the claims, all terms should be interpreted in the broadest possible manner consistent with the context.
In particular, the terms “comprises” and “comprising” should be interpreted as referring to elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps may be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced. Where the specification or claims refer to at least one of something selected from the group consisting of A, B, C . . . and N, the text should be interpreted as requiring only one element from the group, not A plus N, or B plus N, etc. | 59,657
11861155 | DETAILED DESCRIPTION Described herein are frameworks, devices and methods configured for enabling display of facility information and content, in some cases via touch/gesture controlled interfaces. Embodiments of the invention have been particularly developed for allowing an operator to conveniently access a wide range of information relating to a facility via, for example, one or more wall mounted displays. While some embodiments will be described herein with particular reference to that application, it will be appreciated that the invention is not limited to such a field of use, and is applicable in broader contexts. In overview, the technologies described herein are predominantly intended to facilitate provision of a rich user interface for rendering on one or more touch/gesture controlled displays, such as wall-mounted glass displays. Such displays are preferably relatively large, for example having a screen size in excess of 42 inches. As used herein, the term "touch/gesture controlled" refers to a display/interface in respect of which the primary mode of control is either touch (for example via touchscreen input technology) or gesture (for example via a motion sensor arrangement), or a combination of touch and gesture. The term "gesture" should be read sufficiently broadly to include gestures made via a touchscreen interface and gestures made for recognition by motion sensing equipment (and other such feature-recognising equipment). There may be additional modes of control present, including the likes of voice or peripheral inputs (such as keyboards, mice, touch devices, and so on). In some cases, aspects of the technology are described by reference to functionalities provided/observed via a user interface. In such cases, embodiments may take the form of client, server, and other computing devices (and methods performed by those devices) thereby to provide such functionalities. Technologies described below provide for a touch/gesture controlled interface that enables an operator to access information and content relating to a facility (for example a building or a region including multiple buildings). This information and content may relate to one or more "facility systems" or "building systems" (with the term "facility" being used to describe a site defined by one or more buildings and/or other locations), being systems defined by information technology components that make available such information and/or content. Examples of such facility systems include:
- Video surveillance systems, which provide access to live video data from one or more cameras located in the facility.
- HVAC systems, which provide access to control and monitoring of HVAC components (such as chillers, air handling units, thermostats, and so on). This may be provided by a broader building management system.
- Access control systems, which include access control devices and associated control systems (for example to control access through doors and the like, and to monitor movement through such doors).
- Energy management systems, which provide access to monitoring and/or controlling of energy consumption in a building.
A general objective for technologies described herein is to provide an effective, powerful, and easy-to-use touch/gesture controlled user interface, and associated back-end framework.
Framework Overview
A first exemplary framework is illustrated in FIG. 1A. Other exemplary frameworks are shown in further figures, and similar reference numerals have been used where relevant.
Major components shown in FIG. 1A are: A touch/gesture driven display 101, which is configured to enable rendering of a map-based user interface 100. A display driver terminal 110, which is configured for driving user interface 100 (for example in terms of processing user inputs, obtaining data from remote sources, and rendering data on the display). This is either integrated with display 101, or coupled to display 101 by an HDMI cable (or other form of data cable). A tile data server, which maintains map and layer data for the map-based user interface 100. This is configured to communicate with display driver terminal 110 via a network connection. A set of facility system IT components which provide data underlying content accessible via interface 100. These include a surveillance system 130 (which includes a plurality of cameras 131), a building management system 132 (which includes HVAC components, sensors, etc. 133), an access control system 134 (which includes access control devices, servers, etc. 135) and another system 136 (which includes devices/components 137). These communicate with driver 110 via network communications, optionally via one or more intermediary servers (not shown). An alternate multi-screen arrangement is illustrated in FIG. 6, and described further below. Map-based navigation interface 100 includes image data representing a map, such as a floorplan. This is rendered on-screen as a map layer (for example a background map layer). Overlaid on the image data representing a map are various objects, including a plurality of content control objects (CCOs) 102. In some embodiments described herein, CCOs take the form of orbital menu objects. However other GUI components may be used, for example CCOs such as those described by reference to FIG. 7. A user interacts with such CCOs to gain access to content and/or functionality. For example, where a CCO relates to a camera, interaction with the CCO provides access to a live video feed from that camera (optionally in combination with other content/controls). Interface 100 is able to be navigated in terms of pan and zoom. For example, motion/touch gestures such as swiping, pinching, and the like may be used to effect such navigation as is customary for touch/gesture driven displays. The manner by which the interface handles such navigation is dealt with in more detail below. In this embodiment, interface 100 is provided by way of display driver 110, which may be provided via a computing device having an interface able to be coupled to display 101 (for example an HDMI output or the like). Display driver 110 includes a processor 111 that enables execution of software instructions, for example software instructions maintained on a memory module 112. Communications modules 113 (such as Ethernet network adaptors, WiFi, or the like) enable interaction with remote devices. In this regard, various software functionalities provided via interface 100 may be derived from software executing at display driver 110 and/or at other distributed terminals. For instance, whether a given functionality is provided by locally or remotely executing code may be a matter of design choice, based upon optimisation of resources. In this regard, any functionality/data shown in FIG. 1A as being remote of display driver 110 may, in other embodiments, be provided in whole or in part locally by display driver 110. Memory module 112 maintains software instructions for a user interface module which is configured to control rendering of interface 100.
This user interface module is responsive to user inputs (via input components 114, which may include touch-based and gesture-based inputs, and the like) for controlling interface 100. Controlling interface 100 includes the likes of processing inputs indicative of commands for zooming, panning, accessing content, shifting vertical levels, and so on (discussed in more detail further below), and processing those inputs thereby to provide desired functionality. In terms of panning and zooming, in the embodiment of FIG. 1A, display driver 110 is in communication (over a network or Internet connection) with a tile data server 120. This enables the provision of a tile-based arrangement for displaying map data and layer data. In overview, the image data displayed via interface 100 includes a background map layer (defined by background tile data 121), and one or more layers that are able to be superimposed on the background map (defined by layer data 121a-121n). These are discussed in more detail further below. FIG. 1B illustrates an alternate configuration whereby a display server 140 interposes display driver terminal 110 (and optionally a plurality of further display driver terminals coupled to respective displays) with the tile data server and other systems. In this manner, display server 140 operates substantially like a web server for the present technologies. By such an approach, the user interface may be loaded into and rendered within a web browser application executing on a display driver terminal (or other device). This allows for scalability, and reduces the need for a display driver terminal to possess any special characteristics or software. FIG. 1C illustrates another configuration, whereby the display driver terminal interacts directly with surveillance system 130, as opposed to via server 140. This configuration is used to more efficiently manage bandwidth in the context of displaying live video data via interface 100. In overview, in some embodiments a display object rendered in interface 100 is configured to display live video data from one of cameras 131, and for this purpose the display object includes data that enables the creation of a direct connection to the relevant camera via system 130. In some cases this includes initiating a socket connection via a specified camera server (specified by a network address, for instance) of system 130 thereby to coordinate the delivery of live video data for display in the display object of interface 100. It will be appreciated that other framework-level variations may be applied for further embodiments. User Interface Components FIG. 2 illustrates an exemplary screenshot 200 according to one embodiment. This is provided thereby to assist in explanation of various user interface components referenced throughout the specification. A background map layer 201 is illustrated, in this case being a substantially two-dimensional isometric floorplan. This shows artefacts such as walls, doors, stairs, and the like. The floorplan may show a single building, or multiple buildings (with intermediate areas). The floorplan may include multiple substantially two-dimensional images representing different vertical zones (such as floors in a building). In some cases only a single vertical zone is displayed, with a navigation interface allowing navigation between vertical zones. In other cases multiple separate map images for respective vertical zones are shown alongside one another.
In such cases preferably visual features (such as coloured lines) are provided thereby to indicate pathways between levels, for instance at stairwells and lifts. By way of example, in one embodiment a coloured line connects a stairwell in one map image to a corresponding stairwell in another map image, thereby to visually indicate functional connection between the stairwells. Content control objects (CCOs) 202a and 202b are shown. CCO 202a includes a camera icon, and is bound to a resource in the form of a camera managed by a surveillance system. CCO 202b is bound to a resource in the form of an access control device managed by an access control system. CCO 202a is able to be operated by a user thereby to launch a content display object 203 (in this case being a video display object), which is configured to display a content item (in this case being streaming live video) for its bound resource (in this case being a camera). Various other forms of content display objects and content items may be present, for example depending on the nature of resources to which CCOs are bound. Content Control Objects (CCOs) As noted above, interface 100 includes a plurality of content control objects (CCOs) 102, which are overlaid on (or otherwise visible within) a map-based interface. For example, a CCO may be defined by menu objects, such as a radial/orbital menu object. This allows a user to interact with a CCO thereby to access content, for example to launch a display object for a desired content item (for example to view live video data from a camera). In some cases a CCO is configured to automatically launch a display object where predefined conditions are met (for example to enable automated launching of video display objects when a user navigates to a predefined pan and zoom location). These predefined conditions may include alert conditions (for instance, in one example an alert is raised for a given camera when motion is sensed, and that causes automatic launching of a video display object). Each CCO is bound to one or more facility system components, such as one or more cameras of surveillance system 130, a sensor of a component monitored by building management system 132, and so on. This binding is configured to enable at least either or both of the following: A user of interface 100 to access content made available by the facility system component (for example live video data from a camera, or a value from a sensor). For example, by clicking on a CCO, a user is able to access such content, which is preferably displayed overlaid on the map display (optionally in a content display layer). Downstream pushing of data from the component (or supporting system) to interface 100, for example where an alarm condition has been reached. For example, a visual alarm indicator may be applied to or adjacent a CCO based on such downstream pushing of data. Data indicative of CCOs is maintained at a server device, optionally being a server device that is additionally responsible for delivering image data for the map. In some embodiments, a CCO is defined by data indicative of: A position on the map at which the CCO is to be located (which may vary depending on zoom position and/or CCO aggregation rules). A reference to the resource to which the CCO is bound (or, in some cases, resources to which the CCO is bound). This allows binding and/or other data sharing to occur. Data indicative of display parameters.
This may include data indicative of an icon for the CCO (for example a CCO bound to a camera in a surveillance system may carry a camera icon). It may also include instructions for the provision of user interface components (for example menu items that are to be displayed via an orbital menu, and how those operate) for example via JavaScript Object Notation (JSON). This allows a CCO to be loaded in a functional manner for the purposes of interface 100. CCO processing operations (for example data handling) may be performed at a client or server side (or a combination of both) depending on matters of subjective choice in specific implementations. Background Map Layer As noted above, interface 100 includes a background map layer. The map may be two or three dimensional, although in the case of three dimensional maps it is preferable that it represents a region of substantially common vertical position (for example a single floor in a building). In some examples multiple maps are maintained in a stackable form, thereby to allow an operator to view different vertical positions (for example a separate map for each floor of a building, with each map being defined relative to a common set of referential spatial coordinates). In the example of FIG. 1A, the map is shown as a substantially two dimensional map, being a two dimensional map that displays limited perspective on some or all features thereby to enhance viewability. An isometric view is shown, however alternate views may also be used. In some cases views shift between plan and isometric depending on a level of zoom. Data to enable the rendering of the background map layer at a variety of resolutions is maintained in background tile data 121 of tile data server 120. In this regard, a tile-based approach is used to manage display and resolution, as shown in FIG. 3. Background map tile data is defined at varying resolution levels, with a respective number of tiles for each resolution level. From a definitional perspective, these begin at a highest level (301 in FIG. 3) and move down to lower levels (302 and 303 in FIG. 3). Each lower level is defined by an increased number of tiles as compared to its preceding level above. Affine transformations are preferably used to manage zooming within a given tile. In the context of the background map layer, level 301 is defined by a full mapped view of a facility (in this example represented by an oval shape), defined at a predefined resolution level. This predefined resolution level is optionally selected based on a maximum optimal resolution for display device 101 (or based on a maximum optimal resolution for an anticipated display device, based on current technological norms). Level 302 is defined by four partial views, each of which is defined at the same predefined resolution level as the entire view of level 301. That is, in terms of total number of pixels, the four tiles of level 302 define, in combination, four times the number of pixels as the single tile at level 301. In level 303 there are sixteen tiles, again each at the same predefined resolution level as the tiles at levels 301 and 302. This means that, by way of example, a much greater level of detail and granularity of the map is able to be provided in the tiles at level 303 compared with levels 302 and 301. In terms of operation, based upon a navigation command (such as a pan or zoom), display driver 110 provides positional information to server 120; a minimal sketch of the resulting tile selection appears below.
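By way of illustration only, the following TypeScript sketch shows one way the zoom-to-tile-level mapping described here might be computed, using the 100%/200%/400% thresholds of the example that follows; all names are hypothetical assumptions and nothing here is drawn from an actual implementation of the system.

```typescript
// Sketch: map a zoom state and pan position to a tile request for the
// tile data server. Level 0 corresponds to the full-facility tile (301),
// level 1 to the four tiles of 302, level 2 to the sixteen tiles of 303.

interface TileRequest {
  level: number; // pyramid level index
  col: number;   // tile column at that level
  row: number;   // tile row at that level
}

// Each level doubles the zoom threshold: level 0 covers 100%..<200%,
// level 1 covers 200%..<400%, and so on. Zooming within a level is
// handled client-side (e.g. by an affine transformation of the tile).
function levelForZoom(zoomPercent: number, maxLevel: number): number {
  const level = Math.floor(Math.log2(zoomPercent / 100));
  return Math.max(0, Math.min(maxLevel, level));
}

// Map a pan position (normalised 0..1 across the full facility map) to
// the tile containing it; a level has 2^level tiles per axis.
function tileForPosition(x: number, y: number, level: number): TileRequest {
  const tilesPerAxis = 2 ** level;
  return {
    level,
    col: Math.min(tilesPerAxis - 1, Math.floor(x * tilesPerAxis)),
    row: Math.min(tilesPerAxis - 1, Math.floor(y * tilesPerAxis)),
  };
}

// Example: at 250% zoom centred on the top-right quadrant, the driver
// would request the level-1 tile at column 1, row 0.
console.log(tileForPosition(0.75, 0.25, levelForZoom(250, 2)));
```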
This positional information is indicative of location (for example relative to a set of axes defined relative to the overall full view map in 301) and a zoom state. Based on this positional information, server 120 selectively delivers image data from one or more tiles for the background map layer. As an example, assume level 301 represents a zoom value of 100%, level 302 represents a zoom value of 200%, and level 303 represents a zoom value of 400%. For zoom levels from 100% to <200%, a zoom operation is applied directly to the single tile of level 301. Once the zoom level reaches/surpasses 200%, tile server 120 delivers image data from one or more of the tiles of level 302 (depending on view position relative to tile boundaries). Preferably, on transition between tiles, the preceding tile is used at non-optimal resolution as a preview during download and rendering of the new tile. Tile-Based CCO Management In some embodiments, CCOs are displayed in response to zoom position, with an increased number of CCOs becoming visible at higher zoom levels. This may be achieved by reference to the tile-based approach discussed above that is used for the background layer. That is, CCO positions are defined for each tile, so that when a tile is loaded for the background image, CCOs for positions defined for that tile are also loaded. FIG. 3 illustrates a plurality of CCOs (one labelled as 310, by way of example). It will be noticed that the number of CCOs increases from level 301 to level 302, and from level 302 to level 303. From an end user perspective, the objective is to avoid cluttering the on-screen area with too many CCOs. There are a few approaches for achieving this: Manually defining each content layer tile, by defining locations for CCOs. A set of automated rules for CCO amalgamation, such that where a predefined CCO density is met (for example in terms of proximity, number on-screen, etc.) those are automatically combined into a single CCO from which content from the constituent CCOs is accessible. By such an approach, CCOs need only be manually created at the lowest level (i.e. for each tile at level 303 in FIG. 3); higher level views are automatically generated. A combination of manual definition and automated rules. Preferably, upward amalgamation of CCOs results in a child-parent relationship (in the case of FIG. 3 being a child-parent-grandparent relationship between the three levels). In this relationship, data binding is upwardly aggregated. That is, a grandparent CCO is functionally bound to all child facility system components. In some embodiments this means that all alarms for components bound to child CCOs are visible via the child, parent and grandparent CCOs. Furthermore, content related to the facility system components (e.g. video data, temperature data, etc.) for components bound to child CCOs is preferably able to be accessed via the child, parent and grandparent CCOs. So as to provide an example, consider a set of four level 303 tiles which correspond to a common single level 302 tile. Assume there are four camera-type CCOs across the set of four level 303 tiles (although not necessarily one in each), with each of those CCOs being bound to a respective individual camera. That is, each of those CCOs allows a user to launch a video window pop-up from its respective camera.
These four CCOs at level 303 are replaced by a single camera-type CCO at level 302, with this level 302 CCO being bound to all four cameras (and being configured to allow a user to launch a video window pop-up from any of the four cameras, or in some cases a multi-camera view comprising live video from all four). CCO Layers In some embodiments, CCOs are defined in multiple content layers (shown in terms of layer data 121a-n in FIG. 1A). These content layers may respectively include one or more CCOs. In some embodiments there are respective layers for content types (for example separate content layers for surveillance, HVAC, mechanical, electrical, financial), each having CCOs for that content type. In some cases CCOs include graphical icons that identify the content type with which they are associated (for example a CCO associated with surveillance content may carry a camera icon). In some embodiments, CCO aggregation may occur by combining CCOs from separate layers. For example, based on an aggregation algorithm, where two CCOs are to be displayed, and these are located within a threshold proximity, an algorithm is executed to combine their respective display parameter data (for example combining JSON data) thereby to provide a single CCO that provides all (or a selection of) the functionalities that would otherwise be provided by two separate CCOs. CCO layers are also helpful in enabling filtering of CCO display, for example based on user attributes and/or user-defined filters. This is discussed further below. CCO Display Relative to Zoom Level In various embodiments, as a result of CCO layers, tile-based management, or other approaches, logic that determines which CCOs are displayed is related to a zoom level. That is, a given CCO is visible by default only once a prerequisite zoom level is reached. In some embodiments, this aspect of the technology is enhanced by way of rules which define circumstances in which a given CCO may be visible at a zoom level preceding its prerequisite. This includes defining conditions (for example the presence of an alarm condition) which, if satisfied, result in either (i) a given CCO being visible at all zoom levels; or (ii) a given CCO being visible at a zoom level directly or indirectly preceding its prerequisite zoom level. CCO Display Management Based on User Attributes In some embodiments, interface 100 is configured such that CCOs are displayed based on one or more attributes of a user (operator). That is, a user provides credentials to access interface 100, and based on these credentials (which may reference user attributes stored in a remote database), decisions are made (preferably at a server level) as to which CCOs are to be displayed in interface 100. In this regard, one embodiment provides a computer implemented method for displaying building information to a user via a touch/gesture driven user interface that is rendered on a display screen, the method including: (i) based on current navigation data, displaying a portion of a map layer, wherein the map layer includes image data representative of a facility; and (ii) positioning, on the displayed portion of the map layer, a set of content control objects, wherein each content control object enables a user to load one or more content items via display objects superimposed on the map layer.
This method operates on the basis that the set of CCOs is selected from a collection of available CCOs based upon: (a) the current navigation data (for example defined in terms of pan and zoom position); and (b) one or more attributes of the user. In some cases each CCO is bound to at least one building resource, the one or more attributes of the user are indicative of access rights for building resources, and the set of CCOs is selected from the collection of available CCOs based upon the user's access rights with respect to the building resources to which the CCOs are bound. For example, a certain user has access only to view surveillance footage, and so only CCOs bound to cameras are shown. In some cases, there is a plurality of building resource categories, wherein each building resource belongs to a building resource category. The one or more attributes of the user are indicative of access rights for building resource categories, and the set of CCOs is selected from the collection of available CCOs based upon the user's access rights with respect to the building resources to which the CCOs are bound. For example, this may be used in conjunction with a layer approach as discussed above. Rather than determining whether or not to show a CCO on a per-CCO basis, the decision is made on a layer basis. The user attributes may include the likes of a tag in a user ID file, permissions in a permissions table, or substantially any other approach. Preferably, each CCO includes data indicative of the user attribute (e.g. access permission) required. In some cases the user attributes additionally include filtering parameters set by the user. For example, a user with access to video CCOs and HVAC CCOs may choose to filter thereby to only show video CCOs. In some cases the technical mechanisms for determining which CCOs are displayed vary between situations where access permissions are processed and where filters are processed. In cases where a CCO is bound to multiple resources, user-attribute display involves additional challenges. That is, the CCO may appear differently depending on the user's attributes. There are a few ways in which this is managed in embodiments, for example: A one-to-one relationship between CCOs and resources. CCOs may be aggregated together based on a set of rules, but this aggregation is performed following a determination as to which CCOs are to be displayed. For example, in respect of one user a map position (location and zoom level) may include a video CCO and HVAC CCO aggregated together, but for another user that same map position may only include the video CCO (if the user does not have access rights for HVAC, or has filtered out HVAC). Algorithms for enabling modification of data for a CCO that is bound to multiple resources, such that content is available only for a resource where the user has access. Using multiple CCO definitions for multi-resource-bound CCOs, so that separate data is stored to allow loading for a reduced number of the bound resources where appropriate. CCO display management based on user attributes is very useful in the context of interface 100, as it allows a single interface framework to serve multiple purposes, based on varying operator scope of responsibilities, access rights, roles, and current tasks. Interface 100 is therefore, at the back end, able to be configured to provide a wide range of rich content, whilst at the front end only content determined to be appropriate for a given user is displayed on the map (a selection sketch follows below).
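The following is a minimal TypeScript sketch of the selection step just described, assuming a simple category-based permission model; the types, fields, and permission scheme are illustrative assumptions rather than the specification's own data model.

```typescript
// Sketch: server-side selection of the CCOs to render, based on (a) the
// current navigation data (here, the loaded tile) and (b) user attributes
// (access rights plus optional user-defined filters).

type ResourceCategory = "surveillance" | "hvac" | "access" | "energy";

interface CCO {
  id: string;
  tile: { level: number; col: number; row: number }; // position defined per tile
  category: ResourceCategory;                        // category of the bound resource(s)
  boundResources: string[];
}

interface UserAttributes {
  permittedCategories: Set<ResourceCategory>; // e.g. drawn from a permissions table
  activeFilters: Set<ResourceCategory>;       // empty set means "no filter applied"
}

function selectCCOs(
  all: CCO[],
  tile: { level: number; col: number; row: number },
  user: UserAttributes,
): CCO[] {
  return all.filter(
    (cco) =>
      // (a) navigation data: only CCOs defined for the currently loaded tile
      cco.tile.level === tile.level &&
      cco.tile.col === tile.col &&
      cco.tile.row === tile.row &&
      // (b) user attributes: access rights, then any user-defined filter
      user.permittedCategories.has(cco.category) &&
      (user.activeFilters.size === 0 || user.activeFilters.has(cco.category)),
  );
}
```

Under this sketch, a layer-based decision (rather than per-CCO) would simply test `cco.category` once per layer before loading any of that layer's CCOs.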
User-Specific Content Layer In some embodiments a user is able to customise the location of CCOs. For example, a user is able to add a new CCO, which includes selecting a resource to which it is to be bound. The newly created CCO may reside in a user-specific content layer, which is in some cases available only to that user (and loaded automatically for that user), and in other cases published to other users. In some embodiments a user is additionally/alternately enabled to customise menu items accessible from a given CCO. Again, in some cases these customisations are made exclusively for the user in question, and in other cases they are published for other users (for example by updating the CCO data such that updated CCO data is loaded by the next user to navigate to a position containing that CCO). Multi-Level Map Navigation As noted above, in some cases only a single vertical zone is displayed via the background map layer, even in the case of a multi-level facility. In such cases, a navigation interface may be provided thereby to enable navigation between vertical zones (for example between floors). One embodiment provides a method including displaying, in a primary region of the user interface, a first map layer, wherein the first map layer includes image data representative of a facility, wherein the user interface is navigated by way of pan and zoom operations defined with respect to the map layer. The method also includes displaying a multi-level navigation interface, wherein the multi-level navigation interface displays a series of stacked map layer previews. The stacked map layer previews are defined relative to a common origin. That is, a certain longitude-latitude coordinate set is commonly defined across all map layers, allowing those layers to be stacked in a meaningful manner. In some embodiments the navigation interface is superimposed on the map layer. In this case, one map layer preview is representative of the first map layer and another map layer preview is representative of a second map layer. A user is enabled to interact with the multi-level navigation interface thereby to select a desired one of the stacked map layer previews. For example, this may include a gesture driven selection, optionally effectively scrolling upwards/downwards through the stack. In response to the user's selection of the map layer preview representative of a second map layer, the second map layer is displayed in the primary region of the user interface. In some cases the series of stacked map layers are defined by two-dimensional isometric views stacked thereby to provide a three-dimensional representation of a building defined by floors respectively represented by the layers. This is shown in FIG. 5, which shows isometric stacked views for multiple levels alongside a two-dimensional floorplan for one level. In some cases prior to the user's selection of the map layer preview representative of a second map layer, the first map layer is displayed for a region having a boundary defined in horizontal dimensions, and in response to the user's selection of the map layer preview representative of a second map layer, the second map layer is displayed in the primary region of the user interface for a region having the same boundary defined in terms of horizontal dimensions. That is, the user views the same region (in horizontal aspects) of a different level, as in the sketch below.
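A minimal sketch, in TypeScript, of the level-switching behaviour just described, assuming a shared-origin viewport representation; the identifiers follow the naming convention used later in this specification but are otherwise hypothetical.

```typescript
// Sketch: because every map layer is defined relative to a common origin,
// switching vertical levels is just a change of layer identifier — the
// viewport (and hence the on-screen horizontal boundary) carries over.

interface Viewport {
  x: number;     // pan position, in coordinates common to all layers
  y: number;
  zoom: number;  // zoom percentage
}

interface NavigationState {
  layerId: string; // e.g. "BuildingA/Level2"
  viewport: Viewport;
}

function selectLevel(state: NavigationState, newLayerId: string): NavigationState {
  // Only the layer changes; the viewport is copied unchanged.
  return { layerId: newLayerId, viewport: { ...state.viewport } };
}

// Example: the operator picks "BuildingA/Level3" from the stacked previews
// and sees the same region of the floor above, at the same pan and zoom.
const next = selectLevel(
  { layerId: "BuildingA/Level2", viewport: { x: 0.4, y: 0.6, zoom: 250 } },
  "BuildingA/Level3",
);
console.log(next.layerId, next.viewport);
```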
In some embodiments, in the case the pan and zoom location for the first map layer meets a first set of conditions, a first multi-level interface is displayed, and wherein, in the case the pan and zoom location for the first map layer meets a second set of conditions, a second multi-level interface is displayed. For example, this may be used for a multi-building facility: the multi-level interface is shown in respect of a building proximal (or defined around) the user's current navigation position. Persistent Content Positioning In some embodiments the user interface is configured to enable persistent display of a launched content display object at its launch location. This is configured such that, following navigation away from a map position such that a given content display object is no longer on-screen, and return to that map position, which requires re-loading of map image data for that position, the given content display object remains in its launch location. That is, loaded content display objects appear to remain in persistent positions (relative to the map) as a user navigates the map. In some embodiments, in response to a user command via a content display object to launch a content item, context information for the launched content item is stored, thereby to enable management of persistent display. Additionally, in the interests of conserving network and other computing resources, streaming of content by a given content display object is ceased when the given content item is not displayed on-screen. Preferably, persistent display is able to be controlled by the user. For example, some embodiments provide a mechanism for the user to, by way of a single command, close all persistently displayed content display objects. In another example, the user interface provides functionality for a user to select whether a given content display object is to be persistently displayed or, alternately, is to automatically close following navigation that causes the content item to no longer be on-screen. Transition of Content Between Map-Bound and Map-Unbound Layers As discussed above, an operator is enabled to launch various content display objects within interface 100, for example a display object configured to present live video from a camera. In some cases, as noted above, the position of a content display item, once launched, is persistent, in that, when a user navigates away from, and subsequently returns to, a given map view position, content items launched for display at that position remain in place. Such content display objects may be considered as being map-bound, in that they are bound to the map in terms of pan and/or zoom operations. That is, the content display object remains the same size and position relative to the background map layer during operator-controlled navigation (for example a content display object becomes smaller during a zoom-out navigation). In some embodiments, a user is enabled to transition content between a map-bound layer and a map-unbound layer. That is, such embodiments provide a computer implemented method including enabling the user to move a given display object between: (i) a map bound layer, wherein the display object is bound to the map layer for pan and/or zoom operations; and (ii) a map unbound layer, wherein the position and/or size of the display object remains constant relative to the display screen independent of pan and/or zoom operations.
In some embodiments, when in the map bound layer, the display object is bound to the map layer for pan and zoom operations. In some embodiments, in the map unbound layer, the position and/or size of the display object remains constant relative to the display screen independent of pan and zoom operations. In some embodiments, in the map unbound layer, both the position and size of the display object remain constant relative to the display screen independent of pan and/or zoom operations. The manner by which a display object is transitioned between the map-bound layer and map-unbound layer varies between embodiments. An exemplary approach is to use a “throw” gesture for progressing a given display object from the map bound layer to the map unbound layer. That is, interface 100 is configured to recognise a “throw” gesture made in respect of a content display object as a command not only to move the object based on the “throw” attributes (e.g. velocity and trajectory), but also as a command to transition the content item to a map-unbound layer. Handling of the map-unbound layer may be achieved via data and processing at the client or a server as a matter of design choice. FIG. 4 provides an example of how such transition operates, by reference to three exemplary simplified screenshots from interface 100. In screenshot 401 two video display objects (411 and 412) are launched, and by default reside in a map-bound layer. These are each configured to display live video data from respective cameras in a surveillance system. Object 411 is manipulated by way of a throw gesture, and moves into the position shown in screenshot 402. This throw manipulation also transitions object 411 into a map-unbound layer. Screenshot 403 shows interface 100 in a lower zoom position. In this screenshot, object 411 has remained the same size and position relative to the display screen, whereas object 412 has remained at a constant position and size relative to the background map layer (a coordinate sketch of this distinction follows at the end of this section). Sharing of Content to Networked Devices Some embodiments enable sharing of content between display 101 and other networked devices. For example, in one embodiment, in response to a “throw” gesture having predefined characteristics, the terminal providing interface 100 is configured for providing a signal to a second terminal in networked communication with the first terminal, thereby to make available, at the second terminal, the content associated with a building resource. This is illustrated in FIG. 1B, which shows a plurality of second networked terminals 161a-d. In some cases the user interface includes a plurality of peripheral throw locations, wherein each location is selectively associable with a specific remote terminal, wherein a “throw” gesture for a display object having characteristics representative of a given throw location causes the providing of a signal to the associated specific remote terminal in networked communication with the first terminal, thereby to make available, at that remote terminal, the content associated with a building resource. For example, a user is able to manually associate each throw location with a desired terminal. In some embodiments sensing equipment is configured to determine the relative location of the second terminal with respect to the first terminal, thereby to enable determination of whether the “throw” gesture has a trajectory towards the second terminal. The sensing equipment may include image sensing equipment, coupled to either or both of the first terminal or the second terminal.
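As a minimal TypeScript sketch of the map-bound/map-unbound distinction illustrated by FIG. 4, assuming a simple linear pan/scale view model; the coordinate representation and all names are assumptions for illustration, not the specification's implementation.

```typescript
// Sketch: a bound object is positioned in map coordinates and follows
// pan/zoom; an unbound object is fixed in screen coordinates.

type Layer = "map-bound" | "map-unbound";

interface DisplayObject {
  layer: Layer;
  x: number;    // map coordinates if bound, screen pixels if unbound
  y: number;
  size: number;
}

interface View {
  panX: number;  // map coordinate at the screen origin
  panY: number;
  scale: number; // screen pixels per map unit (grows as the user zooms in)
}

// Where and how large an object appears on screen for the current view.
function toScreen(obj: DisplayObject, view: View) {
  if (obj.layer === "map-unbound") {
    // Unbound: constant position and size regardless of pan/zoom (object 411).
    return { x: obj.x, y: obj.y, size: obj.size };
  }
  // Bound: moves and scales with the background map layer (object 412).
  return {
    x: (obj.x - view.panX) * view.scale,
    y: (obj.y - view.panY) * view.scale,
    size: obj.size * view.scale,
  };
}

// A "throw" gesture could transition an object between layers by converting
// its coordinates once at the moment of transition, then flipping the tag.
function throwToUnbound(obj: DisplayObject, view: View): DisplayObject {
  const s = toScreen(obj, view);
  return { layer: "map-unbound", x: s.x, y: s.y, size: s.size };
}
```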
Multi-Modal Operation In some embodiments a touch/gesture driven user interface is simultaneously controllable by multiple modes, including at least one gesture-driven mode, and at least one voice-driven mode. Preferably, panning and zooming operations with respect to the map layer are controllable using commands including the name data for the building resources. In this regard, a voice-command processing module is optionally configured to have access to the name data, thereby to enable processing of voice commands in the at least one voice driven mode, wherein those commands are indicative of one or more aspects of the name data. The voice-command processing module is responsive to a command indicative of name data for a given one of the building resources for performing a command in respect of that building resource. For example, in response to a predefined voice command indicative of name data for a given one of the building resources, the user interface may be configured to navigate to an optimal map layer position, in terms of pan and zoom, for displaying that given one of the building resources. By way of example, building resources may be named according to a location-specific, hierarchy-derived naming convention, for example: BuildingA, BuildingA/Level2, BuildingA/Level2/ConferenceRoom, and BuildingA/Level2/ConferenceRoom/Camera1. A given voice command may be matched against this name to determine whether it is likely to refer to that building resource. This need not require the full name or all aspects. For example, a voice command including “conference room” and “camera 1” may uniquely identify that resource (for example if there is only a single ConferenceRoom defined in the name data). The voice command processing is also preferably context dependent, favouring name data for resources shown on-screen. For example, if only one resource including Camera1 in its name is shown on-screen, then a command including “camera 1” is inferred to relate to that resource. Multi-Screen Implementation In some embodiments, the user interface is provided over multiple displays. An example is illustrated in FIG. 6, where three screens are used. A first screen 601, in this example being a wall display with touch/gesture controls, provides a map-based interface (for example as described in examples above). This map-based interface, based on a given point-in-time rendering, includes a plurality of CCOs (for example as described elsewhere herein). A second screen 602 provides an object detail view, which is configured to provide a detailed view for a plurality of content items referenced by CCOs. For example, this may include displaying a plurality of video display objects, each providing a live video feed from a respective camera (with each camera being referenced by a given CCO). Other examples include display objects that provide status and/or configuration information for other facility items, such as access control devices, HVAC components, and so on. Preferred embodiments provide a functionality whereby the system is configured to define a view on screen 602 responsive to CCOs visible on screen 601. This may occur automatically (for example following map repositioning) or responsive to a user command (for example the user clicks a button to “throw” a new view to screen 602 based on currently visible CCOs). FIG. 6 also includes a third screen 603, which in this embodiment is a desk display (again preferably touch/gesture driven), which provides a control UI.
This control UI preferably provides detailed information regarding the facility. In some cases a view on screen 603 is defined based on CCOs visible on screen 601, for example a “quicklist” which provides access to diagnostic information and the like for the relevant facility items. Screens 601, 602 and 603 are driven by processing components 604, which may include multiple processing units. A common keyboard and mouse are preferably coupled to processing components 604, and configured to enable movement of a mouse cursor between all three screens. Alternate CCO Implementation FIG. 7 illustrates an alternative CCO implementation, showing a progression of CCO display subject to user interaction. Icon 701 represents an exemplary CCO, in this case being for a camera (i.e. it is a camera icon). Interacting with this CCO results in the launching of a compact detail display 702, which includes a display object 703. This display object provides a view for the relevant facility item referenced by the CCO, for example a live stream of surveillance footage in the context of a camera. Further interaction expands display 702 to provide an expanded detailed display, which additionally includes object controls 704 (for example video controls, such as rewind, record, pause, PTZ, and the like) and other controls 705. In some cases the other controls enable a user to associate links to other system user interface objects/views with the CCO. CONCLUSIONS AND INTERPRETATION It will be appreciated that the disclosure above provides various significant systems, methods, frameworks and methodologies for enabling display of facility information and surveillance data via a map-based user interface. Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” “analyzing,” or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities into other data similarly represented as physical quantities. In a similar manner, the term “processor” may refer to any device or portion of a device that processes electronic data, e.g., from registers and/or memory to transform that electronic data into other electronic data that, e.g., may be stored in registers and/or memory. A “computer” or a “computing machine” or a “computing platform” may include one or more processors. The methodologies described herein are, in one embodiment, performable by one or more processors that accept computer-readable (also called machine-readable) code containing a set of instructions that when executed by one or more of the processors carry out at least one of the methods described herein. Any processor capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken is included. Thus, one example is a typical processing system that includes one or more processors. Each processor may include one or more of a CPU, a graphics processing unit, and a programmable DSP unit. The processing system further may include a memory subsystem including main RAM and/or a static RAM, and/or ROM. A bus subsystem may be included for communicating between the components. The processing system further may be a distributed processing system with processors coupled by a network.
If the processing system requires a display, such a display may be included, e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT) display. If manual data entry is required, the processing system also includes an input device such as one or more of an alphanumeric input unit such as a keyboard, a pointing control device such as a mouse, and so forth. The term memory unit as used herein, if clear from the context and unless explicitly stated otherwise, also encompasses a storage system such as a disk drive unit. The processing system in some configurations may include a sound output device, and a network interface device. The memory subsystem thus includes a computer-readable carrier medium that carries computer-readable code (e.g., software) including a set of instructions to cause performing, when executed by one or more processors, one or more of the methods described herein. Note that when the method includes several elements, e.g., several steps, no ordering of such elements is implied, unless specifically stated. The software may reside in the hard disk, or may also reside, completely or at least partially, within the RAM and/or within the processor during execution thereof by the computer system. Thus, the memory and the processor also constitute a computer-readable carrier medium carrying computer-readable code. Furthermore, a computer-readable carrier medium may form, or be included in, a computer program product. In alternative embodiments, the one or more processors operate as a standalone device or may be connected, e.g., networked to other processor(s); in a networked deployment, the one or more processors may operate in the capacity of a server or a user machine in a server-user network environment, or as a peer machine in a peer-to-peer or distributed network environment. The one or more processors may form a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Note that while diagrams only show a single processor and a single memory that carries the computer-readable code, those in the art will understand that many of the components described above are included, but not explicitly shown or described in order not to obscure the inventive aspect. For example, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. Thus, one embodiment of each of the methods described herein is in the form of a computer-readable carrier medium carrying a set of instructions, e.g., a computer program that is for execution on one or more processors, e.g., one or more processors that are part of a web server arrangement. Thus, as will be appreciated by those skilled in the art, embodiments of the present invention may be embodied as a method, an apparatus such as a special purpose apparatus, an apparatus such as a data processing system, or a computer-readable carrier medium, e.g., a computer program product. The computer-readable carrier medium carries computer readable code including a set of instructions that when executed on one or more processors cause the processor or processors to implement a method.
Accordingly, aspects of the present invention may take the form of a method, an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a carrier medium (e.g., a computer program product on a computer-readable storage medium) carrying computer-readable program code embodied in the medium. The software may further be transmitted or received over a network via a network interface device. While the carrier medium is shown in an exemplary embodiment to be a single medium, the term “carrier medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “carrier medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by one or more of the processors and that causes the one or more processors to perform any one or more of the methodologies of the present invention. A carrier medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media includes, for example, optical disks, magnetic disks, and magneto-optical disks. Volatile media includes dynamic memory, such as main memory. Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise a bus subsystem. Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications. For example, the term “carrier medium” shall accordingly be taken to include, but not be limited to, solid-state memories, a computer product embodied in optical and magnetic media; a medium bearing a propagated signal detectable by at least one processor of one or more processors and representing a set of instructions that, when executed, implement a method; and a transmission medium in a network bearing a propagated signal detectable by at least one processor of the one or more processors and representing the set of instructions. It will be understood that the steps of methods discussed are performed in one embodiment by an appropriate processor (or processors) of a processing (i.e., computer) system executing instructions (computer-readable code) stored in storage. It will also be understood that the invention is not limited to any particular implementation or programming technique and that the invention may be implemented using any appropriate techniques for implementing the functionality described herein. The invention is not limited to any particular programming language or operating system. It should be appreciated that in the above description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, FIG., or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment.
Thus, the claims following the Detailed Description are hereby expressly incorporated into this Detailed Description, with each claim standing on its own as a separate embodiment of this invention. Furthermore, while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention, and form different embodiments, as would be understood by those skilled in the art. For example, in the following claims, any of the claimed embodiments can be used in any combination. Furthermore, some of the embodiments are described herein as a method or combination of elements of a method that can be implemented by a processor of a computer system or by other means of carrying out the function. Thus, a processor with the necessary instructions for carrying out such a method or element of a method forms a means for carrying out the method or element of a method. Furthermore, an element described herein of an apparatus embodiment is an example of a means for carrying out the function performed by the element for the purpose of carrying out the invention. In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description. Similarly, it is to be noticed that the term coupled, when used in the claims, should not be interpreted as being limited to direct connections only. The terms “coupled” and “connected,” along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. Thus, the scope of the expression a device A coupled to a device B should not be limited to devices or systems wherein an output of device A is directly connected to an input of device B. It means that there exists a path between an output of A and an input of B which may be a path including other devices or means. “Coupled” may mean that two or more elements are either in direct physical or electrical contact, or that two or more elements are not in direct contact with each other but yet still co-operate or interact with each other. Thus, while there has been described what are believed to be the preferred embodiments of the invention, those skilled in the art will recognize that other and further modifications may be made thereto without departing from the spirit of the invention, and it is intended to claim all such changes and modifications as falling within the scope of the invention. For example, any formulas given above are merely representative of procedures that may be used. Functionality may be added or deleted from the block diagrams and operations may be interchanged among functional blocks. Steps may be added to or deleted from methods described within the scope of the present invention. | 54,887
11861156 | DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS The following describes technical solutions of embodiments in this application with reference to the accompanying drawings. In the descriptions of embodiments of this application, “/” means “or” unless otherwise specified. For example, A/B may represent A or B. In this specification, “and/or” describes merely an association relationship between associated objects and indicates that three relationships may exist. For example, A and/or B may indicate the following three cases: Only A exists, both A and B exist, and only B exists. In addition, in the descriptions of embodiments of this application, “a plurality of” means two or more. The terms “first” and “second” mentioned below are merely intended for a purpose of description, and shall not be understood as an indication or implication of relative importance or implicit indication of the number of indicated technical features. Therefore, a feature limited by “first” and “second” may explicitly or implicitly include one or more of the features. In the descriptions of embodiments of this application, unless otherwise specified, “a plurality of” means two or more. An orientation or a location relationship indicated by the terms “middle”, “left”, “right”, “up”, “down”, and the like is based on an orientation or a location relationship shown in the accompanying drawings, and is merely intended to facilitate description of this application and simplify description, instead of indicating or implying that the mentioned apparatus or element needs to have a specific orientation or needs to be constructed and operated in a specific orientation, and therefore cannot be construed as a limitation on this application. The following first describes an electronic device 100 involved in embodiments of this application. FIG. 1A is a schematic diagram of a structure of the example electronic device 100 according to an embodiment of this application. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communications module 150, a wireless communications module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display 194, a subscriber identification module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, a barometric pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, an optical proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like. It may be understood that the structure shown in this embodiment of this application does not constitute a limitation on the electronic device 100. In some other embodiments, the electronic device 100 may include more or fewer components than those shown in the figure, combine some components, split some components, or have different component arrangements. The components shown in the figure may be implemented by using hardware, software, or a combination of software and hardware. The processor 110 may include one or more processing units.
For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, a neural-network processing unit (NPU), and/or the like. Different processing units may be independent devices, or may be integrated into one or more processors. The controller may be a nerve center and a command center of the electronic device 100. The controller may generate an operation control signal based on an instruction operation code and a time sequence signal, to control instruction fetching and instruction execution. A memory may be further disposed in the processor 110, to store instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may store instructions or data that has been used or is cyclically used by the processor 110. If the processor 110 needs to use the instructions or the data again, the processor may directly invoke the instructions or the data from the memory. This avoids repeated access, reduces waiting time of the processor 110, and improves system efficiency. In some embodiments, the processor 110 may include one or more interfaces. The interface may be an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identification module (SIM) interface, a universal serial bus (USB) port, and/or the like. The I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may include a plurality of I2C buses. The processor 110 may be coupled to the touch sensor 180K, a charger, a camera flash, the camera 193, and the like through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180K through the I2C interface, so that the processor 110 can communicate with the touch sensor 180K through the I2C bus interface, to implement a touch function of the electronic device 100. The I2S interface may be used for audio communication. In some embodiments, the processor 110 may include a plurality of I2S buses. The processor 110 may be coupled to the audio module 170 through the I2S bus, to implement communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communications module 160 through the I2S interface, to implement a function of answering a call by using a Bluetooth headset. The PCM interface may also be used for audio communication, and may perform sampling, quantizing, and encoding on an analog signal. In some embodiments, the audio module 170 may be coupled to the wireless communications module 160 through a PCM bus interface. In some embodiments, the audio module 170 may alternatively transmit an audio signal to the wireless communications module 160 through the PCM interface, to implement a function of answering a call by using a Bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication. The UART interface is a universal serial data bus, and is used for asynchronous communication. The bus may be a bidirectional communications bus.
The bus converts to-be-transmitted data between serial communication and parallel communication. In some embodiments, the UART interface is usually configured to connect the processor110to the wireless communications module160. For example, the processor110communicates with a Bluetooth module in the wireless communications module160through the UART interface, to implement a Bluetooth function. In some embodiments, the audio module170may transmit an audio signal to the wireless communications module160through the UART interface, to implement a function of playing music by using a Bluetooth headset. The MIPI interface may be configured to connect the processor110to a peripheral component such as the display194or the camera193. The MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), or the like. In some embodiments, the processor110communicates with the camera193through the CSI, to implement a photographing function of the electronic device100. The processor110communicates with the display194through the DSI, to implement a display function of the electronic device100. The GPIO interface may be configured by using software. The GPIO interface may be configured to carry a control signal or a data signal. In some embodiments, the GPIO interface may be configured to connect the processor110to the camera193, the display194, the wireless communications module160, the audio module170, the sensor module180, or the like. The GPIO interface may be further configured as the I2C interface, the I2S interface, the UART interface, the MIPI interface, or the like. The USB interface130is an interface that conforms to the USB standard specification, and may be specifically a mini USB interface, a micro USB interface, a USB Type-C interface, or the like. The USB interface130may be configured to connect to a charger to charge the electronic device100, may be configured to transmit data between the electronic device100and a peripheral device, or may be configured to connect to a headset to play audio by using the headset. The interface may be further configured to connect to another electronic device such as an AR device. It may be understood that an interface connection relationship between the modules shown in this embodiment of this application is merely an example for description, and does not constitute a limitation on a structure of the electronic device100. In some other embodiments of this application, the electronic device100may alternatively use an interface connection manner different from that in the foregoing embodiment, or combine a plurality of interface connection manners. The charging management module140is configured to receive charging input from the charger. The charger may be a wireless or wired charger. In some embodiments of wired charging, the charging management module140may receive a charging input from a wired charger through the USB interface130. In some embodiments of wireless charging, the charging management module140may receive a wireless charging input through a wireless charging coil of the electronic device100. The charging management module140may further supply power to the electronic device by using the power management module141while charging the battery142. The power management module141is configured to connect the battery142, the charging management module140, and the processor110.
The power management module141receives an input from the battery142and/or the charging management module140, and supplies power to the processor110, the internal memory121, an external memory, the display194, the camera193, the wireless communications module160, and the like. The power management module141may be further configured to monitor parameters such as a battery capacity, a battery cycle count, and a battery health status (electric leakage and impedance). In some other embodiments, the power management module141may alternatively be disposed in the processor110. In some other embodiments, the power management module141may alternatively be disposed in a same device as the charging management module140. A wireless communications function of the electronic device100may be implemented by using the antenna1, antenna2, mobile communications module150, wireless communications module160, modem processor, baseband processor, and the like. The antenna1and the antenna2are configured to transmit and receive an electromagnetic wave signal. Each antenna of the electronic device100may be configured to cover one or more communications frequency bands. Different antennas may be further multiplexed, to improve antenna utilization. For example, the antenna1may be multiplexed as a diversity antenna in a wireless local area network. In some other embodiments, an antenna may be used in combination with a tuning switch. The mobile communications module150may provide a wireless communications solution that is applied to the electronic device100and that involves 2G/3G/4G/5G communication, or the like. The mobile communications module150may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communications module150may receive an electromagnetic wave through the antenna1, perform processing such as filtering or amplification on the received electromagnetic wave, and transmit a processed electromagnetic wave to the modem processor for demodulation. The mobile communications module150may further amplify a signal modulated by the modem processor, and convert the signal into an electromagnetic wave for radiation through the antenna1. In some embodiments, at least some functional modules of the mobile communications module150may be disposed in the processor110. In some embodiments, at least some functional modules of the mobile communications module150may be disposed in a same device as at least some modules of the processor110. The modem processor may include a modulator and a demodulator. The modulator is configured to modulate a to-be-sent low-frequency baseband signal into a medium-high frequency signal. The demodulator is configured to demodulate a received electromagnetic wave signal into a low-frequency baseband signal. Then, the demodulator transmits the low-frequency baseband signal obtained through demodulation to the baseband processor for processing. The baseband processor processes the low-frequency baseband signal, and then transfers a signal obtained after processing to the application processor. The application processor outputs a sound signal by using an audio device (which is not limited to the speaker170A, the receiver170B, or the like), or displays an image or a video on the display194. In some embodiments, the modem processor may be an independent device.
In some other embodiments, the modem processor may be independent of the processor110, and is disposed in a same device as the mobile communications module150or another functional module. The wireless communications module160may provide a wireless communications solution that is applied to the electronic device100and that involves UWB, a wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), a global navigation satellite system (GNSS), frequency modulation (FM), a near field communication (NFC) technology, an infrared (IR) technology, or the like. The wireless communications module160may be one or more components integrating at least one communications processor module. The wireless communications module160receives an electromagnetic wave through the antenna2, performs frequency modulation and filtering processing on an electromagnetic wave signal, and sends a processed signal to the processor110. The wireless communications module160may further receive a to-be-sent signal from the processor110, perform frequency modulation and amplification on the signal, and convert the signal into an electromagnetic wave for radiation through the antenna2. In some embodiments, the antenna1and the mobile communications module150in the electronic device100are coupled, and the antenna2and the wireless communications module160in the electronic device100are coupled, so that the electronic device100can communicate with a network and another device by using a wireless communications technology. The wireless communications technology may include a global system for mobile communications (GSM), a general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, a GNSS, a WLAN, NFC, FM, an IR technology, and/or the like. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS). The electronic device100implements a display function through the GPU, the display194, the application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display194and the application processor. The GPU is configured to perform mathematical and geometric calculation and render an image. The processor110may include one or more GPUs that execute program instructions to generate or change display information. The display194is configured to display an image, a video, or the like. The display194includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini-LED, a micro-LED, a micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device100may include one or N displays194, where N is a positive integer greater than 1. In some embodiments of this application, the display194displays an interface that is currently output by the system. For example, the interface is an interface of an instant messaging application.
The electronic device100can implement a photographing function by using the ISP, the camera193, the video codec, the GPU, the display194, the application processor, and the like. The ISP is configured to process data fed back by the camera193. For example, during photographing, a shutter is pressed, and light is transmitted to a photosensitive element of the camera through a lens. An optical signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, so as to convert the electrical signal into a visible image. The ISP may further perform algorithm optimization on noise, brightness, and complexion of the image. The ISP may further optimize parameters such as exposure and a color temperature of a photographing scenario. In some embodiments, the ISP may be disposed in the camera193. The camera193is configured to capture a static image or a video. An optical image is generated for an object through the lens, and is projected onto the photosensitive element. The photosensitive element may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) photoelectric transistor. The photosensitive element converts an optical signal into an electrical signal, and then transmits the electrical signal to the ISP for converting the electrical signal into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format, such as RGB or YUV. In some embodiments, the electronic device100may include one or N cameras193, where N is a positive integer greater than 1. The digital signal processor is configured to process a digital signal, and may further process another digital signal in addition to the digital image signal. For example, when the electronic device100selects a frequency, the digital signal processor is configured to perform Fourier transform on frequency energy. The video codec is configured to compress or decompress a digital video. The electronic device100may support one or more types of video codecs. Therefore, the electronic device100may play or record videos in a plurality of coding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, and MPEG-4. The NPU is a neural-network (NN) computing processor. The NPU quickly processes input information with reference to a structure of a biological neural network, such as a transfer mode between human brain neurons, and may further perform self-learning continuously. Applications such as intelligent cognition of the electronic device100, for example, image recognition, facial recognition, speech recognition, and text understanding, may be implemented through the NPU. The external memory interface120may be used to connect to an external storage card, such as a micro SD card, to extend a storage capability of the electronic device100. The external storage card communicates with the processor110through the external memory interface120, to implement a data storage function. For example, files such as music and a video are stored in the external storage card. The internal memory121may be configured to store computer-executable program code that includes instructions. The processor110implements various functional applications and data processing of the electronic device100by running the instructions stored in the internal memory121. The internal memory121may include a program storage area and a data storage area.
The program storage area may store an operating system, an application required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data (such as audio data and a phone book) and the like that are created during use of the electronic device100. In addition, the internal memory121may include a high-speed random access memory, and may further include a nonvolatile memory, for example, at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS). The electronic device100may implement an audio function, for example, music playing and recording, through the audio module170, the speaker170A, the receiver170B, the microphone170C, the headset jack170D, the application processor, and the like. The audio module170is configured to convert digital audio information into an analog audio signal output, and is also configured to convert an analog audio input into a digital audio signal. The audio module170may be further configured to code and decode an audio signal. In some embodiments, the audio module170may be disposed in the processor110, or some functional modules of the audio module170are disposed in the processor110. The speaker170A, also referred to as a “loudspeaker”, is configured to convert an electrical audio signal into a sound signal. The speaker170A may be used to listen to music or answer a hands-free call on the electronic device100. The receiver170B, also referred to as an “earpiece”, is configured to convert an electrical audio signal into a sound signal. When the electronic device100is used to answer a call or listen to audio information, the receiver170B may be put close to a human ear to listen to a voice. The microphone170C, also referred to as a “mike” or a “mic”, is configured to convert a sound signal into an electrical signal. When making a call or sending speech information, a user may move the mouth close to the microphone170C and make a sound, so as to input a sound signal to the microphone170C. At least one microphone170C may be disposed in the electronic device100. In some other embodiments, two microphones170C may be disposed in the electronic device100, to collect a sound signal and to further perform noise reduction. In some other embodiments, three, four, or more microphones170C may alternatively be disposed in the electronic device100, to collect a sound signal, perform noise reduction, and identify a sound source, so as to implement a directional recording function and the like. The headset jack170D is configured to connect to a wired headset. The headset jack170D may be the USB interface130, or may be a 3.5-mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications and internet association of the USA (CTIA) standard interface. The pressure sensor180A is configured to sense a pressure signal, and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor180A may be disposed on the display194. In some optional embodiments of this application, the pressure sensor180A may be configured to capture a pressure value generated when a finger part of a user touches the display, and transmit the pressure value to the processor. In this way, the processor can identify the finger part through which the user enters a user operation.
There are various types of pressure sensors180A, such as a resistive pressure sensor, an inductive pressure sensor, and a capacitive pressure sensor. The capacitive pressure sensor may include at least two parallel plates made of conductive materials. When a force is applied to the pressure sensor180A, capacitance between electrodes changes. The electronic device100determines pressure intensity based on a capacitance change. When a touch operation is performed on the display194, the electronic device100receives intensity of the touch operation through the pressure sensor180A. The electronic device100may also calculate a touch location based on a detection signal of the pressure sensor180A. In some embodiments, different touch positions may correspond to different operation instructions. In some optional embodiments, the pressure sensor180A may further calculate a quantity of touch points based on a detected signal, and transmit a calculated value to the processor. In this way, the processor can identify whether the user uses a single finger or a plurality of fingers to enter a user operation. The gyro sensor180B may be configured to determine a motion posture of the electronic device100. In some embodiments, an angular velocity of the electronic device100around three axes (namely, axes x, y, and z of the electronic device) may be determined by using the gyro sensor180B. The gyro sensor180B may be configured to ensure image stabilization during photographing. For example, when the shutter is pressed, the gyro sensor180B detects an angle at which the electronic device100jitters, calculates, based on the angle, a distance for which a lens module needs to compensate, and allows the lens to cancel the jitter of the electronic device100through reverse motion. In this way, image stabilization is ensured. The gyro sensor180B may be further used in a navigation scenario and a motion-sensing game scenario. The barometric pressure sensor180C is configured to measure barometric pressure. In some embodiments, the electronic device100calculates an altitude based on a value of the barometric pressure measured by the barometric pressure sensor180C, to assist in positioning and navigation. The magnetic sensor180D includes a Hall sensor. The electronic device100may detect opening and closing of a flip leather case by using the magnetic sensor180D. In some embodiments, if the electronic device100is a clamshell phone, the electronic device100may detect opening and closing of a flip cover based on the magnetic sensor180D. Further, a feature such as automatic unlocking upon opening of the flip cover is set based on a detected opening or closing state of the leather case or flip cover. The acceleration sensor180E may detect acceleration magnitudes in all directions (usually along three axes) of the electronic device100. When the electronic device100is still, a magnitude and direction of gravity may be detected. The acceleration sensor180E may be further configured to identify a posture of the electronic device, and is used for landscape/portrait mode switching and an application such as a pedometer or the like. In some optional embodiments of this application, the acceleration sensor180E may be configured to capture an acceleration value generated when a finger part of a user touches the display (or when a user finger taps a rear cover frame of the electronic device100), and transmit the acceleration value to the processor. 
In this way, the processor can identify the finger part through which the user enters a user operation. The distance sensor180F is configured to measure a distance. The electronic device100may measure a distance in an infrared manner or a laser manner. In some embodiments, in a photographing scenario, the electronic device100may measure a distance by using the distance sensor180F, to implement quick focusing. The optical proximity sensor180G may include a light-emitting diode (LED) and an optical detector, for example, a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device100emits infrared light by using the light-emitting diode. The electronic device100detects infrared reflected light from a nearby object by using the photodiode. When adequate reflected light is detected, the electronic device100may determine that there is an object near the electronic device100. When inadequate reflected light is detected, the electronic device100may determine that there is no object near the electronic device100. The electronic device100may use the optical proximity sensor180G to detect that the user holds the electronic device100close to an ear for a call, so as to automatically turn off the display for power saving. The optical proximity sensor180G may also be used in a leather case mode or a pocket mode to automatically unlock or lock the screen. The ambient light sensor180L is configured to sense brightness of ambient light. The electronic device100may adaptively adjust brightness of the display194based on the sensed brightness of ambient light. The ambient light sensor180L may also be configured to automatically adjust white balance during photographing. The ambient light sensor180L may also cooperate with the optical proximity sensor180G to detect whether the electronic device100is in a pocket, so as to avoid an accidental touch. The fingerprint sensor180H is configured to collect a fingerprint. The electronic device100may use a feature of the collected fingerprint to implement fingerprint-based unlocking, application lock access, fingerprint-based photographing, fingerprint-based call answering, and the like. The temperature sensor180J is configured to detect a temperature. In some embodiments, the electronic device100performs a temperature processing policy based on the temperature detected by the temperature sensor180J. For example, if a temperature reported by the temperature sensor180J exceeds a threshold, the electronic device100degrades performance of a processor near the temperature sensor180J, to reduce power consumption and implement thermal protection. In some other embodiments, if the temperature is less than another threshold, the electronic device100heats the battery142to prevent the electronic device100from being shut down abnormally due to a low temperature. In some other embodiments, if the temperature is less than still another threshold, the electronic device100boosts an output voltage of the battery142to prevent an abnormal shutdown caused by a low temperature. The touch sensor180K is also referred to as a “touch panel”. The touch sensor180K may be disposed on the display194. The touch sensor180K and the display194form a touchscreen, which is also referred to as a touch screen. The touch sensor180K is configured to detect a touch operation on or near the touch sensor180K. The touch operation refers to an operation of touching the display194by a hand, an elbow, a stylus, or the like of the user.
The touch sensor may transfer the detected touch operation to the application processor to determine a type of a touch event. A visual output related to the touch operation may be provided through the display194. In some other embodiments, the touch sensor180K may alternatively be disposed on a surface of the electronic device100at a location different from a location of the display194. The bone conduction sensor180M may obtain a vibration signal. In some embodiments, the bone conduction sensor180M may obtain a vibration signal of a vibration bone of a human vocal-cord part. The bone conduction sensor180M may also be in contact with a human pulse to receive a pulsating blood pressure signal. In some embodiments, the bone conduction sensor180M may alternatively be disposed in the headset, to constitute a bone conduction headset. The audio module170may obtain a voice signal through parsing based on the vibration signal that is of the vibration bone of the vocal-cord part and that is obtained by the bone conduction sensor180M, to implement a voice function. The application processor may parse heart rate information based on the pulsating blood pressure signal obtained by the bone conduction sensor180M, to implement a heart rate detection function. The button190includes a power button, a volume button, and the like. The button190may be a mechanical button or a touch-sensitive button. The electronic device100may receive a button input, and generate a button signal input related to user settings and function control of the electronic device100. The motor191may generate a vibration prompt. The motor191may be configured to produce an incoming call vibration prompt and a touch vibration feedback. For example, touch operations performed on different applications (such as photographing and audio playing applications) may correspond to different vibration feedback effects. Touch operations performed on different areas of the display194may also correspond to different vibration feedback effects of the motor191. Different application scenarios (such as time reminding, information receiving, an alarm clock, and a game) may also correspond to different vibration feedback effects. A touch vibration feedback effect may be further customized. The indicator192may be an indicator light, and may be configured to indicate a charging status and a power change, or may be configured to indicate a message, a missed call, a notification, and the like. The SIM card interface195is configured to connect to a SIM card. The SIM card may be inserted into the SIM card interface195or removed from the SIM card interface195, to be in contact with or separated from the electronic device100. In embodiments of this application, a mobile phone may quickly switch to a previously used application by using a multi-task interface. The multi-task interface includes one or more pages, and each of the one or more pages corresponds to an application of the mobile phone. In some embodiments, the application is an application that is running on the mobile phone. The mobile phone may start and simultaneously run a plurality of applications to provide different services or functions for a user. That the mobile phone simultaneously runs a plurality of applications means that the mobile phone has started the plurality of applications, and the plurality of applications are not closed. 
Resources such as a memory occupied by the plurality of applications are not deleted by the mobile phone, the plurality of applications simultaneously occupy the resources such as the memory in the background, and the plurality of applications are not required to interact with the user at the same time in the foreground. For example, the mobile phone successively starts a music application, a gallery application, and an instant messaging application, and simultaneously runs the three applications. In this case, the multi-task interface includes a page corresponding to the music application, a page corresponding to the gallery application, and a page corresponding to the instant messaging application. The user may trigger one of the pages on the multi-task interface to switch to an interface of an application corresponding to the page. During use of an application, if the user switches to another application or a desktop to perform an operation, the electronic device may retain the previously used application as a background application in a multi-task queue. When a plurality of applications are simultaneously run, the mobile phone may generate, based on the plurality of applications in the multi-task queue, a page corresponding to each of the applications. In some embodiments, an application corresponding to a page on the multi-task interface is an application that has been started by the mobile phone. The user may restart the application based on the page corresponding to the application on the multi-task interface. In some application scenarios, the mobile phone displays the multi-task interface after detecting an operation that is indicated by the user and that is of opening the multi-task interface. The multi-task interface includes one or more pages in the mobile phone. By using a page on the multi-task interface, the mobile phone may quickly enter an application corresponding to the page, and display a user interface of the application. For example, the user may directly enter a main interface (or an interface (a non-payment code interface) displayed when a payment application is exited last time) of the payment application from a news browsing application by using the multi-task interface, and then perform a series of operations to enter a payment code interface. An embodiment of this application provides a method for displaying an application interface of application software. In the foregoing application scenario, the user may directly switch from the multi-task interface to the payment code interface of the payment application. This simplifies user operations. Specifically, the electronic device100may add a shortcut control for any application interface of application software, to establish an association relationship between the shortcut control and the application interface. The user may quickly access the application interface corresponding to the shortcut control by triggering the shortcut control. In this application, the user can quickly open a specific interface or enable a specific function of an application. Therefore, access efficiency is improved. The application interface herein includes any interface of the application software in a running process. When the electronic device100runs an application, a task is started. A task includes one or more activities. The activity is an application component configured to implement interaction between the electronic device100and the user. One activity provides one application interface.
The electronic device100may make a response based on an event triggered by the user on the application interface. In this application, the electronic device100establishes an association relationship between a shortcut control and an application interface. By triggering the shortcut control, the electronic device100may be switched from a current activity to an activity associated with the shortcut control. Therefore, a quick switch between application interfaces is implemented. FIG.1Bis a block diagram of a software structure of the electronic device100according to an embodiment of this application. In a layered architecture, software is divided into several layers, and each layer has a clear role and task. The layers communicate with each other through a software interface. In some embodiments, an Android system is divided into the following five layers from top to bottom: an application layer, an application framework layer, an Android runtime and system library (not shown inFIG.1B), a hardware abstraction layer (HAL) (not shown inFIG.1B), and a kernel layer. The application layer may include a series of application packages. As shown inFIG.1B, the application packages may include applications such as Camera, Gallery, Calendar, Calls, Map, Navigation, WLAN, Bluetooth, Music, Videos, Games, Shopping, Travel, Instant Messaging (such as Messages), and the like. In addition, the application packages may further include system applications such as a home screen (namely, a desktop), a leftmost screen, a control center, a notification center, and the like. The application framework layer provides an application programming interface (API) and a programming framework for an application at the application layer. The application framework layer includes some predefined functions. As shown inFIG.1B, the application framework layer may include an input manager, a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, a display manager, an activity manager, and the like. For ease of description, inFIG.1B, an application framework layer that includes an input manager, a window manager, a content provider, a view system, and an activity manager is used as an example for illustration. It should be noted that any two of the input manager, window manager, content provider, view system, and activity manager may invoke each other. The input manager is configured to receive an instruction or a request from a lower layer such as the kernel layer or the hardware abstraction layer. The window manager is configured to manage a window program. The window manager may obtain a size of a display, determine whether a status bar exists, perform screen locking, take a screenshot, and the like. In this application, the window manager is configured to: when the electronic device100meets a preset trigger condition, display a window that includes one or more shortcut controls. The activity manager is configured to manage activities that are running in a system, including information about a process, an application, a service, a task, and the like. In this application, the activity manager starts a task stack each time an application is run. A task stack includes one or more activities. For example, the electronic device100runs an application, and starts a task stack of the application.
When a new activity (an activity1) is created, the new activity is displayed at the forefront of the display194and is located at the top of the task stack (the top of an activity stack). In this case, the activity1is visible and is in an active (active/running) state in which interaction with a user can be implemented. As shown inFIG.1C, when the electronic device100receives a user operation, a new application interface needs to be displayed. When a new activity (an activity2) is started and pushed into the task stack, the activity1that is originally located at the top of the task stack is pushed down to a second layer of the task stack, and the activity2is placed at the top of the task stack. In this case, the activity2is in an active state. If the activity2is an application interface that is not displayed in full screen or is a transparent application interface, in other words, the activity2does not fully cover the activity1, the activity1is in a paused state, the activity1is still connected to the window manager, and all data of the activity1is visible on the display194. However, interaction with the user cannot be performed. When the system memory of the electronic device100is inadequate, the activity1is forcibly terminated (killed). If the activity2fully covers the activity1, the activity1is in a stopped state, and all data of the activity1is retained but is invisible on the display194. When the system memory of the electronic device100is inadequate, an activity in the stopped state is terminated prior to an activity in the paused state. An activity in a terminated state cannot be reactivated. It should be noted that when an application is closed, all activities in a task stack of the application are also terminated. For example, the electronic device100displays a first interface of a first application, and an activity10corresponding to the first interface is in an active state and is located at the top of a task stack. After the electronic device100detects an operation that is indicated by the user and that is of opening a multi-task interface, the electronic device100displays the multi-task interface. In this case, an activity11corresponding to the multi-task interface is in an active state and is located at the top of the task stack. The activity10that is originally located at the top of the task stack is pushed down to a second layer of the task stack. The multi-task interface includes one or more pages, and each of the one or more pages is associated with an activity. For example, the multi-task interface includes a page corresponding to the first interface, and the page is associated with the activity10. In some embodiments, the page corresponding to the activity10is currently in a paused/stopped state. In this case, if the user triggers the page, the activity manager reactivates the activity10and places the activity10on the top of the task stack. The display194displays the first interface. In some embodiments, the page corresponding to the activity10is in a terminated state. In this case, if the user triggers the page, the electronic device100restarts the first application corresponding to the page and displays a homepage interface of the first application. The activity manager creates a new activity (an activity of the homepage interface) and places the activity on the top of the task stack. The display194displays the homepage interface. 
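The task stack behavior described above can be made concrete with a short sketch. The following Java sketch is illustrative only; the type and method names (ActivityState, ActivityRecord, TaskStack, and so on) are assumptions made for exposition and are not the Android framework API. It models the rules just described: a newly started activity becomes the active top of the stack, the previous top is paused when it remains partly visible or stopped when it is fully covered, stopped activities are terminated before paused ones under memory pressure, and a terminated activity cannot be reactivated.

import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative model of the task stack described above; not the platform API.
enum ActivityState { ACTIVE, PAUSED, STOPPED, TERMINATED }

class ActivityRecord {
    final String name;
    ActivityState state = ActivityState.ACTIVE;
    ActivityRecord(String name) { this.name = name; }
}

class TaskStack {
    private final Deque<ActivityRecord> stack = new ArrayDeque<>();

    // Start a new activity: it becomes the top of the stack and is ACTIVE;
    // the previous top is PAUSED if still partly visible, STOPPED if covered.
    void start(ActivityRecord next, boolean fullyCoversPrevious) {
        ActivityRecord top = stack.peek();
        if (top != null) {
            top.state = fullyCoversPrevious ? ActivityState.STOPPED
                                            : ActivityState.PAUSED;
        }
        next.state = ActivityState.ACTIVE;
        stack.push(next);
    }

    // Under memory pressure, STOPPED activities are terminated before PAUSED ones.
    void reclaimMemory() {
        for (ActivityRecord a : stack) {
            if (a.state == ActivityState.STOPPED) {
                a.state = ActivityState.TERMINATED;
            }
        }
    }

    // Reactivate an existing activity (for example, from a multi-task page)
    // by moving it back to the top of the stack.
    void moveToTop(ActivityRecord a) {
        if (a.state == ActivityState.TERMINATED) {
            throw new IllegalStateException("terminated activities cannot be reactivated");
        }
        ActivityRecord top = stack.peek();
        if (top != null && top != a) {
            top.state = ActivityState.STOPPED;
        }
        stack.remove(a);
        a.state = ActivityState.ACTIVE;
        stack.push(a);
    }
}

For example, starting an activity2 that fully covers an activity1 leaves the activity1 in the STOPPED state, matching the behavior described for FIG.1C.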
In this embodiment of this application, if the user wants to create a shortcut to an application interface such as a payment code interface, after receiving a user operation on the payment code interface, the electronic device100creates a shortcut control associated with an activity of the payment code interface in response to the user operation. Optionally, the electronic device100protects the activity from being terminated. When the electronic device100displays another application interface, such as a news browsing interface, the user may trigger the shortcut control of the payment code interface. In response to the user operation, the activity manager reactivates the activity and places the activity on the top of the task stack. In this case, the display194of the electronic device100displays the payment code interface. The content provider is configured to store and obtain data, and enable the data to be accessed by an application. The data may include a video, an image, audio, calls that are made and received, a browsing history and a bookmark, a phone book, and the like. In this application, the content provider includes an activity storage module and a path storage module. The activity storage module is configured to store a correspondence between a shortcut control and an activity. If the user wants to create a shortcut control for an application interface, the electronic device100creates, after receiving a user operation on the application interface, a shortcut control corresponding to an activity of the application interface in response to the user operation. When the electronic device detects a user operation on the shortcut control, the electronic device100activates, in response to the user operation, the activity corresponding to the shortcut control based on a correspondence in the activity storage module, and displays the application interface. In some embodiments, after an application is closed, all activities in the application are also terminated. In this case, a correspondence between a shortcut control and each of the activities becomes invalid. In other words, the shortcut control becomes invalid. The user cannot invoke the activity of the application by using the shortcut control. Optionally, an invalid shortcut control is not displayed on the display194. The path storage module is configured to store a path associated with each shortcut control. The electronic device100may display a corresponding application interface based on the path. Data in the path storage module does not become invalid as an activity is terminated. The path is a directory path through which the electronic device100reaches an application interface, which may be a relative path or an absolute path. The relative path describes a path relationship, that is, a location of a file relative to a directory where the file is located. For example, a directory of “Chat list” is “E:\Instant messaging software\Chat list” and a directory of a “Lisa” dialog box is “E:\Instant messaging software\Chat list\Lisa”. In a relative sense, the “Lisa” dialog box is located in a “Chat list” subdirectory in the directory where the “Lisa” dialog box is located. In this case, a statement used to reference the “Lisa” dialog box is “..\Chat list\Lisa”. The absolute path is a full path of a file, which includes a domain name. The absolute path is a path where the file really exists on a hard disk.
For example, if the “Lisa” dialog box is stored in the “E:\Instant messaging software\Chat list\Lisa” directory on a hard disk, the absolute path of the “Lisa” dialog box is “E:\Instant messaging software\Chat list\Lisa”. The electronic device100records a directory path from where application software is run to where a current application interface is accessed. When detecting a user operation of creating a shortcut control for the current application interface, the electronic device100records a directory path of the current application interface. When the user taps the shortcut control, the electronic device100is redirected to the corresponding application interface based on the directory path in the path storage module. Optionally, the path may be a user operation path through which the electronic device100reaches the application interface. The electronic device100records operations performed by the user from when application software is run to when a current application interface is accessed. When the user manually adds a frequently used function or a frequently used function is automatically recorded based on user behavior, coordinates of the user on the display194are recorded. When the user taps the shortcut control, the electronic device100simulates user operations in the background to execute the user operation path, and displays the corresponding application interface. In some embodiments, if the user triggers a shortcut control in a running process of an application, the electronic device100may invoke an activity to display a corresponding user interface. If the application is closed and rerun, an activity associated with the shortcut control has been terminated. In this case, if the user triggers the shortcut control, the electronic device100may invoke a path to display the corresponding user interface. The view system includes a visual control, such as a text display control, an image display control, or the like. The view system may be configured to construct an application. A display interface may include one or more views. For example, a display interface includes an SMS message notification icon, and may include a text display view and an image display view. In this application, the view system is configured to: when the electronic device100meets a preset trigger condition, display a shortcut area on the display194. The shortcut area includes one or more shortcut controls added by the electronic device100. A position and a layout of the shortcut area, and an icon, a position, a layout, and a function of a control in the shortcut area are not limited in this application. The display manager is configured to transmit display content to the kernel layer. The phone manager is configured to provide a communications function for the electronic device100, for example, management of a call status (including answering, declining, or the like). The resource manager provides, for an application, various resources such as a localized character string, an icon, an image, a layout file, a video file, and the like. The notification manager enables an application to display notification information in the status bar, and may be configured to transmit a notification-type message. The displayed notification information may automatically disappear after a short pause without user interaction. For example, the notification manager is configured to notify download completion, provide a message notification, and the like.
The notification manager may alternatively display a notification in a top status bar of the system in a form of a graph or scrollable text, for example, a notification of an application running in the background or a notification that appears on the display in a form of a dialog box. For example, text information is displayed in the status bar, a prompt tone is given, the electronic device vibrates, or the indicator light blinks. The Android runtime includes a kernel library and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system. The kernel library includes two parts: a function that needs to be invoked in Java and a kernel library of Android. The application layer and the application framework layer run on a virtual machine. The virtual machine executes Java files at the application layer and the application framework layer as binary files. The virtual machine is configured to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection. The system library may include a plurality of functional modules, for example, a surface manager, a media library, a three-dimensional graphics processing library (for example, OpenGL ES), a 2D graphics engine (such as SGL), and the like. The surface manager is configured to manage a display subsystem and provide fusion of 2D and 3D layers for a plurality of applications. The media library supports static image files, playback and recording of a plurality of frequently used audio and video formats, and the like. The media library may support a plurality of audio and video coding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG. The three-dimensional graphics processing library is configured to implement three-dimensional graphics drawing, image rendering, composition, layer processing, and the like. The 2D graphics engine is a drawing engine for 2D drawing. The hardware abstraction layer HAL is an interface layer between operating system software and hardware components, and provides an interface for interaction between upper-layer software and lower-layer hardware. The HAL layer abstracts bottom-layer hardware as software that includes a corresponding hardware interface. A bottom-layer hardware device may be configured by accessing the HAL layer. For example, a related hardware component may be enabled or disabled at the HAL layer. In some embodiments, a core architecture of the HAL layer is implemented in at least one of C++ or C. The kernel layer is a layer between hardware and software. The kernel layer includes at least a display driver, a camera driver, an audio driver, a sensor driver, a touch control chip driver, an input system, and the like. For ease of description, inFIG.1B, a kernel layer that includes an input system, a driver of a touch control chip, a display driver, and a storage driver is used as an example for illustration. Both the display driver and the storage driver may be disposed in a driver module. It may be understood that the structure shown in this application does not constitute a limitation on the electronic device100. In some other embodiments, the electronic device100may include more or fewer components than those shown in the figure, combine some components, split some components, or have different component arrangements. The components shown in the figure may be implemented by using hardware, software, or a combination of software and hardware. 
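The activity storage module and path storage module of the content provider described above can also be sketched. The following Java sketch reuses the ActivityRecord and ActivityState types from the task stack sketch above; all names are illustrative assumptions, not a platform API. It captures the two properties of the design: the shortcut-to-activity correspondence becomes invalid once the activity is terminated, while the shortcut-to-path correspondence survives termination and can be used to rebuild the interface.

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Illustrative sketch of the content provider's two stores (names assumed).
class ShortcutStore {
    // Correspondence between a shortcut control and a live activity; this
    // mapping becomes invalid when the activity is terminated.
    private final Map<String, ActivityRecord> activityStore = new HashMap<>();
    // Correspondence between a shortcut control and a directory path (or a
    // recorded user operation path); this survives activity termination.
    private final Map<String, String> pathStore = new HashMap<>();

    void addShortcut(String controlId, ActivityRecord activity, String path) {
        activityStore.put(controlId, activity);
        pathStore.put(controlId, path); // e.g. the "E:\Instant messaging software\Chat list\Lisa" directory path
    }

    // Returns the live activity for a control, or empty if it was terminated.
    Optional<ActivityRecord> liveActivity(String controlId) {
        ActivityRecord a = activityStore.get(controlId);
        return (a == null || a.state == ActivityState.TERMINATED)
                ? Optional.empty()
                : Optional.of(a);
    }

    Optional<String> storedPath(String controlId) {
        return Optional.ofNullable(pathStore.get(controlId));
    }

    // When an application is closed, all of its activities are terminated,
    // so the activity-side correspondence for its shortcut is dropped; the
    // path-side correspondence is intentionally kept.
    void invalidateActivity(String controlId) {
        activityStore.remove(controlId);
    }
}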
The following describes a workflow of software and hardware of the electronic device100by using an example scenario in which the electronic device100creates a shortcut control for an application interface1and quickly enters the application interface1from an application interface2based on the shortcut control. When the touch sensor180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into an original input event (including information such as touch coordinates and a timestamp of the touch operation). The original input event is stored at the kernel layer. The application framework layer obtains the original input event from the kernel layer, and identifies a control corresponding to the input event. For example, the touch operation is a touch tap operation, and a control corresponding to the tap operation is a control that is used for creating a shortcut control for the application interface1. When the touch sensor180K of the electronic device100identifies the tap operation on the control that is used for creating the shortcut control for the application interface1, the activity storage module obtains an activity corresponding to the application interface1, and stores a correspondence between the shortcut control and the activity. Optionally, the path storage module obtains a directory path (or a user operation path) from running application software to accessing the application interface1, and stores a correspondence between the shortcut control and the directory path (or the user operation path). When the electronic device100accesses the application interface2, the touch sensor180K identifies a tap operation on the shortcut control for the application interface1. The content provider obtains, from the activity storage module, the activity corresponding to the shortcut control, and invokes the activity manager to place the activity on the top of a task stack. The display manager transmits a request to the display driver at the kernel layer, where the request is used to display the application interface1corresponding to the activity. Then, the display driver drives the GPU105and the display194to display the application interface1. Optionally, the content provider obtains, from the path storage module, the directory path (or the user operation path) corresponding to the shortcut control. The display manager transmits a request to the display driver at the kernel layer, where the request is used to display the application interface1corresponding to the directory path (or the user operation path). Then, the display driver drives the GPU105and the display194to display the application interface1. In some embodiments, when a preset condition is met, the electronic device100records the directory path of the application interface1and creates a shortcut control for a current application interface. The preset condition is that the application interface1is accessed for more than a preset quantity of times, the application interface1is accessed for more than a preset quantity of times within a preset time period, stay duration of the application interface1is greater than a preset time period, or the like.
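The dispatch flow just described can be summarized in one more illustrative Java sketch, built on the TaskStack and ShortcutStore sketches above (again, all names are assumptions rather than the platform API): a tap on a shortcut control first tries to reactivate the stored activity and place it on the top of the task stack; if the activity has been terminated, the stored directory path (or user operation path) is replayed instead.

import java.util.Optional;

// Illustrative dispatch flow for a tap on a shortcut control (names assumed).
class ShortcutDispatcher {
    private final ShortcutStore store;
    private final TaskStack taskStack;

    ShortcutDispatcher(ShortcutStore store, TaskStack taskStack) {
        this.store = store;
        this.taskStack = taskStack;
    }

    // First try to reactivate the stored activity; if it has been terminated,
    // fall back to replaying the stored directory or user operation path.
    void onShortcutTapped(String controlId) {
        Optional<ActivityRecord> live = store.liveActivity(controlId);
        if (live.isPresent()) {
            taskStack.moveToTop(live.get()); // the display then shows this interface
        } else {
            store.storedPath(controlId).ifPresent(this::replayPath);
        }
    }

    private void replayPath(String path) {
        // Recreate the interface by walking the recorded path step by step;
        // the details depend on the application software.
        System.out.println("Replaying recorded path: " + path);
    }
}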
With reference to an application scenario, the following describes how a method for quickly entering an application interface provided in this application is implemented on a display interface of the electronic device100. In an application scenario, when a user chats with another person by using instant messaging software of the electronic device100, the user wants to switch to another page of the instant messaging software, for example, to a payment code interface to make a payment. If the user wants to return to the chat interface after completing the payment, the user may first exit the payment code interface, and then search a chat list for a chat box used for chatting with the specific person, so as to enter the chat interface for the person. However, the user operations are complex. In this case, when chatting with the other person, the user may add a shortcut control for the current chat interface, and the electronic device100establishes an association relationship between the shortcut control and the chat interface. After the user completes the payment on the payment code interface, the user may quickly switch from the payment code interface to the chat interface by using the shortcut control, without performing complex operation steps. This simplifies an operation process of accessing an application interface, and improves access efficiency of the application interface. UI embodiments shown inFIG.2AtoFIG.2Gprovide an example operation process in which the electronic device100quickly enters a corresponding application interface from a multi-task interface by adding a shortcut control. FIG.2Ashows an example user interface510on the electronic device100. The user interface510may be a chat interface in instant messaging software, and may include a status bar511and a chat box. The status bar511may include one or more signal strength indicators513of a mobile communications signal (which may also be referred to as a cellular signal), one or more signal strength indicators514of a wireless fidelity (Wi-Fi) signal, a Bluetooth indicator515, a battery status indicator516, and a time indicator517. If a Bluetooth module of the electronic device100is enabled (in other words, the electronic device supplies power to the Bluetooth module), the Bluetooth indicator515is displayed on the display interface of the electronic device100. After detecting a user operation that indicates opening of the multi-task interface, the electronic device100displays the multi-task interface. The multi-task interface includes pages corresponding to a plurality of applications that are running on the electronic device100. Various user operations may be performed to indicate the opening of the multi-task interface. For example, the user operation may be that a physical button is triggered, a virtual button is triggered, or a gesture is drawn. In this embodiment of this application, the user interface510is merely an example interface. The electronic device100may alternatively open the multi-task interface on another interface. For example, when the electronic device100detects an upward sliding operation at the bottom of the electronic device100, the electronic device100displays a multi-task interface520in response to the operation, as shown inFIG.2B. The multi-task interface520includes one or more pages that are horizontally arranged in parallel based on a preset sequence policy. Optionally, in a sequence policy, the electronic device100arranges pages corresponding to different applications based on a time sequence of running the applications.
For example, a page corresponding to an application that is most recently run is arranged on the rightmost, and a page corresponding to an application that is previously run is arranged on the left in sequence. The user may slide left and right on the multi-task interface520to switch between pages for display. For example, when the electronic device100detects a rightward sliding operation on the multi-task interface520, the pages on the multi-task interface520sequentially move rightward in response to the operation. When the electronic device100detects a leftward sliding operation on the multi-task interface520, the pages on the multi-task interface520sequentially move leftward in response to the operation. As shown inFIG.2B, the multi-task interface520may include a page521, a page522, and a delete icon523. The page521is fully displayed, and the page522is partially displayed. The page521corresponding to the application that is most recently run is arranged on the rightmost, and the page522corresponding to the application that is previously run is arranged on the left. Optionally, display content of the page521is a page last displayed (namely, a page displayed on the user interface510) when the application runs in the foreground. Optionally, the page521is associated with the user interface510. When the electronic device100detects a user operation on the page521, the electronic device100displays the user interface510associated with the page521, as shown inFIG.2A. The delete icon523may be used to close applications corresponding to all pages on the current multi-task interface520. When the electronic device receives a user operation on the delete icon523, the electronic device deletes all the pages on the multi-task interface520in response to the user operation. In some embodiments, the delete icon523may be used to close an application corresponding to a page that is fully displayed on the current multi-task interface520. The multi-task interface520further includes an icon5211. If the user wants to quickly enter the user interface510from another interface, the user may trigger the icon5211in a manner such as tapping after entering the multi-task interface520. When the electronic device100detects a user operation on the icon5211, the electronic device100adds a shortcut control for the user interface510in response to the user operation, and establishes an association relationship between the shortcut control and the user interface510. The user may use the shortcut control to quickly switch from the another interface to the user interface510. For example, when the electronic device100detects the user operation on the icon5211, the electronic device100displays an application interface shown inFIG.2C. As shown inFIG.2C, the multi-task interface520inFIG.2Cincludes a control area525. The control area525includes one or more controls, such as controls526and527. The control526is a shortcut control for the user interface510. The control527is configured to add a new shortcut control. Optionally, display content of the control526is a thumbnail of the user interface510. Alternatively, the display content of the control526is associated with the user interface510. When the electronic device100detects a user operation on the control526, the electronic device100displays the user interface510in response to the user operation, as shown inFIG.2A. Optionally, the control526further includes a control5261. The control5261is configured to delete the control526. 
When the electronic device 100 detects a user operation on the control 5261, the electronic device 100 deletes the control 526 in response to the user operation. Then, the control area 525 no longer includes the control 526. The multi-task interface 520 in FIG. 2C includes a page 524, where the page 524 and the page 521 in FIG. 2B belong to the same application software (the instant messaging software). Optionally, the display content of the page 524 may be a homepage of the application software. The homepage herein may be understood as the first page that is displayed when the application software is started. Optionally, the display content of the page 524 may be an upper-level page of the display content of the page 521. When the electronic device 100 detects a user operation on the icon 5211, the electronic device 100 adds the control 526 in response to the user operation and switches from the page 521 to the page 524. As shown in FIG. 2D, when the electronic device 100 detects a user operation on the page 524, the electronic device 100 displays an application interface 530 corresponding to the page 524 in response to the user operation. In some embodiments, if the instant messaging software keeps running (in the foreground or background), the control area 525 does not change. If the instant messaging software is closed, the control 526 that the electronic device 100 added to the control area is deleted accordingly. FIG. 2C is used as an example. When the electronic device receives a user operation for closing the instant messaging software (for example, sliding the page 524 upward), the electronic device deletes the page 524 and the control area 525 from the multi-task interface 520 in response to the user operation. When the electronic device next starts the instant messaging software and displays a page of the instant messaging software on the multi-task interface 520, the control area 525 does not include the control 526. Optionally, the multi-task interface 520 may include the control area 525, and the control area 525 may include the control 527. In some embodiments, when the electronic device next reruns the instant messaging software and displays a page of the instant messaging software on the multi-task interface 520, the controls in the control area 525 remain the same; in other words, the control 526 is still valid. In this application, FIG. 2A to FIG. 2D show a workflow of adding a shortcut control (the control 526) for the user interface 510. With reference to the foregoing embodiments, when the electronic device 100 enters another interface of the instant messaging software, the electronic device 100 may quickly return to the application interface 510 by using the shortcut control (the control 526). In addition, a shortcut control for another interface may also be added. For example, FIG. 2E to FIG. 2I show an example of this workflow. As shown in FIG. 2E, the electronic device 100 displays a payment code interface 540 of the instant messaging software. When the electronic device 100 detects an upward sliding operation at the bottom of the electronic device 100, the electronic device 100 displays the multi-task interface 520 in response to the operation, as shown in FIG. 2F. The multi-task interface 520 in FIG. 2F includes a page 541, and the display content of the page 541 may be a thumbnail of the payment code interface 540. Optionally, the page 541 is associated with the user interface 540. When the electronic device 100 detects a user operation on the page 541, the electronic device 100 displays the user interface 540 associated with the page 541, as shown in FIG. 2E. The page 541 displays an icon 5411.
If the user wants to save the user interface 540 as a frequently used page, the user may trigger the icon 5411, for example by tapping it, after entering the multi-task interface 520. When the electronic device 100 detects a user operation on the icon 5411, the electronic device 100 adds a shortcut control for the user interface 540 in response to the user operation, and establishes an association relationship between the shortcut control and the user interface 540. The user may use the shortcut control to quickly switch from another interface to the user interface 540. For example, when the electronic device 100 detects a user operation on the icon 5411, the electronic device displays an application interface shown in FIG. 2G. As shown in FIG. 2G, the multi-task interface 520 includes the control area 525. Compared with the control area 525 in FIG. 2C, the control area 525 in FIG. 2G further includes a control 528. The control 528 is a shortcut control for the user interface 540. Optionally, the display content of the control 528 is a thumbnail of the payment code interface 540. Alternatively, the display content of the control 528 is associated with the payment code interface 540. When the electronic device 100 detects a user operation on the control 528, the electronic device 100 displays the payment code interface 540 in response to the user operation, as shown in FIG. 2E. The multi-task interface 520 in FIG. 2G includes the page 524, where the page 524 and the page 541 in FIG. 2F belong to the same application software (the instant messaging software). Optionally, the display content of the page 524 may be the homepage of the application software. The homepage herein may be understood as the first page that is displayed when the application software is started. Optionally, the display content of the page 524 may be the upper-level page of the display content of the page 541. When the electronic device 100 detects a user operation on the icon 5411, the electronic device 100 adds the control 528 to the control area 525 in response to the user operation, and switches from the page 541 to the page 524. In some embodiments, in FIG. 2G, when the electronic device 100 detects a user operation on the control 526, the electronic device 100 displays the user interface 510 in response to the user operation, as shown in FIG. 2A. In some embodiments, in FIG. 2F, when the electronic device 100 detects a user operation on the control 526, the electronic device 100 likewise displays the user interface 510 in response to the user operation, as shown in FIG. 2A. In this embodiment of this application, FIG. 2H and FIG. 2I show this step. If the user wants to quickly switch to the application page 510 while the electronic device 100 displays the user interface 540 shown in FIG. 2E, the user may first enter the multi-task interface 520 and trigger the control 526 in the control area 525, as shown in FIG. 2H. When the electronic device 100 detects the user operation on the control 526, the electronic device 100 displays the application interface 510, as shown in FIG. 2I. The display interface in FIG. 2H is the same as the display interface in FIG. 2F, and the display interface in FIG. 2I is the same as the display interface in FIG. 2A. Details are not described herein again. In some embodiments, when the electronic device 100 enters the multi-task interface again, the control area 525 includes the previously added controls. For example, as shown in FIG. 2I, when the electronic device 100 detects an upward sliding operation at the bottom of the electronic device 100, the electronic device 100 displays the multi-task interface 520 shown in FIG. 2J in response to the operation.
The multi-task interface 520 includes the page 521, and the control area 525 includes the control 526. In some embodiments, the electronic device 100 may automatically add a display control to, or remove a display control from, the control area 525. For example, in FIG. 2I, the user interface 510 is displayed. When the electronic device 100 detects an upward sliding operation at the bottom of the electronic device 100, the electronic device 100 displays the multi-task interface 520 shown in FIG. 2K in response to the operation. The multi-task interface 520 includes the page 521, and the control area 525 includes the control 528 but not the control 526. When the electronic device 100 detects a user operation on the page 521, the electronic device 100 displays the application interface 510; and when the electronic device 100 detects a user operation on the control 526, the electronic device 100 also displays the application interface 510. Therefore, when the electronic device 100 identifies that the current interface is the multi-task interface entered from the application interface 510, the control 526 associated with the application interface 510 may be omitted from the control area. When the electronic device 100 detects a user operation on the control 528, the electronic device 100 displays a third page, as shown in FIG. 2E. The control 528 is a control automatically added by the electronic device 100. If the electronic device 100 has displayed the third page, the third page is a historical track interface; the electronic device 100 establishes a correspondence between the control 528 and the third page, and displays the control 528 in the control area 525. Alternatively, if the electronic device 100 has displayed the third page and the cumulative display time of the third page is greater than a first threshold, the third page is a historical track interface; the electronic device 100 establishes a correspondence between the control 528 and the third page, and displays the control 528 in the control area 525. Alternatively, if the electronic device 100 has displayed the third page and the cumulative quantity of times the third page has been displayed is greater than a second threshold, the third page is a historical track interface; the electronic device 100 establishes a correspondence between the control 528 and the third page, and displays the control 528 in the control area 525. Optionally, the control area 525 may include both the control 526 and the control 528. In this application, while the electronic device 100 displays one page, a control (for example, the control 526) may be triggered on the multi-task interface to quickly enter the page associated with that control. FIG. 2A to FIG. 2G show an operation process in which the electronic device 100 quickly switches between application interfaces of the same application by using the multi-task interface. For different applications, FIG. 3A to FIG. 3D show an example of quick switching between application interfaces of different applications. FIG. 3A shows an application interface 310 of a browser. When the user browses news by using the browser software, the electronic device 100 displays the application interface 310. If the user wants to switch to a chat interface of the instant messaging software, the user may use the multi-task interface to directly enter the chat interface. In FIG. 3A, when the electronic device 100 detects an upward sliding operation at the bottom of the electronic device 100, the electronic device 100 displays the multi-task interface 520 in response to the operation, as shown in FIG. 3B. The multi-task interface 520 in FIG. 3B includes a page 311, the page 524, and the delete icon 523.
The page 311 is fully displayed, and the page 524 is partially displayed. The page 311, corresponding to the application that was most recently run (the browser), is arranged on the far right, and the page 524, corresponding to the application run earlier (the instant messaging software), is arranged to its left in sequence. The user may slide left and right on the multi-task interface 520 to switch between pages for display. As shown in FIG. 3B, when the electronic device 100 detects a rightward sliding operation on the multi-task interface 520, the pages on the multi-task interface 520 sequentially move rightward in response to the operation. In this case, the page 521 is fully displayed. As shown in FIG. 3C, a page 522 (partially displayed) is on the left side of the page 521, and the page 311 (not displayed) is on the right side of the page 524. For specific descriptions of FIG. 3C, refer to the descriptions of FIG. 2G. Details are not described herein again. In FIG. 3C, when the electronic device 100 detects a user operation on the control 526, the electronic device 100 displays the user interface 510 in response to the user operation, as shown in FIG. 3D. For specific descriptions of FIG. 3D, refer to the descriptions of FIG. 2A. Details are not described herein again. To be specific, if the user wants to quickly switch to a specific page (such as the application interface 510) of application software 2 (such as the instant messaging software) while the electronic device 100 runs application software 1 (such as the browser software) in the foreground, the control 526 corresponding to the application interface 510 may be selected on the multi-task interface. When the electronic device 100 detects a user operation on the control 526, the electronic device 100 displays the user interface 510. FIG. 3A to FIG. 3D show an example operation process in which the electronic device 100 quickly switches between application interfaces of different applications by using the multi-task interface 520. An embodiment of this application further provides a function for recommending a frequently used interface to add. The frequently used interface may be an important interface of an application. For example, for an instant messaging application, the frequently used interface may be a homepage of the application, a homepage of an applet in the application, a page of a function (such as a payment function, a code scanning function, or a setting function) in the application, a dialog box of a specific contact, or the like. In some embodiments, the frequently used interface may be preset in the electronic device 100. In this case, the electronic device 100 stores a correspondence between the frequently used interface and the activity corresponding to the frequently used interface, and protects the activity from being terminated. When the electronic device 100 displays another application interface, the user may trigger a shortcut control of the frequently used interface. In response to the user operation, the activity manager reactivates the activity and places the activity on the top of a task stack. In this way, the electronic device 100 displays the frequently used interface. In some embodiments, the frequently used interface may be automatically determined and generated by the electronic device 100 based on a quantity of times or a frequency of the user accessing the interface.
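A minimal sketch of such automatic determination is given below, assuming a simple per-interface access counter and a fixed threshold; the class name, field names, and threshold value are assumptions for illustration rather than a definitive implementation.

```kotlin
// Hypothetical tracker that promotes an interface to "frequently used"
// once its access count exceeds a threshold. A promoted interface's
// activity could then be protected from termination and reactivated
// later by the activity manager, as described above.
class FrequentInterfaceTracker(private val threshold: Int = 5) {
    private val accessCounts = mutableMapOf<String, Int>()
    private val frequentlyUsed = mutableSetOf<String>()

    // Record one access to an interface; promote it once the count
    // passes the threshold.
    fun recordAccess(interfaceId: String) {
        val count = (accessCounts[interfaceId] ?: 0) + 1
        accessCounts[interfaceId] = count
        if (count > threshold) frequentlyUsed.add(interfaceId)
    }

    fun isFrequentlyUsed(interfaceId: String): Boolean =
        interfaceId in frequentlyUsed
}
```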
If the electronic device 100 finds that the quantity of times or the frequency of accessing a current interface is greater than a threshold, the electronic device 100 determines that the current interface is a frequently used interface, and stores a correspondence between the current interface and the activity corresponding to the current interface. In addition, the electronic device 100 protects the activity from being terminated. Optionally, the electronic device 100 stores a directory path of a frequently used interface. The electronic device 100 may be redirected to the frequently used interface from another interface based on the directory path. Optionally, the user may add or remove a frequently used interface in a customized manner. Optionally, the electronic device 100 may automatically update a frequently used interface. As shown in FIG. 4A, the multi-task interface 520 and the controls 526, 528, and 527 in FIG. 4A are the same as those in FIG. 2G. For specific descriptions of FIG. 4A, refer to the related descriptions of FIG. 2G. Details are not described herein again. When the electronic device 100 detects a user operation on the control 527, the electronic device 100 displays, in response to the user operation, one or more pages that can be added. FIG. 4B shows an example application interface 550. The application interface 550 includes pages 551, 552, and 553. The page 551 includes an icon 5511. The icon 5511 prompts the user to add, as a shortcut, the application interface associated with the page 551. The page 552 includes an icon 5521. The icon 5521 prompts the user to add, as a shortcut, the application interface associated with the page 552. The page 553 includes an icon 5531. The icon 5531 prompts the user to add, as a shortcut, the application interface associated with the page 553. For example, when the electronic device 100 detects a user operation on the icon 5511, the electronic device 100 adds, in response to the user operation, a shortcut control for the application interface associated with the page 551, and establishes an association relationship between the shortcut control and the application interface. The user may use the shortcut control to quickly switch to the application interface from another interface. For example, as shown in FIG. 4C, when the electronic device 100 detects user operations on the icon 5511 and the icon 5531, the electronic device 100 adds, in response to the user operations, shortcut controls for the application interfaces associated with the page 551 and the page 553. Compared with FIG. 4A, FIG. 4D further includes a control 5512 and a control 5532. The control 5512 corresponds to the page 551, and the control 5532 corresponds to the page 553. Optionally, the display content of the control 5512 is associated with the display content of the page 551, and the display content of the control 5532 is associated with the display content of the page 553. Figures such as FIG. 2C, FIG. 2F, FIG. 2G, FIG. 2H, and FIG. 3C in this embodiment of this application each include the control 527, and the control 527 functions the same in each figure. An embodiment of this application further provides a function for adding a historical track page.
The historical track page may be a page or function accessed by the user while an application runs, a page or function accessed by the user more than a preset quantity of times while an application runs, a page or function on which the user dwells for longer than a preset time period while an application runs, a page or function accessed by the user more than a preset quantity of times within a preset time period while an application runs, or an important interface such as a main interface or a page of a function (such as a payment function, a code scanning function, or a setting function). If the electronic device 100 determines that the current interface is a historical track page, the electronic device 100 stores a correspondence between the current interface and the activity corresponding to the current interface, and protects the activity from being terminated. When the electronic device 100 displays another application interface, the user may trigger a shortcut control of the historical track page. In response to the user operation, the activity manager reactivates the activity and places the activity on the top of a task stack. In this way, the electronic device 100 displays the historical track page. Optionally, the electronic device 100 stores a directory path or a user operation path of a historical track page. The electronic device 100 may be redirected to the historical track page from another interface based on the directory path or the user operation path. As shown in FIG. 5A, the multi-task interface 520 and the controls 526, 528, and 527 in FIG. 5A are the same as those in FIG. 2G. For specific descriptions of FIG. 5A, refer to the related descriptions of FIG. 2G. Details are not described herein again. When the electronic device 100 detects an upward sliding operation in the area 525, the electronic device 100 displays one or more historical track pages in response to the operation. As shown in FIG. 5B, the area 525 has a larger display scope, and the following three historical track pages are displayed in the area 525 as examples: a page 561, a page 562, and a page 563. For example, when the electronic device 100 detects a user operation on the page 561, the electronic device 100 displays the user interface corresponding to the page 561 in response to the user operation. In some embodiments, there is a maximum display scope for the area 525. For example, a maximum of six controls may be displayed in the area 525. Optionally, the user may slide left and right (or up and down) in the area 525 to switch between controls for display. For example, when the electronic device 100 detects a rightward sliding operation in the area 525, the controls in the area 525 sequentially move rightward in response to the operation. When the electronic device 100 detects a leftward sliding operation in the area 525, the controls in the area 525 sequentially move leftward in response to the operation. If the user continues sliding upward on the interface shown in FIG. 5B, the electronic device 100 displays more historical track pages, or displays historical track pages in full screen. FIG. 5C shows an example historical track interface 560. The historical track interface 560 includes one or more historical track pages, such as pages 5611, 5621, 5631, and 5641. The page 5611 corresponds to the page 561 in FIG. 5B, and the page 5611 and the page 561 correspond to the same user interface. The page 5621 corresponds to the page 562 in FIG. 5B, and the page 5621 and the page 562 correspond to the same user interface.
The page 5631 corresponds to the page 563 in FIG. 5B, and the page 5631 and the page 563 correspond to the same user interface. The page 5641 is not shown in FIG. 5B. For example, when the electronic device 100 detects a user operation on the page 5611, the electronic device 100 displays the user interface corresponding to the page 5611 in response to the user operation. In some embodiments, FIG. 5B is optional. On the interface shown in FIG. 5A, when the electronic device detects an upward sliding operation in the control area 525, the electronic device 100 displays, in response to the operation, the historical track interface 560 shown in FIG. 5C. In this embodiment of this application, the display manners of pages and controls on the foregoing multi-task interface are not limited. FIG. 6A to FIG. 6F further show several display manners of the multi-task interface. FIG. 6A to FIG. 6D show that different applications may be displayed by performing a horizontal sliding on the multi-task interface, and different application pages of the same application may be displayed by performing a vertical sliding. FIG. 6A shows a multi-task interface 520 that includes a page 521 and a page 522. The page 521 is fully displayed, and the page 522 is partially displayed to the left of the page 521. It can be seen that the page 521 is superimposed on another page; this indicates that the application software includes a plurality of pages. The user may slide up and down on the multi-task interface 520 to switch between pages of the same application for display. When the electronic device 100 detects an upward sliding operation on the instant messaging software on the multi-task interface 520, the pages of the instant messaging software sequentially move upward in response to the operation. FIG. 6B includes pages 521, 571, 581, and 591. The page 581 is superimposed on the page 591, the page 571 is superimposed on the page 581, and the page 521 is superimposed on the page 571. Optionally, in FIG. 6B, when the electronic device 100 detects an upward sliding operation on the instant messaging software, the pages of the instant messaging software continue to move upward, but the overlapping area between every two adjacent pages becomes smaller. To be specific, the overlapping area between the page 521 and the page 571 becomes smaller, the overlapping area between the page 571 and the page 581 becomes smaller, and the overlapping area between the page 581 and the page 591 becomes smaller. When the electronic device 100 detects a user operation on any one of the pages, the electronic device 100 displays the user interface corresponding to that page. Optionally, when the electronic device 100 detects an upward sliding operation on the instant messaging software on the multi-task interface 520 shown in FIG. 6A, the electronic device 100 displays, in response to the operation, one or more pages of the instant messaging software, as shown in FIG. 6C. FIG. 6C includes pages 572, 582, and 592. When the electronic device 100 detects a user operation on any one of the pages, the electronic device 100 displays the user interface corresponding to that page. Optionally, when the electronic device 100 detects an upward sliding operation on the multi-task interface 520 shown in FIG. 6A, the electronic device displays, in response to the operation, an interface shown in FIG. 6D. FIG. 6D includes pages 573, 583, and 593. When the electronic device 100 detects a user operation on any one of the pages, the electronic device 100 displays the user interface corresponding to that page.
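The two-axis scheme of FIG. 6A to FIG. 6D can be sketched as a small navigator in which horizontal swipes step between applications and vertical swipes step between pages of the current application. All names below are illustrative assumptions; a real implementation would also handle touch coordinates, animation, and the shrinking overlap described above.

```kotlin
enum class SwipeDirection { LEFT, RIGHT, UP, DOWN }

class MultiTaskNavigator(
    private val apps: List<String>,
    private val pagesByApp: Map<String, List<String>>,
) {
    private var appIndex = 0
    private var pageIndex = 0

    // Returns the identifier of the page that should be fully displayed
    // after the swipe.
    fun onSwipe(direction: SwipeDirection): String {
        when (direction) {
            SwipeDirection.LEFT, SwipeDirection.RIGHT -> {
                // Horizontal sliding switches between applications;
                // start at the newly selected application's first page.
                val step = if (direction == SwipeDirection.LEFT) 1 else -1
                appIndex = (appIndex + step).coerceIn(0, apps.lastIndex)
                pageIndex = 0
            }
            SwipeDirection.UP, SwipeDirection.DOWN -> {
                // Vertical sliding switches pages within the same application.
                val step = if (direction == SwipeDirection.UP) 1 else -1
                val pages = pagesByApp.getValue(apps[appIndex])
                pageIndex = (pageIndex + step).coerceIn(0, pages.lastIndex)
            }
        }
        return pagesByApp.getValue(apps[appIndex])[pageIndex]
    }
}
```

The variant of FIG. 6E and FIG. 6F, described next, simply swaps the two axes.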
In some embodiments, FIG. 6E and FIG. 6F show that different applications may be displayed by performing a vertical sliding (an upward or downward sliding) on the multi-task interface, and different application pages of the same application may be displayed by performing a horizontal sliding (a leftward or rightward sliding). FIG. 6E shows a multi-task interface 520 that includes a page 521 and a page 522. The page 521 is fully displayed, and the page 522 is partially displayed above the page 521. It can be seen that the page 521 is superimposed on another page; this indicates that the application software (the instant messaging software) includes a plurality of pages. The user may slide horizontally on the multi-task interface 520 to switch between pages of the same application for display. When the electronic device 100 detects a rightward sliding operation on the instant messaging software on the multi-task interface 520, the pages of the instant messaging software sequentially move rightward in response to the operation. As shown in FIG. 6F, pages 521, 574, and 584 are included. The page 574 is superimposed on the page 584, and the page 521 is superimposed on the page 574. Optionally, in FIG. 6F, when the electronic device 100 detects a rightward sliding operation on the multi-task interface 520, the pages of the instant messaging software continue to move rightward, but the overlapping area between every two adjacent pages becomes smaller. To be specific, the overlapping area between the page 521 and the page 574 becomes smaller, and the overlapping area between the page 574 and the page 584 becomes smaller. When the electronic device 100 detects a user operation on any one of the pages, the electronic device 100 displays the user interface corresponding to that page. In an embodiment of this application, FIG. 7A to FIG. 7C further show how to view shortcut controls of a plurality of applications. In some application scenarios, a user views an album by using the electronic device 100 while walking to a subway station. When arriving at the subway station, the user gently slides upward at the bottom of the album interface. The album application is zoomed out, and a row of shortcut controls is displayed at the bottom. The shortcut controls may include shortcut controls for a plurality of applications on the electronic device 100, for example, a shortcut control for a subway QR code. When the user taps the shortcut control for the application that generates subway QR codes, the electronic device 100 displays, in full screen, an application interface of that application. Therefore, a quick switch can be implemented between application interfaces. In some embodiments, an icon associated with the album interface is displayed in the lower-right corner of the application interface of the application that generates subway QR codes. After a QR code on that application interface is scanned, the user may tap the icon in the lower-right corner to return to the previous album interface and continue to view the album. FIG. 7A shows an example album interface 610. When the electronic device 100 detects a user operation on the album interface 610, the electronic device 100 displays, in response to the user operation, a shortcut control area 620 shown in FIG. 7B. The user operation may be a short-path upward sliding operation.
When the electronic device 100 detects a short-path upward sliding operation at the bottom of the electronic device 100, the electronic device 100 displays the area 620 shown in FIG. 7B. The area 620 shows five shortcut controls as examples, and these include shortcut controls for a plurality of applications of the electronic device 100. For example, a control 621 is a shortcut control for a gallery application. Controls 622 and 623 are shortcut controls for the instant messaging software, where the control 622 indicates a payment code interface and the control 623 indicates a code scanning interface. A control 624 is a shortcut control for a map application, and a control 625 is a shortcut control for a video application. The user may slide left and right (or up and down) in the area 620 to view more shortcut controls. For example, when the electronic device 100 detects a user operation on the control 623, as shown in FIG. 7C, the electronic device 100 displays a corresponding application interface 630 for code scanning. In some embodiments, the application interface 630 includes a control 640. When the electronic device 100 detects a user operation on the control 640, the electronic device 100 returns, in response to the user operation, to the previous application interface, that is, the album interface 610. In this embodiment of this application, the user may alternatively trigger, on another interface, the display of shortcut controls for all applications. For example, when the electronic device 100 detects a short-path upward sliding operation at the bottom of a main interface of the electronic device 100, the electronic device 100 displays a row of shortcut controls at the bottom of the main interface. An embodiment of this application further provides an interaction manner of displaying a multi-task interface across devices. In this manner, an electronic device 100 may obtain a multi-task queue of another electronic device, and the other device may likewise obtain a multi-task queue of the electronic device 100 and select one or more tasks from the multi-task queue to process. For example, a multi-task interface is opened on a tablet, where the multi-task interface displays other devices that are communicatively connected to the tablet. After a mobile phone is selected, the tablet may display the application software recently used on the mobile phone and the frequently used pages added for that application software. In other words, a multi-task queue of the mobile phone is displayed on the multi-task interface of the tablet. An application that is running on the mobile phone may be directly invoked and displayed on the tablet. For example, a payment function of the mobile phone is directly invoked on the tablet, a web page opened on the mobile phone continues to be displayed on the tablet, a video played on the mobile phone continues to be played on the tablet, a file edited on the mobile phone continues to be edited on the tablet, or the like. In an implementation, the electronic device 100 is communicatively connected to another electronic device, so as to obtain historical task records from that electronic device. In another implementation, the electronic device 100 and other electronic devices are each communicatively connected to a server, and synchronize historical task records to the server. In this way, any electronic device in the system may obtain the historical task records of another electronic device by using the server. The server may be a network server, a local server, or the like.
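A minimal sketch of the server-based variant is shown below, assuming a hypothetical SyncServer interface keyed by user account; the record shape and method names are illustrative and not an API defined by this application.

```kotlin
// Hypothetical record for one historical task on some device.
data class TaskRecord(val deviceId: String, val appName: String, val pageId: String)

// Hypothetical server through which devices exchange task records.
interface SyncServer {
    fun upload(account: String, records: List<TaskRecord>)
    fun download(account: String): List<TaskRecord>
}

class CrossDeviceTasks(private val server: SyncServer, private val account: String) {
    // Push this device's historical task records to the server.
    fun publish(local: List<TaskRecord>) = server.upload(account, local)

    // Fetch another device's multi-task queue, e.g. the mobile phone's
    // queue rendered on the tablet in FIG. 8B.
    fun queueOf(deviceId: String): List<TaskRecord> =
        server.download(account).filter { it.deviceId == deviceId }
}
```

Under this sketch, the tablet would render queueOf("phone") as its cross-device multi-task interface, which only works because both devices synchronize under the same account.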
The type of the server is not limited in this application. Optionally, the electronic device 100 and other electronic devices are communicatively connected to the server. In this case, a plurality of electronic devices may log on to the server by using the same user account, so as to synchronize historical task records to the server. In this way, the plurality of electronic devices that log on to the server by using the same user account can obtain historical task records from each other by using the server. The user account in this embodiment of this application may be a character string for distinguishing the identities of different users. For example, the user account is an email address or a cloud service account. FIG. 8A to FIG. 8C show an example process of displaying a multi-task queue of a mobile phone on a tablet. As shown in FIG. 8A, a multi-task interface 810 of the tablet is displayed. The multi-task interface 810 includes a device area 820. The device area 820 includes a device icon of the tablet and device icons of one or more electronic devices connected to the tablet, including a computer, a smart screen, and a mobile phone (the electronic device 100). The device icon of the tablet is marked, which indicates that the current multi-task interface 810 is the multi-task interface of the tablet. In other words, pages 821 and 822 are pages corresponding to applications that are running on the tablet. When the tablet detects a user operation on the device icon of the mobile phone, the tablet displays a multi-task interface of the mobile phone (the electronic device 100) in response to the user operation. As shown in FIG. 8B, the device icon of the mobile phone in the device area 820 is marked, which indicates that the current multi-task interface 830 is the multi-task interface of the mobile phone. In other words, the pages shown correspond to applications that are running on the mobile phone: a page 831 and the page 311 in FIG. 3B are application interfaces of the same activity, a page 832 and the page 524 in FIG. 3B are application interfaces of the same activity, and a page 833 and the page 522 in FIG. 3C are application interfaces of the same activity. Details are not described herein again. In FIG. 8B, when the tablet detects a user operation on a shortcut control 8322 below the page 832, the tablet displays, in response to the user operation, an application interface 840 corresponding to the shortcut control 8322, as shown in FIG. 8C. The principle of the shortcut control 8322 is the same as that of the foregoing control 528, and details are not described herein again. The following describes a procedure of an interface display method provided in this application. As shown in FIG. 9, the following steps are included. S101: An electronic device displays a first page of a first application in full screen. The first application is an instant messaging application, and the first page may be, for example, the user interface 540 shown in FIG. 2E. S102: The electronic device receives a first operation on the first page. The first operation may be an upward sliding operation performed at the bottom of the first page, or may be a tapping operation, a touch operation, a voice operation, or the like. S103: In response to the first operation, the electronic device displays the first page in a window form in a first area of a display, displays a second page of a second application in a window form in a second area of the display, and displays a first control in a third area of the display, where the first control is associated with a third page of the first application.
The first page in the window form may be, for example, the page 541 shown in FIG. 2F. The second page of the second application in the window form may be, for example, the page 522 shown in FIG. 2F. The third area may be, for example, the control area 525 shown in FIG. 2F. The first control may be, for example, the control 526 shown in FIG. 2F. S104: When the electronic device receives a second operation on the first page in the window form, the electronic device displays the first page in full screen in response to the second operation. The second operation may be a tap operation, a touch operation, a voice operation, or the like, without limitation. S105: When the electronic device receives a third operation on the second page in the window form, the electronic device displays the second page in full screen in response to the third operation. The third operation may be a tap operation, a touch operation, a voice operation, or the like, without limitation. S106: When the electronic device receives a fourth operation on the first control, the electronic device displays the third page in full screen in response to the fourth operation, where the third page differs from the first page. The fourth operation may be a tap operation, a touch operation, a voice operation, or the like, without limitation. For example, in FIG. 2H and FIG. 2I, when the electronic device receives a tap operation on the control 526, the electronic device displays the third page. The third page may be the user interface 510 shown in FIG. 2I. S107: When the electronic device receives a fifth operation on the third page, the electronic device displays, in response to the fifth operation, the third page in a window form in the first area and the second page in the window form in the second area. The fifth operation may be an upward sliding operation performed at the bottom of the third page, as shown in FIG. 2I, or may be a tapping operation, a touch operation, a voice operation, or the like. The third page in the window form may be, for example, the page 521 shown in FIG. 2J. In this embodiment of this application, when the first page on the electronic device is accessed, the first operation may be performed to enter a multi-task interface, and a tap operation may be performed on the first control on the multi-task interface. The first control is associated with the third page. In response to the tap operation performed on the first control, the electronic device displays the third page. In this way, the third page can be quickly entered from the first page, and access efficiency is improved. In some embodiments, the electronic device displays the first control in the third area in response to the fifth operation. When the electronic device receives the fifth operation on the third page, the electronic device displays the multi-task interface. The third area still includes the previous first control. In other words, the first control may always be displayed in the third area on the multi-task interface. Optionally, the first control is displayed surrounding the first application. For example, the fifth operation is the upward sliding operation that is shown in FIG. 2I and that is performed at the bottom of the third page, and the first control is the control 526 shown in FIG. 2J.
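Viewed schematically, steps S101 to S107 amount to a dispatch from user operations to display states. The sketch below restates that dispatch; the operation and state names are invented for illustration, and it omits the window-form layout details of S103 and S107.

```kotlin
// Hypothetical operations corresponding to the numbered steps above.
sealed class Op {
    object EnterMultiTask : Op()   // the first or fifth operation
    object TapFirstWindow : Op()   // the second operation
    object TapSecondWindow : Op()  // the third operation
    object TapFirstControl : Op()  // the fourth operation
}

enum class Screen { FIRST_PAGE_FULL, SECOND_PAGE_FULL, THIRD_PAGE_FULL, MULTI_TASK }

// Maps each received operation to the display state the device enters.
fun dispatch(op: Op): Screen = when (op) {
    Op.EnterMultiTask -> Screen.MULTI_TASK         // S103 and S107
    Op.TapFirstWindow -> Screen.FIRST_PAGE_FULL    // S104
    Op.TapSecondWindow -> Screen.SECOND_PAGE_FULL  // S105
    Op.TapFirstControl -> Screen.THIRD_PAGE_FULL   // S106
}
```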
In some embodiments, the electronic device displays a second control in the third area in response to the fifth operation, where the second control is associated with the first page. When the electronic device receives a sixth operation on the second control, the electronic device displays the first page in full screen in response to the sixth operation. For example, as shown in FIG. 2K, the fifth operation is the upward sliding operation that is shown in FIG. 2I and that is performed at the bottom of the third page, and the third area (the control area 525) includes the second control (the control 528 shown in FIG. 2J). The second control is automatically added by the electronic device. If the electronic device has displayed the third page, the third page is a historical track interface; the electronic device establishes a correspondence between the second control and the third page, and displays the second control in the third area. Optionally, if the electronic device has displayed the third page and the cumulative display time of the third page is greater than a first threshold, the third page is a historical track interface; the electronic device establishes a correspondence between the second control and the third page, and displays the second control in the third area. Optionally, if the electronic device has displayed the third page and the cumulative quantity of times the third page has been displayed is greater than a second threshold, the third page is a historical track interface; the electronic device establishes a correspondence between the second control and the third page, and displays the second control in the third area. A manner in which the electronic device automatically adds a control associated with a historical track interface is provided herein, so that a user can quickly return to a historical interface. The electronic device displays the third page in a window form in the first area. In this case, the electronic device may directly enter the third page based on the third page in the window form, which functions the same as the first control. Therefore, when the electronic device identifies that the current interface is the multi-task interface entered from the third page, the first control that is in the third area and that is associated with the third page may be omitted. This reduces resource usage. In some embodiments, when the electronic device receives a seventh operation on the first page in the window form, the electronic device displays the second control in the third area in response to the seventh operation, where the second control is associated with the first page. Alternatively, the second control may be manually added by the user. For example, as shown in FIG. 2F and FIG. 2G, the seventh operation is a tap operation on the icon 5411. In some embodiments, the electronic device displays a fourth page in a window form in the first area in response to the seventh operation. The fourth page is a homepage of the first application or an upper-level page of the first page in the first application. For example, as shown in FIG. 2F and FIG. 2G, the first page (the page 541) in the window form is displayed in the first area in FIG. 2F. After the seventh operation on the icon 5411 is detected, the page 524 in a window form in FIG. 2G is displayed in the first area. The page 524 is the homepage of the instant messaging application (the first application). In some embodiments, when the electronic device receives an eighth operation on the third page in a window form, the electronic device displays the first control in the third area in response to the eighth operation.
When the electronic device displays the third page in full screen and receives a ninth operation on the third page, the electronic device displays the third page in the window form in the first area in response to the ninth operation. A manner of manually adding the first control is described herein. The third page in a window form is displayed in the first area of the multi-task interface. When receiving a user operation on the third page, the electronic device may add the first control to the multi-task interface. For example, as shown in FIG. 2B and FIG. 2C, the third page in the window form is the page 521 shown in FIG. 2B. When the electronic device receives a tap operation on the control 5211, the control 526 (the first control) is displayed in the third area in FIG. 2C. In some embodiments, the electronic device displays the third page in full screen before displaying the first page of the first application in full screen. When the electronic device receives a tenth operation on the third page, the electronic device displays the first page in full screen in response to the tenth operation. A manner of automatically adding the first control is described herein: if the electronic device has displayed the third page, the electronic device establishes a correspondence between the first control and the third page. In some embodiments, the electronic device displays the third page in full screen before displaying the first page of the first application in full screen, where the cumulative display time of the third page is greater than the first threshold. When the electronic device receives a tenth operation on the third page, the electronic device displays the first page in full screen in response to the tenth operation. Another manner of automatically adding the first control is described herein: if the electronic device has displayed the third page and the cumulative display time of the third page is greater than the first threshold, the electronic device establishes a correspondence between the first control and the third page. In some embodiments, the electronic device displays the third page in full screen before displaying the first page of the first application in full screen, where the cumulative quantity of times of displaying the third page is greater than the second threshold. When the electronic device receives the tenth operation on the third page, the electronic device displays the first page in full screen in response to the tenth operation. Another manner of automatically adding the first control is described herein: if the electronic device has displayed the third page and the cumulative quantity of times of displaying the third page is greater than the second threshold, the electronic device establishes a correspondence between the first control and the third page. In some embodiments, the third area further includes a third control, the third control is associated with a fifth page of the first application, and the third control is preset by the electronic device. In some embodiments, the electronic device displays the fifth page in full screen. When the electronic device receives an eleventh operation on the fifth page, the electronic device displays the first control and a fourth control in response to the eleventh operation. When the electronic device receives a user operation on the first control, the electronic device displays the third page in full screen.
When the electronic device receives a user operation on the fourth control, the electronic device displays a sixth page of a third application in full screen, where the third application differs from the first application. In this way, the user may view controls of a plurality of applications on the electronic device. When the eleventh operation is detected on the fifth page, such as the user interface 610 in FIG. 7A, the electronic device displays the interface shown in FIG. 7B. The first control, such as the control 622, is associated with the first application. The fourth control, such as the control 621, is associated with the third application. In some embodiments, after the electronic device displays the third page in full screen upon receiving the user operation on the first control, the electronic device displays a fifth control on the third page. When the electronic device receives a user operation on the fifth control, the electronic device displays the fifth page in full screen. A return control is provided herein: if the electronic device enters the third page by using the first control, the electronic device may quickly return to the fifth page by using the fifth control on the third page. For example, the fifth control is the control 640 shown in FIG. 7C. In some embodiments, after the electronic device displays the sixth page of the third application in full screen upon receiving the user operation on the fourth control, the electronic device displays a sixth control on the sixth page. When the electronic device receives a user operation on the sixth control, the electronic device displays the fifth page in full screen. All or some of the foregoing embodiments may be implemented by using software, hardware, firmware, or any combination thereof. When software is used to implement the embodiments, all or some of the embodiments may be implemented in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of this application are generated in whole or in part. The computer may be a general-purpose computer, a dedicated computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or may be transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, over a coaxial cable, an optical fiber, or a digital subscriber line) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible by the computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (such as a floppy disk, a hard disk, or a magnetic tape), an optical medium (such as a DVD), a semiconductor medium (such as a solid-state drive), or the like. Persons of ordinary skill in the art may understand that all or some of the procedures of the methods in the embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium. When the program is executed, the procedures of the method embodiments may be included.
The foregoing storage medium includes any medium that can store program code, such as a ROM, a random access memory (RAM), a magnetic disk, or an optical disc.
DETAILED DESCRIPTION

In accordance with various implementations, mechanisms (which can include methods, systems, and media) for presenting offset content are provided. In some implementations, the mechanisms can cause first content to be presented on a user device, such as a wearable computer (e.g., a smart watch or other wearable computing device), a vehicle computer, a tablet computer, a mobile phone, and/or any other suitable type of computer. In some implementations, the first content can be any suitable type of content, such as a home screen of the user device, a messaging client, map content, content from a fitness tracker, a display associated with a media player, and/or any other suitable type of content. In some implementations, the mechanisms can determine that the first content is to be shifted in a particular direction (e.g., up, down, left, right, and/or in any other suitable direction) and by a particular amount. In some implementations, the first content can be shifted in a direction that causes a portion of the display to no longer be used to display the first content. For example, in instances where the first content is shifted upward, a bottom portion of the display screen may no longer be used to display the first content. In some such implementations, the mechanisms can cause second content to be presented in the unused portion of the display. For example, in some implementations, the second content can include contextual controls to interact with the first content. As a more particular example, in instances where the first content is a user interface that presents a map, the second content can include user interface controls to zoom in or out on the map, controls to get directions to a particular location, and/or any other suitable controls. As another example, in some implementations, the second content can include contextual controls to interact with an application executing in the background of the user device, such as a media player that is causing audio content to be presented by the user device. As a more particular example, in some implementations, in instances where the contextual controls are controls for interacting with a media player, the contextual controls can include a pause button, a volume adjustment, and/or any other suitable controls. In some implementations, the mechanisms can determine that the first content is to be shifted based on any suitable information. For example, in some implementations, the mechanisms can determine that the first content is to be shifted based on a determination that a particular button associated with the user device (e.g., a power button, and/or any other suitable button) has been pressed. As another example, in some implementations, the mechanisms can determine that the first content is to be shifted based on a determination that the user device has been rotated or tilted in a particular direction, which can indicate that the first content is to be shifted in a corresponding direction. In yet another example, in some implementations, the mechanisms can determine that the first content is to be offset from its initial center point by a particular distance based on a desired size of the second content including one or more contextual controls. Turning to FIGS. 1A and 1B, illustrative examples of user interfaces for presenting offset content in accordance with some implementations of the disclosed subject matter are shown.
In some implementations, the user interfaces can be presented on any suitable type of user device. For example, as shown in FIGS. 1A and 1B, in some implementations, the user interfaces can be presented on a wearable computing device, such as a watch. As another example, in some implementations, the user interfaces can be presented on any other suitable type of user device, such as a mobile phone, a tablet computer, a media player, and/or any other suitable type of user device. Note that, in some implementations, the user device can have a display screen of any suitable shape (e.g., a circular shape, an elliptical shape, a square shape, a rectangular shape, a curved rectangular shape, and/or any other suitable shape). In some implementations, as shown in FIG. 1A, content 100 can be presented on a user device. In some implementations, content 100 can then be shifted (e.g., up, down, left, right, diagonally, and/or in any other suitable direction). For example, as shown in FIG. 1B, content 100 can be shifted upward so that a portion 140 of content 100 is no longer within the display of the user device and a remaining portion 130 of content 100 is displayed on the user device. In some implementations, a portion of the screen that no longer includes content 100, such as portion 150, can be blank, as shown in FIG. 1B. Additionally or alternatively, in some implementations, portion 150 can include any suitable additional content, as shown in and described below in connection with FIGS. 2A-2D. Note that, in some implementations, content 100 can be shifted in response to the user device receiving any suitable type of user input, such as a button press, a detected motion of the user device, and/or any other suitable type of user input, as described below in more detail in connection with block 504 of FIG. 5. It should be noted that, in some implementations, in response to shifting or offsetting content 100 such that portion 140 of content 100 would no longer be displayed within the display of the user device, content 100 can be modified such that content 100 fits within remaining portion 130. In some implementations, the user device can include a setting for indicating whether to offset content 100 or to resize or otherwise redraw content 100 to fit within remaining portion 130. Additionally or alternatively, a content provider associated with content 100 can indicate whether to offset content 100 (or particular types of content) in response to receiving a request to present second content in portion 150. For example, a content provider can associate an indication with the content that inhibits the content from being offset. In another example, a content provider can associate particular controls with a particular type of content (e.g., media playback controls for playing back media content items and navigational controls for interacting with playlists of media content items). It should also be noted that, in some implementations, the user device can present settings for indicating an offset direction for presenting additional content, such as contextual controls. For example, when the user device is a wearable computing device placed on the left wrist of a user, a setting can be selected to cause content 100 to be offset such that the additional content appears on the right edge of the display.
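One way to picture this layout decision is sketched below: given the desired height of the contextual-control area, the first content is either shifted upward by that amount or redrawn to fit the remaining portion, per the resize-versus-offset setting described above. The function name, parameter names, and the Region type are assumptions introduced for illustration.

```kotlin
// A vertical band of the display, in pixels from the top.
data class Region(val top: Int, val height: Int)

// Computes where the first content and the contextual-control area land
// when the control area occupies the bottom of the display.
fun layoutForControls(
    displayHeight: Int,
    controlAreaHeight: Int,
    resizeInsteadOfOffset: Boolean,
): Pair<Region, Region> {
    val remaining = displayHeight - controlAreaHeight
    val content = if (resizeInsteadOfOffset) {
        // Redraw the content so it fits within the remaining portion.
        Region(top = 0, height = remaining)
    } else {
        // Shift the content upward; its top portion moves above the
        // visible area, like portion 140 in FIG. 1B.
        Region(top = -controlAreaHeight, height = displayHeight)
    }
    val controls = Region(top = remaining, height = controlAreaHeight)
    return content to controls
}
```

For a 400-pixel-tall display and a 100-pixel control area, the offset branch leaves rows 0-299 showing the lower part of the content and rows 300-399 free for the second content, mirroring portions 130 and 150 of FIG. 1B.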
Settings can also be presented that cause a first type of content (e.g., contextual controls) to be presented by shifting content 100 in an upward direction, while presenting a second type of content (e.g., indicators of additional content, such as additional information regarding a restaurant corresponding to a location in map content) by shifting content 100 in a diagonal direction and presenting the second type of content in a corner of the display. It should further be noted that, in some implementations, the user device can determine an offset for content 100 in response to presenting additional content, such as contextual controls for interacting with content 100. For example, the user device can analyze content 100 to distinguish primary content from secondary content and, based on the determination, can offset content 100 such that the additional content is presented within a region of secondary content while at least a portion of the primary content continues to be presented. In a more particular example, content 100 that includes map content having likely regions of interest and likely regions of non-interest (e.g., blank spaces, or regions in which there are no establishments for providing additional content) can be analyzed (e.g., at the user device, or by transmitting content 100 to an external server device for analysis) and, in response to performing the analysis, content 100 can be shifted to present additional content within a region of non-interest while likely regions of interest continue to be presented. In another more particular example, content 100 can be analyzed to determine blank portions within content 100 and, based on the determined blank portions, content 100 can be offset to present the additional content within a portion of content 100 containing a particular amount of blank area (e.g., greater than a threshold area). FIGS. 2A and 2B show illustrative examples of user interfaces for presenting map content in accordance with some implementations of the disclosed subject matter. For example, in some implementations, map content 200 can be presented on a user device and can include any suitable images or graphics, such as a map of a particular geographic location. In some implementations, map content 200 can then be shifted (e.g., up, down, left, right, diagonally, and/or in any other suitable direction). For example, as shown in FIG. 2B, map content 200 can be shifted upward so that a portion of map content 200 is no longer visible on the display of the user device and a remaining portion 210 is presented on the user device. Additionally, in some implementations, contextual controls 220 can be presented in a portion of the display that no longer includes map content 200. For example, as shown in FIG. 2B, contextual controls 220 can include controls suitable for interacting with map content 200, such as selectable inputs to zoom in or out of the map content, a selectable input to get directions to a particular location, a selectable input to find a particular type of establishment (e.g., a restaurant, a type of store, etc.) within a geographic location, and/or any other suitable type of controls. FIGS. 2C and 2D show examples of user interfaces for presenting content associated with a fitness tracker (e.g., a run tracker, a pedometer, and/or any other suitable type of content) in accordance with some implementations of the disclosed subject matter.
FIGS. 2C and 2D show examples of user interfaces for presenting content associated with a fitness tracker (e.g., a run tracker, a pedometer, and/or any other suitable type of content) in accordance with some implementations of the disclosed subject matter. For example, in some implementations, content 250 can be presented on a user device and can include any suitable text, images, icons, graphics, animations, and/or any other suitable content. As a more particular example, as shown in FIG. 2C, content 250 can include a duration of time elapsed since a timer was started, a distance traveled during the elapsed time, a pace corresponding to the distance traveled, and/or any other suitable information. In some implementations, content 250 can then be shifted (e.g., up, down, left, right, diagonally, and/or in any other suitable direction). For example, as shown in FIG. 2D, content 250 can be shifted in an upward direction so that a portion of content 250 is no longer visible on the display of the user device and a remaining portion 260 is presented on the user device. Additionally, in some implementations, contextual controls 270 can be presented in a portion of the display that no longer includes content 250. For example, in some implementations, contextual controls 270 can include controls suitable for interacting with a fitness tracker (e.g., a selectable input to pause the tracker, a selectable input to detect a current geographic location, and/or any other suitable controls). As another example, in some implementations, contextual controls 270 can include controls that may be useful while using a fitness tracker. As a more particular example, as shown in FIG. 2D, contextual controls 270 can include controls for manipulating playback of audio content that a user of the user device may be listening to, such as a stop button, a rewind or fast-forward button, volume adjustment, and/or any other suitable controls.

Note that the examples of map content and fitness tracker content described above in connection with FIGS. 2A-2D are provided merely as illustrative examples, and content presented on a user device can be any suitable type of content, such as a home screen of the user device, a messaging screen of the user device, a presentation of a media content item, and/or any other suitable type of content. Additionally, note that, in some implementations, content presented in a portion of a display that no longer includes the shifted content can be manipulated. For example, referring to FIG. 2D, in some implementations, a user of the user device can swipe contextual controls 270 (e.g., right, left, and/or in any other suitable direction), which can cause a different group of contextual controls to be presented.

Turning to FIG. 3, an illustrative example 300 of hardware for presenting offset content that can be used in accordance with some implementations of the disclosed subject matter is shown. As illustrated, hardware 300 can include one or more servers such as a server 302, a communication network 304, and/or one or more user devices 306, such as user devices 308 and 310.

In some implementations, server 302 can be any suitable server for storing content, information, and/or data. For example, in some implementations, server 302 can be a server that stores data related to applications executing on user device 306 and/or that has one or more applications suitable for executing on user device 306 available for download. As another example, in some implementations, server 302 can be a server that streams media content (e.g., music, audiobooks, live-streamed audio content, video content, and/or any other suitable type of media content) to user device 306 via communication network 304. In some implementations, server 302 can be omitted.
Communication network 304 can be any suitable combination of one or more wired and/or wireless networks in some implementations. For example, communication network 304 can include any one or more of the Internet, an intranet, a wide-area network (WAN), a local-area network (LAN), a wireless network, a digital subscriber line (DSL) network, a frame relay network, an asynchronous transfer mode (ATM) network, a virtual private network (VPN), and/or any other suitable communication network. User devices 306 can be connected by one or more communications links 312 to communication network 304, which can be linked via one or more communications links (e.g., communications link 314) to server 302. Communications links 312 and/or 314 can be any communications links suitable for communicating data among user devices 306 and server 302, such as network links, dial-up links, wireless links, hard-wired links, any other suitable communications links, or any suitable combination of such links.

In some implementations, user devices 306 can include one or more computing devices suitable for viewing content, and/or any other suitable functions. For example, in some implementations, user devices 306 can be implemented as a mobile device, such as a wearable computer, a smartphone, a tablet computer, a vehicle (e.g., a car, a boat, an airplane, or any other suitable vehicle) information or entertainment system, a portable media player, and/or any other suitable mobile device. As another example, in some implementations, user devices 306 can be implemented as a non-mobile device, such as a desktop computer, a set-top box, a television, a streaming media player, a game console, and/or any other suitable non-mobile device.

Although server 302 is illustrated as a single device, the functions performed by server 302 can be performed using any suitable number of devices in some implementations. For example, in some implementations, multiple devices can be used to implement the functions performed by server 302. Although two user devices 308 and 310 are shown in FIG. 3, any suitable number of user devices, and/or any suitable types of user devices, can be used in some implementations.

Server 302 and user devices 306 can be implemented using any suitable hardware in some implementations. For example, in some implementations, devices 302 and 306 can be implemented using any suitable general purpose computer or special purpose computer (e.g., a server may be implemented using a special purpose computer). Any such general purpose computer or special purpose computer can include any suitable hardware. For example, as illustrated in example hardware 400 of FIG. 4, such hardware can include a hardware processor 402, memory and/or storage 404, an input device controller 406, an input device 408, display/audio drivers 410, display and audio output circuitry 412, communication interface(s) 414, an antenna 416, and a bus 418.

Hardware processor 402 can include any suitable hardware processor, such as a microprocessor, a micro-controller, digital signal processor(s), dedicated logic, and/or any other suitable circuitry for controlling the functioning of a general purpose computer or a special purpose computer in some implementations. In some implementations, hardware processor 402 can be controlled by a server program stored in memory and/or storage 404 of a server (e.g., server 302). For example, the server program can cause hardware processor 402 of server 302 to transmit content to a user device and/or receive information from a user device.
In some implementations, hardware processor 402 can be controlled by a computer program stored in memory and/or storage 404 of user device 306. For example, the computer program can cause hardware processor 402 of user device 306 to perform any of the functions described in connection with FIG. 5, and/or perform any other suitable functions.

Memory and/or storage 404 can be any suitable memory and/or storage for storing programs, data, media content, advertisements, and/or any other suitable information in some implementations. For example, memory and/or storage 404 can include random access memory, read-only memory, flash memory, hard disk storage, optical media, and/or any other suitable memory.

Input device controller 406 can be any suitable circuitry for controlling and receiving input from one or more input devices 408 in some implementations. For example, input device controller 406 can be circuitry for receiving input from a touchscreen, from a keyboard, from a mouse, from one or more buttons, from a voice recognition circuit, from a microphone, from a camera, from an optical sensor, from an accelerometer, from a gyroscope, from a temperature sensor, from a near field sensor, and/or from any other type of input device.

Display/audio drivers 410 can be any suitable circuitry for controlling and driving output to one or more display/audio output devices 412 in some implementations. For example, display/audio drivers 410 can be circuitry for driving a touchscreen, a flat-panel display, a cathode ray tube display, a projector, a speaker or speakers, and/or any other suitable display and/or presentation devices.

Communication interface(s) 414 can be any suitable circuitry for interfacing with one or more communication networks, such as network 304 as shown in FIG. 3. For example, interface(s) 414 can include network interface card circuitry, wireless communication circuitry, and/or any other suitable type of communication network circuitry.

Antenna 416 can be any suitable one or more antennas for wirelessly communicating with a communication network (e.g., communication network 304) in some implementations. In some implementations, antenna 416 can be omitted.

Bus 418 can be any suitable mechanism for communicating between two or more components 402, 404, 406, 410, and 414 in some implementations. Any other suitable components can be included in hardware 400 in accordance with some implementations.

Turning to FIG. 5, an example 500 of a process for presenting offset content is shown in accordance with some implementations of the disclosed subject matter. In some implementations, blocks of process 500 can be executed on a user device.

In some implementations, process 500 can begin at 502 by causing a first user interface to be presented on a user device. In some implementations, the first user interface can include any suitable content, such as a home screen for the user device, map content, fitness tracker content, a user interface corresponding to a media content player, and/or any other suitable content. In some implementations, process 500 can cause the first user interface to be presented in any suitable manner. For example, as shown in FIGS. 1A, 2A, and 2C, the first user interface can be presented in a manner that occupies an entirety of the display.

At 504, process 500 can receive, from the user device, an input to shift the first user interface. For example, in some implementations, the input can indicate that the first user interface is to be shifted upward (or in any other suitable direction), as shown in FIGS. 1B, 2B, and 2D.
In some implementations, the user input can be any suitable input. For example, in some implementations, the user input can be a button press of a button associated with the user device (e.g., a power button, and/or any other suitable button), a swipe or other gesture on a touchscreen of the user device, and/or any other suitable user input. As another example, in some implementations, the user input can be from a directional change or movement of the user device, such as a tilt or rotation of the user device in a particular direction. As a more particular example, in some implementations, the input can be a detection that a user of the user device has tilted the user device upward, which can indicate that the first user interface is to be shifted downward. As another more particular example, in some implementations, the input can be a detection of a finger proximal to the display of the user device, which can indicate a user interest in interacting with content being presented on the first user interface.

In some implementations, process 500 can determine an offset for the first user interface at 506. The offset can indicate any suitable information, such as an amount that the first user interface is to be shifted and/or a direction (e.g., up, down, left, right, diagonally, and/or any other suitable direction) in which the first user interface is to be shifted. For example, in some implementations, the offset can indicate that the first user interface is to be shifted by a particular fraction or percentage, by a particular number of pixels, by a particular number of millimeters, and/or by any other suitable shift.

In some implementations, process 500 can determine the offset based on any suitable information. For example, in some implementations, the offset can be a fixed offset of a predetermined size or fraction (e.g., 25%, 30%, 20 pixels, 50 pixels, 10 millimeters, 20 millimeters, and/or any other suitable size) and/or direction that is determined based on any suitable information, such as a size of a user interface that includes contextual controls (e.g., as described above in connection with FIGS. 2A-2D). As a more particular example, a fixed offset can be received from a user of the user device, such as a selected setting indicating a preference to offset the first user interface such that additional content is sized to occupy 40% of the available display of the user device. As another example, in some implementations, the offset can be determined based on the user input. As a more particular example, in instances where the user input is a button press, process 500 can determine the offset based on a duration of time the button is pressed. As a specific example, in some implementations, process 500 can begin shifting the first user interface in a particular direction (e.g., up, down, left, right, and/or any other suitable direction) and can continue shifting the first user interface until process 500 determines that the button has been released. As another more particular example, in instances where the user input is based on input from a motion sensor (e.g., a gyroscope, an accelerometer, and/or any other suitable type of motion sensor), process 500 can determine a size and direction of the offset based on the magnitude and direction of motion detected by the motion sensor. As a specific example, in some implementations, process 500 can determine that the offset is to have a direction that corresponds to a direction of rotation or tilt detected by the motion sensor.
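The following Kotlin sketch illustrates the two input-driven offset determinations just described, under assumed units and parameter names (none of these come from a real sensor API): the motion-based offset is proportional to the detected tilt, with its sign carrying the direction, and the button-based offset grows with hold duration.

// Motion-based offset: proportional to the degree of rotation, with the sign
// of the tilt giving the shift direction; clamped to a maximum displacement.
fun offsetFromTilt(tiltDegrees: Double, pxPerDegree: Double, maxOffsetPx: Int): Int =
    (tiltDegrees * pxPerDegree).toInt().coerceIn(-maxOffsetPx, maxOffsetPx)

// Button-based offset: shifting continues while the button is held, so the
// resulting offset is a function of hold duration, up to the maximum.
fun offsetFromButtonHold(heldMs: Long, pxPerMs: Double, maxOffsetPx: Int): Int =
    (heldMs * pxPerMs).toInt().coerceAtMost(maxOffsetPx)

fun main() {
    println(offsetFromTilt(tiltDegrees = 15.0, pxPerDegree = 2.0, maxOffsetPx = 60))  // 30
    println(offsetFromButtonHold(heldMs = 800, pxPerMs = 0.1, maxOffsetPx = 60))      // 60 (clamped)
}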
As another specific example, in some implementations, process 500 can determine that the size of the offset is to be proportional to a degree of rotation detected by the motion sensor. As yet another specific example, process 500 can determine that the first user interface is to begin being shifted in a particular direction in response to determining that the user device is being rotated or tilted, and can continue shifting the first user interface until process 500 determines that the rotation has stopped.

Note that, in some implementations, process 500 can determine that the first user interface is to be shifted to present a portion of the first user interface that is obstructed or not fully visible due to an obstruction (e.g., watch hands and/or a hinge on a watch, a finger covering a display of the user device, and/or any other suitable type of obstruction). In some such implementations, process 500 can determine a size and/or direction of the offset based on the size and placement of the obstruction. For example, in instances where the obstruction is a pair of watch hands or a hinge that the watch hands are connected to, process 500 can determine an amount of the offset based on a size (e.g., length, width, and/or any other suitable size information) of the hands or the hinge, and can determine a direction of the offset based on any suitable information, such as a direction the watch hands are pointing.

Note also that, in some implementations, process 500 can determine that the first user interface is to be shifted in a particular direction based on the content being presented in the first user interface. For example, process 500 can analyze the first user interface and determine which portion of the first user interface can be removed by the offset. In a more particular example, process 500 can analyze the content being presented in the first user interface and determine which portions of the first user interface are likely to contain primary content and which portions are likely to contain secondary content. Based on the determination, process 500 can determine an offset amount such that additional user interface portions or offset content can be presented while the portions that are likely to contain secondary content are removed by the offset amount. For example, in response to determining that the first user interface contains portions having blank content, process 500 can determine an offset amount by which a portion of the first user interface containing blank content is removed from being displayed and the additional user interface portions or offset content are presented in the portion previously containing blank content.

Note also that, in some implementations, process 500 can determine that the first user interface is to be shifted by a particular offset amount based on the additional content to be presented on the user device. For example, process 500 can analyze the additional user interface portions that include contextual controls and can determine the offset amount to apply to the first user interface such that the contextual controls can be displayed at a given size (e.g., based on user preferences). In another example, process 500 can analyze the additional user interface portions that include multiple sets of contextual controls and can determine the offset amount to apply to the first user interface such that each of the multiple sets of contextual controls can be displayed without continuously modifying the offset amount.
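One way to realize the watch-hand example above is sketched below in Kotlin, under assumed conventions (angle measured clockwise from 12 o'clock, screen y growing downward; the names and geometry are illustrative only): the offset amount tracks the obstruction's length plus a margin, and the content moves opposite to the direction the hands point.

import kotlin.math.cos
import kotlin.math.sin

data class Offset(val dxPx: Int, val dyPx: Int)

// The hand length plus a margin sets the amount; the pointing angle sets the
// direction, with content shifted away from the obstructed region.
fun offsetForWatchHands(handLengthPx: Int, pointingAngleDeg: Double, marginPx: Int): Offset {
    val amount = handLengthPx + marginPx
    val rad = Math.toRadians(pointingAngleDeg)
    // Unit vector toward the hands is (sin, -cos) in screen coordinates
    // (x right, y down, 0 degrees = up); negate it to move content away.
    val dx = -(amount * sin(rad))
    val dy = amount * cos(rad)
    return Offset(dx.toInt(), dy.toInt())
}

fun main() {
    // Hands pointing up (12 o'clock): shift content down by length + margin.
    println(offsetForWatchHands(handLengthPx = 40, pointingAngleDeg = 0.0, marginPx = 8))  // Offset(dxPx=0, dyPx=48)
}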
Alternatively, process 500 can determine whether the offset amount of the first user interface is to be modified based on the content currently being presented in the first user interface and/or the contextual controls being presented in the additional user interface. For example, in response to receiving a user input to select a second set of contextual controls, process 500 can determine whether the offset amount of the first user interface is to be modified based on properties of the second set of contextual controls.

In some implementations, process 500 can identify contextual controls to present in connection with the first user interface at 508. In some implementations, process 500 can identify the contextual controls based on any suitable information. For example, in some implementations, process 500 can identify the contextual controls based on content presented in the first user interface. As a more particular example, as shown in and described above in connection with FIGS. 2A and 2B, in instances where the first user interface presents map content, process 500 can determine that the contextual controls are to be controls for interacting with the map content, such as selectable inputs to zoom in or out on the map, a search feature to search for locations or businesses on the map, a selectable input to get directions based on the map, and/or any other suitable controls. As another more particular example, as shown in and described above in connection with FIGS. 2C and 2D, in instances where the first user interface presents content from a fitness tracker (e.g., a pedometer, a run tracker, and/or any other suitable type of fitness tracker), the contextual controls can be controls for interacting with the fitness tracker (e.g., selectable inputs to pause a distance tracker, and/or any other suitable controls).

Additionally or alternatively, in some implementations, the contextual controls can be controls for interacting with an application operating in the background on the user device, such as a media player, a messaging application (e.g., an e-mail client, a text messaging application, and/or any other suitable messaging application), and/or any other suitable type of application. For example, in some implementations, the contextual controls can be controls for starting or stopping audio content that is being presented, controls for skipping a song that is being played, controls for volume adjustment, and/or any other suitable controls. As another example, in some implementations, the contextual controls can be controls for previewing received messages, composing a new message, reading a particular message, and/or any other suitable controls.

Note that, in some implementations, process 500 can determine multiple groups of contextual controls. For example, in some implementations, process 500 can identify a first group of controls suitable for interacting with content presented in the first user interface and a second group of controls suitable for interacting with an application operating in the background of the user device (a sketch of this identification appears below).

Process 500 can present the first user interface shifted by the offset at 510. For example, as shown in FIGS. 1B, 2B, and 2D, the first user interface can be shifted such that a portion of the first user interface is no longer visible on a display of the user device. As a more particular example, in instances where the first user interface is shifted upward, an upper portion of the first user interface may no longer be visible after the first user interface is shifted.
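As an illustration of the control identification at block 508, here is a hedged Kotlin sketch; the content types and control names are invented for the example, not taken from the disclosure's figures. A first group is chosen from the foreground content, and a second group is added when a background application (e.g., a media player) is active.

enum class ForegroundContent { MAP, FITNESS_TRACKER, OTHER }

fun controlsFor(content: ForegroundContent): List<String> = when (content) {
    ForegroundContent.MAP -> listOf("zoom_in", "zoom_out", "directions", "search")
    ForegroundContent.FITNESS_TRACKER -> listOf("pause_tracker", "current_location")
    ForegroundContent.OTHER -> emptyList()
}

// Returns an ordered list of control groups; a swipe on the presented group
// can cycle to the next one, as described in connection with block 510.
fun contextualControlGroups(foreground: ForegroundContent, mediaPlayingInBackground: Boolean): List<List<String>> {
    val groups = mutableListOf(controlsFor(foreground))
    if (mediaPlayingInBackground) {
        groups.add(listOf("play_pause", "skip_track", "volume_up", "volume_down"))
    }
    return groups
}

fun main() {
    println(contextualControlGroups(ForegroundContent.FITNESS_TRACKER, mediaPlayingInBackground = true))
}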
In some implementations, process 500 can additionally present a second user interface that includes the contextual controls in a portion of the display that no longer includes the first user interface, as shown in and described above in connection with FIGS. 2B and 2D. Note that, in instances where process 500 identifies multiple groups of contextual controls as described above, process 500 can cause a first group of contextual controls (e.g., controls for interacting with content presented in the first user interface) to be presented, as shown in and described above in connection with FIGS. 2B and 2D. In some such implementations, process 500 can cause presentation of the first group of contextual controls to be inhibited and can cause the second group of contextual controls to be presented, for example, in response to determining that a user of the user device has swiped the first group of contextual controls in a particular direction or otherwise indicated that the second group of contextual controls is to be presented.

Process 500 can cause the first user interface to be shifted using any suitable technique or combination of techniques. For example, in some implementations, process 500 can use an event handler that detects a particular type of user input (e.g., a button press, motion of the user device, and/or any other suitable user input as described above) indicating that the first user interface is to be shifted. In some such implementations, the event handler can then call a function that applies an offset to currently displayed content. For example, in some implementations, the function can take as an input a size and direction of the offset, as described above in connection with block 506. As a more particular example, in some implementations, the function used can be the same as or similar to a function used for burn-in protection that causes content displayed on a screen to be periodically shifted by a predetermined amount.

In some implementations, process 500 can cause the first user interface to be shifted by modifying a mapping between a logical display that represents content to be presented on the user device and a physical display that represents a display screen of the user device. For example, in some implementations, when the first user interface is presented in a first position without an offset (e.g., as shown in FIGS. 1A, 2A, and 2C and as described above in connection with block 502), there can be a one-to-one mapping between the logical display and the physical display such that each pixel represented in the logical display corresponds to a pixel of the physical display. Process 500 can then apply the offset by adding a displacement to a representation of the first user interface in the logical display. For example, in some implementations, process 500 can add blank rows or columns to represent an unused portion of the display, and/or can redraw the representation of the first user interface in the logical display in any other suitable manner based on the determined offset.

Note that, in some implementations, process 500 can cause the first user interface to be shifted with any suitable animation or transition. For example, in some implementations, process 500 can cause the first user interface to appear to slide in a direction of the shift as it is shifted.
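The logical-to-physical mapping described above can be modeled in a few lines of Kotlin. This toy version treats the display as a list of text rows rather than pixels, which is an illustrative simplification: each physical row reads from a displaced logical row, and rows vacated by the displacement render blank.

// Render the physical display from the logical display under a row offset.
// offsetRows > 0 shifts content up, leaving blank rows at the bottom for the
// second user interface (e.g., contextual controls).
fun renderWithOffset(logical: List<String>, offsetRows: Int): List<String> =
    List(logical.size) { physicalRow ->
        val logicalRow = physicalRow + offsetRows
        if (logicalRow in logical.indices) logical[logicalRow] else ""  // blank row
    }

fun main() {
    val logical = listOf("row0", "row1", "row2", "row3")
    println(renderWithOffset(logical, 0))  // one-to-one mapping: [row0, row1, row2, row3]
    println(renderWithOffset(logical, 1))  // shifted up: [row1, row2, row3, ]
}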
Note that, in some implementations, in response to shifting or offsetting content such that a portion of the content in the first user interface would no longer be displayed within the display of the user device, process 500 can determine whether the content should be modified such that the content fits within the remaining portion of the display. In some implementations, the user device can include a setting for indicating whether to offset the content being presented within the first user interface or to resize or otherwise redraw the content being presented within the first user interface to fit within the remaining portion of the display. Additionally or alternatively, a content provider associated with the content being presented in the first user interface can indicate whether to offset the content (or particular types of content) in response to receiving a request to present additional user interface options.

At 512, process 500 can cause the first user interface to resume being presented in an original position (e.g., without the offset). In some implementations, in instances where a second user interface with contextual controls was presented at block 510, process 500 can cause the second user interface to no longer be presented. In some implementations, process 500 can cause the first user interface to resume being presented in the original position in response to determining that a predetermined duration of time (e.g., 400 milliseconds, 500 milliseconds, one second, and/or any other suitable duration of time) has elapsed. In some implementations, process 500 can cause the first user interface to resume being presented using any suitable technique or combination of techniques. For example, in some implementations, process 500 can call a function that causes content to be presented, and process 500 can indicate (e.g., in a function call, and/or in any other suitable manner) that no offset is to be used. In some implementations, process 500 can call the same function used to render the first user interface with the offset to resume presentation of the first user interface without the offset.

In some implementations, at least some of the above described blocks of the process of FIG. 5 can be executed or performed in any order or sequence not limited to the order and sequence shown in and described in connection with the figure. Also, some of the above blocks of FIG. 5 can be executed or performed substantially simultaneously where appropriate or in parallel to reduce latency and processing times. Additionally or alternatively, some of the above described blocks of the process of FIG. 5 can be omitted.

In some implementations, any suitable computer readable media can be used for storing instructions for performing the functions and/or processes herein. For example, in some implementations, computer readable media can be transitory or non-transitory.
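A small Kotlin sketch of the timed revert at block 512 follows, with the timing mechanism (a plain thread) chosen only for illustration: the same render routine used for the shifted presentation is called again with a zero offset once the predetermined duration elapses.

import kotlin.concurrent.thread

fun presentWithTimedRevert(render: (offsetRows: Int) -> Unit, offsetRows: Int, revertAfterMs: Long) {
    render(offsetRows)               // shifted presentation (block 510)
    thread {
        Thread.sleep(revertAfterMs)  // e.g., 400 ms to one second, per the description
        render(0)                    // resume the original position (block 512)
    }
}

fun main() {
    presentWithTimedRevert({ off -> println("render with offset $off") }, offsetRows = 12, revertAfterMs = 500)
    Thread.sleep(600)                // keep the demo process alive for the revert
}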
For example, non-transitory computer readable media can include media such as non-transitory forms of magnetic media (such as hard disks, floppy disks, and/or any other suitable magnetic media), non-transitory forms of optical media (such as compact discs, digital video discs, Blu-ray discs, and/or any other suitable optical media), non-transitory forms of semiconductor media (such as flash memory, electrically programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and/or any other suitable semiconductor media), any suitable media that is not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media. As another example, transitory computer readable media can include signals on networks, in wires, conductors, optical fibers, circuits, any suitable media that is fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media.

In situations in which the systems described herein collect personal information about users, or make use of personal information, the users may be provided with an opportunity to control whether programs or features collect user information (e.g., information about a user's social network, social actions or activities, profession, a user's preferences, or a user's current location). In addition, certain data may be treated in one or more ways before it is stored or used, so that personal information is removed. For example, a user's identity may be treated so that no personal information can be determined for the user, or a user's geographic location may be generalized where location information is obtained (such as to a city, ZIP code, or state level), so that a particular location of a user cannot be determined. Thus, the user may have control over how information is collected about the user and used by a content server.

Accordingly, methods, systems, and media for presenting offset content are provided.

Although the invention has been described and illustrated in the foregoing illustrative implementations, it is understood that the present disclosure has been made only by way of example, and that numerous changes in the details of implementation of the invention can be made without departing from the spirit and scope of the invention, which is limited only by the claims that follow. Features of the disclosed implementations can be combined and rearranged in various ways.
11861158

DETAILED DESCRIPTION

To make the technical problems to be resolved, technical solutions, and advantages in the present disclosure clearer, the following provides detailed descriptions with reference to the accompanying drawings and specific embodiments.

Referring to FIG. 1, FIG. 1 is a schematic flowchart of a message processing method applied to a transmit end device according to an embodiment of the present disclosure. An embodiment of the present disclosure provides a message processing method, applied to a transmit end device, where the transmit end device is an electronic device. The message processing method may include the following steps.

Step 101: The transmit end device receives a first gesture input performed on first message content, where the first gesture input is a gesture corresponding to a preset modifier.

In this embodiment of the present disclosure, a user performs instant messaging with a first preset contact of a receive end device on a communication interface by using the transmit end device, to receive and send an instant messaging message. In this step, in a case that the user needs to modify input or sent message content (the first message content), the first gesture input is performed on the first message content on the communication interface of the transmit end device, so that the transmit end device receives the first gesture input. In this way, an area that needs to be modified can be accurately positioned in the first message content, that is, a marking area of the first gesture input, so that a corresponding modification operation is performed based on the first gesture input in a subsequent step, and the user can quickly modify message content. Herein, the first message content may be message content that is being input, or may be message content that has been sent to the receive end device on the communication interface.

In this embodiment of the present disclosure, the user performs a gesture input corresponding to the preset modifier, that is, the first gesture input, so that a modification operation can be simpler, fit a paper file modification operation, and conform to an operation habit of the user. For example, the preset modifier may include at least one of a correction symbol, a deletion symbol, an addition symbol, an exchange symbol, a transfer symbol, a reservation symbol, and the like.

Step 102: The transmit end device determines a first modification manner corresponding to the first gesture input in response to the first gesture input, and performs a first modification operation corresponding to the first modification manner on a marking area of the first gesture input in the first message content to obtain modified second message content.

In this step, the transmit end device determines the first modification manner for the first message content in response to the first gesture input received in step 101, determines the marking area of the first gesture input in the first message content, and performs the first modification operation corresponding to the first modification manner on the marking area to obtain the second message content obtained after the first message content is modified. In this way, the corresponding modification manner is quickly determined based on the concise first gesture input, to quickly and accurately modify the first message content.

Step 103: The transmit end device sends the second message content to a receive end device.
In this step, the transmit end device sends the second message content obtained in step 102 to the receive end device, so that the receive end device receives and displays the second message content. Herein, if the first message content is sent message content, a display location of the second message content on a communication interface of the receive end device is an original display location of the first message content, that is, the second message content is displayed in a display location of the first message content in a message record. It can be understood that, in this case, a display location of the second message content on a communication interface of the transmit end device is the display location of the first message content in the message record. If the first message content is sent message content, there is no time limit for modifying the first message content.

In this embodiment of the present disclosure, a first gesture input corresponding to a preset modifier is received and responded to, a first modification manner corresponding to the first gesture input is determined, a first modification operation corresponding to the first modification manner is performed on a marking area of the first gesture input in first message content to obtain modified second message content, and the modified second message content is sent to a receive end device, so that a modification operation can be simpler, and message content can be modified by a user quickly and accurately, thereby improving user experience.

In some embodiments of the present disclosure, to facilitate accurate modification of the first message content by the user, before the transmit end device receives the first gesture input performed on the first message content in step 101, the method may further include the following steps: the transmit end device receives a second gesture input performed on the first message content on a communication interface, and displays the first message content through enlarging in response to the second gesture input. In this way, by performing the second gesture input on the first message content, the user can display the first message content through enlarging, so that the user can mark content that needs to be modified in the first message content. The second gesture input may include at least one of a slide input, a press input, a click input, a double-click input, a drag input, a pinch input, and an expand input.
In some embodiments of the present disclosure, that the transmit end device determines a first modification manner corresponding to the first gesture input in response to the first gesture input, and performs a first modification operation corresponding to the first modification manner on a marking area of the first gesture input in the first message content to obtain modified second message content in step 102 may include at least one of the following:

the transmit end device determines that the first modification manner is correction modification in response to the first gesture input, displays at least one piece of corrected content associated with target content in the marking area of the first gesture input, and after a selection input performed on target corrected content in the at least one piece of corrected content is received, replaces the target content with the target corrected content;

the transmit end device determines that the first modification manner is adjustment modification in response to the first gesture input, displays at least one piece of second adjustment content associated with first adjustment content, and after a selection input performed on target adjustment content in the at least one piece of second adjustment content is received, replaces the target content with the target adjustment content, where the first adjustment content is content obtained after the target content in the marking area of the first gesture input is adjusted;

the transmit end device determines that the first modification manner is addition modification in response to the first gesture input, displays an input method interface, and after added content that is input on the input method interface is received, adds the added content to the marking area of the first gesture input in the first message content; and

the transmit end device determines that the first modification manner is deletion modification in response to the first gesture input, and deletes the target content in the marking area of the first gesture input in the first message content.

For example, as shown in FIG. 3, in "Let's go to background next week" in the shown message content 321, "Beijing" is input as "background" by mistake (because if the user inputs the pinyin "beijing", Chinese characters meaning "background" may be displayed by mistake). The user may perform the first gesture input by drawing a gesture corresponding to a correction symbol in the example. In this case, the transmit end device receives the first gesture input and displays at least one piece of corrected content associated with the incorrect content (that is, target content in the message content 321); for example, the pinyin "beijing" may refer to content other than "background", such as "Beijing", "North border", or "Microscope". The user can then perform a selection input on the target corrected content "Beijing" in the at least one piece of corrected content, and the transmit end device replaces the target content "background" with the target corrected content "Beijing", to implement correction modification of the incorrect content.

In "This is very difficult" in the shown message content 322, a word "question" is omitted by mistake. The user may perform the first gesture input by drawing a gesture corresponding to a shown addition symbol.
In this case, the transmit end device receives the first gesture input and displays an input method interface, and the user may input the added content "question" by using the input method interface, so that the transmit end device adds the added content "question", to implement addition modification of missing content.

In "We will have beef brisket stew with shredded potatoes" in the shown message content 323, "potatoes" is input as "shredded potatoes" by mistake. The user may perform the first gesture input by drawing a gesture corresponding to a shown deletion symbol. In this case, the transmit end device receives the first gesture input, and deletes the redundant content (target content in the message content 323) "shredded", to implement deletion modification of redundant content.

In "I'm married" in the shown message content 332, "home" is input as "married" by mistake. The user may perform the first gesture input by drawing a gesture corresponding to a shown exchange symbol. In this case, the transmit end device receives the first gesture input, determines the first adjustment content to be adjusted as the content (target content in the message content 332) "married", and displays at least one piece of second adjustment content associated with the first adjustment content, for example, "home", "long-time", or "for long" (herein, an input character of "married" is "jh", and an input character of content such as "home" is "hj"). The user may perform a selection input on the target adjustment content "home" in the at least one piece of second adjustment content, so that the transmit end device replaces the target content "married" with the target adjustment content "home", to implement adjustment modification on adjustment content.

In the foregoing example, a communication interface between a user and a first preset contact of the receive end device may include a contact information area 31 used to display contact information, a chat information area 32 used to display message content received or sent during communication with the first preset contact, an information input area 33 used to display message content being input, and an input method interface 34 used to perform a content input. To improve interaction experience in a communication process of the user, in a communication interface display process, the input method interface 34 may be hidden or displayed, and the input method interface 34 includes a keyboard area used for a user input. In addition, the information input area 33 may display a sending key 331, to send input message content. In some examples, the sending key 331 may be hidden or cancelled, or the sending key 331 may be integrated into the input method interface 34. Herein, the message content 321, 322, and 323 are displayed in the chat information area 32, and the message content 332 is displayed in the information input area 33.
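To summarize the four modification manners in executable form, here is a hedged Kotlin sketch that models the marking area as a character range over the message text; the enum and function are constructions for this example, not the disclosure's implementation.

enum class ModificationManner { CORRECTION, ADJUSTMENT, ADDITION, DELETION }

fun applyModification(text: String, mark: IntRange, manner: ModificationManner, newContent: String = ""): String =
    when (manner) {
        // Correction and adjustment both replace the marked target content with
        // the candidate the user selected from the displayed candidates.
        ModificationManner.CORRECTION, ModificationManner.ADJUSTMENT ->
            text.replaceRange(mark, newContent)
        // Addition inserts content typed on the input method interface at the
        // marked position.
        ModificationManner.ADDITION ->
            text.substring(0, mark.first) + newContent + text.substring(mark.first)
        // Deletion removes the marked target content.
        ModificationManner.DELETION ->
            text.removeRange(mark)
    }

fun main() {
    // Deleting the redundant word, as in message content 323:
    println(applyModification("beef stew with shredded potatoes", 15..23, ModificationManner.DELETION))
    // prints: beef stew with potatoes
}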
In this embodiment of the present disclosure, considering the impact of different input manners on message content, for correction modification, the step of displaying at least one piece of corrected content associated with target content in the marking area of the first gesture input may include: determining a target input manner used on the input method interface, and determining and displaying, based on the target input manner, the at least one piece of corrected content associated with the target content. For adjustment modification, the step of displaying at least one piece of second adjustment content associated with first adjustment content may include: determining a target input manner used on the input method interface, and determining and displaying, based on the target input manner, the at least one piece of second adjustment content associated with the first adjustment content.

For example, if the target input manner is a Pinyin input manner, for correction modification, the step of displaying at least one piece of corrected content associated with target content in the marking area of the first gesture input may include: obtaining a first Pinyin character corresponding to the target content, and displaying at least one piece of corrected content associated with the first Pinyin character. For adjustment modification, the step of displaying at least one piece of second adjustment content associated with first adjustment content may include: obtaining a second Pinyin character corresponding to the first adjustment content, and displaying at least one piece of second adjustment content associated with the second Pinyin character. Herein, the first Pinyin character and the second Pinyin character may be determined based on a Pinyin character input manner the user is accustomed to or a customary Pinyin character input manner of a related word. For example, if a customary Pinyin character input manner of words such as "marriage" and "home" is a Pinyin initials manner, the corresponding Pinyin characters are "jh" and "hj". If the Pinyin input manner the user is accustomed to is a Pinyin complete spelling manner, the Pinyin characters corresponding to words such as "background" and "Beijing" are "beijing".

In some embodiments of the present disclosure, step 103 in which the transmit end device sends the second message content to the receive end device may include the following steps: if the first message content is sent message content, the transmit end device determines a reading status of the receive end device on the first message content; the transmit end device determines prompt information corresponding to the reading status based on the reading status; and the transmit end device sends the second message content and the prompt information to the receive end device. In this way, the receive end device can implement different display prompts by using the prompt information and based on the reading status of the first message content through the corresponding preset mark, thereby improving personalized experience and communication quality.

Herein, the reading status includes a read state and an unread state. The prompt information includes first prompt information corresponding to a read state of the first message content and second prompt information corresponding to an unread state of the first message content. Considering that modification of message content in an unread state has little impact on the user of the receive end, the first prompt information may include indication information used to instruct the receive end device to display a preset mark, and the second prompt information may include indication information used to instruct the receive end device not to perform a mark prompt.
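The Pinyin-based candidate generation described above might look like the following Kotlin sketch, where the candidate table is a stand-in for a real input-method dictionary (the entries echo the translated examples and are not real conversion data): the target content is mapped back to the Pinyin characters the user likely typed, and other words sharing that key become the displayed candidates.

// Hypothetical dictionary: Pinyin key -> words the key can produce.
val candidatesByPinyin: Map<String, List<String>> = mapOf(
    "beijing" to listOf("background", "Beijing", "North border", "Microscope"),
    "jh" to listOf("married", "home", "long-time", "for long")
)

// Candidates for replacing `target`, excluding the mistaken word itself.
fun replacementCandidates(pinyinKey: String, target: String): List<String> =
    candidatesByPinyin[pinyinKey].orEmpty().filterNot { it == target }

fun main() {
    println(replacementCandidates("beijing", "background"))  // [Beijing, North border, Microscope]
    println(replacementCandidates("jh", "married"))          // [home, long-time, for long]
}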
According to the message processing method applied to the transmit end device provided in this embodiment of the present disclosure, a first gesture input corresponding to a preset modifier is received and responded to, a first modification manner corresponding to the first gesture input is determined, a first modification operation corresponding to the first modification manner is performed on a marking area of the first gesture input in first message content to obtain modified second message content, and the modified second message content is sent to a receive end device, so that a modification operation can be simpler, and message content can be modified by a user quickly and accurately, thereby improving user experience.

Referring to FIG. 2, FIG. 2 is a schematic flowchart of a message processing method applied to a receive end device according to an embodiment of the present disclosure. An embodiment of the present disclosure provides a message processing method, applied to a receive end device, where the receive end device is an electronic device. The message processing method may include the following steps.

Step 201: The receive end device receives second message content obtained after first message content is modified, where the first message content is message content sent by a transmit end device.

In this step, the transmit end device modifies the sent first message content, and sends the modified second message content to the receive end device, so that the receive end device receives the second message content, and displays the modified second message content in a subsequent step.

Step 202: The receive end device displays the second message content in a display location of the first message content in a message record.

In this step, the second message content received in step 201 is displayed in an original display location of the first message content on a communication interface of the receive end device, that is, the display location of the second message content on the communication interface of the receive end device is the display location of the first message content in the message record. In this way, browsing fluency of a user on message content is facilitated without changing the original display location of the first message content, thereby improving communication quality.

In this embodiment of the present disclosure, received second message content that is obtained after first message content is modified is displayed in a display location of the first message content in a message record, to facilitate browsing fluency of a user on message content without changing an original display location of the first message content, thereby improving communication quality.

In some embodiments of the present disclosure, before the receive end device displays the second message content in the display location of the first message content in the message record in step 202, the message processing method may further include: the receive end device receives prompt information that is sent by the transmit end device and that corresponds to a reading status of the first message content. After the receive end device displays the second message content in the display location of the first message content in the message record in step 202, the message processing method may further include: the receive end device displays a preset mark corresponding to the prompt information based on the prompt information.
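Steps 201 and 202, together with the read-state prompt just introduced, can be sketched in Kotlin as an in-place update of the message record; the types are illustrative assumptions and the preset mark is reduced to a boolean flag. The second message content replaces the first at its original location, and the mark is attached only when the first content had already been read.

data class Message(
    val id: String,
    var text: String,
    var read: Boolean,
    var editedMark: Boolean = false  // stands in for the preset mark
)

fun applyEdit(record: MutableList<Message>, targetId: String, newText: String) {
    val msg = record.firstOrNull { it.id == targetId } ?: return
    msg.editedMark = msg.read  // mark only if the first content was read (first prompt information)
    msg.text = newText         // displayed at the original location in the record (step 202)
}

fun main() {
    val record = mutableListOf(Message("m1", "Let's go to background next week", read = true))
    applyEdit(record, "m1", "Let's go to Beijing next week")
    println(record.first())  // text updated in place; editedMark=true because it was read
}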
In this way, the receive end device can implement different display prompts by using the prompt information sent by the transmit end device and based on the reading status of the first message content through the corresponding preset mark, thereby improving personalized experience and communication quality. Herein, the reading status includes a read state and an unread state.

In addition, in some embodiments of the present disclosure, the receive end device may directly implement a display prompt based on the reading status of the first message content without the transmit end device sending the prompt information. That is, after step 201 in which the receive end device receives the second message content that is obtained after the first message content is modified, the receive end device may obtain the reading status of the first message content, and display the preset mark corresponding to the reading status based on the reading status of the first message content.

In this embodiment of the present disclosure, considering that modification of message content in an unread state has little impact on a user of the receive end, if the prompt information is first prompt information corresponding to the read state of the first message content, the receive end device displays the corresponding preset mark based on the first prompt information or the read state, so that the user of the receive end can accurately learn of the modification of the message content in the read state. If the prompt information is second prompt information corresponding to the unread state of the first message content, the receive end device may perform no mark prompt based on the second prompt information or the unread state, so that the user of the receive end can be prevented from being confused with incorrect information content, and the user can accurately obtain correct message content.

In some embodiments of the present disclosure, if the prompt information is first prompt information corresponding to a read state of the first message content, that the receive end device displays a preset mark corresponding to the prompt information based on the prompt information includes at least one of the following: the receive end device displays a first prompt identifier in a first predetermined area of a communication interface, where the first prompt identifier is used to switch a display location of the communication interface to a display location of the second message content; and the receive end device displays a second prompt identifier in a second predetermined area of the second message content, where the second prompt identifier is used to prompt that the second message content is modified message content.

Herein, the receive end device performs, based on the first prompt information, an identification prompt on the message content in the read state in a manner of displaying the first prompt identifier or the second prompt identifier, so that the user of the receive end can learn that a user of the transmit end has modified the message content in the read state, and can accurately learn the modified message content. In this embodiment of the present disclosure, the first prompt identifier may include a predetermined prompt icon, such as an arrow or a bubble. As shown in FIG. 4, an indication arrow 424 is used as the first prompt identifier, and a shape of the prompt icon may be preset. The second prompt identifier may be at least one of a text identifier and a background identifier.
For example, in message content 421 shown in FIG. 4, a text identifier "edited" is used as the second prompt identifier, and a second predetermined area in the shown message content 421 is a right area of a message content display area used to display the message content. In addition, a display type of the text identifier may be preset. For example, the display type of the text identifier may include at least one of a predetermined font, a predetermined color background, a predetermined pattern background, and a predetermined graph. In message content 422 shown in FIG. 4, a background identifier (represented by a dot shadow in FIG. 4) is used as the second prompt identifier, and a second predetermined area in the shown message content 422 is a message content display area used to display the message content. The background identifier includes at least one of a background color identifier, a background pattern identifier, an area display shape identifier of the message content display area, and the like. The message content 421 and 422 shown in FIG. 4 are message content in a read state, and message content 423 is message content in an unread state.

In the foregoing example, a communication interface between a user and a second preset contact of the transmit end device may include a contact information area 41 used to display contact information, a chat information area 42 used to display message content received or sent during communication with the second preset contact, an information input area 43 used to display message content being input, and an input method interface 44 used to perform a content input. To improve interaction experience in a communication process of the user, in a communication interface display process, the input method interface 44 may be hidden or displayed, and the input method interface 44 includes a keyboard area used for a user input. In addition, the information input area 43 may display a sending key 431, to send input message content. In some examples, the sending key 431 may be hidden or cancelled, or the sending key 431 may be integrated into the input method interface 44. Herein, the message content 421, 422, and 423 are displayed in the chat information area 42.

In some embodiments of the present disclosure, after the receive end device displays the corresponding preset mark based on the first prompt information or the read state, the following step may be further included: if a reading status of the second message content is a read state, cancelling the preset mark. In this way, the user of the receive end can be prevented from being confused by too many mark prompts, so that communication quality is not affected.

According to the message processing method applied to the receive end device provided in this embodiment of the present disclosure, received second message content that is obtained after first message content is modified is displayed in a display location of the first message content in a message record, to facilitate browsing fluency of a user on message content without changing an original display location of the first message content, thereby improving communication quality and user experience.

Based on the foregoing message processing method applied to the transmit end device, an embodiment of the present disclosure provides an electronic device for implementing the foregoing method. Referring to FIG. 5, FIG. 5 is a first schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
An embodiment of the present disclosure provides an electronic device 500, where the electronic device 500 is a transmit end device and may include:

a first receiving module 510, configured to receive a first gesture input performed on first message content, where the first gesture input is a gesture corresponding to a preset modifier;

a processing module 520, configured to: determine a first modification manner corresponding to the first gesture input in response to the first gesture input, and perform a first modification operation corresponding to the first modification manner on a marking area of the first gesture input in the first message content to obtain modified second message content; and

a sending module 530, configured to send the second message content to a receive end device.

In some embodiments of the present disclosure, the electronic device 500 may further include a second receiving module and an enlarging module. The second receiving module is configured to receive a second gesture input performed on the first message content on a communication interface; and the enlarging module is configured to display the first message content through enlarging in response to the second gesture input.

In some embodiments of the present disclosure, the processing module 520 may include at least one of the following: a first processing unit, a second processing unit, a third processing unit, and a fourth processing unit.

The first processing unit is configured to: determine that the first modification manner is correction modification in response to the first gesture input, display at least one piece of corrected content associated with target content in the marking area of the first gesture input, and after a selection input performed on target corrected content in the at least one piece of corrected content is received, replace the target content with the target corrected content;

the second processing unit is configured to: determine that the first modification manner is adjustment modification in response to the first gesture input, display at least one piece of second adjustment content associated with first adjustment content, and after a selection input performed on target adjustment content in the at least one piece of second adjustment content is received, replace the target content with the target adjustment content, where the first adjustment content is content obtained after the target content in the marking area of the first gesture input is adjusted;

the third processing unit is configured to: determine that the first modification manner is addition modification in response to the first gesture input, display an input method interface, and after added content that is input on the input method interface is received, add the added content to the marking area of the first gesture input in the first message content; and

the fourth processing unit is configured to: determine that the first modification manner is deletion modification in response to the first gesture input, and delete the target content in the marking area of the first gesture input in the first message content.

In some embodiments of the present disclosure, the sending module 530 may include a first determining unit, a second determining unit, and a sending unit.
In some embodiments of the present disclosure, the sending module 530 may include a first determining unit, a second determining unit, and a sending unit.

The first determining unit is configured to: if the first message content is sent message content, determine a reading status of the receive end device on the first message content;

the second determining unit is configured to determine prompt information corresponding to the reading status based on the reading status; and

the sending unit is configured to send the second message content and the prompt information to the receive end device.

The electronic device provided in this embodiment of the present disclosure can implement the processes implemented by the electronic device in the method embodiment in FIG. 1. To avoid repetition, details are not described herein again.

According to the electronic device provided in this embodiment of the present disclosure, a first receiving module and a processing module receive and respond to a first gesture input corresponding to a preset modifier, determine a first modification manner corresponding to the first gesture input, and perform a first modification operation corresponding to the first modification manner on a marking area of the first gesture input in first message content to obtain modified second message content; and a sending module sends the modified second message content to a receive end device, so that a modification operation can be simpler and message content can be modified by a user quickly and accurately, thereby improving user experience.

Based on the foregoing message processing method applied to the receive end device, an embodiment of the present disclosure provides an electronic device for implementing the foregoing method. Referring to FIG. 6, FIG. 6 is a second schematic structural diagram of an electronic device according to an embodiment of the present disclosure.

An embodiment of the present disclosure provides an electronic device 600. The electronic device 600 is a receive end device, and may include a third receiving module 610 and a first display module 620. The third receiving module 610 is configured to receive second message content obtained after first message content is modified, where the first message content is message content sent by a transmit end device; and the first display module 620 is configured to display the second message content in a display location of the first message content in a message record.

In some embodiments of the present disclosure, the electronic device 600 may further include a fourth receiving module and a second display module. The fourth receiving module is configured to receive prompt information that is sent by the transmit end device and that corresponds to a reading status of the first message content; and the second display module is configured to display a preset mark corresponding to the prompt information based on the prompt information.

In some embodiments of the present disclosure, if the prompt information is first prompt information corresponding to a read state of the first message content, the second display module may include at least one of the following: a first display unit and a second display unit.
The first display unit is configured to display a first prompt identifier in a first predetermined area of a communication interface, where the first prompt identifier is used to switch a display location of the communication interface to a display location of the second message content; and the second display unit is configured to display a second prompt identifier in a second predetermined area of the second message content, where the second prompt identifier is used to prompt that the second message content is modified message content.

The electronic device provided in this embodiment of the present disclosure can implement the processes implemented by the electronic device in the method embodiment in FIG. 2. To avoid repetition, details are not described herein again.

According to the electronic device provided in this embodiment of the present disclosure, a first display module displays, in a display location of first message content in a message record, received second message content obtained after the first message content is modified, which facilitates fluent browsing of message content by the user without changing the original display location of the first message content, thereby improving communication quality and user experience.

FIG. 7 is a schematic structural diagram of hardware of an electronic device according to the embodiments of the present disclosure. An electronic device 700 includes but is not limited to components such as a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, a processor 710, and a power supply 711. A person skilled in the art may understand that the structure of the electronic device shown in FIG. 7 does not constitute a limitation on the electronic device, and the electronic device may include more or fewer components than those shown in the figure, or have a combination of some components, or have a different component arrangement. In this embodiment of the present disclosure, the electronic device includes but is not limited to a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle terminal, a wearable device, a pedometer, and the like.

In some embodiments of the present disclosure, the electronic device is a transmit end device for receiving and sending information. The user input unit 707 is configured to receive a first gesture input performed on first message content, where the first gesture input is a gesture corresponding to a preset modifier; the processor 710 is configured to: determine a first modification manner corresponding to the first gesture input in response to the first gesture input, and perform a first modification operation corresponding to the first modification manner on a marking area of the first gesture input in the first message content to obtain modified second message content; and the radio frequency unit 701 is configured to send the second message content to a receive end device. In this embodiment of the present disclosure, a modification operation can be simpler, and message content can be modified by a user quickly and accurately, thereby improving user experience.

In some embodiments of the present disclosure, the electronic device is a receive end device for receiving and sending information.
The radio frequency unit 701 is configured to receive second message content obtained after first message content is modified, where the first message content is message content sent by a transmit end device; and the display unit 706 is configured to display the second message content in a display location of the first message content in a message record. In this embodiment of the present disclosure, fluent browsing of message content by a user can be achieved without changing the original display location of the first message content, thereby improving communication quality and user experience.

It should be understood that, in this embodiment of the present disclosure, the radio frequency unit 701 may be configured to receive and send information or a signal in a call process. Specifically, after receiving downlink data from a base station, the radio frequency unit 701 sends the downlink data to the processor 710 for processing. In addition, the radio frequency unit 701 sends uplink data to the base station. Usually, the radio frequency unit 701 includes but is not limited to an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 701 may communicate with a network and another device through a wireless communication system.

The electronic device provides wireless broadband Internet access for the user by using the network module 702, for example, helping the user to send and receive e-mails, browse web pages, and access streaming media.

The audio output unit 703 may convert audio data received by the radio frequency unit 701 or the network module 702, or stored in the memory 709, into an audio signal and output the audio signal as a sound. In addition, the audio output unit 703 may further provide an audio output (for example, a call signal reception sound or a message reception sound) related to a specific function implemented by the electronic device 700. The audio output unit 703 includes a speaker, a buzzer, a telephone receiver, and the like.

The input unit 704 is configured to receive an audio signal or a video signal. The input unit 704 may include a Graphics Processing Unit (GPU) 7041 and a microphone 7042. The graphics processing unit 7041 processes image data of a still picture or video obtained by an image capture apparatus (such as a camera) in a video capture mode or an image capture mode. A processed image frame may be displayed on the display unit 706. The image frame processed by the graphics processing unit 7041 may be stored in the memory 709 (or another storage medium) or sent by using the radio frequency unit 701 or the network module 702. The microphone 7042 may receive a sound and process the sound into audio data. Processed audio data may be converted, in a call mode, into a format that can be sent to a mobile communication base station by using the radio frequency unit 701 for output.

The electronic device 700 further includes at least one sensor 705, such as a light sensor, a motion sensor, and another sensor. Specifically, the light sensor includes an ambient light sensor and a proximity sensor. The ambient light sensor may adjust luminance of the display panel 7061 based on brightness of ambient light. The proximity sensor may turn off the display panel 7061 and/or backlight when the electronic device 700 moves to an ear.
As a type of motion sensor, an accelerometer sensor may detect an acceleration value in each direction (generally, three axes), and detect a value and a direction of gravity when the accelerometer sensor is static, and may be used for recognizing a posture of the electronic device (such as screen switching between landscape and portrait modes, a related game, or magnetometer posture calibration), a function related to vibration recognition (such as a pedometer or a knock), and the like. The sensor 705 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like. Details are not described herein.

The display unit 706 is configured to display information input by a user or information provided for a user. The display unit 706 may include a display panel 7061. The display panel 7061 may be configured in a form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.

The user input unit 707 may be configured to: receive input digital or character information, and generate key signal input related to a user setting and function control of the electronic device. Specifically, the user input unit 707 includes a touch panel 7071 and another input device 7072. The touch panel 7071, also referred to as a touchscreen, may collect a touch operation performed by a user on or near the touch panel 7071 (such as an operation performed by the user on or near the touch panel 7071 by using any proper object or accessory, such as a finger or a stylus). The touch panel 7071 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects a touch location of the user, detects a signal brought by the touch operation, and sends the signal to the touch controller. The touch controller receives touch information from the touch detection apparatus, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 710, and can receive and execute a command sent by the processor 710. In addition, the touch panel 7071 may be of a resistive type, a capacitive type, an infrared type, a surface acoustic wave type, or the like. The user input unit 707 may include the another input device 7072 in addition to the touch panel 7071. Specifically, the another input device 7072 may include but is not limited to a physical keyboard, a functional button (such as a volume control button or a power on/off button), a trackball, a mouse, and a joystick. Details are not described herein.

Further, the touch panel 7071 may cover the display panel 7061. When detecting a touch operation on or near the touch panel 7071, the touch panel 7071 transmits the touch operation to the processor 710 to determine a type of a touch event, and then the processor 710 provides corresponding visual output on the display panel 7061 based on the type of the touch event. In FIG. 7, although the touch panel 7071 and the display panel 7061 are used as two independent parts to implement input and output functions of the electronic device, in some embodiments, the touch panel 7071 and the display panel 7061 may be integrated to implement the input and output functions of the electronic device. This is not specifically limited herein.
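The touch panel flow described above (touch detection apparatus, touch controller, processor) can be sketched as stages. The following Swift sketch is a simplified illustration under assumed types; the 0.5-second long-press threshold is a hypothetical value, and the actual hardware partitioning is as described in the text.

```swift
import Foundation

// A simplified sketch of the three-stage flow: the touch detection apparatus
// reports a raw signal, the touch controller converts it to touch point
// coordinates, and the processor determines the type of the touch event.

struct RawTouchSignal {
    let x, y: Double
    let timestamp: TimeInterval
}

struct TouchPoint {
    let x, y: Double
    let timestamp: TimeInterval
}

enum TouchEvent {
    case tap(TouchPoint)
    case longPress(TouchPoint)
}

// Touch controller stage: raw detection signal -> touch point coordinates.
func touchController(_ signal: RawTouchSignal) -> TouchPoint {
    TouchPoint(x: signal.x, y: signal.y, timestamp: signal.timestamp)
}

// Processor stage: determine the type of touch event from a down/up pair.
func classify(down: TouchPoint, up: TouchPoint,
              longPressThreshold: TimeInterval = 0.5) -> TouchEvent {
    let duration = up.timestamp - down.timestamp
    return duration >= longPressThreshold ? .longPress(up) : .tap(up)
}
```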
The interface unit 708 is an interface for connecting an external apparatus with the electronic device 700. For example, the external apparatus may include a wired or wireless headphone port, an external power supply (or battery charger) port, a wired or wireless data port, a storage card port, a port used to connect to an apparatus having an identity module, an audio input/output (I/O) port, a video I/O port, a headset port, and the like. The interface unit 708 may be configured to receive input (for example, data information and power) from an external apparatus and transmit the received input to one or more elements in the electronic device 700, or may be configured to transmit data between the electronic device 700 and an external apparatus.

The memory 709 may be configured to store a software program and various data. The memory 709 may mainly include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound play function or an image play function), and the like. The data storage area may store data (such as audio data or an address book) created based on use of the mobile phone, and the like. In addition, the memory 709 may include a high-speed random access memory, and may further include a nonvolatile memory, for example, at least one magnetic disk storage device, a flash storage device, or another nonvolatile solid-state storage device.

The processor 710 is a control center of the electronic device, connects all parts of the entire electronic device by using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing a software program and/or a module stored in the memory 709 and by invoking data stored in the memory 709, so as to monitor the electronic device as a whole. The processor 710 may include one or more processing units. In some embodiments, an application processor and a modem processor may be integrated into the processor 710. The application processor mainly processes an operating system, a user interface, an application program, and the like. The modem processor mainly processes wireless communications. It can be understood that, alternatively, the modem processor may not be integrated into the processor 710.

The electronic device 700 may further include the power supply 711 (such as a battery) that supplies power to each component. In some embodiments, the power supply 711 may be logically connected to the processor 710 by using a power supply management system, so as to implement functions such as charging management, discharging management, and power consumption management by using the power supply management system. In addition, the electronic device 700 includes some function modules not shown, and details are not described herein.

An embodiment of the present disclosure further provides an electronic device, including: a processor 710, a memory 709, and a computer program that is stored in the memory 709 and that can be run on the processor 710. When the computer program is executed by the processor 710, the foregoing processes of the message processing method embodiment applied to the transmit end device are implemented, and a same technical effect can be achieved. To avoid repetition, details are not described herein again.

An embodiment of the present disclosure further provides an electronic device, including: a processor 710, a memory 709, and a computer program that is stored in the memory 709 and that can be run on the processor 710.
When the computer program is executed by the processor 710, the foregoing processes of the message processing method embodiment applied to the receive end device are implemented, and a same technical effect can be achieved. To avoid repetition, details are not described herein again.

An embodiment of the present disclosure further provides a computer-readable storage medium. The computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the foregoing processes of the message processing method embodiment applied to the transmit end device are implemented and a same technical effect can be achieved. To avoid repetition, details are not described herein again.

An embodiment of the present disclosure further provides a computer-readable storage medium. The computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the foregoing processes of the message processing method embodiment applied to the receive end device are implemented and a same technical effect can be achieved. To avoid repetition, details are not described herein again. The computer-readable storage medium includes a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.

It should be noted that, in this specification, the terms “include”, “comprise”, or any other variants are intended to cover a non-exclusive inclusion, so that a process, a method, an article, or an apparatus that includes a list of elements not only includes those elements but also includes other elements that are not expressly listed, or further includes elements inherent to such a process, method, article, or apparatus. An element limited by “includes a . . . ” does not, without more constraints, preclude the presence of additional identical elements in the process, method, article, or apparatus that includes the element.

Based on the descriptions of the foregoing implementations, a person skilled in the art may clearly understand that the method in the foregoing embodiments may be implemented by software together with a necessary universal hardware platform, or by hardware only. The technical solutions of the present disclosure essentially, or the part contributing to the prior art, may be implemented in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a hard disk, or an optical disc), and includes several instructions for instructing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present disclosure.

The embodiments of the present disclosure are described above with reference to the accompanying drawings, but the present disclosure is not limited to the above specific implementations, and the above specific implementations are only illustrative and not restrictive. Under the enlightenment of the present disclosure, those of ordinary skill in the art can make many forms without departing from the purpose of the present disclosure and the protection scope of the claims, all of which fall within the protection of the present disclosure.
DESCRIPTION OF EMBODIMENTS

Many portable electronic devices have a large number of features, including applications, controls, and content with which a user engages, as well as settings that impact the functionality of the electronic device. A user may typically use different device features depending on contextual factors such as the time of day and/or the user's location. For example, at home, a user may frequently use applications related to entertainment, media delivery, and home automation, while these applications are less frequently accessed when the user is at work. Applications related to appointment scheduling and communication may play a larger role in device usage while the user is at work, as compared with other contexts. Within a particular communications application, such as a messaging application, a user may wish to view messages from contacts the user associates with a particular setting only when the user is present within that setting. The user may prefer for a device to always establish a connection to a particular network or network type in some contexts, and to connect to that network or network type only on an opt-in basis in other contexts.

Device modes are a way to tie device features such as applications, controls, content, and settings to particular contexts (e.g., work, home, driving, or workout). A mobile device may use information such as time and device location to determine a device mode and apply and/or display a particular set of features that are desirable in that device mode. This streamlines access to device features, thereby reducing or eliminating the time spent by a user navigating user interfaces to locate desired features.

Here, improved ways to select device modes and interact with device modes are described. In some embodiments, the device displays a plurality of mode icons (e.g., on a lock screen or wake screen), and visually highlights the mode icon that corresponds to the mode automatically recommended by the device (e.g., based on time and/or device location criteria), which helps the user to select the proper mode. The user then selects the recommended mode icon or another mode icon to activate the corresponding mode. In some embodiments, while in a first device mode, the device detects an input that overrides the first mode and activates a second mode. The device performs an operation while in the second mode in response to another input, and then returns to the first mode. This method helps a user to interact with different modes. The user can easily leave a first mode, perform an operation in a second mode, and then return to the first mode.

Below, FIGS. 1A-1B, 2, and 3 provide a description of example devices. FIGS. 4A-4B and 5A-5U illustrate example user interfaces for interacting with device modes. FIGS. 6A-6C illustrate a flow diagram of a method of overriding a device mode. FIGS. 7A-7B illustrate a flow diagram of a method of recommending and activating a device mode from among a plurality of displayed mode affordances. The user interfaces in FIGS. 5A-5U are used to illustrate the processes in FIGS. 6A-6C and 7A-7B.
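As an illustration of how time and device location might combine into a mode recommendation, the following Swift sketch applies a simple rule set. The mode names follow the examples above (work, home, driving, workout); the specific rules, place categories, and hours are assumptions for illustration only, not the method recited in the embodiments.

```swift
// A minimal sketch of recommending a device mode from contextual signals.

enum DeviceMode {
    case work, home, driving, workout
}

enum KnownPlace {
    case home, office, gym, other
}

func recommendedMode(hour: Int, place: KnownPlace, isInVehicle: Bool) -> DeviceMode {
    if isInVehicle { return .driving }
    switch place {
    case .gym:    return .workout
    case .office: return .work
    case .home:   return .home
    case .other:
        // Fall back to a time-of-day heuristic when the place is unknown.
        return (9..<18).contains(hour) ? .work : .home
    }
}
```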
Example Devices

Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the various described embodiments. However, it will be apparent to one of ordinary skill in the art that the various described embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.

It will also be understood that, although the terms first, second, etc. are, in some instances, used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact could be termed a second contact, and, similarly, a second contact could be termed a first contact, without departing from the scope of the various described embodiments. The first contact and the second contact are both contacts, but they are not the same contact, unless the context clearly indicates otherwise.

The terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

As used herein, the term “if” is, optionally, construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” is, optionally, construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event],” depending on the context.

Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are described. In some embodiments, the device is a portable communications device, such as a mobile telephone, that also contains other functions, such as PDA and/or music player functions. Example embodiments of portable multifunction devices include, without limitation, the iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, California. Other portable electronic devices, such as laptops or tablet computers with touch-sensitive surfaces (e.g., touch-screen displays and/or touchpads), are, optionally, used. It should also be understood that, in some embodiments, the device is not a portable communications device, but is a desktop computer with a touch-sensitive surface (e.g., a touch-screen display and/or a touchpad).

In the discussion that follows, an electronic device that includes a display and a touch-sensitive surface is described.
It should be understood, however, that the electronic device optionally includes one or more other physical user-interface devices, such as a physical keyboard, a mouse and/or a joystick.

The device typically supports a variety of applications, such as one or more of the following: a note taking application, a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.

The various applications that are executed on the device optionally use at least one common physical user-interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface as well as corresponding information displayed on the device are, optionally, adjusted and/or varied from one application to the next and/or within a respective application. In this way, a common physical architecture (such as the touch-sensitive surface) of the device optionally supports the variety of applications with user interfaces that are intuitive and transparent to the user.

Attention is now directed toward embodiments of portable devices with touch-sensitive displays. FIG. 1A is a block diagram illustrating portable multifunction device 100 with touch-sensitive display system 112 in accordance with some embodiments. Touch-sensitive display system 112 is sometimes called a “touch screen” for convenience, and is sometimes simply called a touch-sensitive display. Device 100 includes memory 102 (which optionally includes one or more computer readable storage mediums), memory controller 122, one or more processing units (CPUs) 120, peripherals interface 118, RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, input/output (I/O) subsystem 106, other input or control devices 116, and external port 124. Device 100 optionally includes one or more optical sensors 164. Device 100 optionally includes one or more intensity sensors 165 for detecting intensity of contacts on device 100 (e.g., a touch-sensitive surface such as touch-sensitive display system 112 of device 100). Device 100 optionally includes one or more tactile output generators 163 for generating tactile outputs on device 100 (e.g., generating tactile outputs on a touch-sensitive surface such as touch-sensitive display system 112 of device 100 or touchpad 355 of device 300). These components optionally communicate over one or more communication buses or signal lines 103.

As used in the specification and claims, the term “tactile output” refers to physical displacement of a device relative to a previous position of the device, physical displacement of a component (e.g., a touch-sensitive surface) of a device relative to another component (e.g., housing) of the device, or displacement of the component relative to a center of mass of the device that will be detected by a user with the user's sense of touch.
For example, in situations where the device or the component of the device is in contact with a surface of a user that is sensitive to touch (e.g., a finger, palm, or other part of a user's hand), the tactile output generated by the physical displacement will be interpreted by the user as a tactile sensation corresponding to a perceived change in physical characteristics of the device or the component of the device. For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or trackpad) is, optionally, interpreted by the user as a “down click” or “up click” of a physical actuator button. In some cases, a user will feel a tactile sensation such as a “down click” or “up click” even when there is no movement of a physical actuator button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movements. As another example, movement of the touch-sensitive surface is, optionally, interpreted or sensed by the user as “roughness” of the touch-sensitive surface, even when there is no change in smoothness of the touch-sensitive surface. While such interpretations of touch by a user will be subject to the individualized sensory perceptions of the user, there are many sensory perceptions of touch that are common to a large majority of users. Thus, when a tactile output is described as corresponding to a particular sensory perception of a user (e.g., an “up click,” a “down click,” “roughness”), unless otherwise stated, the generated tactile output corresponds to physical displacement of the device or a component thereof that will generate the described sensory perception for a typical (or average) user. Using tactile outputs to provide haptic feedback to a user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

It should be appreciated that device 100 is only one example of a portable multifunction device, and that device 100 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of the components. The various components shown in FIG. 1A are implemented in hardware, software, firmware, or a combination thereof, including one or more signal processing and/or application specific integrated circuits.

Memory 102 optionally includes high-speed random access memory and optionally also includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Access to memory 102 by other components of device 100, such as CPU(s) 120 and the peripherals interface 118, is, optionally, controlled by memory controller 122.

Peripherals interface 118 can be used to couple input and output peripherals of the device to CPU(s) 120 and memory 102. The one or more processors 120 run or execute various software programs and/or sets of instructions stored in memory 102 to perform various functions for device 100 and to process data. In some embodiments, peripherals interface 118, CPU(s) 120, and memory controller 122 are, optionally, implemented on a single chip, such as chip 104. In some other embodiments, they are, optionally, implemented on separate chips.
RF (radio frequency) circuitry 108 receives and sends RF signals, also called electromagnetic signals. RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communications networks and other communications devices via the electromagnetic signals. RF circuitry 108 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chipset, a subscriber identity module (SIM) card, memory, and so forth. RF circuitry 108 optionally communicates with networks, such as the Internet, also referred to as the World Wide Web (WWW), an intranet and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN) and/or a metropolitan area network (MAN), and other devices by wireless communication. The wireless communication optionally uses any of a plurality of communications standards, protocols and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), Evolution, Data-Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPDA), long term evolution (LTE), near field communication (NFC), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11ac, IEEE 802.11ax, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), voice over Internet Protocol (VoIP), Wi-MAX, a protocol for e-mail (e.g., Internet message access protocol (IMAP) and/or post office protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.

Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between a user and device 100. Audio circuitry 110 receives audio data from peripherals interface 118, converts the audio data to an electrical signal, and transmits the electrical signal to speaker 111. Speaker 111 converts the electrical signal to human-audible sound waves. Audio circuitry 110 also receives electrical signals converted by microphone 113 from sound waves. Audio circuitry 110 converts the electrical signal to audio data and transmits the audio data to peripherals interface 118 for processing. Audio data is, optionally, retrieved from and/or transmitted to memory 102 and/or RF circuitry 108 by peripherals interface 118. In some embodiments, audio circuitry 110 also includes a headset jack (e.g., 212, FIG. 2). The headset jack provides an interface between audio circuitry 110 and removable audio input/output peripherals, such as output-only headphones or a headset with both output (e.g., a headphone for one or both ears) and input (e.g., a microphone).

I/O subsystem 106 couples input/output peripherals on device 100, such as touch-sensitive display system 112 and other input or control devices 116, with peripherals interface 118.
I/O subsystem 106 optionally includes display controller 156, optical sensor controller 158, intensity sensor controller 159, haptic feedback controller 161, and one or more input controllers 160 for other input or control devices. The one or more input controllers 160 receive/send electrical signals from/to other input or control devices 116. The other input or control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and so forth. In some alternate embodiments, input controller(s) 160 are, optionally, coupled with any (or none) of the following: a keyboard, infrared port, USB port, stylus, and/or a pointer device such as a mouse. The one or more buttons (e.g., 208, FIG. 2) optionally include an up/down button for volume control of speaker 111 and/or microphone 113. The one or more buttons optionally include a push button (e.g., 206, FIG. 2).

Touch-sensitive display system 112 provides an input interface and an output interface between the device and a user. Display controller 156 receives and/or sends electrical signals from/to touch-sensitive display system 112. Touch-sensitive display system 112 displays visual output to the user. The visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively termed “graphics”). In some embodiments, some or all of the visual output corresponds to user interface objects. As used herein, the term “affordance” refers to a user-interactive graphical user interface object (e.g., a graphical user interface object that is configured to respond to inputs directed toward the graphical user interface object). Examples of user-interactive graphical user interface objects include, without limitation, a button, slider, icon, selectable menu item, switch, hyperlink, or other user interface control.

Touch-sensitive display system 112 has a touch-sensitive surface, sensor or set of sensors that accepts input from the user based on haptic/tactile contact. Touch-sensitive display system 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or breaking of the contact) on touch-sensitive display system 112 and convert the detected contact into interaction with user-interface objects (e.g., one or more soft keys, icons, web pages or images) that are displayed on touch-sensitive display system 112. In an example embodiment, a point of contact between touch-sensitive display system 112 and the user corresponds to a finger of the user or a stylus.

Touch-sensitive display system 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. Touch-sensitive display system 112 and display controller 156 optionally detect contact and any movement or breaking thereof using any of a plurality of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch-sensitive display system 112. In an example embodiment, projected mutual capacitance sensing technology is used, such as that found in the iPhone®, iPod Touch®, and iPad® from Apple Inc. of Cupertino, California.
Touch-sensitive display system 112 optionally has a video resolution in excess of 100 dpi. In some embodiments, the touch screen video resolution is in excess of 400 dpi (e.g., 500 dpi, 800 dpi, or greater). The user optionally makes contact with touch-sensitive display system 112 using any suitable object or appendage, such as a stylus, a finger, and so forth. In some embodiments, the user interface is designed to work with finger-based contacts and gestures, which can be less precise than stylus-based input due to the larger area of contact of a finger on the touch screen. In some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the actions desired by the user.

In some embodiments, in addition to the touch screen, device 100 optionally includes a touchpad (not shown) for activating or deactivating particular functions. In some embodiments, the touchpad is a touch-sensitive area of the device that, unlike the touch screen, does not display visual output. The touchpad is, optionally, a touch-sensitive surface that is separate from touch-sensitive display system 112 or an extension of the touch-sensitive surface formed by the touch screen.

Device 100 also includes power system 162 for powering the various components. Power system 162 optionally includes a power management system, one or more power sources (e.g., battery, alternating current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light-emitting diode (LED)) and any other components associated with the generation, management and distribution of power in portable devices.

Device 100 optionally also includes one or more optical sensors 164. FIG. 1A shows an optical sensor coupled with optical sensor controller 158 in I/O subsystem 106. Optical sensor(s) 164 optionally include charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) phototransistors. Optical sensor(s) 164 receive light from the environment, projected through one or more lenses, and convert the light to data representing an image. In conjunction with imaging module 143 (also called a camera module), optical sensor(s) 164 optionally capture still images and/or video. In some embodiments, an optical sensor is located on the back of device 100, opposite touch-sensitive display system 112 on the front of the device, so that the touch screen is enabled for use as a viewfinder for still and/or video image acquisition. In some embodiments, another optical sensor is located on the front of the device so that the user's image is obtained (e.g., for selfies, for videoconferencing while the user views the other video conference participants on the touch screen, etc.).

Device 100 optionally also includes one or more contact intensity sensors 165. FIG. 1A shows a contact intensity sensor coupled with intensity sensor controller 159 in I/O subsystem 106. Contact intensity sensor(s) 165 optionally include one or more piezoresistive strain gauges, capacitive force sensors, electric force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors used to measure the force (or pressure) of a contact on a touch-sensitive surface). Contact intensity sensor(s) 165 receive contact intensity information (e.g., pressure information or a proxy for pressure information) from the environment.
In some embodiments, at least one contact intensity sensor is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112). In some embodiments, at least one contact intensity sensor is located on the back of device 100, opposite touch-screen display system 112, which is located on the front of device 100.

Device 100 optionally also includes one or more proximity sensors 166. FIG. 1A shows proximity sensor 166 coupled with peripherals interface 118. Alternately, proximity sensor 166 is coupled with input controller 160 in I/O subsystem 106. In some embodiments, the proximity sensor turns off and disables touch-sensitive display system 112 when the multifunction device is placed near the user's ear (e.g., when the user is making a phone call).

Device 100 optionally also includes one or more tactile output generators 163. FIG. 1A shows a tactile output generator coupled with haptic feedback controller 161 in I/O subsystem 106. Tactile output generator(s) 163 optionally include one or more electroacoustic devices such as speakers or other audio components and/or electromechanical devices that convert energy into linear motion such as a motor, solenoid, electroactive polymer, piezoelectric actuator, electrostatic actuator, or other tactile output generating component (e.g., a component that converts electrical signals into tactile outputs on the device). Tactile output generator(s) 163 receive tactile feedback generation instructions from haptic feedback module 133 and generate tactile outputs on device 100 that are capable of being sensed by a user of device 100. In some embodiments, at least one tactile output generator is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112) and, optionally, generates a tactile output by moving the touch-sensitive surface vertically (e.g., in/out of a surface of device 100) or laterally (e.g., back and forth in the same plane as a surface of device 100). In some embodiments, at least one tactile output generator sensor is located on the back of device 100, opposite touch-sensitive display system 112, which is located on the front of device 100.

Device 100 optionally also includes one or more accelerometers 167, gyroscopes 168, and/or magnetometers 169 (e.g., as part of an inertial measurement unit (IMU)) for obtaining information concerning the position (e.g., attitude) of the device. FIG. 1A shows sensors 167, 168, and 169 coupled with peripherals interface 118. Alternately, sensors 167, 168, and 169 are, optionally, coupled with an input controller 160 in I/O subsystem 106. In some embodiments, information is displayed on the touch-screen display in a portrait view or a landscape view based on an analysis of data received from the one or more accelerometers. Device 100 optionally includes a GPS (or GLONASS or other global navigation system) receiver (not shown) for obtaining information concerning the location of device 100.
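As one illustration of choosing a portrait or landscape view from accelerometer data, the following Swift sketch compares gravity components along the screen axes, with a dead-band so the orientation does not flip near the diagonal. The axis convention and the 0.2 dead-band are assumptions; returning nil keeps the current orientation.

```swift
// A minimal sketch of selecting portrait vs. landscape from accelerometer data.

enum InterfaceOrientation {
    case portrait, landscape
}

// x and y are gravity components along the device's screen axes.
func orientation(gravityX x: Double, gravityY y: Double) -> InterfaceOrientation? {
    // Require clear dominance of one axis to avoid flapping near 45 degrees.
    guard abs(abs(x) - abs(y)) > 0.2 else { return nil }
    return abs(y) > abs(x) ? .portrait : .landscape
}
```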
In some embodiments, the software components stored in memory 102 include operating system 126, communication module (or set of instructions) 128, contact/motion module (or set of instructions) 130, position module (or set of instructions) 131, graphics module (or set of instructions) 132, haptic feedback module (or set of instructions) 133, text input module (or set of instructions) 134, Global Positioning System (GPS) module (or set of instructions) 135, and applications (or sets of instructions) 136. Furthermore, in some embodiments, memory 102 stores device/global internal state 157, as shown in FIGS. 1A and 3. Device/global internal state 157 includes one or more of: active application state, indicating which applications, if any, are currently active; display state, indicating what applications, views or other information occupy various regions of touch-sensitive display system 112; sensor state, including information obtained from the device's various sensors and other input or control devices 116; and location and/or positional information concerning the device's location and/or attitude.

Operating system 126 (e.g., iOS, Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components.

Communication module 128 facilitates communication with other devices over one or more external ports 124 and also includes various software components for handling data received by RF circuitry 108 and/or external port 124. External port 124 (e.g., Universal Serial Bus (USB), FIREWIRE, etc.) is adapted for coupling directly to other devices or indirectly over a network (e.g., the Internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used in some iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, California. In some embodiments, the external port is a Lightning connector that is the same as, or similar to and/or compatible with, the Lightning connector used in some iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, California.

Contact/motion module 130 optionally detects contact with touch-sensitive display system 112 (in conjunction with display controller 156) and other touch-sensitive devices (e.g., a touchpad or physical click wheel). Contact/motion module 130 includes software components for performing various operations related to detection of contact (e.g., by a finger or by a stylus), such as determining if contact has occurred (e.g., detecting a finger-down event), determining an intensity of the contact (e.g., the force or pressure of the contact or a substitute for the force or pressure of the contact), determining if there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining if the contact has ceased (e.g., detecting a finger-up event or a break in contact). Contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or an acceleration (a change in magnitude and/or direction) of the point of contact. These operations are, optionally, applied to single contacts (e.g., one finger contacts or stylus contacts) or to multiple simultaneous contacts (e.g., “multitouch”/multiple finger contacts and/or stylus contacts). In some embodiments, contact/motion module 130 and display controller 156 detect contact on a touchpad.
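The speed and velocity determinations described above can be illustrated with finite differences over consecutive samples of the series of contact data. The sample type in the following Swift sketch is a hypothetical stand-in, not a structure defined by the disclosure.

```swift
import Foundation

// A minimal sketch of deriving speed and velocity from consecutive
// samples of contact data.

struct ContactSample {
    let x, y: Double
    let timestamp: TimeInterval
}

// Velocity (magnitude and direction) between two consecutive samples.
func velocity(from a: ContactSample, to b: ContactSample) -> (dx: Double, dy: Double)? {
    let dt = b.timestamp - a.timestamp
    guard dt > 0 else { return nil }
    return ((b.x - a.x) / dt, (b.y - a.y) / dt)
}

// Speed (magnitude only) between two consecutive samples.
func speed(from a: ContactSample, to b: ContactSample) -> Double? {
    guard let v = velocity(from: a, to: b) else { return nil }
    return (v.dx * v.dx + v.dy * v.dy).squareRoot()
}
```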
Contact/motion module 130 optionally detects a gesture input by a user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different motions, timings, and/or intensities of detected contacts). Thus, a gesture is, optionally, detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes detecting a finger-down event followed by detecting a finger-up (lift off) event at the same position (or substantially the same position) as the finger-down event (e.g., at the position of an icon). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event followed by detecting one or more finger-dragging events, and subsequently followed by detecting a finger-up (lift off) event. Similarly, tap, swipe, drag, and other gestures are optionally detected for a stylus by detecting a particular contact pattern for the stylus.

In some embodiments, detecting a finger tap gesture depends on the length of time between detecting the finger-down event and the finger-up event, but is independent of the intensity of the finger contact between detecting the finger-down event and the finger-up event. In some embodiments, a tap gesture is detected in accordance with a determination that the length of time between the finger-down event and the finger-up event is less than a predetermined value (e.g., less than 0.1, 0.2, 0.3, 0.4 or 0.5 seconds), independent of whether the intensity of the finger contact during the tap meets a given intensity threshold (greater than a nominal contact-detection intensity threshold), such as a light press or deep press intensity threshold. Thus, a finger tap gesture can satisfy particular input criteria that do not require that the characteristic intensity of a contact satisfy a given intensity threshold in order for the particular input criteria to be met. For clarity, the finger contact in a tap gesture typically needs to satisfy a nominal contact-detection intensity threshold, below which the contact is not detected, in order for the finger-down event to be detected. A similar analysis applies to detecting a tap gesture by a stylus or other contact. In cases where the device is capable of detecting a finger or stylus contact hovering over a touch-sensitive surface, the nominal contact-detection intensity threshold optionally does not correspond to physical contact between the finger or stylus and the touch-sensitive surface.

The same concepts apply in an analogous manner to other types of gestures. For example, a swipe gesture, a pinch gesture, a depinch gesture, and/or a long press gesture are optionally detected based on the satisfaction of criteria that are either independent of intensities of contacts included in the gesture, or do not require that contact(s) that perform the gesture reach intensity thresholds in order to be recognized. For example, a swipe gesture is detected based on an amount of movement of one or more contacts; a pinch gesture is detected based on movement of two or more contacts towards each other; a depinch gesture is detected based on movement of two or more contacts away from each other; and a long press gesture is detected based on a duration of the contact on the touch-sensitive surface with less than a threshold amount of movement.
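A minimal sketch of the intensity-independent criteria just described: a swipe is recognized from the amount of movement alone, and a tap from a down/up pair within a time threshold with little movement, regardless of contact intensity. The 0.3-second and 10-point thresholds below are illustrative assumptions in the spirit of the example values above.

```swift
import Foundation

// A minimal sketch of intensity-independent tap and swipe recognition.

struct FingerEvent {
    let x, y: Double
    let timestamp: TimeInterval
}

enum RecognizedGesture {
    case tap, swipe, none
}

func recognize(down: FingerEvent, up: FingerEvent,
               tapDuration: TimeInterval = 0.3,
               movementThreshold: Double = 10.0) -> RecognizedGesture {
    let dx = up.x - down.x, dy = up.y - down.y
    let distance = (dx * dx + dy * dy).squareRoot()
    if distance > movementThreshold { return .swipe }              // movement criterion
    if up.timestamp - down.timestamp < tapDuration { return .tap } // time criterion
    return .none
}
```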
As such, the statement that particular gesture recognition criteria do not require that the intensity of the contact(s) meet a respective intensity threshold in order for the particular gesture recognition criteria to be met means that the particular gesture recognition criteria are capable of being satisfied if the contact(s) in the gesture do not reach the respective intensity threshold, and are also capable of being satisfied in circumstances where one or more of the contacts in the gesture do reach or exceed the respective intensity threshold. In some embodiments, a tap gesture is detected based on a determination that the finger-down and finger-up events are detected within a predefined time period, without regard to whether the contact is above or below the respective intensity threshold during the predefined time period, and a swipe gesture is detected based on a determination that the contact movement is greater than a predefined magnitude, even if the contact is above the respective intensity threshold at the end of the contact movement. Even in implementations where detection of a gesture is influenced by the intensity of contacts performing the gesture (e.g., the device detects a long press more quickly when the intensity of the contact is above an intensity threshold, or delays detection of a tap input when the intensity of the contact is higher), the detection of those gestures does not require that the contacts reach a particular intensity threshold so long as the criteria for recognizing the gesture can be met in circumstances where the contact does not reach the particular intensity threshold (e.g., even if the amount of time that it takes to recognize the gesture changes).

Contact intensity thresholds, duration thresholds, and movement thresholds are, in some circumstances, combined in a variety of different combinations in order to create heuristics for distinguishing two or more different gestures directed to the same input element or region, so that multiple different interactions with the same input element are enabled to provide a richer set of user interactions and responses. The statement that a particular set of gesture recognition criteria do not require that the intensity of the contact(s) meet a respective intensity threshold in order for the particular gesture recognition criteria to be met does not preclude the concurrent evaluation of other intensity-dependent gesture recognition criteria to identify other gestures that do have criteria that are met when a gesture includes a contact with an intensity above the respective intensity threshold. For example, in some circumstances, first gesture recognition criteria for a first gesture (which do not require that the intensity of the contact(s) meet a respective intensity threshold in order for the first gesture recognition criteria to be met) are in competition with second gesture recognition criteria for a second gesture (which are dependent on the contact(s) reaching the respective intensity threshold). In such competitions, the gesture is, optionally, not recognized as meeting the first gesture recognition criteria for the first gesture if the second gesture recognition criteria for the second gesture are met first. For example, if a contact reaches the respective intensity threshold before the contact moves by a predefined amount of movement, a deep press gesture is detected rather than a swipe gesture.
Conversely, if the contact moves by the predefined amount of movement before the contact reaches the respective intensity threshold, a swipe gesture is detected rather than a deep press gesture. Even in such circumstances, the first gesture recognition criteria for the first gesture still do not require that the intensity of the contact(s) meet a respective intensity threshold in order for the first gesture recognition criteria to be met because if the contact stayed below the respective intensity threshold until an end of the gesture (e.g., a swipe gesture with a contact that does not increase to an intensity above the respective intensity threshold), the gesture would have been recognized by the first gesture recognition criteria as a swipe gesture. As such, particular gesture recognition criteria that do not require that the intensity of the contact(s) meet a respective intensity threshold in order for the particular gesture recognition criteria to be met will (A) in some circumstances ignore the intensity of the contact with respect to the intensity threshold (e.g., for a tap gesture) and/or (B) in some circumstances still be dependent on the intensity of the contact with respect to the intensity threshold in the sense that the particular gesture recognition criteria (e.g., for a long press gesture) will fail if a competing set of intensity-dependent gesture recognition criteria (e.g., for a deep press gesture) recognize an input as corresponding to an intensity-dependent gesture before the particular gesture recognition criteria recognize a gesture corresponding to the input (e.g., for a long press gesture that is competing with a deep press gesture for recognition). Position module131, in conjunction with accelerometers167, gyroscopes168, and/or magnetometers169, optionally detects positional information concerning the device, such as the device's attitude (roll, pitch, and/or yaw) in a particular frame of reference. Position module131includes software components for performing various operations related to detecting the position of the device and detecting changes to the position of the device. In some embodiments, position module131uses information received from a stylus being used with the device to detect positional information concerning the stylus, such as detecting the positional state of the stylus relative to the device and detecting changes to the positional state of the stylus. Graphics module132includes various known software components for rendering and displaying graphics on touch-sensitive display system112or other display, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast or other visual property) of graphics that are displayed. As used herein, the term “graphics” includes any object that can be displayed to a user, including without limitation text, web pages, icons (such as user-interface objects including soft keys), digital images, videos, animations and the like. In some embodiments, graphics module132stores data representing graphics to be used. Each graphic is, optionally, assigned a corresponding code. Graphics module132receives, from applications etc., one or more codes specifying graphics to be displayed along with, if necessary, coordinate data and other graphic property data, and then generates screen image data to output to display controller156.
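As a rough, hypothetical illustration of the code-to-graphic dispatch just described, the sketch below maps assigned codes to drawing routines and assembles the results into stand-in screen image data; the registry layout and draw calls are assumptions, not the module's actual interface.

    class GraphicsModuleSketch:
        def __init__(self):
            self.registry = {}  # graphic code -> callable that draws it

        def assign(self, code, draw_fn):
            # Each graphic is assigned a corresponding code.
            self.registry[code] = draw_fn

        def render(self, requests):
            # requests: iterable of (code, properties) pairs received from
            # applications, where properties may carry coordinate data and
            # other graphic property data.
            frame = [self.registry[code](**props) for code, props in requests]
            return frame  # stand-in for screen image data for the display

    # Hypothetical usage: register a text drawer and render one frame.
    gm = GraphicsModuleSketch()
    gm.assign("text", lambda x, y, s: f"draw '{s}' at ({x}, {y})")
    print(gm.render([("text", {"x": 10, "y": 20, "s": "Hello"})]))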
Haptic feedback module133includes various software components for generating instructions used by tactile output generator(s)163to produce tactile outputs at one or more locations on device100in response to user interactions with device100. Text input module134, which is, optionally, a component of graphics module132, provides soft keyboards for entering text in various applications (e.g., contacts module137, e-mail module140, IM module141, reminder module142, and any other application that needs text input). GPS module135determines the location of the device and provides this information for use in various applications (e.g., to telephone module138for use in location-based dialing, to camera module143as picture/video metadata, and to applications that provide location-based services such as weather widgets, local yellow page widgets, and map/navigation widgets). Applications136optionally include the following modules (or sets of instructions), or a subset or superset thereof:

contacts module137(sometimes called an address book or contact list);
telephone module138;
video conferencing module139;
e-mail client module140;
instant messaging (IM) module141;
reminders module142;
camera module143for still and/or video images;
image management module144;
browser module147;
calendar module148;
widget modules149, which optionally include one or more of: weather widget149-1, stocks widget149-2, calculator widget149-3, alarm clock widget149-4, dictionary widget149-5, and other widgets obtained by the user, as well as user-created widgets149-6;
widget creator module150for making user-created widgets149-6;
search module151;
video and music player module152, which is, optionally, made up of a video player module and a music player module;
notes module153;
map module154; and/or
online video module155.

Examples of other applications136that are, optionally, stored in memory102include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, voice replication, meeting information, location sharing, document reader (e.g., book reader), and/or accessory control (e.g., home accessory control). In conjunction with touch-sensitive display system112, display controller156, contact module130, graphics module132, and text input module134, contacts module137includes executable instructions to manage an address book or contact list (e.g., stored in application internal state192of contacts module137in memory102or memory370), including: adding name(s) to the address book; deleting name(s) from the address book; associating telephone number(s), e-mail address(es), physical address(es) or other information with a name; associating an image with a name; categorizing and sorting names; providing telephone numbers and/or e-mail addresses to initiate and/or facilitate communications by telephone module138, video conference module139, e-mail module140, or IM module141; and so forth.
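Purely as an illustration of the contact-management operations listed above, here is a toy address-book sketch in Python; the method and field names are assumptions, not the data model of contacts module137.

    class AddressBookSketch:
        def __init__(self):
            self.entries = {}  # name -> dict of associated information

        def add(self, name, **info):
            self.entries[name] = dict(info)   # add a name

        def delete(self, name):
            self.entries.pop(name, None)      # delete a name

        def associate(self, name, **info):
            # Associate telephone numbers, e-mail addresses, images, etc.
            self.entries.setdefault(name, {}).update(info)

        def sorted_names(self):
            return sorted(self.entries)       # categorize/sort names

        def lookup(self, name, field):
            # E.g., lookup("Alice", "phone") to provide a number that a
            # dialing or messaging component could use.
            return self.entries.get(name, {}).get(field)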
In conjunction with RF circuitry108, audio circuitry110, speaker111, microphone113, touch-sensitive display system112, display controller156, contact module130, graphics module132, and text input module134, telephone module138includes executable instructions to enter a sequence of characters corresponding to a telephone number, access one or more telephone numbers in address book137, modify a telephone number that has been entered, dial a respective telephone number, conduct a conversation and disconnect or hang up when the conversation is completed. As noted above, the wireless communication optionally uses any of a plurality of communications standards, protocols and technologies. In conjunction with RF circuitry108, audio circuitry110, speaker111, microphone113, touch-sensitive display system112, display controller156, optical sensor(s)164, optical sensor controller158, contact module130, graphics module132, text input module134, contact list137, and telephone module138, videoconferencing module139includes executable instructions to initiate, conduct, and terminate a video conference between a user and one or more other participants in accordance with user instructions. In conjunction with RF circuitry108, touch-sensitive display system112, display controller156, contact module130, graphics module132, and text input module134, e-mail client module140includes executable instructions to create, send, receive, and manage e-mail in response to user instructions. In conjunction with image management module144, e-mail client module140makes it very easy to create and send e-mails with still or video images taken with camera module143. In conjunction with RF circuitry108, touch-sensitive display system112, display controller156, contact module130, graphics module132, and text input module134, the instant messaging module141includes executable instructions to enter a sequence of characters corresponding to an instant message, to modify previously entered characters, to transmit a respective instant message (for example, using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for telephony-based instant messages or using XMPP, SIMPLE, Apple Push Notification Service (APNs) or IMPS for Internet-based instant messages), to receive instant messages and to view received instant messages. In some embodiments, transmitted and/or received instant messages optionally include graphics, photos, audio files, video files and/or other attachments as are supported in a MMS and/or an Enhanced Messaging Service (EMS). As used herein, “instant messaging” refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and Internet-based messages (e.g., messages sent using XMPP, SIMPLE, APNs, or IMPS). In conjunction with RF circuitry108, touch-sensitive display system112, display controller156, contact module130, graphics module132, text input module134, and GPS module135, reminders module142includes executable instructions to set reminders. In conjunction with touch-sensitive display system112, display controller156, optical sensor(s)164, optical sensor controller158, contact module130, graphics module132, and image management module144, camera module143includes executable instructions to capture still images or video (including a video stream) and store them into memory102, modify characteristics of a still image or video, and/or delete a still image or video from memory102.
In conjunction with touch-sensitive display system112, display controller156, contact module130, graphics module132, text input module134, and camera module143, image management module144includes executable instructions to arrange, modify (e.g., edit), or otherwise manipulate, label, delete, present (e.g., in a digital slide show or album), and store still and/or video images. In conjunction with RF circuitry108, touch-sensitive display system112, display system controller156, contact module130, graphics module132, and text input module134, browser module147includes executable instructions to browse the Internet in accordance with user instructions, including searching, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages. In conjunction with RF circuitry108, touch-sensitive display system112, display system controller156, contact module130, graphics module132, text input module134, e-mail client module140, and browser module147, calendar module148includes executable instructions to create, display, modify, and store calendars and data associated with calendars (e.g., calendar entries, to do lists, etc.) in accordance with user instructions. In conjunction with RF circuitry108, touch-sensitive display system112, display system controller156, contact module130, graphics module132, text input module134, and browser module147, widget modules149are mini-applications that are, optionally, downloaded and used by a user (e.g., weather widget149-1, stocks widget149-2, calculator widget149-3, alarm clock widget149-4, and dictionary widget149-5) or created by the user (e.g., user-created widget149-6). In some embodiments, a widget includes an HTML (Hypertext Markup Language) file, a CSS (Cascading Style Sheets) file, and a JavaScript file. In some embodiments, a widget includes an XML (Extensible Markup Language) file and a JavaScript file (e.g., Yahoo! Widgets). In conjunction with RF circuitry108, touch-sensitive display system112, display system controller156, contact module130, graphics module132, text input module134, and browser module147, the widget creator module150includes executable instructions to create widgets (e.g., turning a user-specified portion of a web page into a widget). In conjunction with touch-sensitive display system112, display system controller156, contact module130, graphics module132, and text input module134, search module151includes executable instructions to search for text, music, sound, image, video, and/or other files in memory102that match one or more search criteria (e.g., one or more user-specified search terms) in accordance with user instructions. In conjunction with touch-sensitive display system112, display system controller156, contact module130, graphics module132, audio circuitry110, speaker111, RF circuitry108, and browser module147, video and music player module152includes executable instructions that allow the user to download and play back recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, and executable instructions to display, present or otherwise play back videos (e.g., on touch-sensitive display system112, or on an external display connected wirelessly or via external port124). In some embodiments, device100optionally includes the functionality of an MP3 player, such as an iPod (trademark of Apple Inc.).
In conjunction with touch-sensitive display system112, display controller156, contact module130, graphics module132, and text input module134, notes module153includes executable instructions to create and manage notes, to do lists, and the like in accordance with user instructions. In conjunction with RF circuitry108, touch-sensitive display system112, display system controller156, contact module130, graphics module132, text input module134, GPS module135, and browser module147, map module154includes executable instructions to receive, display, modify, and store maps and data associated with maps (e.g., driving directions; data on stores and other points of interest at or near a particular location; and other location-based data) in accordance with user instructions. In conjunction with touch-sensitive display system112, display system controller156, contact module130, graphics module132, audio circuitry110, speaker111, RF circuitry108, text input module134, e-mail client module140, and browser module147, online video module155includes executable instructions that allow the user to access, browse, receive (e.g., by streaming and/or download), play back (e.g., on the touch screen112, or on an external display connected wirelessly or via external port124), send an e-mail with a link to a particular online video, and otherwise manage online videos in one or more file formats, such as H.264. In some embodiments, instant messaging module141, rather than e-mail client module140, is used to send a link to a particular online video. Each of the above identified modules and applications corresponds to a set of executable instructions for performing one or more functions described above and the methods described in this application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules are, optionally, combined or otherwise re-arranged in various embodiments. In some embodiments, memory102optionally stores a subset of the modules and data structures identified above. Furthermore, memory102optionally stores additional modules and data structures not described above. In some embodiments, device100is a device where operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or a touchpad. By using a touch screen and/or a touchpad as the primary input control device for operation of device100, the number of physical input control devices (such as push buttons, dials, and the like) on device100is, optionally, reduced. The predefined set of functions that are performed exclusively through a touch screen and/or a touchpad optionally include navigation between user interfaces. In some embodiments, the touchpad, when touched by the user, navigates device100to a main, home, or root menu from any user interface that is displayed on device100. In such embodiments, a “menu button” is implemented using a touchpad. In some other embodiments, the menu button is a physical push button or other physical input control device instead of a touchpad. FIG.1Bis a block diagram illustrating example components for event handling in accordance with some embodiments. In some embodiments, memory102(inFIG.1A) or370(FIG.3) includes event sorter170(e.g., in operating system126) and a respective application136-1(e.g., any of the aforementioned applications136,137-155,380-390).
Event sorter170receives event information and determines the application136-1and application view191of application136-1to which to deliver the event information. Event sorter170includes event monitor171and event dispatcher module174. In some embodiments, application136-1includes application internal state192, which indicates the current application view(s) displayed on touch-sensitive display system112when the application is active or executing. In some embodiments, device/global internal state157is used by event sorter170to determine which application(s) is (are) currently active, and application internal state192is used by event sorter170to determine application views191to which to deliver event information. In some embodiments, application internal state192includes additional information, such as one or more of: resume information to be used when application136-1resumes execution, user interface state information that indicates information being displayed or that is ready for display by application136-1, a state queue for enabling the user to go back to a prior state or view of application136-1, and a redo/undo queue of previous actions taken by the user. Event monitor171receives event information from peripherals interface118. Event information includes information about a sub-event (e.g., a user touch on touch-sensitive display system112, as part of a multi-touch gesture). Peripherals interface118transmits information it receives from I/O subsystem106or a sensor, such as proximity sensor166, accelerometer(s)167, gyroscope(s)168, magnetometer(s)169, and/or microphone113(through audio circuitry110). Information that peripherals interface118receives from I/O subsystem106includes information from touch-sensitive display system112or a touch-sensitive surface. In some embodiments, event monitor171sends requests to the peripherals interface118at predetermined intervals. In response, peripherals interface118transmits event information. In other embodiments, peripheral interface118transmits event information only when there is a significant event (e.g., receiving an input above a predetermined noise threshold and/or for more than a predetermined duration). In some embodiments, event sorter170also includes a hit view determination module172and/or an active event recognizer determination module173. Hit view determination module172provides software procedures for determining where a sub-event has taken place within one or more views, when touch-sensitive display system112displays more than one view. Views are made up of controls and other elements that a user can see on the display. Another aspect of the user interface associated with an application is a set of views, sometimes herein called application views or user interface windows, in which information is displayed and touch-based gestures occur. The application views (of a respective application) in which a touch is detected optionally correspond to programmatic levels within a programmatic or view hierarchy of the application. For example, the lowest level view in which a touch is detected is, optionally, called the hit view, and the set of events that are recognized as proper inputs are, optionally, determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture. Hit view determination module172receives information related to sub-events of a touch-based gesture. 
When an application has multiple views organized in a hierarchy, hit view determination module172identifies a hit view as the lowest view in the hierarchy which should handle the sub-event. In most circumstances, the hit view is the lowest level view in which an initiating sub-event occurs (i.e., the first sub-event in the sequence of sub-events that form an event or potential event). Once the hit view is identified by the hit view determination module, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view. Active event recognizer determination module173determines which view or views within a view hierarchy should receive a particular sequence of sub-events. In some embodiments, active event recognizer determination module173determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module173determines that all views that include the physical location of a sub-event are actively involved views, and therefore determines that all actively involved views should receive a particular sequence of sub-events. In other embodiments, even if touch sub-events were entirely confined to the area associated with one particular view, views higher in the hierarchy would still remain as actively involved views. Event dispatcher module174dispatches the event information to an event recognizer (e.g., event recognizer180). In embodiments including active event recognizer determination module173, event dispatcher module174delivers the event information to an event recognizer determined by active event recognizer determination module173. In some embodiments, event dispatcher module174stores in an event queue the event information, which is retrieved by a respective event receiver module182. In some embodiments, operating system126includes event sorter170. Alternatively, application136-1includes event sorter170. In yet other embodiments, event sorter170is a stand-alone module, or a part of another module stored in memory102, such as contact/motion module130. In some embodiments, application136-1includes a plurality of event handlers190and one or more application views191, each of which includes instructions for handling touch events that occur within a respective view of the application's user interface. Each application view191of the application136-1includes one or more event recognizers180. Typically, a respective application view191includes a plurality of event recognizers180. In other embodiments, one or more of event recognizers180are part of a separate module, such as a user interface kit (not shown) or a higher level object from which application136-1inherits methods and other properties. In some embodiments, a respective event handler190includes one or more of: data updater176, object updater177, GUI updater178, and/or event data179received from event sorter170. Event handler190optionally utilizes or calls data updater176, object updater177or GUI updater178to update the application internal state192. Alternatively, one or more of the application views191includes one or more respective event handlers190. Also, in some embodiments, one or more of data updater176, object updater177, and GUI updater178are included in a respective application view191. A respective event recognizer180receives event information (e.g., event data179) from event sorter170, and identifies an event from the event information. 
Event recognizer180includes event receiver182and event comparator184. In some embodiments, event recognizer180also includes at least a subset of: metadata183, and event delivery instructions188(which optionally include sub-event delivery instructions). Event receiver182receives event information from event sorter170. The event information includes information about a sub-event, for example, a touch or a touch movement. Depending on the sub-event, the event information also includes additional information, such as location of the sub-event. When the sub-event concerns motion of a touch, the event information optionally also includes speed and direction of the sub-event. In some embodiments, events include rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about the current orientation (also called device attitude) of the device. Event comparator184compares the event information to predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event. In some embodiments, event comparator184includes event definitions186. Event definitions186contain definitions of events (e.g., predefined sequences of sub-events), for example, event1(187-1), event2(187-2), and others. In some embodiments, sub-events in an event187include, for example, touch begin, touch end, touch movement, touch cancellation, and multiple touching. In one example, the definition for event1(187-1) is a double tap on a displayed object. The double tap, for example, comprises a first touch (touch begin) on the displayed object for a predetermined phase, a first lift-off (touch end) for a predetermined phase, a second touch (touch begin) on the displayed object for a predetermined phase, and a second lift-off (touch end) for a predetermined phase. In another example, the definition for event2(187-2) is a dragging on a displayed object. The dragging, for example, comprises a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across touch-sensitive display system112, and lift-off of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers190. In some embodiments, event definition187includes a definition of an event for a respective user-interface object. In some embodiments, event comparator184performs a hit test to determine which user-interface object is associated with a sub-event. For example, in an application view in which three user-interface objects are displayed on touch-sensitive display system112, when a touch is detected on touch-sensitive display system112, event comparator184performs a hit test to determine which of the three user-interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler190, the event comparator uses the result of the hit test to determine which event handler190should be activated. For example, event comparator184selects an event handler associated with the sub-event and the object triggering the hit test. In some embodiments, the definition for a respective event187also includes delayed actions that delay delivery of the event information until after it has been determined whether the sequence of sub-events does or does not correspond to the event recognizer's event type. 
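To make the sub-event matching concrete, the following hypothetical sketch checks an incoming sub-event stream against a double-tap-style definition and tracks a simple recognizer state, including the failure behavior described next; the sub-event names and states are illustrative, and the predetermined-phase timing is omitted for brevity.

    DOUBLE_TAP_DEFINITION = ["touch_begin", "touch_end",
                             "touch_begin", "touch_end"]

    class EventRecognizerSketch:
        def __init__(self, definition):
            self.definition = definition
            self.index = 0
            self.state = "possible"

        def feed(self, sub_event):
            # Compare the next sub-event against the event definition.
            if self.state != "possible":
                return self.state  # failed/recognized: ignore further input
            if sub_event == self.definition[self.index]:
                self.index += 1
                if self.index == len(self.definition):
                    self.state = "recognized"  # full sequence matched
            else:
                # Mismatch: enter a failed state and disregard subsequent
                # sub-events of this touch-based gesture.
                self.state = "failed"
            return self.state

    # Hypothetical usage: a completed double tap is recognized.
    r = EventRecognizerSketch(DOUBLE_TAP_DEFINITION)
    for s in DOUBLE_TAP_DEFINITION:
        r.feed(s)
    print(r.state)  # "recognized"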
When a respective event recognizer180determines that the series of sub-events do not match any of the events in event definitions186, the respective event recognizer180enters an event impossible, event failed, or event ended state, after which it disregards subsequent sub-events of the touch-based gesture. In this situation, other event recognizers, if any, that remain active for the hit view continue to track and process sub-events of an ongoing touch-based gesture. In some embodiments, a respective event recognizer180includes metadata183with configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively involved event recognizers. In some embodiments, metadata183includes configurable properties, flags, and/or lists that indicate how event recognizers interact, or are enabled to interact, with one another. In some embodiments, metadata183includes configurable properties, flags, and/or lists that indicate whether sub-events are delivered to varying levels in the view or programmatic hierarchy. In some embodiments, a respective event recognizer180activates event handler190associated with an event when one or more particular sub-events of an event are recognized. In some embodiments, a respective event recognizer180delivers event information associated with the event to event handler190. Activating an event handler190is distinct from sending (and deferred sending) sub-events to a respective hit view. In some embodiments, event recognizer180throws a flag associated with the recognized event, and event handler190associated with the flag catches the flag and performs a predefined process. In some embodiments, event delivery instructions188include sub-event delivery instructions that deliver event information about a sub-event without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the series of sub-events or to actively involved views. Event handlers associated with the series of sub-events or with actively involved views receive the event information and perform a predetermined process. In some embodiments, data updater176creates and updates data used in application136-1. For example, data updater176updates the telephone number used in contacts module137, or stores a video file used in video player module145. In some embodiments, object updater177creates and updates objects used in application136-1. For example, object updater177creates a new user-interface object or updates the position of a user-interface object. GUI updater178updates the GUI. For example, GUI updater178prepares display information and sends it to graphics module132for display on a touch-sensitive display. In some embodiments, event handler(s)190includes or has access to data updater176, object updater177, and GUI updater178. In some embodiments, data updater176, object updater177, and GUI updater178are included in a single module of a respective application136-1or application view191. In other embodiments, they are included in two or more software modules. It shall be understood that the foregoing discussion regarding event handling of user touches on touch-sensitive displays also applies to other forms of user inputs to operate multifunction devices100with input-devices, not all of which are initiated on touch screens. 
For example, mouse movement and mouse button presses, optionally coordinated with single or multiple keyboard presses or holds; contact movements such as taps, drags, scrolls, etc., on touch-pads; pen stylus inputs; movement of the device; oral instructions; detected eye movements; biometric inputs; and/or any combination thereof are optionally utilized as inputs corresponding to sub-events which define an event to be recognized. FIG.2illustrates a portable multifunction device100having a touch screen (e.g., touch-sensitive display system112,FIG.1A) in accordance with some embodiments. The touch screen optionally displays one or more graphics within user interface (UI)200. In this embodiment, as well as others described below, a user is enabled to select one or more of the graphics by making a gesture on the graphics, for example, with one or more fingers202(not drawn to scale in the figure) or one or more styluses203(not drawn to scale in the figure). In some embodiments, selection of one or more graphics occurs when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (from left to right, right to left, upward and/or downward) and/or a rolling of a finger (from right to left, left to right, upward and/or downward) that has made contact with device100. In some implementations or circumstances, inadvertent contact with a graphic does not select the graphic. For example, a swipe gesture that sweeps over an application icon optionally does not select the corresponding application when the gesture corresponding to selection is a tap. Device100optionally also includes one or more physical buttons, such as “home” or menu button204. As described previously, menu button204is, optionally, used to navigate to any application136in a set of applications that are, optionally, executed on device100. Alternatively, in some embodiments, the menu button is implemented as a soft key in a GUI displayed on the touch-screen display. In some embodiments, device100includes the touch-screen display, menu button204, push button206for powering the device on/off and locking the device, volume adjustment button(s)208, Subscriber Identity Module (SIM) card slot210, head set jack212, and docking/charging external port124. Push button206is, optionally, used to turn the power on/off on the device by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlock process. In some embodiments, device100also accepts verbal input for activation or deactivation of some functions through microphone113. Device100also, optionally, includes one or more contact intensity sensors165for detecting intensity of contacts on touch-sensitive display system112and/or one or more tactile output generators163for generating tactile outputs for a user of device100. FIG.3is a block diagram of an example multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. Device300need not be portable. In some embodiments, device300is a laptop computer, a desktop computer, a tablet computer, a multimedia player device, a navigation device, an educational device (such as a child's learning toy), a gaming system, or a control device (e.g., a home or industrial controller).
Device300typically includes one or more processing units (CPU's)310, one or more network or other communications interfaces360, memory370, and one or more communication buses320for interconnecting these components. Communication buses320optionally include circuitry (sometimes called a chipset) that interconnects and controls communications between system components. Device300includes input/output (I/O) interface330comprising display340, which is typically a touch-screen display. I/O interface330also optionally includes a keyboard and/or mouse (or other pointing device)350and touchpad355, tactile output generator357for generating tactile outputs on device300(e.g., similar to tactile output generator(s)163described above with reference toFIG.1A), sensors359(e.g., touch-sensitive, optical, contact intensity, proximity, acceleration, attitude, and/or magnetic sensors similar to sensors112,164,165,166,167,168, and169described above with reference toFIG.1A). Memory370includes high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory370optionally includes one or more storage devices remotely located from CPU(s)310. In some embodiments, memory370stores programs, modules, and data structures analogous to the programs, modules, and data structures stored in memory102of portable multifunction device100(FIG.1A), or a subset thereof. Furthermore, memory370optionally stores additional programs, modules, and data structures not present in memory102of portable multifunction device100. For example, memory370of device300optionally stores drawing module380, presentation module382, word processing module384, website creation module386, disk authoring module388, and/or spreadsheet module390, while memory102of portable multifunction device100(FIG.1A) optionally does not store these modules. Each of the above identified elements inFIG.3is, optionally, stored in one or more of the previously mentioned memory devices. Each of the above identified modules corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules are, optionally, combined or otherwise re-arranged in various embodiments. In some embodiments, memory370optionally stores a subset of the modules and data structures identified above. Furthermore, memory370optionally stores additional modules and data structures not described above. Attention is now directed towards embodiments of user interfaces (“UI”) that are, optionally, implemented on portable multifunction device100. FIG.4Aillustrates an example user interface for a menu of applications on portable multifunction device100in accordance with some embodiments. Similar user interfaces are, optionally, implemented on device300.
In some embodiments, user interface400includes the following elements, or a subset or superset thereof:

Signal strength indicator(s)402for wireless communication(s), such as cellular and Wi-Fi signals;
Time404;
Bluetooth indicator405;
Battery status indicator406;
Tray408with icons for frequently used applications, such as:
    Icon416for telephone module138, labeled “Phone,” which optionally includes an indicator414of the number of missed calls or voicemail messages;
    Icon418for e-mail client module140, labeled “Mail,” which optionally includes an indicator410of the number of unread e-mails;
    Icon420for browser module147, labeled “Browser;” and
    Icon422for video and music player module152, also referred to as iPod (trademark of Apple Inc.) module152, labeled “iPod;” and
Icons for other applications, such as:
    Icon424for IM module141, labeled “Messages;”
    Icon426for calendar module148, labeled “Calendar;”
    Icon428for image management module144, labeled “Photos;”
    Icon430for camera module143, labeled “Camera;”
    Icon432for online video module155, labeled “Online Video;”
    Icon434for stocks widget149-2, labeled “Stocks;”
    Icon436for map module154, labeled “Map;”
    Icon438for weather widget149-1, labeled “Weather;”
    Icon440for alarm clock widget149-4, labeled “Clock;”
    Icon442for reminders module142, labeled “Reminders;”
    Icon444for notes module153, labeled “Notes;” and
    Icon446for a settings application or module, which provides access to settings for device100and its various applications136.

It should be noted that the icon labels illustrated inFIG.4Aare merely examples. For example, in some embodiments, icon422for video and music player module152is labeled “Music” or “Music Player.” Other labels are, optionally, used for various application icons. In some embodiments, a label for a respective application icon includes a name of an application corresponding to the respective application icon. In some embodiments, a label for a particular application icon is distinct from a name of an application corresponding to the particular application icon. FIG.4Billustrates an example user interface on a device (e.g., device300,FIG.3) with a touch-sensitive surface451(e.g., a tablet or touchpad355,FIG.3) that is separate from the display450. Device300also, optionally, includes one or more contact intensity sensors (e.g., one or more of sensors359) for detecting intensity of contacts on touch-sensitive surface451and/or one or more tactile output generators357for generating tactile outputs for a user of device300. Although many of the examples that follow will be given with reference to inputs on touch screen display112(where the touch sensitive surface and the display are combined), in some embodiments, the device detects inputs on a touch-sensitive surface that is separate from the display, as shown inFIG.4B. In some embodiments, the touch-sensitive surface (e.g.,451inFIG.4B) has a primary axis (e.g.,452inFIG.4B) that corresponds to a primary axis (e.g.,453inFIG.4B) on the display (e.g.,450). In accordance with these embodiments, the device detects contacts (e.g.,460and462inFIG.4B) with the touch-sensitive surface451at locations that correspond to respective locations on the display (e.g., inFIG.4B,460corresponds to468and462corresponds to470).
In this way, user inputs (e.g., contacts460and462, and movements thereof) detected by the device on the touch-sensitive surface (e.g.,451inFIG.4B) are used by the device to manipulate the user interface on the display (e.g.,450inFIG.4B) of the multifunction device when the touch-sensitive surface is separate from the display. It should be understood that similar methods are, optionally, used for other user interfaces described herein. Additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contacts, finger tap gestures, finger swipe gestures, etc.), it should be understood that, in some embodiments, one or more of the finger inputs are replaced with input from another input device (e.g., a mouse based input or a stylus input). For example, a swipe gesture is, optionally, replaced with a mouse click (e.g., instead of a contact) followed by movement of the cursor along the path of the swipe (e.g., instead of movement of the contact). As another example, a tap gesture is, optionally, replaced with a mouse click while the cursor is located over the location of the tap gesture (e.g., instead of detection of the contact followed by ceasing to detect the contact). Similarly, when multiple user inputs are simultaneously detected, it should be understood that multiple computer mice are, optionally, used simultaneously, or a mouse and finger contacts are, optionally, used simultaneously. As used herein, the term “focus selector” refers to an input element that indicates a current part of a user interface with which a user is interacting. In some implementations that include a cursor or other location marker, the cursor acts as a “focus selector,” so that when an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touchpad355inFIG.3or touch-sensitive surface451inFIG.4B) while the cursor is over a particular user interface element (e.g., a button, window, slider or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations that include a touch-screen display (e.g., touch-sensitive display system112inFIG.1Aor the touch screen inFIG.4A) that enables direct interaction with user interface elements on the touch-screen display, a detected contact on the touch-screen acts as a “focus selector,” so that when an input (e.g., a press input by the contact) is detected on the touch-screen display at a location of a particular user interface element (e.g., a button, window, slider or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations, focus is moved from one region of a user interface to another region of the user interface without corresponding movement of a cursor or movement of a contact on a touch-screen display (e.g., by using a tab key or arrow keys to move focus from one button to another button); in these implementations, the focus selector moves in accordance with movement of focus between different regions of the user interface. Without regard to the specific form taken by the focus selector, the focus selector is generally the user interface element (or contact on a touch-screen display) that is controlled by the user so as to communicate the user's intended interaction with the user interface (e.g., by indicating, to the device, the element of the user interface with which the user is intending to interact).
For example, the location of a focus selector (e.g., a cursor, a contact, or a selection box) over a respective button while a press input is detected on the touch-sensitive surface (e.g., a touchpad or touch screen) will indicate that the user is intending to activate the respective button (as opposed to other user interface elements shown on a display of the device). As used in the specification and claims, the term “intensity” of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of a contact (e.g., a finger contact or a stylus contact) on the touch-sensitive surface, or to a substitute (proxy) for the force or pressure of a contact on the touch-sensitive surface. The intensity of a contact has a range of values that includes at least four distinct values and more typically includes hundreds of distinct values (e.g., at least 256). Intensity of a contact is, optionally, determined (or measured) using various approaches and various sensors or combinations of sensors. For example, one or more force sensors underneath or adjacent to the touch-sensitive surface are, optionally, used to measure force at various points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., a weighted average or a sum) to determine an estimated force of a contact. Similarly, a pressure-sensitive tip of a stylus is, optionally, used to determine a pressure of the stylus on the touch-sensitive surface. Alternatively, the size of the contact area detected on the touch-sensitive surface and/or changes thereto, the capacitance of the touch-sensitive surface proximate to the contact and/or changes thereto, and/or the resistance of the touch-sensitive surface proximate to the contact and/or changes thereto are, optionally, used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, the substitute measurements for contact force or pressure are used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the substitute measurements). In some implementations, the substitute measurements for contact force or pressure are converted to an estimated force or pressure and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). Using the intensity of a contact as an attribute of a user input allows for user access to additional device functionality that may otherwise not be readily accessible by the user on a reduced-size device with limited real estate for displaying affordances (e.g., on a touch-sensitive display) and/or receiving user input (e.g., via a touch-sensitive display, a touch-sensitive surface, or a physical/mechanical control such as a knob or a button). In some embodiments, contact/motion module130and/or430uses a set of one or more intensity thresholds to determine whether an operation has been performed by a user (e.g., to determine whether a user has “clicked” on an icon). In some embodiments, at least a subset of the intensity thresholds are determined in accordance with software parameters (e.g., the intensity thresholds are not determined by the activation thresholds of particular physical actuators and can be adjusted without changing the physical hardware of device100). 
For example, a mouse “click” threshold of a trackpad or touch-screen display can be set to any of a large range of predefined threshold values without changing the trackpad or touch-screen display hardware. Additionally, in some embodiments, a user of the device is provided with software settings for adjusting one or more of the set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting a plurality of intensity thresholds at once with a system-level click “intensity” parameter). As used in the specification and claims, the term “characteristic intensity” of a contact is a characteristic of the contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on multiple intensity samples. The characteristic intensity is, optionally, based on a predefined number of intensity samples, or a set of intensity samples collected during a predetermined time period (e.g., 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10 seconds) relative to a predefined event (e.g., after detecting the contact, prior to detecting liftoff of the contact, before or after detecting a start of movement of the contact, prior to detecting an end of the contact, before or after detecting an increase in intensity of the contact, and/or before or after detecting a decrease in intensity of the contact). A characteristic intensity of a contact is, optionally, based on one or more of: a maximum value of the intensities of the contact, a mean value of the intensities of the contact, an average value of the intensities of the contact, a top 10 percentile value of the intensities of the contact, a value at the half maximum of the intensities of the contact, a value at the 90 percent maximum of the intensities of the contact, or the like. In some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether an operation has been performed by a user. For example, the set of one or more intensity thresholds may include a first intensity threshold and a second intensity threshold. In this example, a contact with a characteristic intensity that does not exceed the first threshold results in a first operation, a contact with a characteristic intensity that exceeds the first intensity threshold and does not exceed the second intensity threshold results in a second operation, and a contact with a characteristic intensity that exceeds the second intensity threshold results in a third operation. In some embodiments, a comparison between the characteristic intensity and one or more intensity thresholds is used to determine whether or not to perform one or more operations (e.g., whether to perform a respective operation or forgo performing the respective operation) rather than being used to determine whether to perform a first operation or a second operation. In some embodiments, a portion of a gesture is identified for purposes of determining a characteristic intensity. For example, a touch-sensitive surface may receive a continuous swipe contact transitioning from a start location and reaching an end location (e.g., a drag gesture), at which point the intensity of the contact increases.
In this example, the characteristic intensity of the contact at the end location may be based on only a portion of the continuous swipe contact, and not the entire swipe contact (e.g., only the portion of the swipe contact at the end location). In some embodiments, a smoothing algorithm may be applied to the intensities of the swipe contact prior to determining the characteristic intensity of the contact. For example, the smoothing algorithm optionally includes one or more of: an unweighted sliding-average smoothing algorithm, a triangular smoothing algorithm, a median filter smoothing algorithm, and/or an exponential smoothing algorithm. In some circumstances, these smoothing algorithms eliminate narrow spikes or dips in the intensities of the swipe contact for purposes of determining a characteristic intensity. The user interface figures described herein optionally include various intensity diagrams that show the current intensity of the contact on the touch-sensitive surface relative to one or more intensity thresholds (e.g., a contact detection intensity threshold IT0, a light press intensity threshold ITL, a deep press intensity threshold ITD(e.g., that is at least initially higher than IL), and/or one or more other intensity thresholds (e.g., an intensity threshold IHthat is lower than IL)). This intensity diagram is typically not part of the displayed user interface, but is provided to aid in the interpretation of the figures. In some embodiments, the light press intensity threshold corresponds to an intensity at which the device will perform operations typically associated with clicking a button of a physical mouse or a trackpad. In some embodiments, the deep press intensity threshold corresponds to an intensity at which the device will perform operations that are different from operations typically associated with clicking a button of a physical mouse or a trackpad. In some embodiments, when a contact is detected with a characteristic intensity below the light press intensity threshold (e.g., and above a nominal contact-detection intensity threshold IT0below which the contact is no longer detected), the device will move a focus selector in accordance with movement of the contact on the touch-sensitive surface without performing an operation associated with the light press intensity threshold or the deep press intensity threshold. Generally, unless otherwise stated, these intensity thresholds are consistent between different sets of user interface figures. In some embodiments, the response of the device to inputs detected by the device depends on criteria based on the contact intensity during the input. For example, for some “light press” inputs, the intensity of a contact exceeding a first intensity threshold during the input triggers a first response. In some embodiments, the response of the device to inputs detected by the device depends on criteria that include both the contact intensity during the input and time-based criteria. For example, for some “deep press” inputs, the intensity of a contact exceeding a second intensity threshold during the input, greater than the first intensity threshold for a light press, triggers a second response only if a delay time has elapsed between meeting the first intensity threshold and meeting the second intensity threshold. 
This delay time is typically less than 200 ms in duration (e.g., 40, 100, or 120 ms, depending on the magnitude of the second intensity threshold, with the delay time increasing as the second intensity threshold increases). This delay time helps to avoid accidental deep press inputs. As another example, for some “deep press” inputs, there is a reduced-sensitivity time period that occurs after the time at which the first intensity threshold is met. During the reduced-sensitivity time period, the second intensity threshold is increased. This temporary increase in the second intensity threshold also helps to avoid accidental deep press inputs. For other deep press inputs, the response to detection of a deep press input does not depend on time-based criteria. In some embodiments, one or more of the input intensity thresholds and/or the corresponding outputs vary based on one or more factors, such as user settings, contact motion, input timing, application running, rate at which the intensity is applied, number of concurrent inputs, user history, environmental factors (e.g., ambient noise), focus selector position, and the like. Example factors are described in U.S. patent application Ser. Nos. 14/399,606 and 14/624,296, which are incorporated by reference herein in their entireties. For example,FIG.4Cillustrates a dynamic intensity threshold480that changes over time based in part on the intensity of touch input476over time. Dynamic intensity threshold480is a sum of two components, first component474that decays over time after a predefined delay time p1from when touch input476is initially detected, and second component478that trails the intensity of touch input476over time. The initial high intensity threshold of first component474reduces accidental triggering of a “deep press” response, while still allowing an immediate “deep press” response if touch input476provides sufficient intensity. Second component478reduces unintentional triggering of a “deep press” response by gradual intensity fluctuations in a touch input. In some embodiments, when touch input476satisfies dynamic intensity threshold480(e.g., at point481inFIG.4C), the “deep press” response is triggered. FIG.4Dillustrates another dynamic intensity threshold486(e.g., intensity threshold ID).FIG.4Dalso illustrates two other intensity thresholds: a first intensity threshold IHand a second intensity threshold IL. InFIG.4D, although touch input484satisfies the first intensity threshold IHand the second intensity threshold ILprior to time p2, no response is provided until delay time p2has elapsed at time482. Also inFIG.4D, dynamic intensity threshold486decays over time, with the decay starting at time488after a predefined delay time p1has elapsed from time482(when the response associated with the second intensity threshold ILwas triggered). This type of dynamic intensity threshold reduces accidental triggering of a response associated with the dynamic intensity threshold IDimmediately after, or concurrently with, triggering a response associated with a lower intensity threshold, such as the first intensity threshold IHor the second intensity threshold IL. FIG.4Eillustrates yet another dynamic intensity threshold492(e.g., intensity threshold ID). InFIG.4E, a response associated with the intensity threshold ILis triggered after the delay time p2has elapsed from when touch input490is initially detected.
FIG. 4E illustrates yet another dynamic intensity threshold 492 (e.g., intensity threshold ID). In FIG. 4E, a response associated with the intensity threshold IL is triggered after the delay time p2 has elapsed from when touch input 490 is initially detected. Concurrently, dynamic intensity threshold 492 decays after the predefined delay time p1 has elapsed from when touch input 490 is initially detected. So a decrease in intensity of touch input 490 after triggering the response associated with the intensity threshold IL, followed by an increase in the intensity of touch input 490, without releasing touch input 490, can trigger a response associated with the intensity threshold ID (e.g., at time 494) even when the intensity of touch input 490 is below another intensity threshold, for example, the intensity threshold IL.

An increase of characteristic intensity of the contact from an intensity below the light press intensity threshold ITL to an intensity between the light press intensity threshold ITL and the deep press intensity threshold ITD is sometimes referred to as a “light press” input. An increase of characteristic intensity of the contact from an intensity below the deep press intensity threshold ITD to an intensity above the deep press intensity threshold ITD is sometimes referred to as a “deep press” input. An increase of characteristic intensity of the contact from an intensity below the contact-detection intensity threshold IT0 to an intensity between the contact-detection intensity threshold IT0 and the light press intensity threshold ITL is sometimes referred to as detecting the contact on the touch-surface. A decrease of characteristic intensity of the contact from an intensity above the contact-detection intensity threshold IT0 to an intensity below the contact-detection intensity threshold IT0 is sometimes referred to as detecting liftoff of the contact from the touch-surface. In some embodiments, IT0 is zero. In some embodiments, IT0 is greater than zero. In some illustrations, a shaded circle or oval is used to represent intensity of a contact on the touch-sensitive surface. In some illustrations, a circle or oval without shading is used to represent a respective contact on the touch-sensitive surface without specifying the intensity of the respective contact.

In some embodiments described herein, one or more operations are performed in response to detecting a gesture that includes a respective press input or in response to detecting the respective press input performed with a respective contact (or a plurality of contacts), where the respective press input is detected based at least in part on detecting an increase in intensity of the contact (or plurality of contacts) above a press-input intensity threshold. In some embodiments, the respective operation is performed in response to detecting the increase in intensity of the respective contact above the press-input intensity threshold (e.g., the respective operation is performed on a “down stroke” of the respective press input). In some embodiments, the press input includes an increase in intensity of the respective contact above the press-input intensity threshold and a subsequent decrease in intensity of the contact below the press-input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the press-input intensity threshold (e.g., the respective operation is performed on an “up stroke” of the respective press input).
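The threshold crossings just described can be summarized in a small classifier; the numeric threshold values below are placeholder assumptions, since the description leaves the actual values device-dependent:

// Hedged sketch of the intensity transitions named above. Threshold values
// are placeholder assumptions; per the text, IT0 may be zero or nonzero.
enum IntensityEvent { case contactDetected, lightPress, deepPress, liftoff }

func intensityEvent(previous p: Double, current c: Double,
                    it0: Double = 0.05, itL: Double = 1.0,
                    itD: Double = 2.0) -> IntensityEvent? {
    if p < itD && c >= itD { return .deepPress }        // crossed ITD upward
    if p < itL && c >= itL { return .lightPress }       // crossed ITL upward
    if p < it0 && c >= it0 { return .contactDetected }  // crossed IT0 upward
    if p >= it0 && c < it0 { return .liftoff }          // fell below IT0
    return nil                                          // no threshold crossed
}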
In some embodiments, the device employs intensity hysteresis to avoid accidental inputs sometimes termed “jitter,” where the device defines or selects a hysteresis intensity threshold with a predefined relationship to the press-input intensity threshold (e.g., the hysteresis intensity threshold is X intensity units lower than the press-input intensity threshold, or the hysteresis intensity threshold is 75%, 90%, or some reasonable proportion of the press-input intensity threshold). Thus, in some embodiments, the press input includes an increase in intensity of the respective contact above the press-input intensity threshold and a subsequent decrease in intensity of the contact below the hysteresis intensity threshold that corresponds to the press-input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the hysteresis intensity threshold (e.g., the respective operation is performed on an “up stroke” of the respective press input). Similarly, in some embodiments, the press input is detected only when the device detects an increase in intensity of the contact from an intensity at or below the hysteresis intensity threshold to an intensity at or above the press-input intensity threshold and, optionally, a subsequent decrease in intensity of the contact to an intensity at or below the hysteresis intensity threshold, and the respective operation is performed in response to detecting the press input (e.g., the increase in intensity of the contact or the decrease in intensity of the contact, depending on the circumstances).

For ease of explanation, the descriptions of operations performed in response to a press input associated with a press-input intensity threshold or in response to a gesture including the press input are, optionally, triggered in response to detecting: an increase in intensity of a contact above the press-input intensity threshold, an increase in intensity of a contact from an intensity below the hysteresis intensity threshold to an intensity above the press-input intensity threshold, a decrease in intensity of the contact below the press-input intensity threshold, or a decrease in intensity of the contact below the hysteresis intensity threshold corresponding to the press-input intensity threshold. Additionally, in examples where an operation is described as being performed in response to detecting a decrease in intensity of a contact below the press-input intensity threshold, the operation is, optionally, performed in response to detecting a decrease in intensity of the contact below a hysteresis intensity threshold corresponding to, and lower than, the press-input intensity threshold. As described above, in some embodiments, the triggering of these responses also depends on time-based criteria being met (e.g., a delay time has elapsed between a first intensity threshold being met and a second intensity threshold being met).
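A minimal sketch of the hysteresis-based press detection described above, assuming the hysteresis threshold sits at 75% of the press-input intensity threshold (one of the proportions mentioned):

// Hedged sketch: press detection with intensity hysteresis. The 75%
// proportion is one of the examples above; everything else is assumed.
struct HysteresisPressDetector {
    let pressThreshold: Double
    var hysteresisThreshold: Double { pressThreshold * 0.75 }
    private var armed = false

    // Returns true when the "up stroke" operation should be performed.
    mutating func update(intensity: Double) -> Bool {
        if !armed, intensity >= pressThreshold {
            armed = true            // "down stroke": press input detected
        } else if armed, intensity <= hysteresisThreshold {
            armed = false           // fell below hysteresis, not mere jitter
            return true             // perform the operation on the "up stroke"
        }
        return false
    }
}

Requiring the release to cross the lower hysteresis threshold, rather than the press-input threshold itself, is what suppresses the “jitter” described above.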
User Interfaces and Associated Processes

Attention is now directed towards embodiments of user interfaces (“UI”) and associated processes that may be implemented on an electronic device with a display, such as portable multifunction device 100 or device 300.

FIGS. 5A-5U illustrate example user interfaces for selecting and interacting with device modes in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 6A-6C and 7A-7B.

Although some of the examples which follow will be given with reference to inputs on a touch-screen display (where the touch-sensitive surface and the display are combined), in some embodiments, the device detects inputs on a touch-sensitive surface 451 that is separate from the display 450, as shown in FIG. 4B. In some embodiments, the device detects inputs provided by a pointing device or other input device.

FIG. 5A illustrates mode affordances 5002, 5004, and 5006, corresponding to an activity mode, a work mode, and a drive mode, respectively, of the device in accordance with some embodiments. Mode affordances 5002, 5004, and 5006 are shown on a lock screen of the device. The device becomes locked, e.g., in response to user input (such as a user input received at one or more physical buttons, such as push button 206) and/or when the device has been idle for a predetermined amount of time. In some embodiments, when the device is locked, a set of device features (e.g., a set of applications and/or content) is inaccessible. In some embodiments, the device shown in FIG. 5A is locked and in a sleep state. The device enters a sleep state, e.g., in response to user input received at push button 206 while the device is awake to instruct the device to sleep and/or when the device has been idle for a predetermined amount of time. A lock screen is displayed, e.g., when the device enters a locked state and/or when the device is awakened while in a locked state. The device is awakened, for example, in response to received user input (e.g., input received at one or more physical buttons, such as push button 206 and/or menu button 204).

In some embodiments, in response to receiving input to unlock and/or awaken the device (and/or in response to receiving other input that results in a lock screen being displayed), the device determines a mode of the device, such as a mode of the device that is currently active and/or a mode of the device that is recommended for activation. Determining a mode of the device includes, e.g., determining that time and/or device location criteria for the mode are met. For example, the device determines whether a current time (e.g., 2:40 PM) meets time criteria for the work mode. Time criteria for the work mode include, e.g., one or more time ranges indicating when the device user is typically at work, such as the time ranges 9:00 AM-1:30 PM and 2:30-6:00 PM (a sketch of this check appears following the description of FIG. 5H below). In accordance with a determination that the current time falls within one of these time ranges, the device determines that the work mode is a mode of the device that is recommended for activation. In some embodiments (e.g., when time and/or device location criteria for multiple modes are met), multiple modes are concurrently recommended.

In some embodiments, the device displays a visual indication that corresponds to a recommendation to activate a mode of the device. For example, an area corresponding to mode affordance 5004 is shaded to indicate that the work mode is the recommended mode for activation and/or to visually distinguish mode affordance 5004 from mode affordances 5002 and 5006. The text of mode affordance 5004 is bolded to indicate that the work mode is the recommended mode for activation and/or to visually distinguish text associated with mode affordance 5004 from text associated with mode affordances 5002 and 5006. In some embodiments, the device displays information associated with a mode of the device that is recommended for activation.
The information includes, e.g., one or more information items such as notifications (including, e.g., calendar appointment information, reminder information, e-mail information, telephone call information, message information, stock information, activity information, and/or navigation information, such as currently predicted drive time), content items, content item metadata, application data, and/or graphical items (e.g., icons) indicating a source of the information. The information includes, e.g., one or more information items having a highest priority among the information items of the mode (e.g., determined in accordance with automatic weighting or in accordance with user preference information). For example, information associated with a work mode includes calendar appointment notification 5010. In some embodiments, calendar appointment notification 5010 includes calendar icon 5012. In some embodiments, the information associated with the work mode is displayed at a location corresponding to mode affordance 5004, e.g., a location proximate to mode affordance 5004.

In some embodiments, to activate the recommended mode, a user provides input at a location corresponding to the recommended mode affordance. For example, the recommended mode is activated in response to user input such as a gesture (e.g., a finger tap gesture) detected while a focus selector 5008 is at a location corresponding to the recommended mode affordance 5004.

FIG. 5B illustrates work mode interface 5014, in accordance with some embodiments. In some embodiments, work mode interface 5014 is displayed in accordance with activation of mode affordance 5004. In some embodiments, a mode interface (such as work mode interface 5014) displays a plurality of information items associated with a mode. For example, work mode interface 5014 includes a view 5016 of data from a calendar application, meeting information 5018, a notification 5020 from a messaging application, and a notification 5022 from a reminders application. In some embodiments, in response to user input such as a gesture (e.g., a finger tap gesture) detected while a focus selector is at a location of an information item, additional information corresponding to the information item is displayed, an application corresponding to the information item is activated, and/or an interface for an application corresponding to the information item is displayed. For example, in some embodiments, in response to user input at a location corresponding to notification 5020 from the messaging application, the device activates a messaging application and/or displays a messaging application interface. In some embodiments, additional information items associated with a mode interface are revealed in response to user input such as a gesture (e.g., a drag gesture that drags work mode interface 5014, e.g., in a vertical direction, to reveal additional mode interface information items below and/or above the information items currently displayed).

In some embodiments, work mode interface 5014 includes a mode affordance chrome 5024 including, e.g., activity mode affordance 5002, work mode affordance 5004, drive mode affordance 5006, and/or home mode affordance 5026. In some embodiments, work mode interface 5014 includes a mode settings affordance 5028. In some embodiments, a mode interface (e.g., work mode interface 5014) includes information items in addition to a set of currently displayed information items.
The additional information items are displayed, e.g., in response to user input (e.g., a vertical drag gesture to reveal information items below or above currently displayed information items and/or a horizontal drag gesture to reveal additional information items to the left or right of currently displayed information items).

FIG. 5C illustrates activation of a mode that is not the recommended mode, in accordance with some embodiments. For example, a drive mode corresponding to drive mode affordance 5006 is activated (while the drive mode is not the recommended mode, as indicated by the lack of visually distinguishing features applied to drive mode affordance 5006) in response to a user input such as a gesture (e.g., a finger tap gesture) provided while a focus selector 5008 is at a location corresponding to the drive mode affordance 5006.

FIG. 5D illustrates drive mode interface 5030, in accordance with some embodiments. In some embodiments, drive mode interface 5030 is displayed in accordance with activation of drive mode affordance 5006. In some embodiments, drive mode interface 5030 includes data from a map application (e.g., navigation instructions 5032, map 5034, and/or place information affordance 5036), a Share ETA affordance 5038 for sharing (e.g., with one or more preconfigured contacts of the user) an estimated time of arrival (“ETA”), and/or notification 5046 from a reminders application. In some embodiments, a user shares a current location (e.g., with one or more preconfigured contacts of the user) in lieu of and/or in addition to an ETA. In some embodiments, drive mode interface 5030 includes a mode settings affordance 5028.

In some embodiments, one or more information items of a mode interface are constrained to have a minimum area and/or minimum font size. Larger interface features are provided to improve safety when drive mode interface 5030 is active and/or when the device determines that it is located in a moving vehicle. For example, the font size of navigation instructions 5032, place information affordance 5036, and notification 5046 of drive mode interface 5030 is larger than font sizes used in other mode interfaces such as work mode interface 5014. The area of icons shown in drive mode interface 5030, such as reminders icon 5042, is larger than the area of icons shown in work mode interface 5014, such as reminders icon 5044. In some embodiments, one or more information items (and/or types of information items) are not shown (e.g., not permitted to be shown) when drive mode interface 5030 is active and/or when the device determines that it is located in a moving vehicle. For example, in some embodiments, mode settings affordance 5028 ceases to be shown when the device determines that a vehicle in which the device is located is moving.

In some embodiments, different mode interfaces are configured to display different information originating from the same application. For example, when a reminders application includes reminder 5022 and reminder 5046, reminder 5046 is displayed in drive mode interface 5030 but not in work mode interface 5014, and reminder 5022 is displayed in work mode interface 5014 but not in drive mode interface 5030. A further description of configuring a mode interface to display or to not display a category of reminders is provided below with reference to FIG. 5J.

FIGS. 5E-5G illustrate a gesture on a lock screen used to activate a mode that is not a recommended mode, in accordance with some embodiments.
In FIG. 5E, a work mode corresponding to mode affordance 5004 is recommended for activation, e.g., as indicated by the visually distinguishing features of mode affordance 5004 (such as the shading and bold text of mode affordance 5004). In some embodiments, in response to user input, such as a horizontal drag gesture, the mode affordances are scrolled relative to the displayed lock screen (e.g., to adjust and/or reveal one or more modes of the device). For example, the device detects a gesture (e.g., a drag gesture) as focus selector 5008 moves from a first position 5008a to a second position 5008b along a path indicated by arrow 5043, as shown in FIG. 5E, and from the second position 5008b to a third position 5008c along a path indicated by arrow 5044, as shown in FIG. 5F. FIG. 5F illustrates an intermediate state of the lock screen as a drag gesture is received. In FIG. 5F, mode affordances 5002, 5004, and 5006 have shifted to the left, such that mode affordance 5002 has partially “slid off” of the left edge of display 400 and mode affordance 5026 is partially revealed at the right edge of display 400. As mode affordance 5004 shifts to the left, calendar appointment notification 5010 corresponding to mode affordance 5004 also shifts to the left. In FIG. 5G, in response to receiving the user input including the drag gesture, drive mode affordance 5006 is shown in a position corresponding to the final focus selector position 5008c of the drag gesture.

In some embodiments, in response to determining that user input to change a position of a mode affordance has moved the mode affordance beyond a threshold distance, the device centers the mode affordance (e.g., relative to the lock screen). In some embodiments, user input moving drive mode affordance 5006 to a position that is at or near the center of display 400 selects the drive mode as the active mode of the device. In some embodiments, in response to detecting the gesture, the device displays information associated with a drive mode, e.g., reminder notification 5046 and/or reminder icon 5042. In some embodiments, when the drive mode is active, one or more settings associated with the drive mode take effect. In some embodiments, in response to receiving a user input while a focus selector is at a location corresponding to mode affordance 5006, such as a tap gesture at mode affordance 5006, the device displays a drive mode interface (e.g., drive mode interface 5030 as described with regard to FIG. 5D).

FIG. 5H illustrates overriding an active mode of a device, in accordance with some embodiments. In FIG. 5H, a work mode is an active mode of the device, as indicated in work mode interface 5014 (e.g., by the title “Work” as shown at 5050 and the visually distinguishing features, such as the shading, of work mode affordance 5004 in mode affordance chrome 5024). In some embodiments, in response to user input such as a gesture (e.g., a tap gesture) detected while a focus selector 5052 is at a location of a mode affordance for a mode other than the work mode affordance 5004 (e.g., while the focus selector 5052 is at home mode affordance 5026), the selected mode (e.g., the home mode) overrides the work mode. When the home mode overrides the work mode, the home mode is activated and, in some embodiments, home mode interface 5054 is displayed as shown in FIG. 5I. In some embodiments, activating the home mode allows a user to perform an operation in the home mode, such as making changes to the interface and/or settings of the home mode. In some embodiments, activating the home mode causes one or more settings of the home mode to take effect.
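Returning to the recommendation criteria described with regard to FIG. 5A, the time-range check might be sketched as follows; representing time ranges as minutes since midnight is an assumption for illustration:

// Hedged sketch of time criteria for recommending a mode. The
// minutes-since-midnight representation is an assumption.
struct ModeTimeCriteria {
    let ranges: [ClosedRange<Int>]   // e.g., work hours, in minutes since midnight

    func isMet(atMinutesSinceMidnight m: Int) -> Bool {
        ranges.contains { $0.contains(m) }
    }
}

// Work mode: 9:00 AM-1:30 PM and 2:30-6:00 PM, as in the example above.
let work = ModeTimeCriteria(ranges: [9 * 60...(13 * 60 + 30),
                                     (14 * 60 + 30)...(18 * 60)])
let recommendWork = work.isMet(atMinutesSinceMidnight: 14 * 60 + 40) // 2:40 PM -> true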
FIG. 5I illustrates home mode interface 5054, in accordance with some embodiments. In some embodiments, home mode interface 5054 is displayed in response to user input received at a location corresponding to home mode affordance 5026. In some embodiments, home mode interface 5054 includes information items for a video viewer application (e.g., content thumbnail image 5056, content identification information 5058, and/or content episode/chapter information 5060). The content episode/chapter information 5060 includes, e.g., information indicating unwatched episodes (e.g., dot 5062 indicates that an episode is unwatched) and/or information indicating progress in an episode that has been partially watched (e.g., “21 of 29 minutes watched,” as indicated at 5060). Content episode/chapter information 5060 is, e.g., information associated with recently viewed content (such as content viewed while the home mode was previously active on the device). In this way, when the home mode is activated, a user is provided with a convenient interface for resuming playback of recently played content.

In some embodiments, information items of home mode interface 5054 include one or more control affordances 5064, 5066 for controlling devices communicatively connected to the device (e.g., widgets for controlling devices in a home automation system, such as HomeKit™ from Apple Inc. of Cupertino, California). In some embodiments, information items of home mode interface 5054 include one or more application icons 5068, 5070, 5072, 5074, e.g., for applications that are frequently used while the home mode is active. In some embodiments, information items of home mode interface 5054 include a notification 5076 from a reminders application. In some embodiments, in response to user input such as a gesture (e.g., a tap gesture) detected while a focus selector 5078 is at a location of a mode settings affordance 5028, a mode settings interface 5080 is displayed.

FIG. 5J illustrates a mode settings interface 5080, in accordance with some embodiments. In some embodiments, mode settings interface 5080 is an interface for adding, removing, and/or modifying information items and/or settings of one or more modes. In some embodiments, mode settings interface 5080 includes one or more affordances for adjusting network settings (e.g., affordances 5082, 5084, 5086 for enabling and disabling Wi-Fi, Cellular Data, and/or Bluetooth, respectively; affordance 5090 for removing a currently added Wi-Fi network; affordance 5092 for adding one or more Wi-Fi networks; affordance 5094 for removing a currently added accessory that connects to the device via Bluetooth; and/or affordance 5096 for adding one or more accessories to connect to the device via Bluetooth). In some embodiments, mode settings interface 5080 includes one or more affordances for adding, modifying, and/or removing application information items. For example, affordance 5098 corresponds to information items 5056, 5058, 5060, and/or 5062 for a video viewer application. In some embodiments, in response to a user input received while a focus selector is at a location corresponding to affordance 5098, information items 5056, 5058, 5060, and 5062 for a video viewer application are removed from home mode interface 5054. In some embodiments, mode settings interface 5080 includes affordance 5100 that corresponds to home accessory control. For example, to remove a home accessory control region from home mode interface 5054, a user provides input at a location corresponding to affordance 5100.
To remove individual controls 5064 or 5066 for controlling home accessory devices, user input is provided at a location corresponding to affordances 5112 or 5113, respectively. To add controls for home accessory devices, user input is provided at a location corresponding to affordance 5114. Affordances 5102, 5104, 5106, and 5108 are usable to remove application icons 5068, 5070, 5072, and 5074, respectively, from home mode interface 5054. For example, to remove weather application icon 5072 from home mode interface 5054, a user provides user input while a focus selector is at the corresponding weather application removal affordance 5106. Affordance 5110 is usable to remove reminder notification area 5076 from home mode interface 5054; to remove reminders from home mode interface 5054, a user provides user input while a focus selector is at a location that corresponds to reminders removal affordance 5110.

In some embodiments, a mode is configurable to display communications and/or communication notifications from a filtered set of contacts. For example, a user may wish to avoid displaying e-mail from work contacts while the user is at home (e.g., when the home mode is active). In another example, a user wishes to avoid displaying message notifications from personal contacts while the user is at work (e.g., when the work mode is active). In some embodiments, mode settings interface 5080 includes one or more affordances 5116 for removing contacts (and/or groups of contacts) from which messages and/or message notifications are to be displayed while the home mode is active. In some embodiments, mode settings interface 5080 includes an affordance 5118 for adding contacts (and/or groups of contacts) from which messages and/or message notifications are to be displayed while the home mode is active. In some embodiments, application settings such as contact settings for e-mail, phone, calendar, and/or location sharing are configurable (e.g., in mode settings interface 5080) to limit information displayed in a mode to information associated with a limited set of contacts.

In some embodiments, a mode is configurable to display reminders and/or reminder notifications from a filtered set of reminders. For example, one set of reminders is stored in a “Work Reminders” group and a different set of reminders is stored in a “Home Reminders” group. In some embodiments, mode settings interface 5080 includes one or more affordances 5120 for removing a reminder group from which reminders and/or reminder notifications are to be displayed while the home mode is active. In some embodiments, mode settings interface 5080 includes an affordance 5122 for adding a reminder group from which reminders and/or reminder notifications are displayed while the home mode is active.

In some embodiments, mode settings interface 5080 includes one or more affordances 5124, 5126, 5128, 5130 for adding application information items (e.g., application icons, application notification areas, application notifications, and/or application widgets) to home mode interface 5054. For example, to add a view of data from a calendar application to home mode interface 5054 (e.g., similar to view 5016 of data from a calendar application shown in work mode interface 5014), a user provides user input while a focus selector is at a location corresponding to affordance 5124. In some embodiments, mode settings interface 5080 includes settings affordances in addition to a set of currently displayed settings affordances. The additional settings affordances are displayed, e.g., in response to user input.
For example, in some embodiments, in response to a vertical drag gesture, the device reveals settings affordances below or above currently displayed affordances (e.g., settings affordances for the same mode as the currently displayed mode affordances). In some embodiments, in response to a horizontal drag gesture, the device reveals settings affordances to the left or right of currently displayed settings (e.g., settings for different modes). In some embodiments, mode settings interface 5080 includes an affordance 5146 corresponding to a “Done” option to apply changes (e.g., all changes made using affordances of mode settings interface 5080 since mode settings interface 5080 was displayed). In some embodiments, mode settings interface 5080 includes an affordance 5148 corresponding to a “Cancel” option to cancel changes (e.g., all changes made using affordances of mode settings interface 5080 since mode settings interface 5080 was displayed); a sketch of this staged apply/cancel behavior appears following the description of FIG. 5M below.

In some embodiments, a mode settings interface 5080 is used to perform an operation in a mode that is not the currently active mode. For example, a user may wish to wrap up a task stored in a work reminders group after the user returns home. While the user is still at work, and the work mode is active, the user can override the work mode to activate the home mode. While the home mode is activated (and the home mode interface 5054 is displayed), the user visits the mode settings interface 5080 and adds the work reminders group to the reminders notifications for the home mode. The device then returns to the work mode. Later, when the user is at home and the home mode is active, the work reminder from the work reminders group is displayed in the home mode interface 5054.

A process for performing an operation in an override mode that is not the currently active mode is illustrated in the series of user interfaces of FIGS. 5I-5M. While a work mode is active (e.g., at a time that is within a time range associated with the work mode), an input is detected to activate the home mode, overriding the currently active work mode. An operation is performed, e.g., while the home mode is overriding the work mode. For example, as indicated in FIG. 5I, in response to user input detected while a focus selector 5078 is at a location corresponding to a mode settings affordance 5028, a mode settings interface 5080 is displayed. In FIG. 5J, a user input is detected while a focus selector 5132 is at a location corresponding to affordance 5122 (e.g., corresponding to an “Add Reminders” setting of a reminders application). In response to detecting the user input while the focus selector 5132 is at the location corresponding to affordance 5122, the device displays additional reminders groups 5136, 5138, 5140, 5142, as indicated at FIG. 5K. FIG. 5K indicates that a user input is detected while a focus selector 5134 is at affordance 5136, corresponding to a “Work Reminders” group to be added to reminder notification area 5076 of home mode interface 5054. As indicated at FIG. 5L, a user input is detected while a focus selector 5144 is at affordance 5146, for applying changes made using mode settings interface 5080 (e.g., to apply the change that used affordance 5136 to add the Work Reminders group to home mode interface 5054). In FIG. 5M, a work group notification (“Organize welcome reception,” as shown at 5022 of work mode interface 5014) has been added to home mode interface 5054 in reminders notification area 5076.
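A minimal sketch of the staged apply/cancel behavior described above, in which edits accumulate and take effect only when the “Done” affordance (5146) is activated; the Change cases and the apply closure are illustrative assumptions:

// Hedged sketch: staged mode-settings edits. Changes accumulate while the
// settings interface is open; "Done" (5146) applies them, "Cancel" (5148)
// discards them. The Change cases are assumptions for illustration.
struct ModeSettingsSession {
    enum Change {
        case addReminderGroup(String)
        case removeApplicationIcon(Int)   // e.g., an affordance identifier
    }
    private(set) var pending: [Change] = []

    mutating func stage(_ change: Change) { pending.append(change) }

    mutating func cancel() { pending.removeAll() }          // "Cancel" (5148)

    mutating func done(apply: (Change) -> Void) {           // "Done" (5146)
        pending.forEach(apply)
        pending.removeAll()
    }
}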
In some embodiments, as indicated in the series of user interfaces of FIGS. 5M-5N, after performing the operation in the overriding home mode (e.g., changing a setting of the home mode by adding the “Work Reminders” group to the home mode interface 5054), the device returns to the work mode. In some embodiments, returning to the work mode includes detecting user input, such as user input received while a focus selector 5150 is at work mode affordance 5004, as shown in FIG. 5M, and in response to detecting the user input, displaying the work mode interface 5014, as shown in FIG. 5N.

In some embodiments, as indicated in the series of user interfaces of FIGS. 5N-5P, a list of mode affordances for the modes of the device is displayed in response to user input that includes an increase in a characteristic intensity of a contact on a touch-sensitive surface above a mode display intensity threshold. For example, in FIG. 5N, a contact on touch screen 112 is detected while a focus selector 5152 is at a location corresponding to mode affordance chrome 5024. In response to determining that a characteristic intensity of the contact has increased above a mode display intensity threshold (e.g., a deep press intensity threshold ITD, as illustrated at contact intensity meter 5156, or another intensity threshold such as a light press intensity threshold ITL), a mode selection interface 5158 is displayed, as indicated at FIG. 5O. In some embodiments, mode selection interface 5158 includes all of the modes of the device (e.g., including modes not shown in mode affordance chrome 5024, such as weekend mode affordance 5161). In some embodiments, mode selection interface 5158 includes a subset, less than all, of the modes of the device (e.g., and modes not currently displayed are accessible via input that scrolls the listed modes). As indicated in FIG. 5O, a contact/focus selector 5160 is at a position corresponding to a home mode affordance 5026. In response to the detected contact (e.g., a contact having a characteristic intensity above a contact detection threshold IT0), the home mode interface 5054 is displayed, as indicated at FIG. 5P.

In some embodiments (e.g., when a user input meets information item movement criteria), a gesture is used to override a current mode, remove the information item from the current mode, activate an alternate mode, and/or add an information item to the alternate mode. In some embodiments, a user input meets information item movement criteria when the duration of a contact with touch screen 112 exceeds a threshold duration. In some embodiments, as explained below with reference to FIGS. 5Q-5U, a user input meets information item movement criteria when a characteristic intensity of a detected contact is above a threshold intensity.

FIGS. 5Q-5U illustrate an input detected to activate the home mode (e.g., thereby overriding the work mode) and an operation performed while the home mode is activated and the work mode is overridden, in accordance with some embodiments. In FIG. 5Q, a work mode interface 5014 including message notification 5020 is displayed. A contact/focus selector 5162 is at a position corresponding to message notification 5020. The contact has a characteristic intensity above a contact detection threshold IT0, as indicated at contact intensity meter 5156. As shown in FIG. 5R, when a characteristic intensity of a detected contact is above a hint intensity threshold, an information item at a location corresponding to the location of the contact is highlighted.
For example, in FIG. 5R, a characteristic intensity of the detected contact at a location indicated by focus selector 5162 (corresponding to a location of message notification 5020) is above a hint intensity threshold (e.g., above a hint intensity threshold ITH, as indicated by contact intensity meter 5156). In response to detecting the characteristic intensity of the contact above the hint intensity threshold, an information item at a location corresponding to focus selector 5162 (e.g., message notification 5020) is visually distinguished from other information items of work mode interface 5014. For example, message notification 5020 is shown with a bold outline, as shown in FIG. 5R.

In FIG. 5S, a characteristic intensity of the detected contact (e.g., at a location indicated by focus selector 5162a) is above an information item movement relocation threshold (e.g., above a deep press intensity threshold ITD, as indicated by contact intensity meter 5156, or above another intensity threshold). In response to detecting a characteristic intensity of the contact above the movement relocation threshold, an information item at a position corresponding to the contact (e.g., message notification 5020) is displayed detached from its current location. After message notification 5020 is detached from its current location, message notification 5020 is movable to a different mode. For example, as indicated at FIGS. 5S-5T, the user input includes a gesture, such as a gesture that moves the contact from a first location of the focus selector at 5162a to a subsequent location of the focus selector at 5162b along a path indicated by arrow 5164. In some embodiments, the message notification is “attached” to the contact/focus selector and moves along the path 5164 as the contact/focus selector moves along the path 5164. In some embodiments, the received gesture overrides the work mode, activates the home mode, and adds the message notification 5020 to the home mode. In some embodiments, the message notification 5020 is displayed in home mode interface 5054, as indicated at FIG. 5U.

In some embodiments, a user input (e.g., received after a characteristic intensity of the detected contact is above an information item movement relocation threshold) is a gesture that moves contact/focus selector 5162 to the left. When the movement to the left exceeds a mode change threshold distance, the gesture overrides the work mode, activates the activity mode, and adds message notification 5020 to an activity mode interface (because the activity mode is the mode preceding the current mode, e.g., as indicated by the location of activity mode affordance 5002 to the left of work mode affordance 5004 in mode affordance chrome 5024). In some embodiments, a user input (e.g., received after a characteristic intensity of the detected contact is above an information item movement relocation threshold) is a gesture that moves contact/focus selector 5162 to the right. When the movement to the right exceeds a mode change threshold distance, the gesture overrides the work mode, activates the drive mode, and adds message notification 5020 to drive mode interface 5030 (because the drive mode is the mode following the current mode, e.g., as indicated by the location of drive mode affordance 5006 to the right of work mode affordance 5004 in mode affordance chrome 5024).
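A brief sketch of this directional relocation, assuming the chrome's modes form an ordered list and using a placeholder mode change threshold distance:

// Hedged sketch: moving a detached information item left or right past a
// mode change threshold selects the preceding or following mode in the
// chrome's ordering. The threshold value and Mode list are assumptions.
enum Mode: Int { case activity, work, drive, home }

func destinationMode(current: Mode, horizontalDelta: Double,
                     modeChangeThreshold: Double = 80.0) -> Mode? {
    let ordered: [Mode] = [.activity, .work, .drive, .home]
    guard let i = ordered.firstIndex(of: current) else { return nil }
    if horizontalDelta <= -modeChangeThreshold, i > 0 {
        return ordered[i - 1]            // preceding mode (e.g., activity)
    }
    if horizontalDelta >= modeChangeThreshold, i < ordered.count - 1 {
        return ordered[i + 1]            // following mode (e.g., drive)
    }
    return nil                           // threshold not exceeded
}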
In some embodiments, when message notification 5020 is added to another mode, such as an activity mode (e.g., an activity mode interface), a drive mode (e.g., drive mode interface 5030), or a home mode (e.g., home mode interface 5054), message notification 5020 is removed from the work mode (e.g., message notification 5020 is no longer displayed in work mode interface 5014).

FIGS. 6A-6C illustrate a flow diagram of a method 600 of overriding a device mode, in accordance with some embodiments. The method 600 is performed at an electronic device (e.g., device 300, FIG. 3, or portable multifunction device 100, FIG. 1A) with a display. In some embodiments, the display is a touch-screen display and the touch-sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. Some operations in method 600 are, optionally, combined and/or the order of some operations is, optionally, changed.

As described below, the method 600 provides an intuitive way to interact with different device modes. The method reduces the number, extent, and/or nature of the inputs from a user when changing device modes, thereby creating a more efficient human-machine interface. For battery-operated electronic devices, enabling a user to change device modes faster and more efficiently conserves power and increases the time between battery charges.

The device displays (602) a first mode of a plurality of modes of the device. Examples of modes are provided below. In some embodiments, a mode is a mode of the entire device (e.g., the mode provides the primary interface for interacting with device features). In some embodiments, a mode is subsidiary to a primary interface for interacting with device features. The plurality of modes of the device include (604) the first mode and a second mode. In some embodiments, a mode is a state of the device in which a set of operations that are relevant to, e.g., a place, time, and/or an activity of the device or its user are performed, available, or otherwise made prominent on the device. For example, icons for applications relevant to a particular mode are displayed concurrently when the mode is active (e.g., application initiation icons 5068, 5070, 5072, and 5074 are displayed concurrently in home mode interface 5054 when the home mode of the device is active, as indicated in FIG. 5I). In some embodiments, a mode is activated in the foreground (e.g., information associated with a mode, such as a mode interface, is displayed), a mode is activated in the background (e.g., one or more settings associated with the mode are applied to the device), or a mode is activated in the foreground and in the background. Examples of modes, mode activation criteria, and mode features are described below in Table 1.

Time criteria, location criteria, movement criteria, and transit criteria that correspond to the various modes described in Table 1 are explained below. In some embodiments, time criteria are met when a current time (e.g., determined by the device) is within defined time parameters (e.g., a user input time range, a default time range, and/or an automatically determined time range) for a particular mode (e.g., a mode listed below in Table 1). In some embodiments, location criteria are met when a current location of the device is within defined location parameters that correspond to a particular mode (e.g., a mode listed below in Table 1).
The current location of a device is determined using, e.g., position data determined from GPS, a Wi-Fi network, a location beacon (e.g., iBeacon), and/or Bluetooth pairing (e.g., detecting availability of a Bluetooth connection with a Bluetooth transmitter in a known location, such as an automobile). For example, a device determines that a location criterion is satisfied in response to determining that a current location of the device is within (and/or within a predetermined distance from) a location such as a neighborhood, a city block, an address, a point location, a set of coordinates (e.g., latitude/longitude), and/or another defined boundary (such as a virtual boundary associated with a campus, a building, or a portion of a building). Location parameters are determined, e.g., in accordance with user input (a text and/or map pin entry indicating a location) and/or automatically (e.g., based on data stored by the device, such as a location of the device determined by the device during a typical time frame associated with the mode, data from communications, and/or data from calendar entries). In some embodiments, a device determines that a location criterion is satisfied in response to detecting a signal from a Wi-Fi network that corresponds to a particular mode (e.g., as indicated by a mode settings interface).

In some embodiments, travel criteria are met when a current location of the device does not match a previously defined location, matches a previously defined location for a mode that uses travel criteria, is in a geographic region that is different from the defined home and/or work location (e.g., a different city, state, and/or country), or is beyond a defined distance from a defined home and/or work location.

In some embodiments, movement criteria are met when output from one or more device sensors (e.g., the device accelerometer) indicates a velocity exceeding a predetermined threshold and/or indicates a velocity that falls within a predetermined velocity range. In some embodiments, determining whether device movement meets movement criteria includes determining, at a first time, a first location of the device; determining, at a second time, a second location of the device; determining a distance between the first location of the device and the second location of the device; and, based on a velocity determined from the distance between the first location and the second location divided by the difference between the first time and the second time, determining whether the movement exceeds a velocity threshold and/or falls within a predetermined velocity range. For example, movement criteria include walking movement criteria (e.g., the velocity exceeds a movement threshold and/or the velocity falls within a range that corresponds to typical walking movement), running movement criteria (e.g., the velocity exceeds a movement threshold and/or the velocity falls within a range that corresponds to typical running movement), bicycling movement criteria (e.g., the velocity exceeds a movement threshold and/or the velocity falls within a range that corresponds to typical bicycling movement), and/or automobile/transit movement criteria (e.g., the velocity exceeds a movement threshold and/or falls within a range that corresponds to typical automobile movement and/or transit movement, such as bus, train, or airplane movement).
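The velocity determination described above might be sketched as follows; the haversine distance formula and the walking-speed range are assumptions for illustration:

import Foundation

// Hedged sketch: derive velocity from two timestamped locations and test it
// against a movement-criteria range. Distance uses the haversine formula
// (an assumption; any great-circle distance would do). Units: meters, seconds.
struct TimedLocation { let lat: Double; let lon: Double; let time: TimeInterval }

func haversineMeters(_ a: TimedLocation, _ b: TimedLocation) -> Double {
    let r = 6_371_000.0                       // mean Earth radius, meters
    let dLat = (b.lat - a.lat) * .pi / 180
    let dLon = (b.lon - a.lon) * .pi / 180
    let h = sin(dLat / 2) * sin(dLat / 2)
        + cos(a.lat * .pi / 180) * cos(b.lat * .pi / 180)
        * sin(dLon / 2) * sin(dLon / 2)
    return 2 * r * asin(sqrt(h))
}

func meetsMovementCriteria(_ first: TimedLocation, _ second: TimedLocation,
                           range: ClosedRange<Double>) -> Bool {
    let dt = second.time - first.time
    guard dt > 0 else { return false }
    let velocity = haversineMeters(first, second) / dt   // meters per second
    return range.contains(velocity)
}

// Example: typical walking movement, assumed here as roughly 0.5...2.5 m/s.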
TABLE 1
Mode Examples

Work Mode
Mode Activation Criteria: Time criteria (e.g., device time is within a time range that corresponds to typical work hours and/or typical work days of the week) and/or location criteria (e.g., device location corresponds to a workplace location).
Mode Features: See, e.g., work mode interface 5014 as illustrated by FIG. 5B.

Home Mode
Mode Activation Criteria: Time criteria (e.g., device time is within a time range that corresponds to typical home hours when the user is home) and/or location criteria (e.g., device location corresponds to a home location).
Mode Features: See, e.g., home mode interface 5054 as illustrated by FIG. 5I.

Drive Mode
Mode Activation Criteria: Time criteria (e.g., time is within a time range that corresponds to typical commute hours), location criteria (e.g., device location corresponds to a typical commute start point, end point, and/or route, and/or the current device location is a location that is not home or work), and/or movement criteria (e.g., device movement meets transit movement criteria).
Mode Features: See, e.g., drive mode interface 5030 as illustrated by FIG. 5D.

Transit Mode
Mode Activation Criteria: Time criteria (e.g., device time is within a time range that corresponds to typical commute hours), location criteria (e.g., device location corresponds to typical transit stops and/or a location that is not home or work), and/or movement criteria (e.g., movement meets transit movement criteria).
Mode Features: Transit information applications, navigation information (e.g., a navigation application is activated in a transit mode), and/or place information regarding destinations along a transit line.

Walk Mode
Mode Activation Criteria: Time criteria (e.g., device time is within a time range that corresponds to typical commute and/or activity hours), location criteria (e.g., device location corresponds to a location that is not home or work), and/or movement criteria (e.g., device movement meets walk movement criteria).
Mode Features: Navigation information (e.g., a navigation application is activated in a walk mode) and/or place information regarding locations in the vicinity of the device.

On-the-go Mode
Mode Activation Criteria: Movement criteria (e.g., device movement meets walk movement criteria, run movement criteria, bicycle movement criteria, and/or transit movement criteria) and/or travel criteria.
Mode Features: Incorporates features of drive, transit, walk, run, and/or travel modes.

Travel Mode
Mode Activation Criteria: Location criteria (e.g., detected device location that does not match, and/or is beyond a threshold distance from, any predefined or predetermined locations associated with another mode of the device).
Mode Features: Information applicable to travel, tourism, and/or a particular region, such as a region in which the device is currently located (determined, for example, based on features that users in the region typically use), e.g., one or more translation applications, currency conversion applications, distance conversion applications, size conversion applications, temperature conversion applications, weather applications, editorial content and/or user based content, a camera application, and/or a photo viewer application.

Weekend Mode
Mode Activation Criteria: Time criteria (e.g., device time corresponds to typical non-work days) and/or location criteria (e.g., device location corresponds to home and/or a typical weekend location).
Mode Features: Application icons for applications associated with leisure activities and/or hobbies.

Activity (Workout) Mode
Mode Activation Criteria: Time criteria (e.g., device time corresponds to a typical activity time), location criteria (e.g., device location corresponds to a typical activity location, such as the user's gym), and/or movement criteria (e.g., device movement meets run movement criteria; for example, the device user has been moving at a running speed for a predetermined amount of time (e.g., not just running for the bus)).
Mode Features: Fitness and other health and/or nutrition related applications, an interface for selecting a current workout activity for tracking, and/or a map that indicates a path traveled by the user during a workout and associated statistics.

Guest Mode
Mode Activation Criteria: The phone is in possession of a user other than the owner of the device, e.g., as determined from user input and/or login information.
Mode Features: Restricted access to applications, contacts, and/or device settings.

Venue Mode
Mode Activation Criteria: Time criteria (e.g., device time corresponds to a scheduled event at a venue) and/or location criteria (e.g., device location corresponds to a venue location).
Mode Features: Information regarding performers, vendors, services, and/or artworks, and/or communications within a region associated with the venue.

“You are here” Mode
Mode Activation Criteria: No mode criteria are met (for example, the device location is not a location associated with an existing mode, the time is not a time associated with an existing mode, and/or device movement is not a device movement associated with an existing mode).
Mode Features: Information regarding places, events, sales, menus, and/or movies playing. For example, the information is provided for locations near the device location.

Default Mode
Mode Activation Criteria: No mode criteria are met. In some embodiments, a defined mode (e.g., one of the examples of modes listed above) is configured to be used as a default mode that is activated when no mode criteria are met.
Mode Features: Default interface, default operations, and/or default settings.

The first mode of the device is active (and, in some embodiments, a first mode interface is displayed) when (606) a first set of time and/or device location criteria are met. For example, the first mode is a “work mode” and the first set of criteria are satisfied when a current time as determined by the device falls within a predefined (e.g., defined by the user) or predetermined (e.g., defined from data collected by the device) time range (e.g., work hours) for a user of the mobile device and/or GPS or other location data for the mobile device indicate that the mobile device is at a predefined or predetermined work location for the user. For example, in FIG. 5A, the device determines that a current time is 2:40 PM. Based on a determination that this current time is within a time range defined by the user to indicate working hours (e.g., 9:30 AM to 6:30 PM), a work mode is active.

The second mode of the device is active (and, in some embodiments, a second mode interface is displayed) when (608) a second set of time and/or device location criteria, distinct from the first set of time and/or device location criteria, are met. For example, the second mode is a “home mode” and the second set of criteria are satisfied when a current time as determined by the device falls within a predefined or predetermined time range for the home mode and/or GPS or other location data for the mobile device indicate that the mobile device is at a predefined or predetermined location for the home mode.

While the first set of time and/or device location criteria are met (610) (e.g., and the second set of time and/or device location criteria are not met), and while the first mode of the device is active (e.g., and information associated with the mode, such as a mode interface, is displayed), the device detects (612) a first input that overrides the first mode of the device.
In some embodiments, the first input is, e.g., a voice command or a gesture (e.g., a contact or other input received while a focus selector is at a location corresponding to a mode affordance, such as a “tap” gesture or mouse click, and/or a user input that moves a focus selector across a display, such as a movement of a contact across a touch-sensitive surface in a “swipe” gesture).

In response to detecting the first input, the device activates (614) the second mode of the device. In some embodiments, the device displays information associated with a second mode of the device (e.g., a second mode interface) when the device activates the second mode of the device. For example, in FIG. 5H, a work mode of the device is active and the device is displaying work mode interface 5014. The device detects a user input while focus selector 5052 is at a location corresponding to home mode affordance 5026. In response to detecting the user input, the device activates a home mode. In some embodiments, activating the home mode includes displaying home mode interface 5054, as indicated in FIG. 5I.

After responding to the first input, while the second mode of the device is active (and, in some embodiments, while information associated with the second mode is displayed), the device detects (616) a second input, such as an input that corresponds to a request to perform an operation in the second mode of the device. In some embodiments, an input that corresponds to a request to perform an operation includes, e.g., a user input for displaying a mode settings interface 5080, a user input for modifying mode settings (e.g., modifications made using mode settings interface 5080), and/or a user input for applying modifications (e.g., user input received at a location corresponding to affordance 5146 for applying changes made using mode settings interface 5080). In some embodiments, an input that corresponds to a request to perform an operation is, e.g., a user input to move an object from a first mode to a second mode (e.g., as described with regard to FIGS. 5Q-5U).

In response to detecting the second input, the device performs (618) an operation in the second mode of the device. For example, the device applies a change made using a mode settings interface 5080 (e.g., when user input is received at a location corresponding to affordance 5146, for applying changes made using mode settings interface 5080 to add a “Work Reminders” group to a reminder notifications area 5076 of home mode interface 5054, as described with regard to FIGS. 5I-5J). In another example, the device adds a message notification 5020 to the home mode interface 5054 (e.g., as described with regard to FIGS. 5Q-5U).
Example operations in the second mode include:

- adding a notification to the second mode (e.g., the second mode interface) such that the notification will be displayed when the second mode is active;
- adding an affordance (e.g., an application initiation icon) to the second mode (e.g., to a second mode interface) such that the affordance will be displayed and/or activated when the second mode is active;
- displaying a second mode interface (e.g., home mode interface 5054);
- displaying a mode settings interface (e.g., home mode settings interface 5080);
- displaying and/or activating a control of the second mode (e.g., a home automation system control, such as 5064, 5066 shown in FIG. 5I);
- displaying and/or playing back media or other content of the second mode (e.g., a video, a song, an image, a news article, a website link, and/or a wallet pass);
- displaying a subset of contacts of the second mode (e.g., a subset that excludes certain contacts and/or a subset of contacts that are only available while the device is in a particular mode or modes);
- restricting outbound communications and/or indications of inbound communications to communications with a particular subset of contacts (e.g., using affordances in a mode settings interface such as 5116, 5118 shown in FIG. 5J);
- displaying a subset of calendar appointment information of the second mode (e.g., meeting, attendees);
- displaying navigation information (e.g., a map and/or directions);
- displaying fuel station information (e.g., locations of fuel stations, prices of fuel at nearby fuel stations);
- displaying a location or pin (e.g., location of a parked car);
- enabling, disabling, and/or modifying voice assistant settings (e.g., a voice assistant such as Siri has different users/permissions in different modes);
- displaying indications of applications used while the mode was previously active; and/or
- applying a setting, such as a privacy setting (e.g., a privacy setting applicable to device location and/or ETA sharing), a permission (e.g., while driving mode is active, no notifications are permitted), an access restriction (e.g., restrict access to particular applications and/or content when guest and/or family mode is active), and/or a network setting (e.g., a network type is designated for a particular mode; for example, Wi-Fi is used for a call when home mode is active and LTE is used for a call when on-the-go mode is active).

After performing the operation in the second mode of the device, the device returns (620) to (and, in some embodiments, resumes display of) the first mode of the device. For example, after adding a “Work Reminders” group to a reminder notification area 5076 of home mode interface 5054, the device displays work mode interface 5014. In some embodiments, the device returns to the first mode automatically, such as after a predetermined period of time since detecting the second input and/or at a time when the device enters a sleep state. In some embodiments, the device returns to the first mode in response to the second input (e.g., a second input includes input received at affordance 5146 for applying changes made using mode settings interface 5080). For example, in some embodiments, in response to receiving input at affordance 5146 for applying changes made using mode settings interface 5080, the device displays work mode interface 5014 (e.g., rather than displaying home mode interface 5054 as modified by the second input). In some embodiments, the device returns to the first mode in response to a third input (e.g., a tap gesture received at work mode affordance 5004).
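A sketch of the automatic return to the first mode described above, assuming a timer-based implementation with a placeholder timeout:

import Foundation

// Hedged sketch: after an override, the device returns to the first mode
// automatically once a predetermined period has elapsed since the second
// input. The 60-second timeout and the callback shape are assumptions.
final class ModeOverrideController {
    private var returnTimer: Timer?
    var returnToFirstMode: () -> Void = {}

    // Call whenever a second input is detected in the overriding mode.
    func noteSecondInput(timeout: TimeInterval = 60) {
        returnTimer?.invalidate()
        returnTimer = Timer.scheduledTimer(withTimeInterval: timeout,
                                           repeats: false) { [weak self] _ in
            self?.returnToFirstMode()
        }
    }

    // Call when the device enters a sleep state, which also ends the override.
    func noteSleep() {
        returnTimer?.invalidate()
        returnToFirstMode()
    }
}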
In some embodiments, performing the operation in the second mode of the device (e.g., while the first set of time and/or device location criteria are met and the second set of time and/or device location criteria are not met) includes adding (622) an affordance to the second mode of the device. An affordance is, e.g., an application initiation icon (e.g., an application icon 5068, 5070, 5072, 5074 as shown in FIG. 5I) that, when activated, initiates an application; a control affordance (e.g., a home automation control item 5064, 5066 as shown in FIG. 5I) that, when activated, initiates a function (e.g., of an accessory device); a media affordance (e.g., content episode/chapter information 5060) that, when activated, initiates playback of content; and/or an information affordance (e.g., message notification 5020 as shown in FIG. 5U or reminder notification 5022 in reminder notifications area 5076 as shown in FIG. 5M) that displays information for the respective mode. In some embodiments, after returning to the first mode of the device, the device determines (624) that the second set of time and/or device location criteria are met and, in response to determining that the second set of time and/or device location criteria are met, the device activates and displays the second mode of the device and the device displays the affordance in the second mode of the device. For example, the device receives input to override a work mode and activate a home mode, and while the home mode is active, a work mode reminder 5022 is added to home mode interface 5054, as described with regard to FIGS. 5I-5M. After the work mode reminder 5022 is added to home mode interface 5054, the device returns to displaying work mode interface 5014, e.g., in response to an input received at work affordance 5004. Subsequently, in response to determining that time and/or device location criteria for the home mode are met, the device activates its home mode and displays home mode interface 5054. The work mode reminder 5022 of the work mode reminders group is displayed in reminder notifications area 5076 of home mode interface 5054 (because it was previously added to home mode interface 5054). In some embodiments, overriding the first mode includes (626) ceasing to display the first mode of the device and displaying the second mode of the device. For example, overriding the first mode, as described with regard to FIGS. 5H-5I, includes ceasing to display work mode interface 5014 and displaying home mode interface 5054. In some embodiments, overriding the first mode includes displaying affordances of the second mode without activating settings of the second mode. For example, the device user is able to see how the second mode interface appears, but adjustments to settings are not applied during the override. For example, when overriding a work mode to activate a home mode, affordances of the home mode (e.g., affordances 5056, 5058, 5060, 5062, 5064, 5066, 5068, 5070, 5072, 5074, 5076 of home mode interface 5054) are displayed, but settings (e.g., network settings, such as 5082, 5084, 5086 of FIG. 5J; permission settings; access settings; and/or privacy settings) are not modified, even if the home mode settings differ from the work mode settings. In some embodiments, second mode settings are implemented after the second mode has been displayed for a predetermined period of time (e.g., a predetermined period of time after the first input is received, after the second mode interface is displayed, and/or after the second input is received).
For example, the user provides input to override the work mode with the home mode. After a predetermined period of time (e.g., during which the user does not provide subsequent input to return to the work mode), the device applies the home mode settings. In some embodiments, a respective mode of the plurality of modes includes (628) one or more affordances. The one or more affordances includes an application affordance (e.g., application initiation icons 5068, 5070, 5072, 5074, as shown in FIG. 5I) that, when activated, initiates an application. The one or more affordances includes a media affordance (e.g., content episode/chapter information 5060) that, when activated, initiates playback of content. For example, playback is initiated from a starting point of the content or from a point that is at or before (e.g., 5 seconds before) a position at which playback of the content was stopped (e.g., while the device was previously in the respective mode). For example, content episode/chapter information 5060 indicates that 21 of 29 minutes of Episode 1 was watched (e.g., when the home mode was previously active). In response to a user input received to select content episode/chapter information 5060 (e.g., a user input received while a focus selector is at a location corresponding to content episode/chapter information 5060), the device initiates playback of Episode 1 from a point in time at which Episode 1 was stopped (or from a point in time slightly before the point in time at which Episode 1 was stopped) during a previous viewing. For example, playback begins at the 21-minute mark (or 5 seconds before the 21-minute mark) in Episode 1. The one or more affordances includes a control affordance (e.g., a home automation control item 5064, 5066 as shown in FIG. 5I) that, when activated, initiates a function; and/or an information affordance that displays information (e.g., message notification 5020 as shown in FIG. 5U or reminder notification 5022 in reminder notifications area 5076 as shown in FIG. 5M). In some embodiments, an interface (e.g., mode settings interface 5080) is provided for manipulating the inclusion, exclusion, and/or layout of affordances for a mode. In some embodiments, performing the operation in the second mode of the device includes (630) modifying a setting for a parameter (e.g., a permission setting for an application and/or an access setting for a communication type, a contact, or a user; a privacy setting; or a network setting) in the second mode of the device. For example, mode settings interface 5080 as shown in FIG. 5J is used to modify a network setting, e.g., via user input received at affordance 5092 (for adding a Wi-Fi network to which the device can connect while the mode is active), affordance 5090 (for removing a Wi-Fi network to which the device can connect while the mode is active), affordance 5096 (to add an accessory device that can connect with the device via Bluetooth), affordance 5094 (to remove an accessory device that can connect with the device via Bluetooth), affordance 5082 (to enable or disable Wi-Fi connectivity while the mode is active), affordance 5084 (to enable or disable cellular connectivity while the mode is active), and affordance 5086 (to enable or disable Bluetooth connectivity while the mode is active).
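A per-mode settings store corresponding to the affordances of mode settings interface 5080 could look like the Python sketch below. The dictionary shape and the `NetworkSettings` name are assumptions made for illustration; only the operations themselves (add/remove a Wi-Fi network, toggle the three radios) come from the description above.

```python
from dataclasses import dataclass, field

@dataclass
class NetworkSettings:
    wifi_enabled: bool = True          # cf. affordance 5082
    cellular_enabled: bool = True      # cf. affordance 5084
    bluetooth_enabled: bool = True     # cf. affordance 5086
    wifi_networks: set = field(default_factory=set)          # cf. 5092/5090
    bluetooth_accessories: set = field(default_factory=set)  # cf. 5096/5094

    def add_wifi_network(self, ssid: str) -> None:
        self.wifi_networks.add(ssid)

    def remove_wifi_network(self, ssid: str) -> None:
        self.wifi_networks.discard(ssid)

# One settings object per mode; edits apply only to the mode being configured.
mode_settings = {"home": NetworkSettings(), "work": NetworkSettings()}
mode_settings["home"].cellular_enabled = False   # toggle affordance 5084 off
mode_settings["home"].add_wifi_network("Hal")    # Wi-Fi network 5090 in FIG. 5J
```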
In some embodiments, after returning to the first mode of the device, the device determines (632) that the second set of time and/or device location criteria are met and, in response to determining that the second set of time and/or location criteria are met, the device initiates the second mode of the device and applies the setting to the parameter of the device. For example, the device receives input to override a work mode and activate a home mode, as described with regard to FIGS. 5I-5J. While the home mode is active, a network setting is changed (e.g., while the home mode is active, mode settings affordance 5028 is used to access mode settings interface 5080, and in mode settings interface 5080, affordance 5084 is switched to an off state to disable cellular connectivity while the home mode is active). After the cellular connectivity setting is changed for the home mode, the device returns to the work mode (e.g., in response to receiving user input at affordance 5146 corresponding to a "Done" option to apply changes). Subsequently, in response to determining that time and/or device location criteria for the home mode are met, the device activates its home mode. When the home mode is activated (e.g., and cellular connectivity is enabled from a previous mode), the device applies the cellular connectivity network setting to disable cellular connectivity. In some embodiments, overriding the first mode includes (634) applying the setting to the parameter of the device in the second mode. For example, network settings of the home mode indicate that Wi-Fi connectivity is enabled for the home mode. When the device overrides a work mode and activates the home mode (e.g., in response to receiving input to activate the home mode), the device enables Wi-Fi connectivity (e.g., if Wi-Fi connectivity is not already enabled). In some embodiments, returning to the first mode of the device includes (636) automatically returning to the first mode of the device after a predetermined period of time. In some embodiments, returning to the first mode of the device includes automatically returning to the first mode of the device after the second mode of the device has been active for the predetermined period of time. In some embodiments, returning to the first mode of the device includes automatically returning to the first mode of the device after a predetermined period of time since a user input (e.g., the first user input or the second user input) was received by the device. In some embodiments, returning to the first mode of the device includes automatically returning to the first mode of the device after a predetermined period of time since a change was made to the second mode. In some embodiments, while the second mode is active (and the first mode is overridden), the device detects (638) a third input (e.g., a voice command and/or a gesture); and, in response to detecting the third input, the device re-activates the first mode of the device. In some embodiments, re-activating the first mode of the device includes displaying a first mode interface (e.g., work mode interface 5014).
For example, while the home mode is active and home mode interface 5054 is displayed (e.g., in response to a first input received to override a work mode and activate a home mode, as described with regard to FIGS. 5H-5I), after a second input is received to add a reminder notification 5022 to home mode interface 5054, a third input is received (e.g., while a focus selector 5150 is at a location corresponding to work mode affordance 5004, as described with regard to FIG. 5M), and the device returns to the work mode in response to the third input. In some embodiments, detecting the first input (that overrides the first mode of the device) includes (640) detecting a gesture at a location corresponding to a mode selection affordance. The gesture is, e.g., a tap gesture, a swipe gesture, a drag gesture, a contact that has a characteristic intensity above a threshold intensity level, and/or a combination of these. For example, a first input includes a tap gesture received while a contact/focus selector 5052 is at a location corresponding to home mode affordance 5026 to override a work mode and activate a home mode, as described with regard to FIGS. 5H-5I. In some embodiments, the gesture is a swipe gesture received at indicia of a mode, such as a text indicator displayed in a user interface (e.g., on a lock screen). For example, a first input includes a swipe gesture that includes movement (e.g., by a contact across a touch-sensitive surface 112) of a focus selector 5008 from a position 5008a to 5008c on a lock screen to override a work mode and activate a drive mode, as described with regard to FIGS. 5E-5G. In some embodiments, the device includes a touch-sensitive surface 112 and one or more sensors for detecting intensities of contacts on the touch-sensitive surface. While a focus selector is at a location of a mode selection affordance, the device detects (642) an increase in a characteristic intensity of the contact on the touch-sensitive surface 112 above a mode display intensity threshold and, in response to detecting the increase in the characteristic intensity of the contact above the mode display intensity threshold, the device displays a plurality of mode affordances that correspond to at least a subset of the plurality of modes of the device (e.g., as a scrollable list). For example, a contact with touch-sensitive surface 112 is detected while a focus selector 5152 is at a location of mode affordance chrome 5024, as indicated in FIG. 5N. In response to detecting an increase in the characteristic intensity of the contact above a mode display intensity threshold (e.g., a light press intensity threshold ITL, as indicated by intensity meter 5156, or another intensity threshold), the device displays mode selection interface 5158, as indicated in FIG. 5O. In some embodiments, detecting the first input (that overrides the first mode of the device) includes receiving a selection of a mode affordance that corresponds to the second mode (e.g., from mode selection interface 5158). For example, detecting the first input includes detecting a contact/focus selector 5160 at a location corresponding to home mode affordance 5026, as indicated in FIGS. 5O-5P. In some embodiments, the mode selection is received when, after the increase in the characteristic intensity above the mode display intensity threshold is detected, a decrease in the characteristic intensity is detected, followed by a subsequent increase above the mode display intensity threshold.
For example, detecting the first input includes detecting a subsequent increase in a characteristic intensity of the contact while focus selector 5160 is at a location corresponding to home mode affordance 5026. In some embodiments, detecting the first input (that overrides the first mode of the device) includes detecting an increase in the characteristic intensity of the contact on the touch-sensitive surface above a mode selection intensity threshold (e.g., a light press intensity threshold ITL, as indicated by intensity meter 5156, or another intensity threshold) when a focus selector is at an indication of the second mode. For example, in response to detecting an increase in the characteristic intensity of the contact on the touch-sensitive surface above a mode selection intensity threshold when a focus selector 5152 is at a location of activity mode affordance 5002, an activity mode of the device is activated (and, in some embodiments, an activity mode interface is displayed). In some embodiments (644), the first mode is a work mode (e.g., as indicated by work mode interface 5014) and the second mode is a home mode (e.g., as indicated by home mode interface 5054). In some embodiments, the first set of time and/or device location criteria are met when (646) at least one of a work time criterion or a work location criterion is satisfied. In some embodiments, a work time criterion is satisfied when a current time (e.g., determined by the device, e.g., according to a device clock) is within work time parameters. Work time parameters are, e.g., a user input time range (e.g., input via a calendar application), a default time range, and/or an automatically determined time range. A work time range is determined automatically, for example, based on a time range during which a user is typically at a work location (e.g., a work location indicated by the user or a location at which the device is located during a typical work time range). A time range during which a user is typically at a work location is determined, for example, based on data stored by the device, such as a location of the device determined (using GPS, Wi-Fi, or other location information) during a typical work time frame, data from calendar entries, and/or data from communications. In some embodiments, a work location criterion is satisfied when a current location of the device (e.g., a location indicated by positioning data such as positioning data determined from GPS, a Wi-Fi network, a location beacon (e.g., iBeacon), and/or Bluetooth pairing) is within work location parameters. For example, a device determines that a location criterion is satisfied in response to determining that a current location of the device is within (and/or within a predetermined distance from) a location such as a neighborhood, a city block, an address, a point location, and/or a set of coordinates (e.g., latitude/longitude). Location parameters are determined, e.g., in accordance with user input (a text and/or map pin entry indicating a location) and/or automatically (e.g., based on data stored by the device, such as a location of the device determined by the device during a typical work time frame, data from communications, and/or data from calendar entries). In some embodiments, a device determines that a work location criterion is satisfied in response to detecting a signal from a Wi-Fi network of the work mode (e.g., as indicated by a mode settings interface).
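The work time/location test of (646) reduces to a disjunction of two predicates. The Python sketch below illustrates it; the work hours, coordinates, radius, and the haversine distance computation are all assumptions chosen for the example, not values from the disclosure.

```python
from datetime import datetime, time as dtime
from math import radians, sin, cos, asin, sqrt

WORK_START, WORK_END = dtime(9, 0), dtime(17, 30)              # assumed work time parameters
WORK_LAT, WORK_LON, WORK_RADIUS_M = 37.3349, -122.0090, 250.0  # assumed work location parameters

def distance_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle (haversine) distance in meters between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def work_criteria_met(now: datetime, lat: float, lon: float) -> bool:
    # Per (646), the set is met when AT LEAST ONE criterion is satisfied.
    time_ok = WORK_START <= now.time() <= WORK_END
    location_ok = distance_m(lat, lon, WORK_LAT, WORK_LON) <= WORK_RADIUS_M
    return time_ok or location_ok
```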
It should be understood that the particular order in which the operations in FIGS. 6A-6C have been described is merely an example and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., method 700) are also applicable in an analogous manner to method 600 described above with respect to FIGS. 6A-6C. For example, the contacts, gestures, affordances, user interface objects, intensity thresholds, and focus selectors described above with reference to method 600 optionally have one or more of the characteristics of the contacts, gestures, affordances, user interface objects, intensity thresholds, and focus selectors described herein with reference to other methods described herein (e.g., method 700). For brevity, these details are not repeated here. The operations described above with reference to FIGS. 6A-6C are, optionally, implemented by components depicted in FIGS. 1A-1B or FIG. 3. For example, detection operations 612 and 616, activation operation 614, performing operation 618, and returning operation 620 are, optionally, implemented by event sorter 170, event recognizer 180, and event handler 190. Event monitor 171 in event sorter 170 detects a contact on touch-sensitive display 112, and event dispatcher module 174 delivers the event information to application 136-1. A respective event recognizer 180 of application 136-1 compares the event information to respective event definitions 186, and determines whether a first contact at a first location on the touch-sensitive surface (or whether rotation of the device) corresponds to a predefined event or sub-event, such as selection of an object on a user interface, or rotation of the device from one orientation to another. When a respective predefined event or sub-event is detected, event recognizer 180 activates an event handler 190 associated with the detection of the event or sub-event. Event handler 190 optionally uses or calls data updater 176 or object updater 177 to update the application internal state 192. In some embodiments, event handler 190 accesses a respective GUI updater 178 to update what is displayed by the application. Similarly, it would be clear to a person having ordinary skill in the art how other processes can be implemented based on the components depicted in FIGS. 1A-1B. FIGS. 7A-7B illustrate a flow diagram of a method 700 of recommending and activating a device mode from among a plurality of displayed mode affordances, in accordance with some embodiments. The method 700 is performed at an electronic device (e.g., device 300, FIG. 3, or portable multifunction device 100, FIG. 1A) with a display. In some embodiments, the display is a touch-screen display and the touch-sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. Some operations in method 700 are, optionally, combined and/or the order of some operations is, optionally, changed. As described below, the method 700 provides an intuitive way to recommend and activate a device mode from among a plurality of displayed mode affordances. The method reduces the number, extent, and/or nature of the inputs from a user when selecting and activating a mode, thereby creating a more efficient human-machine interface.
For battery-operated electronic devices, enabling a user to select and activate a mode faster and more efficiently conserves power and increases the time between battery charges. The device concurrently displays (702) a plurality of mode affordances (e.g., user-activatable icons that correspond to respective modes of the mobile device, such as icons that display thumbnail images of the user interfaces for the respective modes of the mobile device). In some embodiments, the device concurrently displays the plurality of mode affordances on a lock screen. For example, as shown in FIG. 5A, mode affordances 5002, 5004, 5006, corresponding to an activity mode, a work mode, and a drive mode, respectively, are concurrently displayed on a lock screen. In another example, as shown in FIG. 5B, mode affordances 5002, 5004, 5006, 5026, corresponding to an activity mode, a work mode, a drive mode, and a home mode, respectively, are concurrently displayed in a mode affordance chrome 5024. The plurality of mode affordances includes (704) a first mode affordance that, when activated, initiates a first mode of the mobile device. For example, work mode affordance 5004, when activated (e.g., in response to a user input received when a focus selector 5008 is at a location corresponding to work mode affordance 5004), initiates a work mode of the device and, in some embodiments, displays a work mode interface 5014, as described with regard to FIGS. 5A-5B. The plurality of mode affordances includes (706) a second mode affordance that, when activated, initiates a second mode of the mobile device, distinct from the first mode of the mobile device. For example, drive mode affordance 5006, when activated (e.g., in response to a user input received when a focus selector 5008 is at a location corresponding to drive mode affordance 5006), initiates a drive mode of the device and, in some embodiments, displays a drive mode interface 5030, as described with regard to FIGS. 5C-5D. The device is configured to recommend (708) activating a respective mode of the device in accordance with a determination that a respective set of time and/or device location criteria that correspond to the respective mode of the device are met. For example, in FIG. 5A, a work mode is recommended, as indicated by, e.g., the shading and/or bold text of mode affordance 5004. In another example, in FIG. 5B, a work mode is recommended (and/or is currently active), as indicated by, e.g., the shading of mode affordance 5004. In some embodiments, the recommendation is automatically provided. For example, the recommendation is automatically provided, e.g., in response to waking the device from a sleep state, in response to detecting that time and/or location criteria for a respective mode are met, and/or in response to detecting that no time and/or location criteria for the plurality of modes are met (in which case, in some embodiments, a default mode is recommended). In some embodiments, the device determines (710) that a first set of time and/or device location criteria that correspond to the first mode of the device are met. For example, the device determines that a first set of time and/or device location criteria that correspond to the work mode of the device are met, e.g., as described with regard to (646) above. In response to determining that the first set of time and/or device location criteria are met, the device displays (712) a visual indication that corresponds to a recommendation to activate the first mode of the device.
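Steps 708-712 amount to scanning the displayed mode affordances for one whose criteria are met and flagging it visually. A hedged Python sketch follows, with an invented `criteria_met` callback and a simple highlight flag standing in for the shading and bold text of FIG. 5A.

```python
from typing import Callable, Optional

def recommend_mode(mode_affordances: list[str],
                   criteria_met: Callable[[str], bool]) -> Optional[str]:
    """Return the first mode whose time/device-location criteria are met,
    or None (in which case a default mode may be recommended instead)."""
    for mode in mode_affordances:
        if criteria_met(mode):
            return mode
    return None

# Visual indication: highlight the recommended affordance among the others.
affordances = ["activity", "work", "drive"]             # cf. 5002, 5004, 5006
recommended = recommend_mode(affordances, lambda m: m == "work")
display = [(m, m == recommended) for m in affordances]  # (label, highlighted?)
assert display[1] == ("work", True)
```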
In some embodiments, the first mode of the device is recommended for activation by visually distinguishing the first mode affordance from the other mode affordances in the plurality of mode affordances (e.g., by highlighting the first mode affordance, enlarging the first mode affordance, and/or altering displayed text of the first mode affordance). For example, in FIG. 5A, to indicate that the work mode is recommended for activation, work mode affordance 5004 is shaded and the text of work mode affordance 5004 is bolded. In some embodiments, to indicate that the work mode is recommended for activation, work mode interface 5014 is displayed, as indicated in FIG. 5B. While the first set of time and/or device location criteria are met and the visual indication that corresponds to the recommendation to activate the first mode of the device is displayed, the device detects (714) activation of a respective mode affordance in the plurality of concurrently displayed mode affordances. For example, the device detects activation of a work mode (e.g., the device detects a user input received when a focus selector 5008 is at a location corresponding to work mode affordance 5004), as described with regard to FIGS. 5A-5B. In response to detecting activation of the respective mode affordance in the plurality of concurrently displayed mode affordances, the device (716) ceases to display the plurality of mode affordances and activates a mode of the device that corresponds to the respective mode affordance. For example, in response to detecting a user input received when a focus selector 5008 is at a location corresponding to work mode affordance 5004, the device ceases to display the plurality of mode affordances 5002, 5004, and 5006, as shown in FIG. 5A, and the device activates a work mode (and, in some embodiments, the device displays a work mode interface, as shown in FIG. 5B). In some embodiments, displaying the visual indication that corresponds to the recommendation to activate the first mode of the device, in response to determining that the first set of time and/or device location criteria are met, occurs (718) while maintaining concurrent display of the plurality of mode affordances. For example, in FIG. 5A, work mode affordance 5004 is shaded and the text of work mode affordance 5004 is bolded to visually distinguish work mode affordance 5004 from activity mode affordance 5002 and drive mode affordance 5006, which are concurrently displayed with work mode affordance 5004. In some embodiments, in accordance with a determination that the respective mode is a vehicle operation mode, a displayed area of at least one affordance is increased (720) from a default area to a vehicle operation mode area that is larger than the default area. For example, in FIG. 5G, reminder notification 5046 is shown with a first (e.g., default) area, and in FIG. 5D, reminder notification 5046 is shown with a second area that is larger than the first area. In another example, in FIG. 5G, reminder icon 5042 is shown with a first (e.g., default) area, and in FIG. 5D, reminder icon 5042 is shown with a second area that is larger than the first area. In some embodiments, in accordance with a determination that the respective mode is a vehicle operation mode, at least a part of displayed text is increased from a default text size to a vehicle operation mode text size that is larger than the default text size.
For example, in FIG. 5G, reminder notification 5046 is shown with a first (e.g., default) text size, and in FIG. 5D, reminder notification 5046 is shown with a second text size that is larger than the first text size. In some embodiments, in accordance with a determination that the respective mode is a vehicle operation mode, at least a part of displayed text is shown in conformance with a minimum text size. In some embodiments, in accordance with a determination that the respective mode is a vehicle operation mode, one or more user interface objects are shown in conformance with a minimum user interface object size. In some embodiments, detecting activation of the respective mode affordance includes (722) detecting a gesture at a location of the respective mode affordance. The gesture is, e.g., a tap gesture, a swipe gesture, a drag gesture, a contact that has a characteristic intensity above a threshold intensity level, and/or a combination of these. For example, detecting activation of work mode affordance 5004 includes detecting a tap gesture received while a focus selector 5008 is at a location corresponding to work mode affordance 5004, as described with regard to FIGS. 5A-5B. In some embodiments, the gesture is a swipe gesture received at indicia of a mode, such as a text indicator displayed in a user interface (e.g., on a lock screen). For example, detecting activation of drive mode affordance 5006 includes detecting a swipe gesture that includes movement (e.g., by a contact across a touch-sensitive surface 112) of a focus selector 5008 from a position 5008a to 5008c, as described with regard to FIGS. 5E-5G. In some embodiments, the device includes (724) a microphone (e.g., microphone 113), and detecting activation of the respective mode affordance includes detecting a voice command that indicates the respective mode affordance. For example, a drive mode of the device is activated in response to a detected voice command (e.g., a voice command including a particular word or phrase, such as the word "drive" or the phrase "drive mode"). In some embodiments, determining whether the first set of time and/or device location criteria are met includes (726) determining a current time at the device (e.g., according to a device clock) and determining whether the current time is within time parameters for the first mode. In accordance with a determination that the current time is within the time parameters for the first mode (e.g., the current time is within the block of time), the device determines that the first set of time and/or device location criteria are met. Time parameters for the first mode are, e.g., a user input block of time (e.g., input via a calendar application), a default block of time, and/or an automatically determined block of time. A block of time for a mode is determined automatically, for example, based on a time range during which a user is typically at a location associated with the mode (e.g., a location indicated by the user). In some embodiments, determining whether the first set of time and/or device location criteria are met includes (728) determining a current location of the device (e.g., a location indicated by positioning data such as positioning data determined from data acquired by a GPS module and/or Wi-Fi component of the device), and determining whether the current location is within location parameters for the first mode.
Determining whether the current location is within location parameters for the first mode includes, e.g., determining whether the current location is within (or within a predetermined distance from) a designated location such as a neighborhood, a city block, an address, a point location (e.g., designated with a pin on a map), and/or a set of coordinates (e.g., latitude/longitude). In some embodiments, the location is determined automatically or in accordance with user input. In some embodiments, in accordance with a determination that the current location is within the location parameters for the first mode, the device determines that the first set of time and/or device location criteria are met. For example, the device detects a signal produced by Wi-Fi network 5090 ("Hal") that is associated with a home mode, as indicated in FIG. 5J. In response to detecting the signal produced by Wi-Fi network 5090, the device determines that the current location of the device is within location parameters for the home mode of the device. In some embodiments, determining whether the first set of time and/or device location criteria are met includes (730) determining whether device movement meets movement criteria. For example, in some embodiments, an "on-the-go" mode is activated in response to a determination that device movement meets movement criteria. In some embodiments, movement criteria are met when output from the device accelerometer indicates a velocity exceeding a predetermined threshold. In some embodiments, determining whether device movement meets movement criteria includes determining, at a first time, a first location of the device; determining, at a second time, a second location of the device; determining a distance between the first location of the device and the second location of the device; and, based on the distance between the first location and the second location, determining that the first set of time and/or device location criteria are met. In some embodiments, one or more features of a mode are user-configured. In some embodiments, the device automatically adds features to (and/or removes features from) a mode, e.g., based on typical usage. For example, the device automatically adds an application icon to home mode interface 5054 for an application that is frequently used when time and/or location criteria for the home mode are met. In some embodiments, data from modes (e.g., an amount of time during which the mode has been active) is exposed for use by applications of the device and/or an operating system of the device. In some embodiments, information identifying a current mode of the device is available and/or communicated to applications (e.g., third party applications) of the device. In some embodiments, mode interfaces are generated based on user data for the user of the device and/or data about usage of the device. It should be understood that the particular order in which the operations in FIGS. 7A-7B have been described is merely an example and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., method 600) are also applicable in an analogous manner to method 700 described above with respect to FIGS. 7A-7B.
For example, the contacts, gestures, user interface objects, and focus selectors described above with reference to method 700 optionally have one or more of the characteristics of the contacts, gestures, user interface objects, and focus selectors described herein with reference to other methods described herein (e.g., method 600). For brevity, these details are not repeated here. The operations described above with reference to FIGS. 7A-7B are, optionally, implemented by components depicted in FIGS. 1A-1B or FIG. 3. For example, determination operation 710, detection operation 714, and activation operation 716 are, optionally, implemented by event sorter 170, event recognizer 180, and event handler 190. Event monitor 171 in event sorter 170 detects a contact on touch-sensitive display 112, and event dispatcher module 174 delivers the event information to application 136-1. A respective event recognizer 180 of application 136-1 compares the event information to respective event definitions 186, and determines whether a first contact at a first location on the touch-sensitive surface (or whether rotation of the device) corresponds to a predefined event or sub-event, such as selection of an object on a user interface, or rotation of the device from one orientation to another. When a respective predefined event or sub-event is detected, event recognizer 180 activates an event handler 190 associated with the detection of the event or sub-event. Event handler 190 optionally uses or calls data updater 176 or object updater 177 to update the application internal state 192. In some embodiments, event handler 190 accesses a respective GUI updater 178 to update what is displayed by the application. Similarly, it would be clear to a person having ordinary skill in the art how other processes can be implemented based on the components depicted in FIGS. 1A-1B. In accordance with some embodiments, FIG. 8 shows a functional block diagram of an electronic device 800 configured in accordance with the principles of the various described embodiments. The functional blocks of the device are, optionally, implemented by hardware, software, firmware, or a combination thereof to carry out the principles of the various described embodiments. It is understood by persons of skill in the art that the functional blocks described in FIG. 8 are, optionally, combined or separated into sub-blocks to implement the principles of the various described embodiments. Therefore, the description herein optionally supports any possible combination or separation or further definition of the functional blocks described herein. As shown in FIG. 8, an electronic device 800 includes a display unit 802 configured to display a user interface and a processing unit 804 coupled with the display unit 802. In some embodiments, the processing unit 804 includes: a display enabling unit 806, a detecting unit 808, an activating unit 810, a performing unit 812, a returning unit 814, an adding unit 816, a determining unit 818, a ceasing unit 820, an initiating unit 822, a modifying unit 824, and an applying unit 826.
In some embodiments, the processing unit 804 is configured to: enable display (e.g., with display enabling unit 806) of a first mode of a plurality of modes of the device, wherein: the plurality of modes of the device includes the first mode and a second mode; the first mode of the device is active when a first set of time and/or device location criteria are met; and the second mode of the device is active when a second set of time and/or device location criteria, distinct from the first set of time and/or device location criteria, are met; and, while the first set of time and/or device location criteria are met: while the first mode of the device is active, detect (e.g., with detecting unit 808) a first input that overrides the first mode of the device; in response to detecting the first input, activate (e.g., with activating unit 810) the second mode of the device; after responding to the first input, while the second mode of the device is active, detect (e.g., with detecting unit 808) a second input; in response to detecting the second input, perform (e.g., with performing unit 812) an operation in the second mode of the device; and, after performing the operation in the second mode of the device, return (e.g., with returning unit 814) to the first mode of the device. In some embodiments, performing the operation in the second mode of the device includes adding (e.g., with adding unit 816) an affordance to the second mode of the device. In some embodiments, the processing unit 804 is configured to: after returning to the first mode of the device, determine (e.g., with determining unit 818) that the second set of time and/or device location criteria are met; and, in response to determining that the second set of time and/or device location criteria are met: activate (e.g., with activating unit 810) and enable display (e.g., with display enabling unit 806) of the second mode of the device; and enable display (e.g., with display enabling unit 806) of the affordance in the second mode of the device. In some embodiments, overriding the first mode includes ceasing (e.g., with ceasing unit 820) to display the first mode of the device and enabling display of the second mode of the device. In some embodiments, a respective mode of the plurality of modes includes one or more affordances, and the one or more affordances includes: an application affordance that, when activated, initiates (e.g., with initiating unit 822) an application; a media affordance that, when activated, initiates (e.g., with initiating unit 822) playback of content; a control affordance that, when activated, initiates (e.g., with initiating unit 822) a function; and/or an information affordance that displays information. In some embodiments, performing the operation in the second mode of the device includes modifying (e.g., with modifying unit 824) a setting for a parameter in the second mode of the device. In some embodiments, the processing unit 804 is configured to: after returning to the first mode of the device, determine (e.g., with determining unit 818) that the second set of time and/or device location criteria are met; and, in response to determining that the second set of time and/or location criteria are met: initiate (e.g., with initiating unit 822) the second mode of the device; and apply (e.g., with applying unit 826) the setting to the parameter of the device. In some embodiments, overriding the first mode includes applying (e.g., with applying unit 826) the setting to the parameter of the device in the second mode.
In some embodiments, returning to the first mode of the device includes automatically returning (e.g., with returning unit 814) to the first mode of the device after a predetermined period of time. In some embodiments, the processing unit 804 is configured to: while the second mode is active, detect (e.g., with detecting unit 808) a third input; and, in response to detecting the third input, re-activate (e.g., with activating unit 810) the first mode of the device. In some embodiments, detecting the first input includes detecting a gesture at a mode selection affordance. In some embodiments, the electronic device 800 includes a touch-sensitive surface unit 828 (e.g., coupled with the processing unit 804) configured to receive contacts and one or more sensor units 830 (e.g., coupled with the processing unit 804) for detecting intensity of contacts on the touch-sensitive surface unit 828. The processing unit 804 is configured to: while a focus selector is at a location of a mode selection affordance, detect (e.g., with detecting unit 808) an increase in a characteristic intensity of the contact on the touch-sensitive surface unit 828 above a mode display intensity threshold; and, in response to detecting the increase in the characteristic intensity of the contact above the mode display intensity threshold, enable display (e.g., with display enabling unit 806) of a plurality of mode affordances that correspond to at least a subset of the plurality of modes of the device; wherein detecting the first input includes receiving a selection of a mode affordance that corresponds to the second mode. In some embodiments, the first mode is a work mode and the second mode is a home mode. In some embodiments, the first set of time and/or device location criteria are met when at least one of a work time criterion or a work location criterion is satisfied, wherein: a work time criterion is satisfied when a current time is within work time parameters; and a work location criterion is satisfied when a current location of the device is within work location parameters. In accordance with some embodiments, FIG. 9 shows a functional block diagram of an electronic device 900 configured in accordance with the principles of the various described embodiments. The functional blocks of the device are, optionally, implemented by hardware, software, firmware, or a combination thereof to carry out the principles of the various described embodiments. It is understood by persons of skill in the art that the functional blocks described in FIG. 9 are, optionally, combined or separated into sub-blocks to implement the principles of the various described embodiments. Therefore, the description herein optionally supports any possible combination or separation or further definition of the functional blocks described herein. As shown in FIG. 9, an electronic device 900 includes a display unit 902 configured to display a user interface and a processing unit 904 coupled with the display unit 902. In some embodiments, the processing unit 904 includes: a display enabling unit 906, a determining unit 908, a detecting unit 910, a ceasing unit 912, and an activating unit 914.
The processing unit 904 is configured to: concurrently enable display (e.g., with display enabling unit 906) of a plurality of mode affordances, wherein: the plurality of mode affordances includes a first mode affordance that, when activated, initiates a first mode of the mobile device; the plurality of mode affordances includes a second mode affordance that, when activated, initiates a second mode of the mobile device, distinct from the first mode of the mobile device; and the mobile device is configured to recommend activating a respective mode of the device in accordance with a determination that a respective set of time and/or device location criteria that correspond to the respective mode of the device are met; determine (e.g., with determining unit 908) that a first set of time and/or device location criteria that correspond to the first mode of the device are met; in response to determining that the first set of time and/or device location criteria are met, enable display (e.g., with display enabling unit 906) of a visual indication that corresponds to a recommendation to activate the first mode of the device; while the first set of time and/or device location criteria are met and the visual indication that corresponds to the recommendation to activate the first mode of the device is displayed, detect (e.g., with detecting unit 910) activation of a respective mode affordance in the plurality of concurrently displayed mode affordances; and, in response to detecting activation of the respective mode affordance in the plurality of concurrently displayed mode affordances: cease (e.g., with ceasing unit 912) to display the plurality of mode affordances; and activate (e.g., with activating unit 914) a mode of the device that corresponds to the respective mode affordance. In some embodiments, enabling display of the visual indication that corresponds to the recommendation to activate the first mode of the device, in response to determining that the first set of time and/or device location criteria are met, occurs while maintaining concurrent display of the plurality of mode affordances. In some embodiments, in accordance with a determination that the respective mode is a vehicle operation mode, a displayed area of at least one affordance is increased from a default area to a vehicle operation mode area that is larger than the default area. In some embodiments, detecting activation of the respective mode affordance includes detecting a gesture at a location of the respective mode affordance. In some embodiments, the electronic device 900 includes a microphone unit 916 (e.g., coupled with processing unit 904), and detecting activation of the respective mode affordance includes detecting a voice command that indicates the respective mode affordance. In some embodiments, determining whether the first set of time and/or device location criteria are met includes: determining a current time at the device, and determining whether the current time is within time parameters for the first mode; and, in accordance with a determination that the current time is within the time parameters for the first mode, determining (e.g., with determining unit 908) that the first set of time and/or device location criteria are met.
In some embodiments, determining whether the first set of time and/or device location criteria are met includes: determining a current location of the device, and determining whether the current location is within location parameters for the first mode; and, in accordance with a determination that the current location is within the location parameters for the first mode, determining (e.g., with determining unit 908) that the first set of time and/or device location criteria are met. In some embodiments, determining whether the first set of time and/or device location criteria are met includes determining whether device movement meets movement criteria. The operations in the information processing methods described above are, optionally, implemented by running one or more functional modules in an information processing apparatus such as general purpose processors (e.g., as described above with respect to FIGS. 1A and 3) or application specific chips. The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best use the invention and various described embodiments with various modifications as are suited to the particular use contemplated. | 201,989 |
11861160 | DETAILED DESCRIPTION The embodiments of the present application provide a writing interaction method applied to the interaction between a smart interactive display device and a smart pen. Please refer to FIG. 1, which is a flowchart of a first embodiment of a writing interaction method according to the present application. The writing interaction method includes step 100. Step 100: detecting a first touch event of one or more smart pens and acquiring one or more corresponding identifiers of the one or more smart pens. In this step, the smart pens may generate a touch event on a screen of the smart interactive display device, the touch event includes the above first touch event, and the type of the touch event can be capacitive touch, infrared touch, and the like. At the same time, a communication connection is established between the smart pens and the smart interactive display device, and the connection between the smart pens and the smart interactive display device may transmit the identifiers of the smart pens. For example, an active capacitive pen may establish a communication connection with a touch panel card of a capacitive touch screen through the Microsoft Pen Protocol (MPP) of Microsoft or the proprietary Active Electrostatic Solution (AES) of Wacom, and transmit its own identifier. It can be understood that the smart pen may also transmit its own identifier through electromagnetic induction input technology or infrared communication technology. The touch events of different smart pens may occur simultaneously or sequentially. From the perspective of application, although there are multiple people operating the smart pens at the same time, the start time of the writing of each person is generally not exactly the same. Technically speaking, it is easier to match touch events that occur one after the other with the corresponding identifiers of the smart pens. As shown in FIG. 9, the initial stroke of the character "" is a horizontal line, and the initial stroke of the character "" is a dot. The times at which the initial positions of the two strokes are generated can be simultaneous or sequential. The writing interaction method further includes step 102: generating handwriting according to the first touch event, and determining whether the generated handwriting corresponds to multiple smart pens of the smart pens according to the identifiers of the smart pens. In this step, in the example shown in FIG. 1, although step 102A of generating handwriting is arranged before step 102B of determining the number of the smart pens, determining the number of the smart pens is actually determining which smart pen(s) the occurred first touch event belongs to and then determining the number of the smart pens according to the identifiers of the smart pens. Therefore, apparently, the handwriting is associated with the identifiers of the smart pens, and substantially, the first touch event is associated with the identifiers of the smart pens. Therefore, the sequence between step 102A and step 102B may not be limited. In addition, step 102B can also be performed after the first touch event occurs. For example, in FIG. 9, the determination of step 102B is completed after the writing of the words "" and "" is completed.
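Steps 100-102 pair each touch event with the transmitting pen's identifier before any region logic runs. The Python sketch below is illustrative only; the event tuple layout and the identifier strings are assumptions.

```python
from collections import defaultdict

# Each detected touch event arrives with the identifier its pen transmitted
# over the side channel (e.g., MPP or AES), here as (pen_id, x, y) tuples.
events = [("010", 120, 80), ("011", 900, 85), ("010", 125, 82)]

strokes_by_pen = defaultdict(list)   # handwriting associated with each identifier
for pen_id, x, y in events:
    strokes_by_pen[pen_id].append((x, y))

# Step 102B: the number of distinct identifiers tells us whether the
# handwriting corresponds to multiple smart pens.
multiple_pens = len(strokes_by_pen) > 1
assert multiple_pens  # two identifiers (010 and 011) were seen
```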
The writing interaction method further includes step 104: under a condition that the handwriting corresponds to the multiple smart pens, generating multiple non-overlapping writing regions, wherein the multiple writing regions are in one-to-one correspondence with the multiple smart pens, each writing region covers handwriting of a corresponding smart pen, and each writing region only responds to a touch event of the corresponding smart pen for generating handwriting. In this step, under a condition that multiple writing regions are generated, it is not necessary to completely fill the entire display region of the smart interactive display device. Optionally, the size and position of a writing region may be determined according to the customary size and the position where the initial first touch event occurs. For example, the two writing regions generated in FIG. 10 are set adjacent to the upper side of the display region, and according to the default left-aligned typesetting mode, the left border of the writing region is placed close to the position where the initial first touch event occurs, that is, the position where the initial handwriting is generated. The left and right writing regions in FIG. 10 do not overlap with each other, but in order to make full use of the display region, the two writing regions can share a common border. It is understandable that the writing region can be determined in the form of a graphic window, or can be defined by only colored lines. After multiple writing regions are generated, all writing regions are independent of each other. Suppose that the left writing region in FIG. 10 belongs to the smart pen marked 010, and the right writing region belongs to the smart pen marked 011. Then, under a condition that the smart pen marked 011 is writing in the left writing region, its input will not be rendered as displayed handwriting. It is understandable that if the handwriting corresponds to only one smart pen, the default writing region, such as the display region of the entire display screen, is used; or only a suitably sized writing region is generated for that smart pen. According to the writing interaction method, the smart interactive display device, and the writing interaction system provided by the embodiments of the present application, the generation source of the first touch event (that is, the smart pens) may be identified. Under a condition that it is detected that the written handwriting comes from multiple smart pens, a corresponding number of writing regions may be automatically generated. Each writing region covers handwriting of a corresponding smart pen, and each writing region only responds to a touch event of the corresponding smart pen for generating handwriting. Therefore, it is ensured that when multiple people use multiple pens to write, the generated multiple writing regions are independent of each other, the content generated by different smart pens will not be cluttered, and the writing content will be clear, thereby improving the user experience of multiple people operating smart pens for writing. In addition, the writing region is generated by matching the initial handwriting of each smart pen; in other words, the writing region is generated with the initial handwriting of each smart pen and is positioned according to the initial handwriting. Therefore, a more flexible region layout can be realized, thereby adapting to different application scenarios.
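Step 104's rule, that each region renders handwriting only for its own pen, can be sketched as a lookup keyed by pen identifier. The rectangle layout below mimics the two side-by-side regions of FIG. 10 sharing a common border; the coordinates and sizes are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Region:
    pen_id: str
    x: float
    y: float
    w: float
    h: float

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

class WritingSurface:
    """Routes touch events so a region only renders its own pen's handwriting."""

    def __init__(self):
        self.regions: dict[str, Region] = {}

    def should_render(self, pen_id: str, x: float, y: float) -> bool:
        region = self.regions.get(pen_id)
        return region is not None and region.contains(x, y)

surface = WritingSurface()
surface.regions["010"] = Region("010", 0, 0, 960, 540)    # left region
surface.regions["011"] = Region("011", 960, 0, 960, 540)  # right region, shared border
assert surface.should_render("010", 100, 50)       # pen 010 writing in its region
assert not surface.should_render("011", 100, 50)   # pen 011 ignored in 010's region
```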
Further, please refer to FIG. 2. FIG. 2 is a flowchart of a second embodiment of a writing interaction method according to the present application. After the step 104, the writing interaction method further includes step 206. Step 206: detecting a second touch event of a smart pen and acquiring a corresponding identifier of the smart pen. This step is used to detect the touch event of the smart pen again after the latest generation of the writing regions, that is, to detect the second touch event of the smart pen and match the identifier of the smart pen corresponding to the second touch event. By associating each touch event of the smart pen with the corresponding identifier of the smart pen, a data basis can be provided for subsequent smart control. The writing interaction method further includes step 208: determining whether the second touch event is generated by a newly added smart pen according to the identifier of the smart pen. In this step, although the second touch event has been detected in the previous step, it is not necessary to display corresponding handwriting immediately in response to the second touch event. Since each touch event is associated with the identifier of the smart pen, by querying the data of the identifiers of the smart pens, whether an identifier of a new smart pen has been added can be determined, and thus whether a new smart pen or an operator of a new smart pen has been added can be indirectly determined. The writing interaction method further includes step 210: under a condition that the second touch event is generated by the newly added smart pen, determining whether the second touch event is generated in the generated writing regions according to a location where the second touch event is generated. In this step, since the identifier of the smart pen corresponding to the second touch event is the identifier of the newly added smart pen, there is no existing writing region matching the second touch event. In order to determine an appropriate response, it is necessary to firstly know whether the location where the second touch event is generated is located in the remaining region outside the writing regions in the display region or in any of the existing writing regions. The writing interaction method further includes step 212: under a condition that the second touch event is generated in the generated writing regions, generating no handwriting according to the second touch event. In this step, the principle is that the existing writing regions are prioritized and their sizes are not adjusted. Under a condition that the second touch event occurs in the existing writing regions, in order to ensure the independence of the existing writing regions relative to the newly added smart pen, the second touch event of the newly added smart pen is not responded to and no handwriting is generated. For example, referring to FIG. 10, under a condition that a third smart pen with an identification number of 001 is added, no handwriting is generated if the smart pen with the identification number of 001 writes in any of the left and right writing regions in FIG. 10.
The writing interaction method further includes step 214: under a condition that the second touch event is not generated in the generated writing regions, generating handwriting according to the second touch event, and generating a new writing region corresponding to the newly added smart pen, wherein the new writing region does not overlap with the existing multiple writing regions, the new writing region covers the handwriting corresponding to the newly added smart pen, and the new writing region only responds to a touch event of the newly added smart pen for generating handwriting.

In this step, the principle is that the existing writing regions are prioritized and their sizes are not adjusted. Under a condition that the second touch event is generated in the remaining region of the display region, in order to fully respond to the second touch event of the newly added smart pen and generate corresponding handwriting, a new writing region is generated for the newly added smart pen. For example, referring to FIG. 10, under a condition that a third smart pen with an identification number of 001 is newly added, a new writing region is generated if the second touch event of that smart pen occurs near the lower edge of the display region and outside the existing left and right writing regions.

In this embodiment, by establishing the principle of giving priority to the existing writing regions, the high independence of the existing writing regions can be ensured. Further, within the remaining region of the display region, a new writing region can be generated for the second touch event of the newly added smart pen, thereby ensuring a certain flexibility.

Further, please refer to FIG. 3. FIG. 3 is a flowchart of a third embodiment of a writing interaction method according to the present application. After step 104, the writing interaction method further includes steps 306, 308 and 310.

Step 306: detecting a third touch event of a smart pen and acquiring a corresponding identifier of the smart pen.

Step 308: determining whether the third touch event is generated by a newly added smart pen according to the identifier of the smart pen.

Step 310: under a condition that the third touch event is generated by the newly added smart pen, determining whether the third touch event is generated in the generated writing regions according to a location where the third touch event is generated. In this step, on the basis that the response strategy for the newly added smart pen has been determined, in order to execute the corresponding strategy smoothly, it is first necessary to know whether the location where the third touch event is generated lies in the remaining region of the display outside the writing regions, or in one of the existing writing regions.

The writing interaction method further includes step 312: under a condition that the third touch event is generated in the generated writing regions, regenerating multiple non-overlapping writing regions, wherein the regenerated multiple writing regions are in one-to-one correspondence with the multiple smart pens and the newly added smart pen, each writing region covers a touch track of a corresponding smart pen, and each writing region only responds to a touch event of the corresponding smart pen for generating handwriting. In this step, since it is determined that the newly added smart pen has priority over the existing writing regions, each time a new smart pen is added, the multiple writing regions need to be regenerated.
For example, under a condition that the third touch event of the newly added smart pen falls in the left or right writing region, the multiple writing regions need to be regenerated. For example, under a condition that the newly added smart pen writes a new word between the two words already on the screen, the regenerated writing regions can include three writing regions (left, middle and right), with the newly written word in the middle writing region.

The writing interaction method further includes step 314: under a condition that the third touch event is not generated in the generated writing regions, generating a new writing region corresponding to the newly added smart pen, wherein the existing multiple writing regions remain unchanged, the new writing region does not overlap with the existing multiple writing regions, the new writing region covers the handwriting corresponding to the newly added smart pen, and the new writing region only responds to a touch event of the newly added smart pen for generating handwriting.

In this step, under a condition that the third touch event of the newly added smart pen falls in the remaining region of the display region, a new writing region is generated for the newly added smart pen, and the existing multiple writing regions remain unchanged. In this way, the third touch event of the newly added smart pen can be flexibly responded to, and the independence of the existing writing regions can be maintained. For example, under a condition that the third touch event of the newly added smart pen occurs at the lower right corner of the display region, a new writing region is generated for the newly added smart pen at the lower right corner, while the original left and right writing regions are kept unchanged.

In this embodiment, by establishing the principle of giving priority to the newly added smart pen, it can be ensured that a writing region is always generated for the newly added smart pen. It is understandable that, under this principle, the sizes of the existing writing regions can be adjusted if necessary.

Further, please refer to FIG. 4. FIG. 4 is a flowchart of a fourth embodiment of a writing interaction method according to the present application. The writing interaction method further includes steps 406, 408 and 410.

Step 406: detecting a fourth touch event of a smart pen and acquiring a corresponding identifier of the smart pen.

Step 408: determining whether the fourth touch event is generated by a newly added smart pen according to the identifier of the smart pen.

Step 410: under a condition that the fourth touch event is not generated by the newly added smart pen, determining whether the fourth touch event is generated in the generated writing regions according to a location where the fourth touch event is generated. In this step, it is understandable that, in some application scenarios, whether the pen is an existing smart pen or a newly added smart pen, the fourth touch event generated by it may fall within the generated writing regions or outside them. However, when considering whether to expand the generated writing regions, according to the principle of not overlapping with other writing regions, it is especially necessary to consider touch events that fall in the remaining region of the display region.
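The two response policies just described, namely giving priority to the existing regions (FIG. 2, steps 206 to 214) and giving priority to the newly added pen (FIG. 3, steps 306 to 314), can be contrasted in a short sketch. This continues the WritingRegion/dispatch_touch sketch above; allocate_region and regenerate_regions are hypothetical helpers, given trivial placeholder implementations here, and Display is a made-up stand-in for the screen geometry.

```python
from collections import namedtuple

Display = namedtuple("Display", "width height")

def allocate_region(display, regions, anchor, pen_id, w=400, h=300):
    """Hypothetical allocator: a fixed-size region anchored at the touch
    point, clamped to the display; a real implementation would also have
    to avoid the existing regions."""
    x = min(max(anchor[0], 0), display.width - w)
    y = min(max(anchor[1], 0), display.height - h)
    return WritingRegion(pen_id, x, y, x + w, y + h)

def regenerate_regions(display, pens):
    """Hypothetical repartitioner: equal vertical columns, one per pen."""
    w = display.width / len(pens)
    return [WritingRegion(p, i * w, 0, (i + 1) * w, display.height)
            for i, p in enumerate(pens)]

def handle_second_touch(regions, known_pens, pen_id, x, y, display):
    """FIG. 2 policy: existing regions keep priority and are never resized."""
    if pen_id in known_pens:                          # not a newly added pen
        return dispatch_touch(regions, pen_id, x, y)
    known_pens.add(pen_id)
    if any(r.contains(x, y) for r in regions):
        return None                                   # step 212: ignore, no handwriting
    region = allocate_region(display, regions, (x, y), pen_id)  # step 214
    region.strokes.append((x, y))
    regions.append(region)
    return region

def handle_third_touch(regions, known_pens, pen_id, x, y, display):
    """FIG. 3 policy: the newly added pen has priority over existing regions."""
    if pen_id in known_pens:
        return dispatch_touch(regions, pen_id, x, y)
    known_pens.add(pen_id)
    if any(r.contains(x, y) for r in regions):
        pens = [r.pen_id for r in regions] + [pen_id]
        regions[:] = regenerate_regions(display, pens)      # step 312: repartition
        return next(r for r in regions if r.pen_id == pen_id)
    region = allocate_region(display, regions, (x, y), pen_id)  # step 314
    regions.append(region)
    return region
```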
The writing interaction method further includes step 412: under a condition that the fourth touch event is not generated in the generated writing regions, tentatively expanding the writing region corresponding to the smart pen according to a preset rule, so as to cover a track corresponding to the fourth touch event.

In this step, the writing region is usually a rectangle, but in some embodiments the writing region can also take other shapes, such as an ellipse. The preset rule specifies how the corresponding writing region is expanded so as to cover the track of the fourth touch event. For example, referring to FIG. 10, the right writing region belongs to the smart pen marked 011. Under a condition that the smart pen marked 011 generates a fourth touch event near the lower edge of that writing region, the right writing region can be tentatively expanded downward. Under a condition that the smart pen marked 011 generates a fourth touch event near the left edge of that writing region, the right writing region can be tentatively expanded to the left.

The writing interaction method further includes step 414: determining whether the tentatively expanded writing region overlaps with other writing regions. In this step, under the principle of ensuring the independence of the existing writing regions, not every tentative expansion is suitable, and the expanded region must still not overlap with other writing regions.

The writing interaction method further includes step 416: under a condition that the tentatively expanded writing region overlaps with the other writing regions, canceling the expansion of the writing region corresponding to the smart pen and generating no handwriting according to the fourth touch event. In this step, following the example in step 412, under a condition that the right writing region is extended to the left, the right writing region will overlap the left writing region. Therefore, the expansion of the right writing region should be canceled, and no handwriting is generated according to the fourth touch event.

The writing interaction method further includes step 418: under a condition that the tentatively expanded writing region does not overlap with the other writing regions, expanding the writing region corresponding to the smart pen and generating handwriting according to the fourth touch event. In this step, following the example in step 412, the right writing region is extended downward, for example until its lower edge is flush with the lower edge of the left writing region, thereby actually expanding the writing region corresponding to the smart pen and generating handwriting in response to the fourth touch event.

In this embodiment, it is considered that the initially generated writing region may not be large enough for writing. Therefore, under a condition that the touch event of the corresponding smart pen falls outside the original writing region, the original writing region may be tentatively expanded. Under the premise of not overlapping with other writing regions, the original writing region may then be actually expanded, so as to adapt more flexibly to the needs of the users.
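Continuing the same illustrative sketch, the tentative-expansion check of steps 412 to 418 reduces, for rectangular regions, to growing the bounding box and running an axis-aligned overlap test. A shared border is not treated as an overlap, consistent with the adjacent regions of FIG. 10; the function names are again hypothetical.

```python
import copy

def overlaps(a, b):
    """Axis-aligned rectangle intersection; touching edges do not count."""
    return not (a.right <= b.left or b.right <= a.left or
                a.bottom <= b.top or b.bottom <= a.top)

def try_expand(region, others, x, y):
    """Steps 412-418: tentatively grow `region` to cover (x, y); commit only
    if the grown rectangle still overlaps no other writing region."""
    trial = copy.copy(region)                  # step 412: tentative expansion
    trial.left, trial.right = min(trial.left, x), max(trial.right, x)
    trial.top, trial.bottom = min(trial.top, y), max(trial.bottom, y)
    if any(overlaps(trial, other) for other in others):
        return False                           # step 416: cancel, no handwriting
    region.left, region.top = trial.left, trial.top        # step 418: commit
    region.right, region.bottom = trial.right, trial.bottom
    region.strokes.append((x, y))              # and generate the handwriting
    return True
```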
Further, under a condition that the handwriting corresponds to the multiple smart pens, the generating multiple non-overlapping writing regions specifically includes:
acquiring a location of a leftmost coordinate point of the handwriting corresponding to each smart pen;
determining whether a distance between two adjacent leftmost coordinate points is greater than or equal to a first preset distance; and
under a condition that the distance between the two adjacent leftmost coordinate points is greater than or equal to the first preset distance, generating the multiple non-overlapping writing regions on a screen display region.

Further, under a condition that the distance between the two adjacent leftmost coordinate points is greater than or equal to the first preset distance, the generating the multiple non-overlapping writing regions on a screen display region includes:
under a condition that the distance between the two adjacent leftmost coordinate points is greater than or equal to the first preset distance, except for a leftmost coordinate point close to a left side of the screen display region, forming the writing regions by setting vertical dividing lines with reference to the other leftmost coordinate points, wherein the first preset distance is equal to a product of δ and L, L is a horizontal length of the screen display region, and δ is less than 0.5; or
under a condition that the distance between the two adjacent leftmost coordinate points is greater than or equal to the first preset distance, forming the writing regions by using a vertical centerline of the screen display region as a dividing line, wherein the first preset distance is equal to L/2, where L is the horizontal length of the screen display region.

Further, please refer to FIG. 5. FIG. 5 is a flowchart of a fifth embodiment of a writing interaction method according to the present application. The generating multiple non-overlapping writing regions specifically includes step 541.

Step 541: acquiring a location of a leftmost coordinate point of the handwriting corresponding to each smart pen. In this step, since left alignment is the most commonly used typesetting method, in order to adapt to it, the leftmost coordinate point of the handwriting of each smart pen is obtained as a reference point for generating the writing region. For example, in FIG. 9, the leftmost point of the word written on the left is P2, with coordinates (c, d), and the leftmost point of the word written on the right is P1, with coordinates (a, b).

The generating multiple non-overlapping writing regions further includes step 543: determining whether a distance between two adjacent leftmost coordinate points is greater than or equal to a product of δ and L, wherein L is a horizontal length of the screen display region, and δ is less than 0.5. In this step, following the example in step 541 with δ = ⅓, it is determined whether the distance between P1 and P2 is greater than or equal to L/3, that is, whether the value of a − c is greater than or equal to L/3. This distance can cover the possible horizontal length of the left one of the two adjacent handwritings, thereby ensuring that a vertical dividing line can be set between the two handwritings. The horizontal length of each handwriting is generally bounded by the time required for determining the number of smart pens, and is usually less than L/3.
Of course, the time required for the determination can also be extended somewhat, so as to extend the length occupied by the initial handwriting of each smart pen.

The generating multiple non-overlapping writing regions further includes step 545: under a condition that the distance between the two adjacent leftmost coordinate points is greater than or equal to the product of δ and L, except for a leftmost coordinate point close to a left side of the screen display region, forming the writing regions by setting vertical dividing lines with reference to the other leftmost coordinate points.

In this step, for the leftmost coordinate point close to the left side of the screen display region, the writing region of the corresponding handwriting can be bounded by the left side of the display region, like the left writing region in FIG. 10. For the handwriting of the other smart pens, the corresponding writing regions may be formed by setting vertical dividing lines with reference to their leftmost coordinate points in turn. It is understandable that the most basic principle of this referencing is that no handwriting may be divided. Therefore, for example, the most extreme value of the dividing line X of the left writing region in FIG. 10 is X = a, where a is the abscissa of P1. On this basis, the dividing line X can be shifted to the left by a certain distance, that is, X = a − a1, where a1 is a constant, for example a1 = 5 cm, which may differ in practice according to the size of the screen.

In this embodiment, the touch screen of the smart interactive display device is usually a wide screen, such as a 16:9 wide screen, and when writing, multiple people stand in sequence along the horizontal direction. Therefore, under a condition that multiple people are writing at the same time, it is appropriate to consider only dividing the display region horizontally to generate the writing regions, and the algorithm required for this setup is relatively simple.

Further, please refer to FIG. 6. FIG. 6 is a flowchart of a sixth embodiment of a writing interaction method according to the present application. Step 104 specifically includes step 642.

Step 642: under a condition that the handwriting corresponds to the multiple smart pens, and under a condition that the generated handwriting corresponds to two smart pens, acquiring a location of a leftmost coordinate point of the handwriting corresponding to each smart pen. In this step, as in step 541, since left alignment is the most commonly used typesetting method, in order to adapt to it, the leftmost coordinate point of the handwriting of each smart pen is obtained as a reference point for generating the writing region.

Step 104 further includes step 644: determining whether a distance between two adjacent leftmost coordinate points is greater than or equal to L/2, wherein L is a horizontal length of the screen display region. In this step, referring to FIG. 9, the distance between P1 and P2 is greater than or equal to L/2, that is, the value of a − c is greater than or equal to L/2. This indicates that the distance between P1 and P2 is large enough to completely cover the possible horizontal length of the left one of the two adjacent handwritings, thereby allowing greater flexibility in setting a vertical dividing line between the two handwritings.
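A compact sketch of the fifth embodiment's partition test and divider placement (steps 541, 543 and 545) follows; delta and the coordinate convention (x grows to the right) follow the text, while the function names, the pixel value chosen for a1 and the example coordinates are assumptions for illustration.

```python
def leftmost_points(handwriting_by_pen):
    """Step 541: the leftmost coordinate point of each pen's handwriting."""
    return {pen: min(pts, key=lambda p: p[0])
            for pen, pts in handwriting_by_pen.items()}

def should_partition(points, L, delta=1/3):
    """Step 543: every pair of horizontally adjacent leftmost points must be
    at least delta * L apart (delta < 0.5) before regions are generated."""
    xs = sorted(p[0] for p in points.values())
    return all(b - a >= delta * L for a, b in zip(xs, xs[1:]))

def vertical_dividers(points, L, a1=120):
    """Step 545: one divider per handwriting except the one nearest the left
    edge, each shifted left of the leftmost point by the constant a1 (pixels
    here; the text uses 5 cm) so that no handwriting is ever divided."""
    xs = sorted(p[0] for p in points.values())
    return [max(x - a1, 0) for x in xs[1:]]

# Example: two pens with leftmost points P2 = (c, d) and P1 = (a, b) as in
# FIG. 9 (coordinates made up for the example).
pts = {"010": (300, 400), "011": (1400, 380)}
if should_partition(pts, L=2560):
    print(vertical_dividers(pts, L=2560))   # -> [1280], i.e. X = a - a1
```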
Step 104 further includes step 646: under a condition that the distance between the two adjacent leftmost coordinate points is greater than or equal to L/2, forming the writing regions by using a vertical centerline of the screen display region as a dividing line. In this step, given the greater flexibility in setting a vertical dividing line between the two handwritings, and in order to facilitate the typesetting of the two handwritings, the vertical centerline of the display region is used as the boundary to form two writing regions of the same size.

In this embodiment, under a condition that the generated handwriting corresponds to two smart pens and there is a sufficient distance between the two handwritings, that is, the distance between the two handwritings is L/2 or more, in order to better typeset the display region and coordinate its space utilization, the writing regions corresponding to the different handwritings may be formed not by setting vertical dividing lines with reference to the leftmost coordinate points of the handwritings, but by using the vertical centerline of the display region as the boundary to form two writing regions of the same size.

Further, please refer to FIG. 7. FIG. 7 is a flowchart of a seventh embodiment of a writing interaction method according to the present application. Step 104 includes step 703A.

Step 703A: under a condition that the handwriting corresponds to the multiple smart pens, popping up an instruction window requesting a user to confirm whether to generate the multiple writing regions. In this step, in order to give the user the right to choose whether to generate multiple writing regions, a pop-up window is set to request the user's confirmation.

Step 104 further includes step 703B: monitoring an instruction command input by the user in the instruction window. In this step, the input instruction command may be a touch event, a voice command, or a character command input from a keyboard. By monitoring the instruction command, the machine can conveniently perform subsequent actions according to the user's instruction.

Step 104 further includes step 703C: under a condition that a command confirming generation of the multiple writing regions is received from the user, generating the multiple non-overlapping writing regions, wherein the multiple writing regions are in one-to-one correspondence with the multiple smart pens, each writing region covers the handwriting of a corresponding smart pen, and each writing region only responds to a touch event of the corresponding smart pen for generating handwriting. In this step, multiple writing regions are generated only when a confirmation command is received, so as to meet the needs of the user more accurately.

After the monitoring of the instruction command input by the user in the instruction window, the writing interaction method further includes step 705. Step 705: under a condition that a command canceling generation of the multiple writing regions is received from the user, maintaining the original writing regions. In this step, under a condition that a canceling command is received, the original writing regions are maintained; for example, multiple smart pens continue to write in the same writing region.
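For the two-pen case of the sixth embodiment (steps 642 to 646), the rule degenerates to a single test against L/2 followed by a split along the vertical centerline. A small sketch, reusing the WritingRegion type and Display tuple from the earlier sketches:

```python
def split_two_pens(points, display):
    """Steps 644/646: if the two leftmost points are at least L/2 apart,
    split the display into two equal regions along the vertical centerline;
    the left region goes to the pen whose leftmost point is further left."""
    (pen_l, p_l), (pen_r, p_r) = sorted(points.items(), key=lambda kv: kv[1][0])
    if p_r[0] - p_l[0] < display.width / 2:
        return None                          # not enough distance: keep one region
    mid = display.width / 2                  # vertical centerline as the divider
    return [WritingRegion(pen_l, 0, 0, mid, display.height),
            WritingRegion(pen_r, mid, 0, display.width, display.height)]
```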
It is understandable that, optionally, under a condition that the newly added smart pen writes in one of the generated multiple writing regions and the user chooses to cancel the instruction command for regenerating multiple writing regions, the identifier of the newly added smart pen may be included in that writing region; that is, the writing region can then accept the writing input of two smart pens.

In this embodiment, a user confirmation process is set before determining whether to generate multiple writing regions, so that the smart interactive display device can meet the needs of the user more accurately, thereby achieving a better user experience.

Further, each writing region includes an editing region and a menu bar region. The editing region of each writing region covers the handwriting corresponding to the smart pen and corresponds to the touch event of the smart pen; the menu bar region includes a writing main menu, and submenus of the writing main menu include color, eraser, and stroke thickness submenus. In this embodiment, as shown in FIG. 10, the upper parts of the left and right writing regions are provided with a menu bar region, the square patterns in the menu bar in the figure represent graphic function buttons or icons, and the area outside the menu bar is the editing region. Of course, in some embodiments, the menu bar region can also be set to be hidden and called up when needed. Specifically, the user can set the color of the handwriting by invoking the color command, erase generated handwriting by invoking the eraser, and set the thickness of the handwriting by invoking the stroke thickness command. By setting a menu bar region in each writing region, the user can independently edit the handwriting in each writing region.

Further, please refer to FIG. 8. FIG. 8 is a flowchart of an eighth embodiment of a writing interaction method according to the present application. The menu bar region includes a partition canceling main menu, and the writing interaction method further includes step 806.

Step 806: monitoring the input command of the user in the menu bar. In this step, which is similar to step 703B, the input instruction command may be a touch event, a voice command, or a character command input from a keyboard. By monitoring the instruction command, the machine can conveniently perform subsequent actions according to the user's instruction.

The writing interaction method further includes step 810: under a condition that the command of canceling the partitions is detected, restoring the writing regions to the original single writing region, and deleting the content in each of the writing regions. In this step, under a condition that the user wants to cancel the partitions, a corresponding trigger command can conveniently be input in the menu bar, which restores the writing regions to the original single region and deletes their content to quickly clear the screen. For example, in a classroom scenario, a math class is coming to an end and the next class is an English class. The display region of the smart interactive display device contains multiple writing regions and all operators have finished writing; the teacher or any operator can then trigger the command of canceling the partitions. The screen is quickly cleared, and the writing interaction method of the smart interactive display device of the present application can conveniently be used again in the English class.
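The partition-cancel action of step 810 amounts to discarding all regions together with their strokes and restoring one full-screen region. In this sketch, a pen_id of None is an assumption meaning that the restored region accepts any pen, which the patent text leaves open:

```python
def cancel_partitions(regions, display):
    """Step 810: restore the original single writing region and clear content."""
    regions.clear()                               # drop all regions and strokes
    regions.append(WritingRegion(pen_id=None,     # None = accepts any pen (assumed)
                                 left=0, top=0,
                                 right=display.width, bottom=display.height))
```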
In this embodiment, when the user wants to cancel multiple writing regions that have been generated and clear the screen, the user can use the partition canceling main menu to achieve this quickly.

The present application also provides a smart interactive display device for interacting with a smart pen. Please refer to FIG. 11. The smart interactive display device includes a touch event matching module, a touch event response module, and a writing region generation module, wherein:
the touch event matching module includes a touch detection unit and a touch matching unit, the touch detection unit is configured to detect a first touch event of one or more smart pens, and the touch matching unit is configured to acquire one or more identifiers of the one or more smart pens and match them with the corresponding first touch event;
the touch event response module includes a handwriting generation unit and a quantity monitoring unit, the handwriting generation unit is configured to generate handwriting corresponding to the first touch event, and the quantity monitoring unit is configured to determine whether the generated handwriting corresponds to multiple smart pens according to the identifiers of the smart pens; and
the writing region generation module includes a matching generation unit and a response control unit, the matching generation unit is configured to generate multiple non-overlapping writing regions under a condition that the generated handwriting corresponds to the multiple smart pens, wherein the multiple writing regions are in one-to-one correspondence with the multiple smart pens and each writing region covers the handwriting of a corresponding smart pen, and the response control unit is configured to control each writing region to respond only to a touch event of the corresponding smart pen for generating handwriting.

Further, after the matching generation unit generates the multiple non-overlapping writing regions:
the touch detection unit is further configured to detect a second touch event of a smart pen, and the touch matching unit is further configured to acquire an identifier of the smart pen and match it with the corresponding second touch event;
the quantity monitoring unit is further configured to determine whether the second touch event is generated by a newly added smart pen according to the identifier of the smart pen;
the response control unit is further configured to determine whether the second touch event is generated in the generated writing regions according to a location where the second touch event is generated;
under a condition that the second touch event is generated in the generated writing regions, the handwriting generation unit is configured to generate no handwriting according to the second touch event, that is, the handwriting generation unit does not respond by generating handwriting; and
under a condition that the second touch event is not generated in the generated writing regions, the handwriting generation unit is configured to generate handwriting according to the second touch event, and the matching generation unit is further configured to generate a new writing region corresponding to the newly added smart pen, wherein the new writing region does not overlap with the existing multiple writing regions, the new writing region covers the handwriting corresponding to the newly added smart pen, and the new writing region only responds to a touch event of the newly added smart pen for generating handwriting.
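The module/unit decomposition described here maps naturally onto a small class skeleton; the following is an illustrative arrangement only, reusing the policy functions sketched earlier, and all class and method names are hypothetical.

```python
class SmartInteractiveDisplayDevice:
    """Skeleton wiring of the three modules of FIG. 11 (illustrative only)."""

    def __init__(self, display):
        self.display = display
        self.regions = []          # state of the writing region generation module
        self.known_pens = set()    # state of the quantity monitoring unit

    def on_raw_event(self, raw):
        # Touch event matching module: detect the event and match its pen ID.
        pen_id, x, y = raw["pen_id"], raw["x"], raw["y"]
        # Touch event response module + writing region generation module:
        # route the event under the FIG. 2 policy sketched above.
        return handle_second_touch(self.regions, self.known_pens,
                                   pen_id, x, y, self.display)
```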
Further, after the matching generation unit generates the multiple non-overlapping writing regions:
the touch detection unit is further configured to detect a third touch event of a smart pen, and the touch matching unit is further configured to acquire an identifier of the smart pen and match it with the corresponding third touch event;
the quantity monitoring unit is further configured to determine whether the third touch event is generated by a newly added smart pen according to the identifier of the smart pen;
under a condition that the third touch event is generated by the newly added smart pen, the response control unit is further configured to determine whether the third touch event is generated in the generated writing regions according to a location where the third touch event is generated;
under a condition that the third touch event is generated in the generated writing regions, the matching generation unit is further configured to regenerate multiple non-overlapping writing regions, wherein the regenerated multiple writing regions are in one-to-one correspondence with the multiple smart pens and the newly added smart pen, each writing region covers a touch track of a corresponding smart pen, and each writing region only responds to a touch event of the corresponding smart pen for generating handwriting; and
under a condition that the third touch event is not generated in the generated writing regions, the matching generation unit is further configured to generate a new writing region corresponding to the newly added smart pen, wherein the new writing region does not overlap with the existing multiple writing regions, the new writing region covers the handwriting corresponding to the newly added smart pen, and the new writing region only responds to a touch event of the newly added smart pen for generating handwriting.
Further, after the matching generation unit generates the multiple non-overlapping writing regions:
the touch detection unit is further configured to detect a fourth touch event of a smart pen, and the touch matching unit is further configured to acquire an identifier of the smart pen and match it with the corresponding fourth touch event;
the quantity monitoring unit is further configured to determine whether the fourth touch event is generated by a newly added smart pen according to the identifier of the smart pen;
under a condition that the fourth touch event is not generated by the newly added smart pen, the response control unit is further configured to determine whether the fourth touch event is generated in the generated writing regions according to a location where the fourth touch event is generated;
under a condition that the fourth touch event is not generated in the generated writing regions, the matching generation unit is further configured to tentatively expand the writing region corresponding to the smart pen according to a preset rule, so as to cover a track corresponding to the fourth touch event, and to determine whether the tentatively expanded writing region overlaps with other writing regions;
under a condition that the tentatively expanded writing region overlaps with the other writing regions, the matching generation unit is further configured to cancel the expansion of the writing region corresponding to the smart pen, and the handwriting generation unit is configured to generate no handwriting according to the fourth touch event; and
under a condition that the tentatively expanded writing region does not overlap with the other writing regions, the matching generation unit is further configured to expand the writing region corresponding to the smart pen, and the handwriting generation unit is configured to generate handwriting according to the fourth touch event.

Further, the matching generation unit includes a reference point acquisition unit, a distance determination unit and a region generation unit, wherein:
the reference point acquisition unit is configured to acquire a location of a leftmost coordinate point of the handwriting corresponding to each smart pen under a condition that the generated handwriting corresponds to the multiple smart pens;
the distance determination unit is configured to determine whether a distance between two adjacent leftmost coordinate points is greater than or equal to a first preset distance; and
under a condition that the determination result of the distance determination unit is positive, the region generation unit is configured to generate the multiple non-overlapping writing regions on a screen display region.
Further, the region generation unit is specifically configured to:
under a condition that the distance between the two adjacent leftmost coordinate points is greater than or equal to the first preset distance, except for a leftmost coordinate point close to a left side of the screen display region, form the writing regions by setting vertical dividing lines with reference to the other leftmost coordinate points, wherein the first preset distance is equal to a product of δ and L, L is a horizontal length of the screen display region, and δ is less than 0.5; or
under a condition that the distance between the two adjacent leftmost coordinate points is greater than or equal to the first preset distance, form the writing regions by using a vertical centerline of the screen display region as a dividing line, wherein the first preset distance is equal to L/2, where L is the horizontal length of the screen display region.

Further, the writing region generation module also includes a partition pop-up window unit and a command monitoring unit, wherein:
under a condition that the generated handwriting corresponds to the multiple smart pens, the partition pop-up window unit is configured to pop up an instruction window requesting a user to confirm whether to generate the multiple writing regions;
the command monitoring unit is configured to monitor an instruction command input by the user in the instruction window;
under a condition that a command confirming generation of the multiple writing regions is received from the user, the matching generation unit is further configured to generate the multiple non-overlapping writing regions, wherein the multiple writing regions are in one-to-one correspondence with the multiple smart pens, each writing region covers the handwriting of a corresponding smart pen, and each writing region only responds to a touch event of the corresponding smart pen for generating handwriting; and
under a condition that a command canceling generation of the multiple writing regions is received from the user, the matching generation unit is further configured to maintain the original writing regions.

Further, each writing region includes an editing region and a menu bar region. The editing region of each writing region covers the handwriting corresponding to the smart pen and corresponds to the touch event of the smart pen; the menu bar region includes a writing main menu, and submenus of the writing main menu include color, eraser, and stroke thickness submenus.

Further, the writing region generation module further includes a command monitoring unit, and the command monitoring unit monitors the input command of the user in the menu bar; under a condition that the command monitoring unit detects the command of canceling the partitions, the matching generation unit is further configured to restore the writing regions to the original single writing region and delete the content in each of the writing regions.

For the specific description of each module and unit of the smart interactive display device involved in each embodiment of the present application, reference may be made to the specific description of the embodiments corresponding to the writing interaction method, which is not repeated here.
The present application also provides a smart interactive display device, including a capacitive touch screen, a processor, and a computer-readable storage medium, wherein the computer-readable storage medium stores a writing interaction program, and the writing interaction program, when executed, implements the writing interaction method as described above. For the specific steps of the writing interaction method, reference may be made to the above embodiments. Since the smart interactive display device adopts all technical solutions of the above embodiments, it has at least all beneficial effects brought by those technical solutions, which will not be repeated here.

The present application also provides a writing interaction system, including a smart pen and the above smart interactive display device including the various modules. For the specific structure of the smart interactive display device, reference may be made to the above embodiments. Since the smart interactive display device adopts all technical solutions of the above embodiments, it has at least all beneficial effects brought by those technical solutions, which will not be repeated here.

The present application also provides a writing interaction system, including an active capacitive smart pen and the above smart interactive display device including the computer storage medium. For the specific structure of the smart interactive display device and the steps implemented by executing the writing interaction program, reference may be made to the above embodiments. Since the writing interaction system adopts all technical solutions of the above embodiments, it has at least all beneficial effects brought by those technical solutions, which will not be repeated here.

It should be noted that, in the present application, the terms "comprise", "include", or any other variants thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or device including a series of elements includes not only those elements, but also other elements that are not explicitly listed, or elements inherent to such a process, method, article or device. Without more constraints, the elements following an expression "comprise/include . . ." do not exclude the existence of additional identical elements in the process, method, article or device that includes the elements. In the present application, relational terms such as first and second are used merely to distinguish one entity or operation from another entity or operation, without necessarily requiring or implying any actual such relationship or order between these entities or operations.

The above serial numbers of the embodiments of the present application are only for description, and do not represent the advantages or disadvantages of the embodiments. According to the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is the better implementation. Based on this understanding, the technical solutions of the present application, or the parts thereof that make contributions to the prior art, can be embodied in the form of software products.
The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) as described above, and includes several instructions for instructing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods described in the various embodiments of the present application. Those skilled in the art can understand that, on the premise of no conflict, the above solutions can be combined and superimposed freely. It should be understood that the above embodiments are only exemplary rather than restrictive, and without departing from the basic principles of the present application, those skilled in the art can make various obvious or equivalent modifications or substitutions to the above details, which will all be included within the scope of the claims of the present application. | 49,028 |
11861161 | DESCRIPTION OF EMBODIMENTS The terms "first" and "second" in the following are used merely for a purpose of description, and shall not be understood as an indication or implication of relative importance or an implicit indication of a quantity of indicated technical features. Therefore, a feature restricted by "first" or "second" may explicitly indicate or implicitly include one or more such features. In the descriptions in the embodiments of this application, unless otherwise provided, "a plurality of" means two or more than two.

The embodiments of this application provide a display method. The method may be applied to mobile officing and other application scenarios in which a terminal needs to perform multi-screen display. For example, as shown in FIG. 1, a first terminal 100 may be a mobile phone, and a second terminal 200 may be a desktop computer or a standalone display. As shown in (a) in FIG. 1, the first terminal 100 may connect to the second terminal 200 in a wireless communication mode (for example, Wi-Fi); or, as shown in (b) in FIG. 1, the first terminal 100 may communicatively connect to the second terminal 200 in a wired communication mode (for example, a data cable). This is not limited in this embodiment of this application. The first terminal 100 may serve as a controlling device, and the second terminal 200 may serve as a controlled device of the first terminal 100. After the first terminal 100 connects to the second terminal 200, the first terminal 100 effectively has two displays: one is the display of the first terminal 100, and the other is the display of the second terminal 200 connected to the first terminal 100.

Currently, after the first terminal 100 connects to the second terminal 200, the first terminal 100 may display related display content (for example, an application icon), on a desktop, on both the display of the first terminal 100 and the display of the second terminal 200. In this way, a user may perform a series of touch operations on the first terminal 100 to implement a corresponding function of a related application, for example, tap an icon of a video application on the first terminal 100 to watch a video. However, when the user needs to continue to watch the video on the second terminal 200 by using the video application, the user needs to connect a mouse to the first terminal 100, or use the touchscreen of the first terminal 100 as an input device, to move a cursor on the second terminal 200 to the icon location of the video application and perform a double-tap operation to re-open the video application.

It may be learned that after the first terminal 100 connects to the second terminal 200, although the function of simultaneously controlling the display interfaces on the two displays (namely, the display of the first terminal 100 and the display of the second terminal 200) by the first terminal 100 is implemented, when the user needs to switch an application between different terminals, the user has to switch the application manually and frequently. Consequently, user experience in using the terminal as a PC is reduced, and the user's operation efficiency is relatively low.

In this embodiment of this application, after the first terminal 100 connects to the second terminal 200, the first terminal 100 may project an application installed on the first terminal 100 onto the display of the second terminal 200 by using a homogeneous display method or a heterogeneous display method.
Homogeneous display means that the signal sources of the display interfaces on different displays, such as the display of the first terminal 100 and the display of the second terminal 200, are the same. Heterogeneous display means that the signal source of the display interface on the display of the first terminal 100 is independent of the signal source of the display interface on the display of the second terminal 200.

When the first terminal homogeneously projects the application installed on the first terminal onto the display of the second terminal, as shown in FIG. 2A and FIG. 2B, before the first terminal 100 connects to the second terminal 200, the first terminal 100 stores the generated to-be-displayed display content in a screen container in a memory. In this way, the display of the first terminal 100 can display a related image simply by reading the display content from the screen container. After the first terminal 100 connects to the second terminal 200, still as shown in FIG. 2A and FIG. 2B, the display of the second terminal 200 may read the display content from the same screen container to display the same content as the first terminal 100, to implement simultaneous display on the first terminal 100 and the second terminal 200.

When the first terminal 100 heterogeneously projects the application installed on the first terminal 100 onto the display of the second terminal 200, as shown in FIG. 3A and FIG. 3B, before the first terminal 100 connects to the second terminal 200, the first terminal 100 stores generated to-be-displayed display content 1 in a screen container 1 in a memory. In this way, the display of the first terminal 100 can display a related image simply by reading the display content 1 from the screen container 1. After the first terminal 100 connects to the second terminal 200, the first terminal 100 may identify related specification information of the display of the second terminal 200 connected to it, for example, the resolution and dots per inch (Dots Per Inch, DPI) of the display of the second terminal 200. In this case, the first terminal 100 may set up, in a memory of the first terminal 100, an independent screen container, namely, a screen container 2 in FIG. 3A and FIG. 3B, for the second terminal 200 based on the specification information of the display of the second terminal 200. The screen container 2 and the screen container 1 may be distinguished by using different display identifiers (Display ID).

After connecting to the second terminal 200, still as shown in FIG. 3A and FIG. 3B, the first terminal 100 may initialize the display of the second terminal 200, convert information such as application icons on the first terminal 100 into desktop display content, such as application icons, an icon layout, and a status bar, that matches the specification information of the second terminal 200, and store the desktop display content in the screen container 2. In this way, the display of the second terminal 200 can independently project all applications installed on the first terminal 100 onto the display of the second terminal 200 simply by reading the desktop display content stored in the screen container 2, completing the process of initializing the display of the second terminal 200. Subsequently, the first terminal 100 and the second terminal 200 may independently run the two display interfaces in the same operating system, with each terminal simply reading the display content from its own screen container.
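An illustrative sketch (hypothetical names and example values, not the patent's code) of the two screen containers of FIG. 3A/3B, distinguished by display IDs, where heterogeneous initialization builds container 2 to the second display's specification:

```python
class ScreenContainer:
    """Per-display content buffer, identified by a display ID."""
    def __init__(self, display_id, resolution, dpi):
        self.display_id = display_id
        self.resolution = resolution       # (width, height) of the target display
        self.dpi = dpi
        self.content = []                  # to-be-displayed display content

containers = {1: ScreenContainer(1, (1080, 2340), 440)}   # example phone display

def convert_desktop(content, display):
    """Hypothetical converter for icons, layout and status bar; in this
    sketch the content is passed through unchanged."""
    return list(content)

def init_heterogeneous(second_display):
    """Set up an independent container for the newly connected display and
    fill it with desktop content converted to that display's specification."""
    containers[2] = ScreenContainer(2, second_display.resolution,
                                    second_display.dpi)
    containers[2].content = convert_desktop(containers[1].content,
                                            second_display)
```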
Certainly, in the heterogeneous projection system shown in FIG. 3A and FIG. 3B, the display content in the screen container 1 may alternatively be the same as the display content in the screen container 2 (for example, both may be the display interface at the tenth minute of a video A). Alternatively, the first terminal 100 may convert the display content in the screen container 1 into display content that matches the specification information of the second terminal 200 and then store it in the screen container 2 (for example, adjust the resolution of a photo A in the screen container 1 to a resolution value that matches the display of the second terminal 200, and store the converted photo A in the screen container 2). In this way, the display content on the display of the first terminal 100 may be simultaneously displayed on the display of the second terminal 200.

Specifically, in this embodiment of this application, to save the user the trouble of frequently switching an application between a plurality of display interfaces, the first terminal 100 may receive a specific gesture made by the user on target content or a target shortcut in the display interface (namely, a first display interface) of the first terminal 100, to trigger the first terminal 100 to store, in the screen container 2 corresponding to a display ID 2, display data (for example, first display data) generated by a related application. Then the first terminal 100 sends the first display data in the screen container 2 to the second terminal, so that the display of the second terminal 200 can read the first display data from the screen container 2 and display the first display data in a second display interface of the second terminal 200. In this way, the user can control, on the first terminal 100, both the display content on the first terminal 100 and the display content on the second terminal 200, thereby reducing the complexity of switching an application between a plurality of terminal displays and improving user experience when a terminal performs multi-screen display.

For example, after the user opens an application A on the first terminal 100, the first terminal 100 stores the generated related display content of the application A in the screen container 1. If the user performs, in the current first display interface, a preset press operation (namely, the specific gesture) in a window (namely, the target content) of the application A, then after detecting the press operation, the first terminal 100 stores the related display content of the application A (namely, the first display data), previously held in the screen container 1, in the screen container 2. In this way, the display of the second terminal 200 can read the related display content of the application A from the screen container 2 and display it. It may be learned that the user can implement seamless switching of the application A between the first terminal 100 and the second terminal 200 simply by performing a single press operation. This improves efficiency in switching an application when the terminal performs multi-screen display.

It should be noted that the foregoing screen container may specifically be an application stack that is set up when a terminal (for example, the first terminal) runs an application. The content at the top of the application stack is usually the content that is currently run and displayed on the terminal.
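Continuing the container sketch above, the gesture-triggered hand-over of application A's display data could look roughly like the following; take_app_content, the dict-shaped content entries, and the second_terminal.send interface are all assumptions for illustration.

```python
def on_gesture(gesture, app_id, second_terminal):
    """Move an application's display data from container 1 to container 2
    when the preset specific gesture lands on that application's window."""
    if gesture["kind"] != "preset_press":            # not the trigger gesture
        return
    first_data = take_app_content(containers[1], app_id)  # hypothetical extractor
    containers[2].content.extend(first_data)         # now owned by display ID 2
    second_terminal.send(containers[2].content)      # second display reads container 2

def take_app_content(container, app_id):
    """Hypothetical helper: pop one application's entries from a container;
    assumes each content entry is a dict carrying an "app" key."""
    taken = [c for c in container.content if c.get("app") == app_id]
    container.content = [c for c in container.content if c.get("app") != app_id]
    return taken
```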
For example, the application A generates a plurality of tasks (task) in its application stack during running, and the terminal executes each task starting from the stack top, outputs the execution result of each task, and displays the execution result on the display of the terminal.

The foregoing shortcut is a command line used to quickly start a task, for example, a link provided by the system to quickly start a program or open a file or a folder. A shortcut usually exists in the form of an icon. The shortcut is a general term for a plurality of possible operation entries. The shortcut may be, but is not limited to, an application program, a specific function, a contact, a setting option, a notification bar, a shortcut operation bar, and the like. In this embodiment of this application, the shortcut may be an icon for starting an application program, for example, an icon for starting one of the following application programs: WeChat, Google search, a camera, and the like. Alternatively, the shortcut may be an identifier of an application program running in the background, for example, the application window of WeChat in a multitask window. Certainly, the shortcut may also be an icon of a target file that is not opened on the terminal, for example, a thumbnail of a photo A in a gallery; alternatively, the shortcut may be an identifier of a target file that is opened in the background of the terminal, for example, the window in which the photo A is located in a multitask window. This is not limited in this embodiment of this application.

In addition, when the second terminal 200 displays the display content of the screen container 2 in the second display interface, the first terminal 100 may adjust the display content in the screen container 2 and the display layout in the second display interface based on parameters such as the screen resolution of the second terminal 200 and the user's habits in operating the second terminal 200; for example, it may adjust the size, location, icons, and the like of the display content. This is not limited in this embodiment of this application.

The display method provided in the embodiments of this application may be applied to any terminal such as a mobile phone, a wearable device, an AR (augmented reality)/VR (virtual reality) device, a tablet computer, a notebook computer, a UMPC (ultra-mobile personal computer), a netbook, or a PDA (personal digital assistant). This is not limited in this embodiment of this application.

As shown in FIG. 4, the first terminal 100 (or the second terminal 200) in this embodiment of this application may be a mobile phone 100. This embodiment is described in detail by using the mobile phone 100 as an example. It should be understood that the mobile phone 100 shown in the figure is merely an example of the terminal, and the mobile phone 100 may include more or fewer components than those shown in the figure, may combine two or more components, or may have different component configurations.

As shown in FIG. 4, the mobile phone 100 may specifically include components such as a processor 101, a radio frequency (RF) circuit 102, a memory 103, a touchscreen 104, a Bluetooth apparatus 105, one or more sensors 106, a Wi-Fi apparatus 107, a positioning apparatus 108, an audio circuit 109, a communications interface 110, and a power supply system 111. These components may communicate with each other by using one or more communications buses or signal cables (not shown in FIG. 4).
A person skilled in the art may understand that the hardware structure shown in FIG. 4 constitutes no limitation on the mobile phone. The mobile phone 100 may include more or fewer components than those shown in the figure, or combine some components, or have different component arrangements. The components of the mobile phone 100 are described below in detail with reference to FIG. 4.

The processor 101 is the control center of the mobile phone 100. The processor 101 is connected to all parts of the mobile phone 100 by using various interfaces and lines, and performs various functions of the mobile phone 100 and data processing by running or executing an application program (which may be briefly referred to as an App below) stored in the memory 103, and by invoking data stored in the memory 103. In some embodiments, the processor 101 may include one or more processing units. For example, the processor 101 may be a Kirin 960 chip manufactured by Huawei Technologies Co., Ltd.

The radio frequency circuit 102 may be configured to receive and send radio signals during information receiving and sending or during a call. In particular, the radio frequency circuit 102 may receive downlink data from a base station and then send the downlink data to the processor 101 for processing, and send uplink data to the base station. A radio frequency circuit usually includes but is not limited to an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency circuit 102 may further communicate with other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to the Global System for Mobile Communications, the general packet radio service, Code Division Multiple Access, Wideband Code Division Multiple Access, Long Term Evolution, email, the short messaging service, and the like.

The memory 103 is configured to store application programs and data. The processor 101 runs the application programs and the data stored in the memory 103, to perform the various functions of the mobile phone 100 and process the data. The memory 103 mainly includes a program storage area and a data storage area. The program storage area may store an operating system and an application program (such as a sound playing function or an image playing function) that is required by at least one function. The data storage area may store data (such as audio data or a phonebook) that is created based on use of the mobile phone 100. In addition, the memory 103 may further include a high-speed random access memory, and may include a nonvolatile memory, for example, a magnetic disk storage device, a flash storage device, or another volatile solid-state storage device. The memory 103 may store various operating systems, for example, the iOS operating system developed by Apple Inc. or the Android operating system developed by Google Inc.

The touchscreen 104 may include a touchpad 104-1 and a display 104-2. The touchpad 104-1 may collect a touch event performed by a user of the mobile phone 100 on or near the touchpad 104-1 (for example, an operation performed by the user on or near the touchpad 104-1 by using any appropriate object such as a finger or a stylus) and send the collected touch information to another device such as the processor 101.
Although the touchpad 104-1 and the display screen 104-2 serve as two independent components in FIG. 4 to implement the input and output functions of the mobile phone 100, in some embodiments, the touchpad 104-1 and the display screen 104-2 may be integrated to implement the input and output functions of the mobile phone 100. It may be understood that the touchscreen 104 is made of a plurality of layers of materials that are stacked together. In this embodiment of this application, only the touchpad (layer) and the display screen (layer) are shown, and other layers are not recorded in this embodiment of this application. In addition, in some other embodiments of this application, the touchpad 104-1 may cover the display screen 104-2, and the size of the touchpad 104-1 is greater than the size of the display screen 104-2, so that the display screen 104-2 is fully covered by the touchpad 104-1. Alternatively, the touchpad 104-1 may be configured in the form of a full panel on the front surface of the mobile phone 100. In other words, all touches of a user on the front surface of the mobile phone 100 can be sensed by the mobile phone. In this way, full-touch experience can be implemented on the front surface of the mobile phone. In some other embodiments, the touchpad 104-1 is configured in the form of a full panel on the front surface of the mobile phone 100, and the display screen 104-2 is also configured in the form of a full panel on the front surface of the mobile phone 100. In this way, a bezel-less structure can be implemented on the front surface of the mobile phone.

In this embodiment of this application, the mobile phone 100 may further have a fingerprint recognition function. For example, a fingerprint sensor 112 may be configured on the rear surface (for example, below a rear-facing camera) of the mobile phone 100, or a fingerprint sensor 112 may be configured on the front surface (for example, below the touchscreen 104) of the mobile phone 100. For another example, a fingerprint collection component 112 may be configured inside the touchscreen 104 to implement the fingerprint recognition function. In other words, the fingerprint collection component 112 may be integrated with the touchscreen 104 to implement the fingerprint recognition function of the mobile phone 100. In this case, the fingerprint collection component 112 may be configured inside the touchscreen 104 as a part of the touchscreen 104, or may be configured inside the touchscreen 104 in another manner. The fingerprint sensor is the main component of the fingerprint collection component 112 in this embodiment of this application. The fingerprint sensor may use any type of sensing technology, including but not limited to an optical sensing technology, a capacitive sensing technology, a piezoelectric sensing technology, an ultrasonic sensing technology, and the like.

In this embodiment of this application, the mobile phone 100 may further include the Bluetooth apparatus 105, configured to implement data exchange between the mobile phone 100 and another terminal (for example, a mobile phone or a smartwatch) at a short distance from the mobile phone 100. The Bluetooth apparatus in this embodiment of this application may be an integrated circuit, a Bluetooth chip, or the like.

The Wi-Fi apparatus 107 is configured to provide the mobile phone 100 with network access that complies with Wi-Fi related standard protocols. The mobile phone 100 may gain access to a Wi-Fi access point by using the Wi-Fi apparatus 107, to help the user receive and send emails, browse web pages, access streaming media, and so on.
The Wi-Fi apparatus 107 provides the user with wireless broadband Internet access. In some other embodiments, the Wi-Fi apparatus 107 may serve as a Wi-Fi radio access point and may provide another terminal with Wi-Fi network access. The mobile phone 100 may further include at least one type of sensor 106, for example, an optical sensor, a motion sensor, or another sensor. Specifically, the optical sensor may include an ambient light sensor and a proximity sensor. The ambient light sensor may adjust luminance of the display of the touchscreen 104 based on brightness of ambient light, and the proximity sensor may turn off the display when the mobile phone 100 moves to an ear. As a type of motion sensor, an accelerometer sensor may detect an acceleration value in each direction (usually three axes), may detect a value and a direction of gravity when the accelerometer sensor is static, and may be applied to an application for recognizing a mobile phone posture (such as switching between a landscape mode and a portrait mode, a related game, or magnetometer posture calibration), a function related to vibration recognition (such as a pedometer or a knock), and the like. For other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor that may be further configured on the mobile phone 100, details are not described herein. The positioning apparatus 108 is configured to provide the mobile phone 100 with a geographical location. It may be understood that the positioning apparatus 108 may be specifically a receiver in a positioning system such as the Global Positioning System (GPS), the BeiDou Navigation Satellite System, or the Russian GLONASS system. After receiving a geographical location sent by the foregoing positioning system, the positioning apparatus 108 sends the information to the processor 101 for processing, or sends the information to the memory 103 for storage. In some other embodiments, the positioning apparatus 108 may be a receiver in an Assisted Global Positioning System (AGPS). The AGPS system acts through an assistance server to assist the positioning apparatus 108 in completing ranging and positioning services. In this case, the assistance server provides positioning assistance by communicating with the positioning apparatus 108 (namely, a GPS receiver) in a terminal such as the mobile phone 100 by using a wireless communications network. In some other embodiments, the positioning apparatus 108 may use a positioning technology based on a Wi-Fi access point. Because each Wi-Fi access point has a globally unique MAC address, when Wi-Fi is enabled, the terminal can scan and collect broadcast signals of surrounding Wi-Fi access points, and can therefore obtain the MAC addresses broadcast by those Wi-Fi access points. The terminal sends, to a location server by using a wireless communications network, the data (for example, the MAC addresses) that can identify the Wi-Fi access points. The location server retrieves a geographical location of each Wi-Fi access point, calculates a geographical location of the terminal with reference to strength of the Wi-Fi broadcast signals, and sends the geographical location of the terminal to the positioning apparatus 108 in the terminal. The audio circuit 109, a loudspeaker 113, and a microphone 114 can provide an audio interface between the user and the mobile phone 100.
The audio circuit 109 may transmit, to the loudspeaker 113, an electrical signal converted from received audio data, and the loudspeaker 113 converts the electrical signal into a sound signal for output. In addition, the microphone 114 converts a collected sound signal into an electrical signal. The audio circuit 109 receives the electrical signal, converts the electrical signal into audio data, and outputs the audio data to the RF circuit 102, to send the audio data to, for example, another mobile phone, or outputs the audio data to the memory 103 for further processing. The communications interface 110 is configured to provide various interfaces for external input/output devices (for example, a keyboard, a mouse, an external display, an external memory, and a subscriber identity module card). For example, the terminal is connected to a mouse or a display by using a universal serial bus (USB) port, is connected, by using a metal contact in a card slot of a subscriber identity module card, to the subscriber identity module (SIM) card provided by a telecommunications operator, and communicates with another terminal by using an interface of the Wi-Fi apparatus 107, an interface of a near field communication (NFC) apparatus, an interface of a Bluetooth module, and the like. The communications interface 110 may be used to couple the foregoing external input/output devices to the processor 101 and the memory 103. In this embodiment of this application, the first terminal 100 may connect to the display of the second terminal 200 by using the communications interface 110 of the first terminal 100, so that the first terminal 100 and the second terminal 200 can communicate with each other. The mobile phone 100 may further include the power supply apparatus 111 (for example, a battery and a power management chip) that supplies power to all the components. The battery may be logically connected to the processor 101 by using the power management chip, so that functions such as charging management, discharging management, and power consumption management are implemented by using the power supply apparatus 111. Although not shown in FIG. 4, the mobile phone 100 may further include a camera (a front-facing camera and/or a rear-facing camera), a camera flash, a micro projection apparatus, a near field communication (NFC) apparatus, and the like. Details are not described herein. A display method provided in an embodiment of this application is described below in detail with reference to specific embodiments. As shown in FIG. 5, the method includes the following steps. 501. A first terminal connects to a second terminal, so that the first terminal and the second terminal can communicate with each other. The first terminal may connect to the second terminal by using Bluetooth, Wi-Fi, the ZigBee protocol, or another communication mode. This is not limited in this embodiment of this application. 502. The first terminal obtains a first gesture triggered by a user on target content or a target shortcut in a first display interface, where the first display interface is a display interface of the first terminal. The first gesture may be any gesture customized by the user or preset in the first terminal, for example, a single tap, a floating touch gesture, a two-finger sliding gesture, or a press gesture. This is not limited in this embodiment of this application. In addition, content corresponding to the target content or the target shortcut may be an interface element displayed in the first display interface.
The interface element may be specifically an application, an application window, or a photo, a document, or the like that is selected from an application by the user. Alternatively, the target content may be a display control in a current display interface. This is not limited in this embodiment of this application. For example, the content corresponding to the target shortcut may be an application running in a background of the first terminal, or an application that is not started on the first terminal, or may be content that is not running in a foreground of the first terminal, for example, a target file that is opened in a background of the first terminal. This is not limited in this embodiment of this application. In this embodiment of this application, an application that can be operated by the user and that is being displayed on the terminal may be referred to as an application running in the foreground of the terminal, and an application that cannot be operated by the user currently but is still running on the terminal is referred to as a background application. 503. The first terminal sends first display data to the second terminal in response to the first gesture, so that the second terminal displays the first display data in a second display interface, where the second display interface is a display interface of the second terminal. The first display data is display data correlated with the target content or the target shortcut. For example, when the target shortcut is an icon of an application A, the first display data correlated with the target shortcut may include display content that is in a display interface of the application A when the application A is started. When the target shortcut is a window of an application A in a multitask window, the first display data correlated with the target shortcut may include display content that is in a display interface of the application A when the application A is running. For another example, when the target content is a photo B that is being displayed, the first display data correlated with the target content may be display content (for example, gray-scale values of pixels in the photo B) of the photo B. A specific implementation of sending, by the first terminal, the first display data correlated with the target content to the second terminal is described below in detail with reference to specific embodiments. For example, as shown in FIG. 6, after the first terminal 100 connects to the second terminal 200, the first display interface is a desktop of the first terminal 100, and the user triggers the first gesture on an icon (namely, the target shortcut) of a player application on the desktop of the first terminal 100, for example, drags the icon of the player application to a specified area 21. In this case, in response to the first gesture, the first terminal may send, to the second terminal 200, the first display data such as a display interface generated by the player application, and the second terminal 200 runs and displays the player application in a display interface (namely, the second display interface) of the second terminal 200. In this way, the user may control, on the first terminal 100 by using the first gesture, the application to switch between the first terminal 100 and the second terminal 200. This improves efficiency in switching an application between a plurality of screens when the terminal performs multi-screen display.
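The FIG. 6 trigger reduces to a plain hit test: the application is switched only when the icon's drop point lands inside the specified area 21. A minimal sketch follows, assuming hypothetical coordinates and names (Rect, onDrop); it illustrates the gesture check, not the application's actual implementation.

```java
// Toy hit test for the drag gesture in the FIG. 6 example: when the
// icon's drop point falls inside the specified area 21, the first
// terminal sends the application's display data to the second
// terminal. The area's bounds and coordinates are illustrative.
public class DragToArea {
    record Rect(float left, float top, float right, float bottom) {
        boolean contains(float x, float y) {
            return x >= left && x <= right && y >= top && y <= bottom;
        }
    }

    static final Rect SPECIFIED_AREA_21 = new Rect(900f, 0f, 1080f, 200f);

    static void onDrop(String app, float x, float y) {
        if (SPECIFIED_AREA_21.contains(x, y)) {
            System.out.println("send display data of " + app + " to second terminal");
        } else {
            System.out.println("keep " + app + " on first terminal");
        }
    }

    public static void main(String[] args) {
        onDrop("player", 950f, 100f); // inside area 21 -> switch
        onDrop("player", 100f, 800f); // outside -> stay
    }
}
```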
Specifically, with reference to the heterogeneous display principle shown in FIG. 3A and FIG. 3B, as shown in FIG. 7, in the foregoing embodiment, after the first terminal 100 connects to the second terminal 200, the first terminal 100 stores, in a screen container 1 indicated by a display ID 1, desktop content that needs to be displayed on the first terminal 100 at this time, and the first terminal 100 stores, in a screen container 2 indicated by a display ID 2, desktop content that needs to be displayed on the second terminal 200 at this time and sends that desktop content to the second terminal 200. In this way, a display of the first terminal 100 presents the first display interface to the user based on the desktop content in the screen container 1, and a display of the second terminal 200 presents the second display interface to the user based on the desktop content in the screen container 2. In this case, after the first terminal 100 detects the first gesture of dragging the icon of the player application to the specified area 21, still as shown in FIG. 7, the first terminal 100 may be triggered to generate, in the screen container 2, display content of the player application, and send the display content of the player application to the second terminal 200. In this way, the second terminal 200 connected to the first terminal 100 may, by accessing the screen container 2, switch the player application that is triggered by the user on the first terminal 100 to the second display interface of the second terminal 200 for running and displaying, so that an application that is not started is switched between the first terminal 100 and the second terminal 200. When storing the generated display content of the player application in the screen container 2, the first terminal 100 may adjust the display content of the player application based on specification information such as a resolution and a DPI of the display of the second terminal 200, so that the display content of the player application meets the specification information of the display of the second terminal 200. For another example, a resolution of the first terminal 100 is A, and a resolution of the second terminal 200 is B. A common player application usually supports displays with a plurality of resolutions (for example, when the player application is launched in an application market, the player application supports a display with the resolution A and a display with the resolution B). Therefore, after the first terminal 100 detects the first gesture of dragging the player application to the specified area 21, the first terminal 100 may change a resolution parameter of the player application from A to B, and then store, in the screen container 2, the display content generated by the player application, so that the user can watch, on the display of the second terminal 200, display content that matches the resolution of the display of the second terminal 200, thereby improving user experience. Alternatively, after the first terminal 100 connects to the second terminal 200, the first terminal 100 may be running a plurality of applications. For example, as shown in FIG. 8A and FIG. 8B, the first terminal 100 is running a calendar application, a music application, and a WeChat application.
If the user makes the first gesture in a window of the WeChat application (namely, the target shortcut) in a multitask window, for example, double-taps in the window of the WeChat application, the first terminal 100 may be triggered to send, to the second terminal 200, the first display data such as a display interface currently generated by the WeChat application, so that the second terminal 200 displays the display interface of the WeChat application in the second display interface. In this way, the user may control, on the first terminal 100 by using the first gesture, an application running in the background to switch between the first terminal 100 and the second terminal 200. This improves efficiency in switching an application between a plurality of screens when the terminal performs multi-screen display. Alternatively, a shortcut button used for switching an application between the first terminal 100 and the second terminal 200 for displaying may be additionally configured in a multitask window of the first terminal 100. As shown in FIG. 9(a), the multitask window of the first terminal 100 includes the window of the WeChat application, and a first shortcut button 23 may be configured in the window of the WeChat application. After the first terminal 100 connects to the second terminal 200, the first shortcut button 23 may be prominently displayed (for example, the first shortcut button 23 is highlighted). When detecting that the user taps the first shortcut button 23, the first terminal 100 is triggered to send, to the second terminal 200, the display interface that is currently generated by the WeChat application, so that the second terminal 200 displays the display interface of the WeChat application in the second display interface. In this case, the window of the WeChat application that is originally displayed on the first terminal 100 may be changed correspondingly. For example, as shown in FIG. 9(b), after the WeChat application is switched to the display interface of the second terminal 200, the window of the WeChat application on the first terminal 100 may be correspondingly marked as being displayed on the large-screen display, or the window of the WeChat application on the first terminal 100 may be marked in another color, to remind the user that the WeChat application is currently displayed on the second terminal 200. In addition, as shown in FIG. 9(b), after the WeChat application is switched to the second terminal 200 for displaying, a second shortcut button 24 may be displayed in the window of the WeChat application. When detecting that the user taps the second shortcut button 24, the first terminal 100 is triggered to switch the WeChat application displayed in the second display interface back to the first display interface, and to stop sending, to the second terminal 200, the display interface that is currently generated by the WeChat application. In this case, the second terminal 200 resumes a display state (for example, a desktop state) that existed before the WeChat application was displayed. Certainly, the user may manually configure which applications can be switched to the second terminal 200 for displaying and which applications are not allowed to be switched to the second terminal 200 for displaying, and the first terminal 100 may display the foregoing shortcut button on only an interface of an application that is allowed to be switched to the second terminal 200 for displaying. This is not limited in this embodiment of this application.
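The behavior of the first shortcut button 23 and the second shortcut button 24 is essentially a two-state toggle: push the window's display interface to the second terminal, then pull it back and stop the transmission. The following is a rough sketch under that reading; the class and method names are assumptions, not identifiers from this application.

```java
// Illustrative state machine for the shortcut buttons 23 and 24 in
// FIG. 9: button 23 pushes the application's display interface to the
// second terminal, after which button 24 is shown; tapping button 24
// pulls the interface back and stops the transmission.
public class ShortcutButtons {
    private boolean onSecondTerminal = false;

    void onFirstShortcutTapped(String app) {   // button 23
        if (!onSecondTerminal) {
            onSecondTerminal = true;
            System.out.println("send display interface of " + app + " to second terminal");
            System.out.println("show second shortcut button 24 in the window of " + app);
        }
    }

    void onSecondShortcutTapped(String app) {  // button 24
        if (onSecondTerminal) {
            onSecondTerminal = false;
            System.out.println("stop sending display interface of " + app);
            System.out.println("switch " + app + " back to the first display interface");
        }
    }

    public static void main(String[] args) {
        ShortcutButtons ui = new ShortcutButtons();
        ui.onFirstShortcutTapped("WeChat");
        ui.onSecondShortcutTapped("WeChat");
    }
}
```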
Specifically, with reference to the heterogeneous display principle shown in FIG. 3A and FIG. 3B, as shown in FIG. 10, in the foregoing embodiment, display content of the calendar application, the music application, and the WeChat application that are running on the first terminal 100 is stored in the screen container 1 indicated by the display ID 1. The display of the first terminal 100 presents a multitask window for the calendar application, the music application, and the WeChat application to the user based on the display content of these applications in the screen container 1. In this case, when the first terminal 100 detects the first gesture of double-tapping the window of the WeChat application (or tapping the first shortcut button 23), the first terminal 100 is triggered to store, in the screen container 2, the display content (namely, the first display data) of the WeChat application that is in the screen container 1. In this way, the second terminal 200 connected to the first terminal 100 may, by accessing the screen container 2, switch the window of the WeChat application running on the first terminal 100 to the second display interface of the second terminal 200, where the WeChat application continues to run and to be displayed. In this way, the display content of the WeChat application in the screen container 2 and the display content of the WeChat application in the screen container 1 may be seamlessly connected. In other words, a running application is seamlessly switched between the first terminal 100 and the second terminal 200, and the user does not need to restart the WeChat application on the second terminal 200. This greatly improves user experience when the terminal displays different applications on a plurality of screens. Similarly, when storing, in the screen container 2, the display content of the WeChat application that is in the screen container 1, the first terminal 100 may adjust the display content of the WeChat application based on specification information such as a resolution and a DPI of the display of the second terminal 200, so that the display content of the WeChat application meets the specification information of the display of the second terminal 200. Certainly, if the foregoing target application is a video type application, the first terminal 100 may switch, to the screen container 2 for storage, the display content of the video that is currently played in the screen container 1, so that video playback progress displayed on the first terminal 100 before the switching is the same as video playback progress displayed on the second terminal 200 after the switching. Alternatively, when the foregoing target application is a game type application, the first terminal 100 may switch, to the screen container 2 for storage, the current display content of the game that is in the screen container 1, so that the game interface displayed on the first terminal 100 before the switching is the same as the game interface displayed on the second terminal 200 after the switching. In this way, when the target application is switched between the first display interface and the second display interface, the target application can continue seamlessly without a need to restart a process of the target application.
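The screen-container mechanism of FIG. 3A, FIG. 3B, and FIG. 10 can be modeled as a map from display IDs to per-application display content, where switching an application is a move between containers rather than a restart; this is also why playback progress or game state survives the switch. The sketch below is a minimal illustration under stated assumptions: ScreenContainers, move, and the spec-adjustment stand-in are invented names, and a real implementation would re-render the content for the target display rather than tag a string.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal model of the screen-container mechanism described above: each
// display ID indexes a container holding the display content rendered on
// that display, and moving an entry from container 1 to container 2 is
// what switches a running application to the second terminal without
// restarting its process.
public class ScreenContainers {
    record DisplaySpec(int widthPx, int heightPx, int dpi) {}

    // display ID -> (application name -> display content)
    private final Map<Integer, Map<String, String>> containers = new LinkedHashMap<>();
    private final Map<Integer, DisplaySpec> specs = new LinkedHashMap<>();

    void register(int displayId, DisplaySpec spec) {
        containers.put(displayId, new LinkedHashMap<>());
        specs.put(displayId, spec);
    }

    void put(int displayId, String app, String content) {
        containers.get(displayId).put(app, content);
    }

    // Move an application's display content between containers, adjusting
    // it to the target display's specification (a stand-in for re-rendering
    // at the new resolution and DPI).
    void move(String app, int fromId, int toId) {
        String content = containers.get(fromId).remove(app);
        if (content == null) return;
        DisplaySpec s = specs.get(toId);
        containers.get(toId).put(app, content + "@" + s.widthPx() + "x" + s.heightPx());
    }

    public static void main(String[] args) {
        ScreenContainers sc = new ScreenContainers();
        sc.register(1, new DisplaySpec(1080, 2340, 440)); // first terminal
        sc.register(2, new DisplaySpec(3840, 2160, 140)); // second terminal
        sc.put(1, "WeChat", "chat-window");
        sc.move("WeChat", 1, 2); // WeChat now renders on the second terminal
        System.out.println(sc.containers);
    }
}
```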
Alternatively, shortcut buttons configured on the first terminal 100 may be a shortcut button 1 and a shortcut button 2 in a drop-down list shown in FIG. 11(a), or may be a shortcut button 1 and a shortcut button 2 that are displayed as triggered by a floating button shown in FIG. 11(b). The shortcut button 1 corresponds to an application running in the first display interface of the first terminal 100, and the shortcut button 2 corresponds to an application running in the second display interface of the second terminal 200. In this case, when detecting that the user taps the shortcut button 2, the first terminal 100 presents, in a form of a multitask window in the first display interface, one or more applications that run in the second display interface at this time. When detecting that the user taps the shortcut button 1, the first terminal 100 presents, in a form of a multitask window in the first display interface, one or more applications that currently run in the first display interface. This can help the user manage the applications in the two display interfaces. Alternatively, as shown in FIG. 12, after the first terminal 100 connects to the second terminal 200, the first terminal 100 is running a file management application. In other words, the current first display interface is a display interface of the file management application. If the user makes the first gesture, for example, sliding to the right, on a selected Word file "Payroll of Mr. Zhang" (namely, the target shortcut) in the first display interface, the first terminal 100 may be triggered to send the Word file "Payroll of Mr. Zhang" to the second terminal 200, and the second terminal 200 runs and displays the file in the second display interface. In this way, the user may control, on the first terminal 100 by using the first gesture, the target content to switch between the first terminal 100 and the second terminal 200. This improves efficiency in switching content between a plurality of screens when the terminal performs multi-screen display. Specifically, with reference to the heterogeneous display principle shown in FIG. 3A and FIG. 3B, as shown in FIG. 13, in the foregoing embodiment, display content of the file management application running on the first terminal 100 is stored in the screen container 1 indicated by the display ID 1. In this case, the display of the first terminal 100 presents a display interface (namely, the first display interface) of the file management application to the user based on the display content in the screen container 1. In this case, when the first terminal 100 detects that the user slides the Word file "Payroll of Mr. Zhang" to the right, the first terminal 100 is triggered to store the Word file "Payroll of Mr. Zhang" in the screen container 2. In this way, the second terminal 200 connected to the first terminal 100 may run and display, by accessing the screen container 2, the Word file "Payroll of Mr. Zhang" in the second display interface of the second terminal 200, so that the target file is switched between the first terminal 100 and the second terminal 200. In another possible design manner, after detecting the first gesture triggered by the user in the first display interface, the first terminal may determine, based on current scenario description data, whether to display, in the second display interface of the second terminal, display data correlated with the target content.
In other words, the first terminal may determine, based on a specific current application scenario, whether to switch an application between the first terminal and the second terminal, so that the user can gain optimal application experience. The scenario description data may be used to indicate a specific application scenario in which the user triggers the first gesture. For example, when a player application is the target content, the scenario description data may be specifically a device running status of the first terminal after the first gesture is detected. In this case, as shown in FIG. 14, after the first terminal 100 connects to the second terminal 200, the user triggers the first gesture, for example, a tap operation, on an icon of a player application in the first display interface of the first terminal 100. After the first terminal 100 detects the first gesture triggered on the icon of the player application, the first terminal 100 is triggered to obtain a current device running status of the first terminal 100, for example, at least one of a battery level, network quality, a memory size, CPU usage, and the like of the first terminal 100. Then the first terminal 100 may determine, based on the device running status, whether the first terminal 100 currently supports running of the player application. For example, if a current battery level of the first terminal 100 is less than a preset threshold, it can be determined that the current device running status of the first terminal 100 does not support running and displaying of the player application in the first display interface. In this case, as shown in FIG. 14, a prompt box may be used on the first terminal 100 to prompt the user to switch the player application to the second terminal 200 for running. If the user determines to switch the player application to the second terminal 200 for running, the first terminal 100 is triggered to store, in the screen container 2, a display interface (namely, the first display data) generated by the player application, and send the display interface to the second terminal 200. The second terminal 200 performs displaying based on the display interface of the player application that is in the screen container 2, so that the user can gain more intelligent application experience. 504. (Optional) The first terminal obtains a second gesture triggered by the user in the first display interface. 505. (Optional) In response to the second gesture, the first terminal stops sending the first display data to the second terminal, and displays, in the first display interface, third display data correlated with the first display data. In steps 504 and 505, after the first terminal sends the first display data to the second terminal for displaying, the first terminal may further convert, in response to the second gesture made by the user, the first display data in the second display interface into the third display data that matches a display specification of the first terminal, and display the third display data in the first display interface, so that the target application is switched back to the first display interface of the first terminal. In this way, the user may freely switch the target application between the first terminal and the second terminal simply by operating the first terminal.
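Stepping back to the scenario-based decision of FIG. 14: the check can be read as a simple predicate over the device running status, where any monitored quantity crossing a preset threshold makes the terminal offer to switch the target application to the second terminal. A hedged sketch follows; the threshold values and names are illustrative assumptions, not values from this application.

```java
// Sketch of the scenario-based decision in FIG. 14: after the first
// gesture, the first terminal checks its running status and, if it
// cannot comfortably run the target application, offers to switch it
// to the second terminal. All thresholds here are assumptions.
public class SwitchDecision {
    static boolean shouldOfferSwitch(int batteryPercent, double cpuUsage, long freeMemMb) {
        final int MIN_BATTERY = 20;        // stand-in for the preset battery threshold
        final double MAX_CPU = 0.90;
        final long MIN_FREE_MEM_MB = 200;
        return batteryPercent < MIN_BATTERY
                || cpuUsage > MAX_CPU
                || freeMemMb < MIN_FREE_MEM_MB;
    }

    public static void main(String[] args) {
        if (shouldOfferSwitch(12, 0.35, 900)) {
            System.out.println("prompt: switch player application to second terminal?");
        }
    }
}
```

The example that follows returns to the switch-back gesture of steps 504 and 505.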
For example, after the first terminal 100 connects to the second terminal 200, in response to the first gesture, the first terminal 100 switches a player application to the second display interface of the second terminal 200 for running and displaying. In this case, as shown in FIG. 15(a), if the first terminal 100 detects that the user taps a switching button 22 in the first display interface, as shown in FIG. 15(b), the first terminal 100 may be triggered to stop sending the display data in the screen container 2 to the second terminal 200. In addition, the first terminal 100 may convert the display content of the player application that is in the screen container 2 into display content (namely, the third display data) that matches specification information of the display of the first terminal 100, and move the display content to the screen container 1. In this way, the display of the first terminal 100 may switch the player application back to the first display interface for displaying, simply by reading the display content of the player application from the screen container 1. In this way, the display content of the player application in the screen container 1 and the display content of the player application in the screen container 2 may be seamlessly connected. In other words, an application, a document, a photo, or the like may be seamlessly and freely switched between the first terminal 100 and the second terminal 200. This improves efficiency in switching an application between a plurality of screens. Certainly, in addition to tapping the switching button 22, the second gesture may be a floating touch gesture, a two-finger sliding gesture, a press gesture, or the like. This is not limited in this embodiment of this application. In addition, when the first terminal 100 detects the second gesture triggered in the first display interface, if a plurality of applications run in the second display interface of the second terminal 200, the first terminal 100 may selectively switch one or more of the plurality of applications back to the first display interface. For example, because an application that is being operated by the user usually runs in the foreground, the first terminal 100 may switch only an application running in the foreground in the second display interface back to the first display interface. Alternatively, the first terminal 100 may switch all applications running in the second display interface back to the first display interface. This is not limited in this embodiment of this application. Further, as shown in FIG. 16, when a plurality of applications run in the second display interface of the second terminal 200, if the first terminal 100 detects that the user taps the switching button 22 in the first display interface, the first terminal 100 may be triggered to display a prompt box in the first display interface of the first terminal 100. The prompt box may include all the applications running in the second display interface. In this case, the user may manually select one or more applications that need to be switched back to the first display interface. This helps the user manage a process of switching an application between a plurality of screens. In another possible design method, an embodiment of this application further provides a display method. As shown in FIG. 17, the method includes the following steps. 601. A first terminal connects to a second terminal, so that the first terminal and the second terminal can communicate with each other. 602.
(Optional) The first terminal sends first display data to the second terminal, so that the second terminal displays M shortcuts in a second display interface, where the M shortcuts are shortcuts of applications that are installed on the first terminal and that have a same or similar attribute. Specifically, after the first terminal connects to the second terminal, the first terminal may select M (M≤N) applications from N applications installed on the first terminal, and send icons (namely, the first display data) of the M applications to the second terminal, so that the second terminal displays, in the display interface of the second terminal, the M application icons that have a same or similar attribute, and a user can quickly find a required target application on the second terminal. For example, as shown in FIG. 18, eight applications are installed on the first terminal 100, and four of the applications are game applications. In this case, when the user sets the first terminal 100 to a game mode, if the first terminal 100 connects to the second terminal 200, the first terminal may project icons of the four game applications onto the second display interface of the second terminal 200. Alternatively, when detecting that the first terminal 100 connects to an office computer of a company, the first terminal 100 may project only applications or data that are unrelated to user privacy onto the second display interface of the second terminal 200, to reduce a risk of leaking user privacy during screen sharing. Specifically, the M applications may be a type of application on the first terminal, for example, an office type application (including office series applications), a game type application (including Angry Birds, Honor of Kings, and the like), or a payment type application (including Alipay, Industrial and Commercial Bank of China, and the like). This type of application may be obtained through division based on a user setting, or may be obtained through division by the first terminal based on a classification standard in an application market. This is not limited in this embodiment of this application. 603. The first terminal starts a target application in a first display interface. 604. The first terminal sends, to the second terminal, second display data of an icon used to indicate the target application, so that the second terminal displays the icon of the target application in the second display interface. In steps 603 and 604, when the first terminal starts to run a new application (namely, the target application) in the first display interface, the first terminal may send, to the second terminal for displaying, the second display data corresponding to the icon of the target application, so that the icon of the target application is synchronously displayed in the second display interface of the second terminal. In this way, an entry for quickly entering the target application may be provided for the user in the second display interface, to help the user continue to perform a related function of the target application on the second terminal. For example, as shown in FIG. 19, after the user taps an icon of a player on the first terminal 100, the first terminal 100 is triggered to start to run the player application. In this case, still as shown in FIG. 19, the first terminal 100 may display an icon 25 of the player application in a status bar in the second display interface, to notify the user that the first terminal 100 is running the player application.
In this case, if the user needs to run the player application on the second terminal 200, the user may directly tap the icon 25 of the player application in the second display interface. 605. The first terminal sends third display data to the second terminal when detecting that a user triggers an icon of the target application in the second display interface, so that the second terminal runs the target application in the second display interface, where the third display data is used to indicate a display interface that is displayed when the target application is running. For example, when detecting that the user triggers the icon 25 of the player application in the second display interface, as shown in FIG. 20, the first terminal may store, in a screen container 2, display content of the player application that is currently generated in a screen container 1, and send the display content of the player application that is in the screen container 2 to the second terminal 200 for displaying. In this case, as shown in FIG. 21, the second terminal 200 connected to the first terminal 100 may continue to display the display content of the player application in the second display interface, so that the player application running on the first terminal 100 is switched to the second display interface of the second terminal 200 to continue to run and to be displayed. In addition, the first terminal 100 may stop displaying a related display interface of the player application, so that the first terminal 100 may run another application. In this way, independent running is implemented in the first display interface and the second display interface. In this case, as shown in FIG. 21, the switching button 22 may be further displayed in the display interface of the first terminal 100. When detecting that the user triggers the switching button 22, the first terminal 100 may stop sending the display content of the player application that is in the screen container 2 to the second terminal 200. In addition, the first terminal 100 may convert the display content of the player application that is in the screen container 2 into display content that matches specification information of a display of the first terminal 100, and move the display content to the screen container 1. In this way, as shown in FIG. 19, the display of the first terminal 100 may switch the player application back to the first display interface for displaying, simply by reading the display content of the player application from the screen container 1. In this case, an interface element currently displayed in the first display interface of the first terminal 100 is the same as an interface element displayed, before the user triggers the switching button 22, in the second display interface of the second terminal 200. In other words, the display content of the player application in the screen container 1 and the display content of the player application in the screen container 2 may be seamlessly connected. That is, the application is seamlessly and freely switched between the first terminal 100 and the second terminal 200, and the user does not need to restart the target application on the second terminal 200. This greatly improves user experience when the terminal performs multi-screen display.
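Looking back at step 602, the selection of the M shortcuts from the N installed applications amounts to filtering by a shared attribute, optionally combined with a privacy rule such as the office-computer example. The sketch below illustrates one such filter under assumed category labels; the record fields and the privacySensitive flag are inventions for the example, not part of this application.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the step 602 selection: from the N installed applications,
// pick the M that share an attribute (here a category tag) and project
// only their icons to the second terminal.
public class AppProjection {
    record App(String name, String category, boolean privacySensitive) {}

    static List<App> selectForProjection(List<App> installed, String wantedCategory) {
        return installed.stream()
                .filter(a -> a.category().equals(wantedCategory))
                .filter(a -> !a.privacySensitive()) // e.g. when connected to an office PC
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<App> installed = List.of(
                new App("Angry Birds", "game", false),
                new App("Honor of Kings", "game", false),
                new App("Alipay", "payment", true),
                new App("Calendar", "office", false));
        System.out.println(selectForProjection(installed, "game"));
    }
}
```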
It may be understood that, to implement the foregoing functions, the foregoing terminal and the like include corresponding hardware structures and/or software modules for performing the functions. A person of ordinary skill in the art should be easily aware that the units and algorithm steps in the examples described with reference to the embodiments disclosed in the embodiments of this application may be implemented by hardware or a combination of hardware and computer software in the embodiments of this application. Whether a function is performed by hardware or by computer software driving hardware depends on a particular application and a design constraint of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of the embodiments of this application. In the embodiments of this application, the terminal may be divided into function modules based on the foregoing method examples. For example, each function module may be obtained through division based on a corresponding function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in a form of hardware, or may be implemented in a form of a software function module. It should be noted that the module division in the embodiments of this application is an example, and is merely logical function division. There may be another division manner in an actual implementation. When each function module is obtained through division based on a corresponding function, FIG. 22 is a possible schematic structural diagram of a terminal (for example, the first terminal or the second terminal) in the foregoing embodiments. The terminal includes a connection unit 1101, an obtaining unit 1102, a sending unit 1103, and a display unit 1104. The connection unit 1101 is configured to support the terminal in performing the process 501 in FIG. 5 and the process 601 in FIG. 17. The obtaining unit 1102 is configured to support the terminal in performing the processes 502 and 504 in FIG. 5. The sending unit 1103 is configured to support the terminal in performing the process 503 in FIG. 5 and the processes 602, 604, and 605 in FIG. 17. The display unit 1104 is configured to support the terminal in performing the process 505 in FIG. 5 and the process 603 in FIG. 17. All related content of the steps in the foregoing method embodiments may be cited in the function descriptions of the corresponding function modules. Details are not described herein again. When an integrated unit is used, FIG. 23 is a possible schematic structural diagram of a terminal (for example, the first terminal or the second terminal) in the foregoing embodiments. The terminal includes a processing module 1302 and a communications module 1303. The processing module 1302 is configured to control and manage an action of the terminal. The communications module 1303 is configured to support communication between the terminal and another network entity. The terminal may further include a storage module 1301 that is configured to store program code and data of the terminal.
The processing module 1302 may be a processor or a controller, for example, may be a central processing unit (Central Processing Unit, CPU), a general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application-Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The processing module 1302 may implement or execute various example logical blocks, modules, and circuits that are described with reference to the content disclosed in this application. The processor may also be a combination implementing a computing function, for example, a combination including one or more microprocessors or a combination of a DSP and a microprocessor. The communications module 1303 may be a transceiver, a transceiver circuit, a communications interface, or the like. The storage module 1301 may be a memory. When the processing module 1302 is a processor, the communications module 1303 is an RF transceiver circuit, and the storage module 1301 is a memory, the terminal provided in this embodiment of this application may be the mobile phone 100 shown in FIG. 4. All or some of the foregoing embodiments may be implemented by using software, hardware, firmware, or any combination thereof. When a software program is used to implement the embodiments, the embodiments may be implemented completely or partially in a form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of this application are all or partially generated. The computer may be a general-purpose computer, a dedicated computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or may be transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, a coaxial cable, an optical fiber, or a digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device, such as a server or a data center, that integrates one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid-state drive (Solid State Disk, SSD)), or the like. The foregoing descriptions are merely specific implementations of this application, but are not intended to limit the protection scope of this application. Any variation or replacement within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.
DETAILED DESCRIPTION Reference will now be made in detail to the disclosed embodiments, examples of which are illustrated in the accompanying drawings. Wherever convenient, the same reference numbers will be used throughout the drawings to refer to the same or like parts. A user device, such as a mobile phone or a tablet, may provide input prediction functionalities to facilitate user input of text, in which one or more candidate inputs may be displayed as suggestions based on a user's current input. It may be desired that a number of candidate inputs be provided such that the likelihood of the candidate inputs including the user-intended input is higher. Further, it may be desired that a particular type of candidate input be displayed according to a user's indication that such candidate inputs are intended by the user as the next input. Example embodiments of the present disclosure provide methods, devices, and systems for providing candidate inputs. Consistent with disclosed embodiments, a user device may enter a sentence-generating mode in response to a user request. The sentence-generating mode may inhibit display of a virtual keyboard and display a plurality of candidate inputs in the area where the virtual keyboard was displayed. The user device may also display a plurality of synonyms or conjugations of a candidate input in response to detecting a user gesture performed on the candidate input. The embodiments herein include computer-implemented methods, tangible non-transitory computer-readable mediums, and systems. The computer-implemented methods can be executed, for example, by at least one processor that receives instructions from a non-transitory computer-readable storage medium. Similarly, systems and devices consistent with the present disclosure can include at least one processor and memory, and the memory can be a non-transitory computer-readable storage medium. As used herein, a non-transitory computer-readable storage medium refers to any type of physical memory on which information or data readable by at least one processor can be stored. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD-ROMs, DVDs, flash drives, disks, and any other known physical storage medium. Singular terms, such as "memory" and "computer-readable storage medium," can additionally refer to multiple structures, such as a plurality of memories or computer-readable storage mediums. As referred to herein, a "memory" can comprise any type of computer-readable storage medium unless otherwise specified. A computer-readable storage medium can store instructions for execution by at least one processor, including instructions for causing the processor to perform steps or stages consistent with an embodiment herein. Additionally, one or more computer-readable storage mediums can be utilized in implementing a computer-implemented method. The term "computer-readable storage medium" should be understood to include tangible items and exclude carrier waves and transient signals. FIG. 1 is a diagram of an exemplary communications system 100 in which various implementations described herein may be practiced. The components and arrangement shown in FIG. 1 are not intended to be limiting to the disclosed embodiments, as the components used to implement the processes and features disclosed herein may vary. As shown in FIG. 1, communications system 100 includes a plurality of user devices 120A-120E associated with a plurality of users 130A-130E, respectively.
In some embodiments, communications system 100 may be, for example, a communication platform based on a chat application, a text message application, an EMAIL application, or a social network application that allows users (e.g., 130A-130E) to exchange electronic messages, documents, audio or video content, and gaming, and otherwise to interact with one another in real time using associated user devices (e.g., 120A-120E). As shown in FIG. 1, communications system 100 includes one or more user devices 120A-120E (collectively referred to as user devices 120) and a network 140. Network 140 facilitates communications and sharing of communication content between the user devices 120. Network 140 may be any type of network that provides communications, exchanges information, and/or facilitates the exchange of information between user devices 120. For example, network 140 may be the Internet, a Local Area Network, a cellular network, a public switched telephone network ("PSTN"), or other suitable connection(s) that enables communications system 100 to send and receive information between the components of communications system 100. A network may support a variety of electronic messaging formats, and may further support a variety of services and applications for user devices 120. Users 130A-130E communicate with one another using various types of user devices 120A-120E via network 140. As an example, user devices 120A, 120B, and 120D include a display such as a television, tablet, computer monitor, video conferencing console, or laptop computer screen. User devices 120A, 120B, and 120D may also include video/audio input devices such as a video camera, web camera, or the like. As another example, user devices 120C and 120E include mobile devices such as a tablet or a smartphone having display and video/audio capture capabilities. User devices 120A-120E may also include one or more software applications that enable the user devices to engage in communications, such as IM, text messages, EMAIL, VoIP, and video conferences, with one another in a group communication environment where each user may view content posted by other users and may post content that can be accessed by other users in a communication group. The messages exchanged among users 130 via network 140 may contain text, audio, video, data, or any other multimedia content. In some embodiments, user devices 120 may predict text likely to be entered by users 130 and provide candidate inputs for users 130 to select when typing a message. For example, user device 120E may display a plurality of candidate inputs for selection by user 130E when user 130E composes a message. The candidate inputs may be provided based on the current text input entered by user 130E and historic text input activities on user device 120E. FIG. 2 is a block diagram of an exemplary user device 200 for implementing embodiments consistent with the present disclosure. User device 200 can be used to implement computer programs, applications, methods, processes, or other software to perform embodiments described in the present disclosure. User device 200 includes a memory interface 202, one or more processors 204 such as data processors, image processors, and/or central processing units, and a peripherals interface 206. Memory interface 202, processors 204, and/or peripherals interface 206 can be separate components or can be integrated in one or more integrated circuits. The various components in user device 200 can be coupled by one or more communication buses or signal lines.
Sensors, devices, and subsystems can be coupled to the peripherals interface 206 to facilitate multiple functionalities. For example, a motion sensor 210, a light sensor 212, and a proximity sensor 214 can be coupled to the peripherals interface 206 to facilitate orientation, lighting, and proximity functions. Other sensors 216 can also be connected to the peripherals interface 206, such as a positioning system (e.g., GPS receiver), a temperature sensor, a biometric sensor, or another sensing device, to facilitate related functionalities. A GPS receiver can be integrated with, or connected to, user device 200. For example, a GPS receiver can be built into mobile telephones, such as smartphone devices. GPS software allows mobile telephones to use an internal or external GPS receiver (e.g., connecting via a serial port or Bluetooth). A camera subsystem 220 and an optical sensor 222, e.g., a charge-coupled device ("CCD") or a complementary metal-oxide semiconductor ("CMOS") optical sensor, may be utilized to facilitate camera functions, such as recording photographs and video clips. In some embodiments, processors 204 may be configured to track a gaze of the user's eyes via camera subsystem 220. For example, camera subsystem 220 may capture an eye movement by sensing, via optical sensor 222, the infrared light reflected from an eye. Processors 204 may be configured to determine a particular text displayed on touch screen 246 that a user is looking at, based on the direction of the user's gaze. Processors 204 may be further configured to determine a duration of the user's gaze, based on the output of the eye tracking sensors. In some embodiments, other sensors 216 may include one or more eye tracking sensors configured to track a viewing direction of the user by tracking and/or monitoring the eyes of the user to determine the user's gaze direction. The eye tracking sensors may also be configured to provide an output indicative of the viewing direction of the user by tracking a gaze of the user's eyes. Processors 204 may be configured to determine a particular text displayed on touch screen 246 that a user is looking at, based on the direction of the user's gaze. Processors 204 may be further configured to determine a duration of the user's gaze, based on the output of the eye tracking sensors. Communication functions may be facilitated through one or more wireless/wired communication subsystems 224, which may include an Ethernet port, radio frequency receivers and transmitters, and/or optical (e.g., infrared) receivers and transmitters. The specific design and implementation of wireless/wired communication subsystem 224 depends on the communication network(s) over which user device 200 is intended to operate. For example, in some embodiments, user device 200 includes wireless/wired communication subsystems 224 designed to operate over a GSM network, a GPRS network, an EDGE network, a Wi-Fi or WiMax network, and a Bluetooth® network. An audio subsystem 226 may be coupled to a speaker 228 and a microphone 230 to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and telephony functions. I/O subsystem 240 includes a touch screen controller 242 and/or other input controller(s) 244. Touch screen controller 242 is coupled to a touch screen 246.
Touch screen 246 and touch screen controller 242 can, for example, detect contact and movement or break thereof using any of a plurality of touch sensitivity technologies, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen 246. While touch screen 246 is shown in FIG. 2, I/O subsystem 240 may include a display screen (e.g., CRT or LCD) in place of touch screen 246. Other input controller(s) 244 is coupled to other input/control devices 248, such as one or more buttons, rocker switches, a thumb-wheel, an infrared port, a USB port, and/or a pointer device such as a stylus. Touch screen 246 can, for example, also be used to implement virtual or soft buttons and/or a keyboard. Memory interface 202 is coupled to memory 250. Memory 250 includes high-speed random access memory and/or non-volatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices, and/or flash memory (e.g., NAND, NOR). Memory 250 stores an operating system 252, such as DARWIN, RTXC, LINUX, iOS, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks. The operating system 252 can include instructions for handling basic system services and for performing hardware dependent tasks. In some implementations, operating system 252 can be a kernel (e.g., UNIX kernel). Memory 250 may also store communication instructions 254 to facilitate communicating with one or more additional devices, one or more computers, and/or one or more servers. Memory 250 can include graphical user interface instructions 256 to facilitate graphic user interface processing; sensor processing instructions 258 to facilitate sensor-related processing and functions; phone instructions 260 to facilitate phone-related processes and functions; electronic messaging instructions 262 to facilitate electronic-messaging related processes and functions; web browsing instructions 264 to facilitate web browsing-related processes and functions; media processing instructions 266 to facilitate media processing-related processes and functions; GPS/navigation instructions 268 to facilitate GPS and navigation-related processes and instructions; camera instructions 270 to facilitate camera-related processes and functions; and/or other software instructions 272 to facilitate other processes and functions. Memory 250 may also include multimedia conference call managing instructions 274 to facilitate conference call related processes and instructions. Memory 250 may also include recipient vocabulary profiles. In the presently described embodiment, the instructions cause processor 204 to perform one or more functions of the disclosed methods. For example, the instructions may cause processor 204 to display a virtual keyboard in a first area of the display, receive text input via the keyboard, display the text input, receive a user request to enter a sentence-generating mode, the sentence-generating mode inhibiting display of the keyboard and displaying a plurality of candidate inputs in a second area of the display, the second area comprising the first area, receive a selection of one of the candidate inputs, and display the received selection. Each of the above identified instructions and software applications may correspond to a set of instructions for performing the functions described above. These instructions need not be implemented as separate software programs, procedures, or modules.
The memory250may include additional instructions or fewer instructions. Furthermore, various functions of the user device200may be implemented in hardware and/or in software, including in one or more signal processing and/or application-specific integrated circuits (ASICs). FIG.3is a flowchart of an exemplary process300for providing candidate inputs on a user device, consistent with disclosed embodiments. The steps associated with this example process may be performed by, for example, a processor of the user device120ofFIG.1. The exemplary process300allows the user device to provide candidate inputs for a user to select when the user types a message using the user device. In step310, the user device displays a virtual keyboard in a first area of the display. For example, the virtual keyboard may be displayed near the top of a touch screen, near the bottom of a touch screen, or near the space where the user is currently entering text. The virtual keyboard includes a plurality of keys, each of which is configured to register a respective character of a symbolic system on the displayed user interface. In some embodiments, the virtual keyboard may be displayed in response to a user operation in a text field of a user interface. For example, the virtual keyboard may be displayed in response to a user tapping operation in a message input box of an instant messaging application. In some embodiments, the virtual keyboard may be displayed in response to a user selection of an icon on the user interface. For example, the virtual keyboard may be displayed in response to a user selecting an icon for composing a new Email in an Email application. The present disclosure does not intend to limit the types of virtual keyboards or the arrangement of keys in the virtual keyboards. In step320, the user device receives a text input via the virtual keyboard. The text input may include one or more letters, characters, words, numbers, symbols, punctuations, icons, and/or a combination of any of those. In step330, the user device displays the text input on the touchscreen. Referring briefly toFIG.4, there is illustrated an exemplary user interface400for providing candidate inputs, consistent with disclosed embodiments. As shown in the left diagram400a, a virtual keyboard401is displayed in a first area that is near the bottom of the touchscreen display. The user device receives a text input402via the virtual keyboard, and text input402is displayed on the touchscreen. It can also be seen that the user device provides a text prediction function, and a plurality of candidate inputs403are displayed in the touchscreen. Candidate inputs403are provided based on the text input402. For example, the user enters text “ca” via the virtual keyboard401, and the user device provides candidate inputs that begin with letters “ca” for the user to select and enter, including “cat,” “can,” “cad,” “car,” “catalog,” and “cater.” Candidate inputs403are displayed in an area of the touchscreen that is near the text input, such as in a row below the text input field as shown in the diagram400a. The virtual keyboard is displayed concurrently with the candidate inputs, such that the user may manually type the text instead of selecting one of the candidate inputs. This mode of displaying the candidate inputs concurrently with the virtual keyboard is referred to as the regular input mode in the present disclosure. Returning now toFIG.3, in step340, the user device receives a user request to enter a sentence-generating mode.
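As a non-limiting illustration of the prefix-based text prediction described above (entering “ca” and receiving candidate inputs that begin with “ca”), the lookup may be sketched as follows; the vocabulary and frequency counts are hypothetical and form no part of the disclosed embodiments.

```python
# Illustrative sketch only: return candidate words that begin with the
# characters typed so far, ordered by a hypothetical usage frequency.
VOCABULARY = {"cat": 50, "can": 40, "cad": 5, "car": 30, "catalog": 8, "cater": 6}

def candidates_for_prefix(prefix: str, limit: int = 6):
    matches = [w for w in VOCABULARY if w.startswith(prefix)]
    matches.sort(key=lambda w: VOCABULARY[w], reverse=True)  # likeliest first
    return matches[:limit]

print(candidates_for_prefix("ca"))
# -> ['cat', 'can', 'car', 'catalog', 'cater', 'cad']
```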
In the sentence-generating mode, the user device inhibits display of the keyboard and displays a plurality of candidate inputs in a second area of the display. In some embodiments, the second area for displaying the candidate inputs includes the first area where the virtual keyboard was displayed. For example, in the sentence-generating mode, the virtual keyboard is no longer displayed, and the area where the virtual keyboard was previously displayed is used to display the candidate inputs. Referring toFIG.4, the right diagram400billustrates a user interface in the sentence-generating mode. As shown in the right diagram400b, in the sentence-generating mode, the virtual keyboard is not displayed in the touchscreen. A plurality of candidate inputs404are displayed in a second area of the touchscreen, the second area including the first area where the virtual keyboard was previously displayed. As shown in the diagram400b, the second area of the display includes a plurality of rows, the rows including a plurality of grids displaying the candidate inputs. It can be seen that by inhibiting the display of the virtual keyboard and using the first area to display the candidate inputs, the area to display the candidate inputs becomes larger and more candidate inputs can be displayed in the touchscreen compared to the regular input mode shown in diagram400a. In some embodiments, the user request to enter the sentence-generating mode may include a swiping gesture. For example, in the regular input mode of diagram400a, a user may perform a swiping gesture on one of the displayed candidate inputs, such as the word “cat.” The user device may detect the swiping gesture performed on the word “cat,” and in response to the detection, switch to the sentence-generating mode for text input. The user request to enter the sentence-generating mode may include other gestures performed on the displayed candidate inputs, and the present disclosure does not intend to limit the types of gestures to trigger the sentence-generating mode. In some embodiments, the user request to enter the sentence-generating mode may include a tracked eye movement. For example, in the regular input mode of diagram400a, a user may blink one or both eyes when looking at one of the displayed candidate inputs, such as the word “cat.” The user device may detect the blinking action and the user's gaze on the word “cat,” and in response to the detection, switch to the sentence-generating mode for text input. The eye movement to enter the sentence-generating mode may include other eye movements, such as looking at the displayed candidate inputs one by one in sequence, and the present disclosure does not intend to limit the types of eye movements to trigger the sentence-generating mode. In some embodiments, the user request to enter the sentence-generating mode may include selecting an icon displayed on the touchscreen. For example, a mode switching icon may be displayed on the touchscreen, and upon detecting a user selection of the mode switching icon, the user device may switch from the regular input mode to the sentence-generating mode. In some embodiments, the sentence-generating mode may be set as a default mode for text input, and no user request to trigger the sentence-generating mode is required. The user device may switch from the sentence-generating mode to the regular input mode upon receiving a user request to do so.
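Purely by way of illustration, the switching between the regular input mode and the sentence-generating mode described above may be sketched as a small state machine; the event names below are hypothetical stand-ins for the swiping gesture, eye blink, and mode switching icon triggers discussed in this disclosure.

```python
# Illustrative sketch only: a state machine for entering and leaving the
# sentence-generating mode in response to the triggers described above.
REGULAR, SENTENCE_GENERATING = "regular", "sentence_generating"

class InputModeController:
    def __init__(self, default_mode=REGULAR):
        self.mode = default_mode

    def handle_event(self, kind: str) -> str:
        if self.mode == REGULAR and kind in (
            "swipe_on_candidate",   # swiping gesture on a candidate input
            "blink_on_candidate",   # blink while gazing at a candidate
            "mode_icon_tap",        # selection of a mode switching icon
        ):
            self.mode = SENTENCE_GENERATING  # keyboard area now shows candidates
        elif self.mode == SENTENCE_GENERATING and kind == "mode_icon_tap":
            self.mode = REGULAR              # the virtual keyboard returns
        return self.mode

ctl = InputModeController()
print(ctl.handle_event("swipe_on_candidate"))  # -> sentence_generating
```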
As one example of such a request, a mode switching icon may be displayed on the touchscreen, and upon detecting a user selection of the mode switching icon, the user device may switch from the sentence-generating mode back to the regular input mode. Returning toFIG.3, in step350, the user device receives a selection of one of the candidate inputs. After the candidate inputs are displayed, a user may select one of the candidate inputs as a user input by performing a gesture. For example, a user may perform a swiping gesture on one of the candidate inputs, and upon detecting the swiping gesture on one of the candidate inputs, the user device may determine that the corresponding candidate input is selected by the user. FIG.5illustrates an exemplary user interface500for providing candidate inputs in a sentence-generating mode, consistent with disclosed embodiments. As shown in the left diagram500a, a plurality of candidate inputs are displayed in the touchscreen. For example, the user device may detect a swiping gesture performed on the word “jump,” and in response to the detection, determine that the word “jump” is selected by the user as the text input. As another example, the user device may detect a tapping gesture performed on the word “jump,” and in response to the detection, determine that the word “jump” is selected by the user as the text input. In some embodiments, the user device may determine the candidate inputs based on the user's historic activities. For example, the user device may obtain prior messages typed by the user in a predetermined time period, such as in the last month or in the last three months, and determine a word occurring after the same preceding word in the messages as a candidate input. Referring to diagram500ashown inFIG.5, the user device may identify that the word “eating” occurred after the word “cat” in the user's past messages and determine the word “eating” as a candidate input for displaying on the touchscreen. In some implementations, the user device may implement a statistics library to store the words inputted by the user following a certain word. The user device may update the statistics library each time the user uses the user device to input messages, so as to reflect the recent user activities for providing candidate inputs. In some embodiments, the user device may determine the candidate inputs based on the user's historic communications with the same message recipient. For example, the user may be composing a message to send to a recipient, and the user device may obtain communication records between the user and the recipient in a predetermined time period for providing candidate inputs. If a word occurred after the same preceding word in previous communication between the user and the recipient, the word may be determined as a candidate input. Referring to diagram500ashown inFIG.5, the user device may identify that the word “sitting” occurred after the word “cat” in a message sent from the recipient to the user, and determine the word “sitting” as a candidate input for displaying on the touchscreen. On the other hand, even though the word “crawling” occurred after the word “cat” in a message sent from the user to another recipient, the user device may determine the word “crawling” is not a candidate input for displaying on the touchscreen. In some implementations, the user device may implement a statistics library for each contact of the user to store the words inputted by the user or by the contact in communication between the user and the contact.
The user device may update the statistics library for each contact each time the user uses the user device to communicate with the corresponding contact in a text or messaging form, so as to reflect the recent communication between the user and the contact for providing candidate inputs. In some embodiments, the user device may identify the application in the user device currently being used by the user for writing the message and take that into account in determining the candidate inputs. For example, if the user is using an instant messaging application, such as a finance chatbot, to type the electronic message, the user device may determine that the message relates to finance and identify candidate inputs that relate to finance. As another example, if the user is using a vehicle shopping chatbot to input the electronic message, the user device may determine that the message relates to vehicle shopping and identify candidate inputs that relate to vehicles and sales. In some embodiments, the user device may arrange the candidate inputs in the touchscreen based on likelihoods of the candidate inputs being selected by a user. For example, the user device may detect the presence of a finger at a first position on the touchscreen display, and arrange the candidate inputs having a higher likelihood of being selected by the user at a position nearer to the first position. Referring to diagram500ashown inFIG.5, the user device may determine the user's finger is at the right edge of the box for the word “jump” on the touchscreen and arrange the candidate inputs having a higher likelihood of being selected by the user at a position nearer to the right edge of the box for the word “jump.” In some embodiments, the user device may detect an eye movement of the user and determine that one of the candidate inputs is selected by the user. For example, the user device may detect a gaze by the user on the word “jump,” and determine that the word “jump” is selected by the user as a text input. As another example, the user device may detect a blinking action by the user while the user's gaze is on the word “jump,” and in response to the detection, determine that the word “jump” is selected by the user as a text input. The eye movement to select one of the candidate inputs may include other eye movements, such as closing the eyes for a certain duration after looking at one of the candidate inputs. The present disclosure does not intend to limit the types of eye movements to select one of the candidate inputs. In step360, the user device displays the received selection. Referring to the right diagram500bshown inFIG.5, the user device determines the word “jumping” is selected by the user among the plurality of candidate inputs and displays the word “jumping” in the text field where the user is currently entering text. Once the received selection is displayed in the touchscreen, the user device determines the next set of candidate inputs and updates the previously displayed candidate inputs with the new set. As shown in diagram500bofFIG.5, a new set of candidate inputs is displayed once a user selection of the candidate input “jumping” is received and the received selection is displayed on the touchscreen. Diagram500balso shows that the candidate inputs can include punctuations in addition to words. In the present disclosure, the candidate inputs are not limited to words and can include words, punctuations, symbols, numbers, characters, icons, or a combination of any of those.
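The per-contact statistics library described above, which records which word followed which preceding word in past messages, may be sketched as follows for illustration only; the message data, class, and method names are hypothetical and form no part of the disclosed embodiments.

```python
# Illustrative sketch only: count, per contact, which word followed which
# preceding word, and rank next-word candidates by those counts.
from collections import defaultdict

class ContactBigramLibrary:
    def __init__(self):
        # contact -> preceding word -> {next word: count}
        self._counts = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))

    def record_message(self, contact: str, text: str):
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self._counts[contact][prev][nxt] += 1

    def candidates(self, contact: str, preceding: str, limit: int = 8):
        followers = self._counts[contact][preceding.lower()]
        # Higher counts first, i.e., higher likelihood of being selected.
        return sorted(followers, key=followers.get, reverse=True)[:limit]

lib = ContactBigramLibrary()
lib.record_message("alice", "the cat sitting on the mat")
lib.record_message("alice", "the cat eating dinner")
lib.record_message("bob", "the cat crawling around")
# Only words seen in communication with this recipient are offered:
print(lib.candidates("alice", "cat"))   # -> ['sitting', 'eating']
```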
Diagram500balso shows that one or more grids in the area for displaying candidate inputs may be left blank when the number of available candidate inputs is less than the number of candidate inputs that can be displayed on the touchscreen. For example, the grid that is farthest away from the position of the user's finger on the display may be left blank if the number of available candidate inputs is less than the number of candidate inputs that can be displayed on the touchscreen. The process of identifying and providing the new set of candidate inputs is the same as the process described above in connection with step350. Steps350and360may be repeated until the user finishes inputting a message or a request to exit from the sentence-generating mode is received. FIG.6illustrates an exemplary user interface600for exiting from the sentence-generating mode, consistent with disclosed embodiments. As shown in the left diagram600a, the user device is in the sentence-generating mode, and a plurality of candidate inputs are displayed in the touchscreen in lieu of the virtual keyboard. In some embodiments, the user device may detect a lift of a finger off the display, and in response to the detection, the user device may exit from the sentence-generating mode and return to the regular input mode. In some implementations, the user device may detect that the duration of the lift of the finger off the display is less than a predetermined time threshold, and maintain the sentence-generating mode for text input. On the other hand, if the user device detects that the duration of the lift of the finger off the display is equal to or greater than the predetermined time threshold, the user device may exit from the sentence-generating mode and return to the regular input mode. As shown in the right diagram600b, the user device may detect that the user lifts the finger off the display, and in response, exit from the sentence-generating mode. The user device may return to the regular input mode in which the virtual keyboard is displayed on the touchscreen. In some embodiments, the user device may exit from the sentence-generating mode upon detecting a user gesture performed on one or more of the candidate inputs. For example, the user device may detect a multi-finger tapping gesture on the candidate inputs, and in response to the detection, exit from the sentence-generating mode. As another example, the user device may detect a two-finger swiping gesture on one of the candidate inputs, and in response to the detection, exit from the sentence-generating mode. As another example, the user device may detect a multi-tapping gesture on one of the candidate inputs, and in response to the detection, exit from the sentence-generating mode. The user gesture to trigger exit from the sentence-generating mode may include other gestures performed on the displayed candidate inputs, and the present disclosure does not intend to limit the types of gestures to trigger exit from the sentence-generating mode. In some embodiments, the user device may exit from the sentence-generating mode upon detecting a tracked eye movement. For example, the user device may detect a closing of the user's eyes for longer than a predetermined time duration, and in response to the detection, exit from the sentence-generating mode.
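As a non-limiting illustration, the duration checks described above for exiting the sentence-generating mode, whether for a finger lifted off the display or for closed eyes, may be sketched as follows; the threshold values are hypothetical.

```python
# Illustrative sketch only: exit the sentence-generating mode when a lift
# or an eye closure lasts at least a predetermined time threshold.
LIFT_THRESHOLD_S = 1.5         # hypothetical finger-lift threshold
EYES_CLOSED_THRESHOLD_S = 2.0  # hypothetical eye-closure threshold

def should_exit_on_lift(lift_duration_s: float) -> bool:
    # A brief lift maintains the sentence-generating mode; a longer one exits.
    return lift_duration_s >= LIFT_THRESHOLD_S

def should_exit_on_eye_closure(closed_duration_s: float) -> bool:
    return closed_duration_s >= EYES_CLOSED_THRESHOLD_S

print(should_exit_on_lift(0.4))  # -> False: the mode is maintained
print(should_exit_on_lift(2.1))  # -> True: return to the regular input mode
```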
The eye movement to exit from the sentence-generating mode may include other eye movements, and the present disclosure does not intend to limit the types of eye movements to trigger exit from the sentence-generating mode. In some embodiments, the user device may exit from the sentence-generating mode upon detecting a selection of an icon displayed on the touchscreen. For example, a mode switching icon may be displayed on the touchscreen, and upon detecting a user selection of the mode switching icon, the user device may switch from the sentence-generating mode to the regular input mode. In some embodiments, the user device may exit from the sentence-generating mode upon detecting a completion of the message input by the user. For example, after a user finishes inputting a message and sends it to a recipient, the user device may exit from the sentence-generating mode. As another example, the user device may detect a user finishing inputting a document, closing the document, and switching to another task of the user device. In response to the detection, the user device may exit from the sentence-generating mode. In some embodiments, the user device may exit from the sentence-generating mode upon detecting no response from the user for a time duration longer than a predetermined threshold after the candidate inputs are displayed. For example, when the candidate inputs are displayed for a time duration longer than a predetermined threshold and no user selection on the candidate inputs is received, the user device may exit from the sentence-generating mode. In some embodiments, the user device may allow a user to request display of a different set of candidate inputs from those currently displayed in the touchscreen. For example, upon detecting a user gesture, such as a tapping gesture, performed on one of the candidate inputs, the user device may provide synonyms, conjugations, or words similar to that candidate input on the touchscreen. FIG.7illustrates an exemplary user interface700for providing candidate inputs, consistent with disclosed embodiments. As shown in the top diagram700a, the user device is in the sentence-generating mode, and a plurality of candidate inputs is displayed in the touchscreen. The user device then detects a user request to provide candidate inputs similar to the word “jump.” For example, the user device may detect a tapping gesture performed on the word “jump” as a request to provide candidate inputs similar to the word “jump.” As another example, the user device may detect a press-and-hold gesture performed on the word “jump” as a request to provide candidate inputs similar to the word “jump.” As another example, the user device may detect a swipe-and-hold gesture performed on the word “jump” as a request to provide candidate inputs similar to the word “jump.” The swipe-and-hold gesture starts with a swiping gesture on the word “jump,” and instead of lifting the finger at the end of the swipe, the user holds the finger on the touchscreen for a certain period of time. In response to the detected user request, the user device replaces the current candidate inputs with candidate inputs similar to the word “jump,” such as words “leap,” “launch,” “bound,” as shown in the lower diagram700b.
The user device may also replace the current candidate inputs with candidate inputs that are conjugations of the word “jump,” such as words “jumping,” “jumps,” or “jumped.” The present disclosure does not intend to limit the user gesture for changing the displayed candidate inputs, and other user gestures can be performed on the candidate inputs to indicate a request to display a different set of candidate inputs. In some embodiments, the user request to change the displayed candidate inputs may include a tracked eye movement. For example, a user may blink one or both eyes when looking at one of the displayed candidate inputs, such as the word “jump” in diagram700a. The user device may detect the blinking action and the user's gaze on the word “jump,” and in response to the detection, replace the current candidate inputs with candidate inputs similar to the word “jump.” The eye movement to change the displayed candidate inputs may include other eye movements, and the present disclosure does not intend to limit the types of eye movements to change the displayed candidate inputs. FIG.8illustrates another exemplary user interface800for providing candidate inputs, consistent with disclosed embodiments. As shown in the top diagram800a, the user device is in the sentence-generating mode, and a plurality of candidate inputs is displayed in the touchscreen. The user device then detects a user request to provide candidate inputs similar to the period punctuation “.” For example, the user device may detect a tapping gesture performed on the punctuation “.” as a request to provide more candidate inputs that are punctuations. As another example, the user device may detect a press-and-hold gesture performed on the punctuation “.” as a request to provide candidate inputs that are punctuations. In response to the detected user request, the user device replaces the current candidate inputs with punctuations such as those shown in the lower diagram800b. In some embodiments, the user device may determine the alternate candidate inputs relating to the selected candidate inputs based on the user's historic activities. For example, the user device may obtain the prior messages the user typed in a predetermined time period, such as in the last month or in the last three months, and determine one or more inputs relating to the selected candidate input. Referring to diagram800bshown inFIG.8, the user device may identify that the punctuations frequently used by the user in the past include an exclamation point, a question mark, a semi-colon, a colon, and a combined exclamation point and question mark. In some implementations, the user device may implement a statistics library to store the related words that are input by the user. The user device may update the statistics library each time the user uses the user device to input messages, so as to reflect the recent user activities for providing candidate inputs. For example, when an alternate candidate input is selected by a user, such as the exclamation point in diagram800b, the user device may store the information, such that when another user request to display alternate punctuations is received, the user device may display the exclamation point at a position close to the position of the user's finger. In some embodiments, the user device may determine the alternate candidate inputs based on the user's historic communications with the same message recipient.
For example, the user device may obtain communication records between the user and the recipient in a predetermined time period for providing alternate candidate inputs. If a word relating to the selected candidate input occurred in previous communication between the user and the recipient, the word may be determined as an alternate candidate input. Referring to diagram800bshown inFIG.8, the user device may identify that the punctuation “?!” occurred in a message sent from the recipient to the user, and determine the punctuation “?!” as an alternate candidate input for displaying on the touchscreen. On the other hand, even though the ellipsis mark “ . . . ” occurred in a message sent from the user to another recipient, the user device may determine the ellipsis mark “ . . . ” is not an alternate candidate input for displaying on the touchscreen. In some implementations, the user device may implement a statistics library for each contact of the user to store the related words that are input by the user or by the contact in communication between the user and the contact. The user device may update the statistics library for each contact each time the user uses the user device to communicate with the corresponding contact, such that when a user request for alternate candidate inputs is received, updated alternate candidate inputs may be provided. In the preceding disclosure, various example embodiments have been described with reference to the accompanying drawings. It will, however, be evident that various modifications and changes may be made thereto, and additional embodiments may be implemented, without departing from the broader scope of the disclosure as set forth in the claims that follow. The disclosure and drawings are accordingly to be regarded in an illustrative rather than restrictive sense. Therefore, it is intended that the disclosed embodiments and examples be considered as examples only, with a true scope of the present disclosure being indicated by the following claims and their equivalents.
Throughout the drawings, like reference numerals will be understood to refer to like parts, components, and structures. DETAILED DESCRIPTION The following description with reference to accompanying drawings is provided to assist in a comprehensive understanding of various embodiments of the disclosure as defined by the claims and their equivalents. It includes various specific details to assist in that understanding, but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the various embodiments described herein can be made without departing from the scope and spirit of the disclosure. In addition, descriptions of well-known functions and constructions may be omitted for clarity and conciseness. The terms and words used in the following description and claims are not limited to the bibliographical meanings, but are merely used by the inventor to enable a clear and consistent understanding of the disclosure. Accordingly, it should be apparent to those skilled in the art that the following description of various embodiments of the disclosure is provided for illustration purpose only and not for the purpose of limiting the disclosure as defined by the appended claims and their equivalents. It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a component surface” includes reference to one or more of such surfaces. FIG.1is a block diagram illustrating an electronic device in a network environment according to an embodiment of the disclosure. Referring toFIG.1, an electronic device101in a network environment100may communicate with an electronic device102via a first network198(e.g., a short-range wireless communication network), or an electronic device104or a server108via a second network199(e.g., a long-range wireless communication network). According to an embodiment, the electronic device101may communicate with the electronic device104via the server108. According to an embodiment, the electronic device101may include a processor120, memory130, an input device150, a sound output device155, a display device160, an audio module170, a sensor module176, an interface177, a haptic module179, a camera module180, a power management module188, a battery189, a communication module190, a subscriber identification module (SIM)196, or an antenna module197. In some embodiments, at least one (e.g., the display device160or the camera module180) of the components may be omitted from the electronic device101, or one or more other components may be added to the electronic device101. In some embodiments, some of the components may be implemented as single integrated circuitry. For example, the sensor module176(e.g., a fingerprint sensor, an iris sensor, or an illuminance sensor) may be implemented as embedded in the display device160(e.g., a display). The processor120may execute, for example, software (e.g., a program140) to control at least one other component (e.g., a hardware or software component) of the electronic device101coupled with the processor120, and may perform various data processing or computation.
According to one embodiment, as at least part of the data processing or computation, the processor120may load a command or data received from another component (e.g., the sensor module176or the communication module190) in volatile memory132, process the command or the data stored in the volatile memory132, and store resulting data in non-volatile memory134. According to an embodiment, the processor120may include a main processor121(e.g., a central processing unit (CPU) or an application processor (AP)), and an auxiliary processor123(e.g., a graphics processing unit (GPU), an image signal processor (ISP), a sensor hub processor, or a communication processor (CP)) that is operable independently from, or in conjunction with, the main processor121. Additionally or alternatively, the auxiliary processor123may be adapted to consume less power than the main processor121, or to be specific to a specified function. The auxiliary processor123may be implemented as separate from, or as part of the main processor121. The auxiliary processor123may control at least some of the functions or states related to at least one component (e.g., the display device160, the sensor module176, or the communication module190) among the components of the electronic device101, instead of the main processor121while the main processor121is in an inactive (e.g., sleep) state, or together with the main processor121while the main processor121is in an active state (e.g., executing an application). According to an embodiment, the auxiliary processor123(e.g., an image signal processor or a communication processor) may be implemented as part of another component (e.g., the camera module180or the communication module190) functionally related to the auxiliary processor123. The memory130may store various data used by at least one component (e.g., the processor120or the sensor module176) of the electronic device101. The various data may include, for example, software (e.g., the program140) and input data or output data for a command related thereto. The memory130may include the volatile memory132or the non-volatile memory134. The program140may be stored in the memory130as software, and may include, for example, an operating system (OS)142, middleware144, or an application146. The input device150may receive a command or data to be used by another component (e.g., the processor120) of the electronic device101, from the outside (e.g., a user) of the electronic device101. The input device150may include, for example, a microphone, a mouse, a keyboard, or a digital pen (e.g., a stylus pen). The sound output device155may output sound signals to the outside of the electronic device101. The sound output device155may include, for example, a speaker or a receiver. The speaker may be used for general purposes, such as playing multimedia or playing a recording, and the receiver may be used for incoming calls. According to an embodiment, the receiver may be implemented as separate from, or as part of the speaker. The display device160may visually provide information to the outside (e.g., a user) of the electronic device101. The display device160may include, for example, a display, a hologram device, or a projector and control circuitry to control a corresponding one of the display, hologram device, and projector. According to an embodiment, the display device160may include touch circuitry adapted to detect a touch, or sensor circuitry (e.g., a pressure sensor) adapted to measure the intensity of force incurred by the touch.
The audio module170may convert a sound into an electrical signal and vice versa. According to an embodiment, the audio module170may obtain the sound via the input device150, or output the sound via the sound output device155or a headphone of an external electronic device (e.g., an electronic device102) directly (e.g., wiredly) or wirelessly coupled with the electronic device101. The sensor module176may detect an operational state (e.g., power or temperature) of the electronic device101or an environmental state (e.g., a state of a user) external to the electronic device101, and then generate an electrical signal or data value corresponding to the detected state. According to an embodiment, the sensor module176may include, for example, a gesture sensor, a gyro sensor, an atmospheric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor. The interface177may support one or more specified protocols to be used for the electronic device101to be coupled with the external electronic device (e.g., the electronic device102) directly (e.g., wiredly) or wirelessly. According to an embodiment, the interface177may include, for example, a high definition multimedia interface (HDMI), a universal serial bus (USB) interface, a secure digital (SD) card interface, or an audio interface. A connecting terminal178may include a connector via which the electronic device101may be physically connected with the external electronic device (e.g., the electronic device102). According to an embodiment, the connecting terminal178may include, for example, a HDMI connector, a USB connector, a SD card connector, or an audio connector (e.g., a headphone connector). The haptic module179may convert an electrical signal into a mechanical stimulus (e.g., a vibration or a movement) or electrical stimulus which may be recognized by a user via his tactile sensation or kinesthetic sensation. According to an embodiment, the haptic module179may include, for example, a motor, a piezoelectric element, or an electric stimulator. The camera module180may capture a still image or moving images. According to an embodiment, the camera module180may include one or more lenses, image sensors, image signal processors, or flashes. The power management module188may manage power supplied to the electronic device101. According to one embodiment, the power management module188may be implemented as at least part of, for example, a power management integrated circuit (PMIC). The battery189may supply power to at least one component of the electronic device101. According to an embodiment, the battery189may include, for example, a primary cell which is not rechargeable, a secondary cell which is rechargeable, or a fuel cell. The communication module190may support establishing a direct (e.g., wired) communication channel or a wireless communication channel between the electronic device101and the external electronic device (e.g., the electronic device102, the electronic device104, or the server108) and performing communication via the established communication channel. The communication module190may include one or more communication processors that are operable independently from the processor120(e.g., the application processor (AP)) and supports a direct (e.g., wired) communication or a wireless communication. 
According to an embodiment, the communication module190may include a wireless communication module192(e.g., a cellular communication module, a short-range wireless communication module, or a global navigation satellite system (GNSS) communication module) or a wired communication module194(e.g., a local area network (LAN) communication module or a power line communication (PLC) module). A corresponding one of these communication modules may communicate with the external electronic device via the first network198(e.g., a short-range communication network, such as Bluetooth™, wireless-fidelity (Wi-Fi) direct, or infrared data association (IrDA)) or the second network199(e.g., a long-range communication network, such as a cellular network, the Internet, or a computer network (e.g., LAN or wide area network (WAN))). These various types of communication modules may be implemented as a single component (e.g., a single chip), or may be implemented as multiple components (e.g., multiple chips) separate from each other. The wireless communication module192may identify and authenticate the electronic device101in a communication network, such as the first network198or the second network199, using subscriber information (e.g., international mobile subscriber identity (IMSI)) stored in the subscriber identification module196. The antenna module197may transmit or receive a signal or power to or from the outside (e.g., the external electronic device) of the electronic device101. According to an embodiment, the antenna module197may include an antenna including a radiating element composed of a conductive material or a conductive pattern formed in or on a substrate (e.g., printed circuit board (PCB)). According to an embodiment, the antenna module197may include a plurality of antennas. In such a case, at least one antenna appropriate for a communication scheme used in the communication network, such as the first network198or the second network199, may be selected, for example, by the communication module190(e.g., the wireless communication module192) from the plurality of antennas. The signal or the power may then be transmitted or received between the communication module190and the external electronic device via the selected at least one antenna. According to an embodiment, another component (e.g., a radio frequency integrated circuit (RFIC)) other than the radiating element may be additionally formed as part of the antenna module197. At least some of the above-described components may be coupled mutually and communicate signals (e.g., commands or data) therebetween via an inter-peripheral communication scheme (e.g., a bus, general purpose input and output (GPIO), serial peripheral interface (SPI), or mobile industry processor interface (MIPI)). According to an embodiment, commands or data may be transmitted or received between the electronic device101and the external electronic device104via the server108coupled with the second network199. Each of the electronic devices102and104may be a device of a same type as, or a different type from, the electronic device101. According to an embodiment, all or some of operations to be executed at the electronic device101may be executed at one or more of the external electronic devices102and104, or server108.
For example, if the electronic device101should perform a function or a service automatically, or in response to a request from a user or another device, the electronic device101, instead of, or in addition to, executing the function or the service, may request the one or more external electronic devices to perform at least part of the function or the service. The one or more external electronic devices receiving the request may perform the at least part of the function or the service requested, or an additional function or an additional service related to the request, and transfer an outcome of the performing to the electronic device101. The electronic device101may provide the outcome, with or without further processing of the outcome, as at least part of a reply to the request. To that end, a cloud computing, distributed computing, or client-server computing technology may be used, for example. Hereinafter, an integrated intelligence system according to an embodiment disclosed in this specification will be described with reference toFIGS.2and3. FIG.2is a block diagram illustrating an integrated intelligence system, according to an embodiment of the disclosure. Referring toFIG.2, an integrated intelligence system10may include an electronic device601(e.g., the electronic device101ofFIG.1), a first external electronic device300, a second external electronic device200, and a database500. According to an embodiment, the electronic device601may include, for example, a smartphone, a tablet PC, a wearable device, a home appliance, or a digital camera. One electronic device601is illustrated in the drawing, but embodiments are not limited thereto. For example, a plurality of electronic devices601capable of communicating with the first external electronic device300, the second external electronic device200and/or the database500may be included in the integrated intelligence system10. The electronic device601may include a processor620(e.g., the processor120ofFIG.1), a memory630(e.g., the memory130ofFIG.1), a display660(e.g., the display device160ofFIG.1), and a communication module670(e.g., the communication module190ofFIG.1). According to an embodiment, the processor620may be operatively coupled to the memory630and the display660to perform overall functions of the electronic device601. For example, the processor620may include one or more processors. For example, the one or more processors may include an image signal processor (ISP), an application processor (AP), or a communication processor (CP). According to an embodiment, the electronic device601includes the display660, the communication module670, the processor620operatively connected to the display and the communication module, and the memory630operatively connected to the processor. The memory stores instructions that, when executed, cause the processor to receive information about a time interval and user interface information, which are associated with a response to a user utterance input to a first external electronic device300, from a second external electronic device200through the communication module, to determine whether the display is in an active state within the time interval, and to provide a first user interface corresponding to the user interface information through the display based on the determination that the display is in the active state within the time interval.
In an embodiment, the information about the time interval may include information about a response time interval, during which the response is output from the first external electronic device, and information about a threshold time interval that is a specified time after the response time interval has expired. The instructions may cause the processor to determine whether the display is in the active state within the response time interval and to determine whether the display is in the active state within the threshold time interval, based on the determination that the display is in an inactive state within the response time interval. In an embodiment, the instructions may cause the processor to determine whether an application program is being executed in the electronic device, based on the determination that the display is in the active state within the time interval and to provide the user interface information and the first user interface corresponding to the application program through the display based on the determination that the application program is being executed. In an embodiment, the instructions may cause the processor to provide the first user interface so as to overlap a part of the execution screen while the application program is executed and an execution screen of the application program is provided through the display. In an embodiment, the instructions may cause the processor to display the first user interface on a first screen of the electronic device and to provide a second user interface, which is associated with content included in the response and which includes information corresponding to the user input, through the display, based on a user input for selecting a part of the first user interface, which is input to the electronic device. In an embodiment, the instructions may cause the processor to display the first user interface on a first screen of the electronic device and to execute an application program associated with the response, based on a user input for selecting a part of the first user interface, which is input to the electronic device. In an embodiment, the instructions may cause the processor, before providing the first user interface through the display, to transmit a request for determining whether the time interval has expired, to one of the first external electronic device or the second external electronic device, to receive feedback on the request from one of the first external electronic device or the second external electronic device, to determine whether the time interval has not expired, through the feedback, and to provide the first user interface corresponding to the user interface information through the display based on the determination that the display is in the active state within the time interval. In an embodiment, the instructions may cause the processor, after displaying a second user interface on a first screen of the electronic device as the first user interface corresponding to the user interface information, to display a third user interface on a first screen of the electronic device and to display the second user interface on the first screen of the electronic device instead of the third user interface based on a fact that a user input to the third user interface is input to the electronic device. 
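For illustration only, the two-stage check described above, first against the response time interval and then against the threshold time interval, may be sketched as follows; the field names and example times are hypothetical and form no part of the disclosed embodiments.

```python
# Illustrative sketch only: decide whether to provide the first user
# interface, based on when the display became active relative to the
# response time interval and the threshold time interval.
from dataclasses import dataclass

@dataclass
class ResponseWindow:
    response_start: float  # when the spoken response begins (seconds)
    response_end: float    # when the spoken response finishes
    threshold_s: float     # specified time after the response ends

    def in_response_interval(self, t: float) -> bool:
        return self.response_start <= t <= self.response_end

    def in_threshold_interval(self, t: float) -> bool:
        return self.response_end < t <= self.response_end + self.threshold_s

def should_provide_ui(window: ResponseWindow, display_active_at: float) -> bool:
    return (window.in_response_interval(display_active_at)
            or window.in_threshold_interval(display_active_at))

w = ResponseWindow(response_start=0.0, response_end=4.0, threshold_s=10.0)
print(should_provide_ui(w, 2.5))   # -> True (active during the response)
print(should_provide_ui(w, 9.0))   # -> True (active within the threshold)
print(should_provide_ui(w, 30.0))  # -> False (the time interval has expired)
```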
According to an embodiment, a method for providing a user interface of the electronic device601includes receiving information about a time interval and user interface information, which are associated with a response to a user utterance input to a first external electronic device300, from a second external electronic device200, determining whether a display660of the electronic device is in an active state within the time interval, and providing a first user interface corresponding to the user interface information through the display based on the determination that the display is in the active state within the time interval. In an embodiment, the information about the time interval may include information about a response time interval, during which the response is output from the first external electronic device, and information about a threshold time interval that is a specified time after the response time interval has expired. The determining of whether the display of the electronic device is in the active state within the time interval may include determining whether the display is in the active state within the response time interval and determining whether the display is in the active state within the threshold time interval, based on the determination that the display is in an inactive state within the response time interval. In an embodiment, the providing of the first user interface corresponding to the user interface information through the display based on the fact that it is determined that the display is in the active state within the time interval may include determining whether an application program is being executed in the electronic device and providing the user interface information and the first user interface corresponding to the application program through the display based on the determination that the application program is being executed. In an embodiment, the first user interface may be displayed on a first screen of the electronic device. The method may further include providing a second user interface, which is associated with content included in the response and which includes information corresponding to the user input, through the display, based on a user input for selecting a part of the first user interface, which is input to the electronic device. In an embodiment, the first user interface may be displayed on a first screen of the electronic device. The method may further include executing an application program associated with the response based on a user input for selecting a part of the first user interface, which is input to the electronic device. In an embodiment, the method may further include, before providing the first user interface through the display, transmitting a request for determining whether the time interval has expired, to one of the first external electronic device or the second external electronic device and receiving feedback on the request from one of the first external electronic device or the second external electronic device. The providing of the first user interface through the display may be based on the determination through the feedback that the time interval has not expired. In an embodiment, the providing of the first user interface through the display may include displaying a third user interface on the first screen after displaying a second user interface on a first screen of the electronic device. 
The method may further include displaying the second user interface on the first screen of the electronic device instead of the third user interface based on a fact that a user input to the third user interface is input to the electronic device. According to an embodiment, the electronic device601includes the display660, the processor620operatively connected to the display, and a memory630operatively connected to the processor. The memory stores instructions that, when executed, cause the processor to output a response to a user utterance input to the electronic device, to identify user interface information associated with the response, to identify a time interval including a response time interval, during which the response is output, and a threshold time interval that is a specified time after the response time interval has expired, to determine whether the display is in an active state within the time interval, and to provide a first user interface corresponding to the user interface information through the display based on the determination that the display is in the active state within the time interval. In an embodiment, the instructions may cause the processor to determine the threshold time interval based on content included in the response. In an embodiment, the instructions may cause the processor to determine whether an application program is being executed in the electronic device, based on the determination that the display is in the active state within the time interval and to provide the user interface information and the first user interface corresponding to the application program through the display based on the determination that the application program is being executed. In an embodiment, the instructions may cause the processor to display the first user interface on a first screen of the electronic device and to provide a second user interface, which is associated with content included in the response and which includes information corresponding to the user input, through the display, based on a user input for selecting a part of the first user interface, which is input to the electronic device. In an embodiment, the instructions may cause the processor to display the first user interface on a first screen of the electronic device and to execute an application program associated with the response, based on a user input for selecting a part of the first user interface, which is input to the electronic device. The memory630may store commands, information, or data associated with operations of components included in the electronic device601. For example, the memory630may store instructions that, when executed, cause the processor620to perform various operations described in the specification. The display660may visually provide various pieces of information. The display660according to an embodiment may be configured to display an image or a video. The display660according to an embodiment may display the graphical user interface (GUI) of the running app (or an application program). The display660may display a user interface corresponding to user interface information received from the second external electronic device200. In an embodiment where the display660is a touch screen display, the display660may receive a touch input. The electronic device601may communicate with the first external electronic device300, the second external electronic device200, and/or the database500through the communication module670.
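As one non-limiting way of determining the threshold time interval based on the content included in the response, as described above, a simple scaling rule may be sketched; the base time, per-word increment, and cap below are hypothetical assumptions, not part of the disclosed embodiments.

```python
# Illustrative sketch only: derive the threshold time interval from the
# content of the response, allowing more time for longer or richer answers.
def threshold_interval_s(response_text: str, has_visual_content: bool) -> float:
    base = 5.0
    per_word = 0.3 * len(response_text.split())
    bonus = 5.0 if has_visual_content else 0.0
    return min(base + per_word + bonus, 30.0)  # capped at 30 seconds

print(threshold_interval_s("Today will be sunny with a high of 21 degrees.",
                           has_visual_content=True))  # -> 13.0
```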
The electronic device601may further include an additional component in addition to the components illustrated inFIG.2. For example, the electronic device601may include a communication module (not shown) or a connecting terminal (not shown) for communicating with the first external electronic device300, the second external electronic device200, and/or the database500. According to an embodiment, the components of the electronic device601may be the same entities or may constitute separate entities. The first external electronic device300according to an embodiment may be a terminal device (or an electronic device) capable of connecting to the Internet, and may be a smart speaker. The first external electronic device300may include a communication interface310, a microphone320, a speaker330, a memory350, or a processor360. The listed components may be operatively or electrically connected to one another. The communication interface310according to an embodiment may be connected to an external device and may be configured to transmit or receive data to or from the external device. The microphone320according to an embodiment may receive a sound (e.g., a user utterance) to convert the sound into an electrical signal. The speaker330according to an embodiment may output the electrical signal as sound (e.g., voice). The memory350according to an embodiment may store a client module351, a software development kit (SDK)353, and a plurality of apps355. The client module351and the SDK353may constitute a framework (or a solution program) for performing general-purpose functions. Furthermore, the client module351or the SDK353may constitute the framework for processing a voice input. According to an embodiment, the plurality of apps355may be programs for performing a specified function. According to an embodiment, the plurality of apps355may include a first app355_1and a second app355_3. According to an embodiment, each of the plurality of apps355may include a plurality of actions for performing a specified function. For example, the plurality of apps355may include an alarm app, a message app, and/or a schedule app. According to an embodiment, the plurality of apps355may be executed by the processor360to sequentially execute at least part of the plurality of actions. According to an embodiment, the processor360may control an overall operation of the first external electronic device300. For example, the processor360may be electrically connected to the communication interface310, the microphone320, and the speaker330so as to perform a specified operation. The processor360according to an embodiment may execute the program stored in the memory350so as to perform a specified function. For example, according to an embodiment, the processor360may execute at least one of the client module351or the SDK353so as to perform the following operation for processing a voice input that is a user utterance. The processor360may control operations of the plurality of apps355via the SDK353. The following operation described as an operation of the client module351or the SDK353may be executed by the processor360. According to an embodiment, the client module351may receive a voice input that is a user utterance. For example, the client module351may receive a voice signal corresponding to a user utterance detected through the microphone320. The client module351may transmit the received voice input to the second external electronic device200.
The client module351may transmit state information of the first external electronic device300to the second external electronic device200together with the received voice input. For example, the state information may be execution state information of an app. According to an embodiment, the client module351may receive a response corresponding to the received voice input. For example, when the second external electronic device200is capable of generating a response corresponding to the received voice input, the client module351may receive the response corresponding to the received voice input from the second external electronic device200. According to an embodiment, the client module351may receive a plan corresponding to the received voice input. When the first external electronic device300includes a display, the client module351may display a result of executing a plurality of operations of the app depending on the plan on the display. For example, the client module351may sequentially display the result of executing the plurality of actions on the display. For another example, the first external electronic device300may display only a part of the results (e.g., a result of the last action) of executing the plurality of actions on the display. According to an embodiment, the client module351may receive a request for obtaining information necessary to calculate a response corresponding to a voice input, from the second external electronic device200. According to an embodiment, the client module351may transmit necessary information to the second external electronic device200in response to the request. The client module351according to an embodiment may transmit information as a result of executing a plurality of operations according to a plan to the second external electronic device200. The second external electronic device200may identify that the received voice input is correctly processed, using the result information. According to an embodiment, the client module351may include a speech recognition module. According to an embodiment, the client module351may recognize a voice input for performing a limited function, via the speech recognition module. For example, the client module351may launch an intelligence app that processes a voice input for performing an organic action, via a specified input (e.g., wake up!). According to an embodiment, the second external electronic device200may receive information associated with a user's voice input from the first external electronic device300over a communication network. According to an embodiment, the second external electronic device200may convert data associated with the received voice input to text data. According to an embodiment, the second external electronic device200may generate a plan for performing a task corresponding to the user's voice input, based on the text data. According to an embodiment, the plan may be generated by an artificial intelligence (AI) system. The AI system may be a rule-based system, or may be a neural network-based system (e.g., a feedforward neural network (FNN) or a recurrent neural network (RNN)). Alternatively, the AI system may be a combination of the above-described systems or an AI system different from the above-described system. According to an embodiment, the plan may be selected from a set of predefined plans or may be generated in real time in response to a user request. For example, the AI system may select at least one plan of the plurality of predefined plans.
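Purely for illustration, the exchange described above, in which the client module forwards a received voice input together with state information and receives back either a response or a plan, may be sketched as follows; the message schema and transport are hypothetical and form no part of the disclosed embodiments.

```python
# Illustrative sketch only: build a client request carrying the voice
# input and device state, and dispatch on what the server sends back.
import json

def build_request(voice_bytes: bytes, app_state: dict) -> dict:
    return {
        "audio": voice_bytes.hex(),  # utterance captured via the microphone
        "state": app_state,          # e.g., execution state of an app
    }

def handle_server_message(message: str):
    payload = json.loads(message)
    if "response" in payload:
        return ("speak", payload["response"])  # output through the speaker
    if "plan" in payload:
        return ("execute", payload["plan"])    # run the plan's actions
    return ("ignore", None)

print(build_request(b"\x01\x02", {"running_app": "alarm"}))
print(handle_server_message('{"response": "It is 3 pm."}'))
# -> ('speak', 'It is 3 pm.')
```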
According to an embodiment, the second external electronic device 200 may transmit a response according to the generated plan to the first external electronic device 300 or may transmit the generated plan to the first external electronic device 300. According to an embodiment, the first external electronic device 300 may output a response according to the plan through the speaker 330. According to an embodiment, the first external electronic device 300 may output a result of executing an action according to the plan through the speaker 330 with voice. According to an embodiment, the second external electronic device 200 may identify a response time interval, that is, the time required for a response to a user utterance to be output from the first external electronic device 300. The second external electronic device 200 may identify a threshold time interval, that is, a specified time after the response time interval has expired. The second external electronic device 200 may identify user interface information corresponding to the response. The second external electronic device 200 may transmit information about the response time interval, information about the threshold time interval, and the user interface information to the electronic device 601. The second external electronic device 200 according to an embodiment may include a front end 210, a natural language platform 220, a capsule database (DB) 230, an execution engine 240, an end user interface 250, a management platform 260, a big data platform 270, an analytic platform 280, a threshold time determination module 291, a user terminal selection module 292, or a time synchronization module 293. According to an embodiment, the front end 210 may receive a voice input from the first external electronic device 300. The front end 210 may transmit a response corresponding to the voice input. According to an embodiment, the natural language platform 220 may include an automatic speech recognition (ASR) module 221, a natural language understanding (NLU) module 223, a planner module 225, a natural language generator (NLG) module 227, or a text-to-speech (TTS) module 229. According to an embodiment, the ASR module 221 may convert the voice input received from the first external electronic device 300 into text data. According to an embodiment, the NLU module 223 may grasp the intent of the user using the text data of the voice input. For example, the NLU module 223 may grasp the intent of the user by performing syntactic analysis or semantic analysis. According to an embodiment, the NLU module 223 may grasp the meaning of words extracted from the voice input by using linguistic features (e.g., syntactic elements) such as morphemes or phrases, and may determine the intent of the user by matching the grasped meaning of the words to an intent. According to an embodiment, the planner module 225 may generate the plan by using a parameter and the intent that is determined by the NLU module 223. According to an embodiment, the planner module 225 may determine a plurality of domains necessary to perform a task, based on the determined intent. The planner module 225 may determine a plurality of actions included in each of the plurality of domains determined based on the intent. According to an embodiment, the planner module 225 may determine the parameter necessary to perform the determined plurality of actions or a result value output by the execution of the plurality of actions. The parameter and the result value may be defined as a concept of a specified form (or class).
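For illustration only, the following minimal sketch (in Python; all class and field names are hypothetical and not taken from the disclosure) shows one way actions, concepts, and a plan of the kind described above might be modeled, including deriving an execution sequence from the concepts that each action consumes and produces:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Concept:
    # A parameter required by an action, or a result value it produces,
    # expressed as a value of a specified form (or class).
    name: str
    form: str  # e.g., "GeoPoint" or "DateTime"

@dataclass
class Action:
    name: str
    domain: str
    inputs: List[Concept] = field(default_factory=list)   # parameters needed to execute
    outputs: List[Concept] = field(default_factory=list)  # result values produced

@dataclass
class Plan:
    # The relationship information (e.g., ontology) is implied here by which
    # concepts each action consumes and produces.
    actions: List[Action]

    def execution_sequence(self) -> List[str]:
        # Order actions so that each action's input concepts have already
        # been produced by earlier actions.
        produced, ordered = set(), []
        remaining = list(self.actions)
        while remaining:
            for action in remaining:
                if all(c.name in produced for c in action.inputs):
                    ordered.append(action.name)
                    produced.update(c.name for c in action.outputs)
                    remaining.remove(action)
                    break
            else:
                raise ValueError("unsatisfiable concept dependencies")
        return ordered

This is a sketch of the general idea only; an actual planner module would derive such a structure from the capsule database rather than from hand-built objects.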
As such, the plan may include the plurality of actions and a plurality of concepts, which are determined by the intent of the user. The planner module 225 may determine the relationship between the plurality of actions and the plurality of concepts stepwise (or hierarchically). For example, the planner module 225 may determine the execution sequence of the plurality of actions, which are determined based on the user's intent, based on the plurality of concepts. In other words, the planner module 225 may determine the execution sequence of the plurality of actions based on the parameters necessary to perform the plurality of actions and the results output by the execution of the plurality of actions. Accordingly, the planner module 225 may generate a plan including information (e.g., ontology) about the relationship between the plurality of actions and the plurality of concepts. The planner module 225 may generate the plan by using information stored in the capsule DB 230, which stores a set of relationships between concepts and actions. According to an embodiment, the NLG module 227 may change specified information into information in a text form. The information changed to the text form may be in the form of natural language speech. The TTS module 229 according to an embodiment may change information in the text form to information in a voice form. According to an embodiment, all or part of the functions of the natural language platform 220 may also be implemented in the first external electronic device 300 and/or the electronic device 601. The capsule DB 230 may store information about the relationship between the plurality of actions and the plurality of concepts corresponding to a plurality of domains. According to an embodiment, a capsule may include a plurality of action objects (or action information) and concept objects (or concept information) included in the plan. According to an embodiment, the capsule DB 230 may store a plurality of capsules in the form of a concept action network (CAN). According to an embodiment, the plurality of capsules may be stored in a function registry included in the capsule DB 230. The capsule DB 230 may include a strategy registry that stores strategy information necessary to determine a plan corresponding to a voice input. When there are a plurality of plans corresponding to the voice input, the strategy information may include reference information for determining one plan. According to an embodiment, the capsule DB 230 may include a follow-up registry that stores information about a follow-up action for suggesting a follow-up action to the user in a specified context. For example, the follow-up action may include a follow-up utterance. According to an embodiment, the capsule DB 230 may include a layout registry for storing layout information of the information output via the first external electronic device 300. According to an embodiment, the capsule DB 230 may include a vocabulary registry storing vocabulary information included in capsule information. According to an embodiment, the capsule DB 230 may include a dialog registry storing information about dialog (or interaction) with the user. The capsule DB 230 may update a stored object via a developer tool. For example, the developer tool may include a function editor for updating an action object or a concept object. The developer tool may include a vocabulary editor for updating a vocabulary. The developer tool may include a strategy editor that generates and registers a strategy for determining the plan.
The developer tool may include a dialog editor that creates a dialog with the user. The developer tool may include a follow-up editor capable of activating a follow-up target and editing the follow-up utterance for providing a hint. The follow-up target may be determined based on a currently set target, the user's preference, or an environment condition. In an embodiment, the capsule database 230 may be implemented in the first external electronic device 300 and/or the electronic device 601. According to an embodiment, the execution engine 240 may calculate a result by using the generated plan. The end user interface 250 may transmit the calculated result to the first external electronic device 300. Accordingly, the first external electronic device 300 may receive the result and may provide the user with the received result. According to an embodiment, the management platform 260 may manage information used by the second external electronic device 200. According to an embodiment, the big data platform 270 may collect data of the user. According to an embodiment, the analytic platform 280 may manage the quality of service (QoS) of the second external electronic device 200. For example, the analytic platform 280 may manage the components and processing speed (or efficiency) of the second external electronic device 200. According to an embodiment, the threshold time determination module 291 may identify user interface information corresponding to the calculated result, including a response to a user utterance that is input to the first external electronic device 300. The user interface information may correspond to the content of the response. The threshold time determination module 291 may identify a response time interval, that is, the time required for the response to be output from the first external electronic device 300. The threshold time determination module 291 may identify a threshold time interval, that is, a specified time after the response time interval has expired. In an embodiment, the threshold time determination module 291 may determine the threshold time interval based on content included in the response. In an embodiment, the threshold time determination module 291 may determine the threshold time interval based on the user interface information corresponding to the response. In an embodiment, the threshold time determination module 291 may determine the threshold time interval based on a capsule associated with the generated plan. In an embodiment, the threshold time determination module 291 may determine the threshold time interval based on the type of electronic device that will provide the user interface. According to an embodiment, the user terminal selection module 292 may communicate with the first external electronic device 300 and may search for an electronic device (e.g., a smart phone or a smart watch) registered in the same user account as the first external electronic device 300. The user terminal selection module 292 may access the database 500 so as to obtain information about a user ID 510 associated with a user account, a type 520 of an electronic device, and an electronic device list 530. The user terminal selection module 292 may search for an electronic device by using the information obtained from the database 500. In an embodiment, the database 500 may be implemented separately from the electronic device 601, the first external electronic device 300, and the second external electronic device 200.
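As a purely hypothetical illustration of the kind of lookup the user terminal selection module 292 might perform against such a database, the following Python sketch assumes an invented schema (the account key, device identifiers, and field names are not taken from the disclosure):

from typing import Dict, List

# Illustrative stand-in for the database 500: a device list per user account,
# recording each device's type and whether it has a display.
DEVICE_DB: Dict[str, List[dict]] = {
    "user-510": [
        {"id": "smart-speaker-300", "type": "speaker", "has_display": False},
        {"id": "phone-601", "type": "smartphone", "has_display": True},
        {"id": "watch-602", "type": "smartwatch", "has_display": True},
    ],
}

def select_display_devices(user_id: str) -> List[str]:
    # Return the devices in the user's account capable of showing a user interface.
    devices = DEVICE_DB.get(user_id, [])
    return [d["id"] for d in devices if d["has_display"]]

print(select_display_devices("user-510"))  # ['phone-601', 'watch-602']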
In an embodiment, unlike the illustration, the database 500 may be included in the electronic device 601, the first external electronic device 300, and/or the second external electronic device 200. There may be a plurality of electronic devices found by the user terminal selection module 292. The user terminal selection module 292 may select, from among the plurality of found electronic devices, at least one electronic device (e.g., the electronic device 601) capable of displaying a user interface corresponding to the user interface information through a display. The second external electronic device 200 may transmit information about a time interval associated with a response (e.g., information about a response time interval and information about a threshold time interval) and the user interface information to the selected electronic device (e.g., the electronic device 601). Afterward, when a user interface is provided through a display (e.g., the display 660), the time synchronization module 293 may synchronize the response output from the first external electronic device 300 with the user interface. In an embodiment, the electronic device 601 may receive, from the second external electronic device 200, the information about the time interval associated with a response to a user utterance and the user interface information associated with the response, and may then provide a user interface corresponding to the user interface information through the display 660 based on the fact that the display is identified as being activated within the time interval. In an embodiment, when a user input is input to the electronic device 601 through the user interface, an additional interface may be displayed on the electronic device 601 by the user input. In an embodiment, content included in the response may be provided through the display 660. In an embodiment, an application program associated with the response may be executed in the electronic device 601. In an embodiment, when a user input is input to the electronic device 601 through the user interface, the user input may be transmitted to the second external electronic device 200. The second external electronic device 200 may generate a response corresponding to the user input and may then deliver the response to the first external electronic device 300. The first external electronic device 300 may output the response corresponding to the user input. For example, in response to a user utterance saying "please recommend a laundry detergent", the first external electronic device 300 may output the found laundry detergent information as a response, and the electronic device 601 may show the found laundry detergent and may provide, through a display, a user interface including an object for purchasing the found laundry detergent. When the user inputs a user input for selecting the object for a purchase through the user interface of the electronic device 601, the electronic device 601 may transmit the user input to the second external electronic device 200. The second external electronic device 200 may grasp the intent of the user input and may transmit a response saying "I will proceed with the purchase" to the first external electronic device 300.
In response to a user input to the electronic device 601, the first external electronic device 300 may output a response saying "I will proceed with the purchase for the selected laundry detergent". In this drawing, the threshold time determination module 291, the user terminal selection module 292, and the time synchronization module 293 are each illustrated as being included once in the second external electronic device 200. However, an embodiment is not limited thereto. For example, the threshold time determination module 291, the user terminal selection module 292, and the time synchronization module 293 may be included in the second external electronic device 200 such that the number of each of the threshold time determination module 291, the user terminal selection module 292, and the time synchronization module 293 corresponds to the number of capsules included in the capsule database 230. According to an embodiment, a service server 400 may provide the first external electronic device 300 with a specified service (e.g., ordering food or booking a hotel). According to an embodiment, the service server 400 may be a server operated by a third party. According to an embodiment, the service server 400 may provide the second external electronic device 200 with information for generating a plan corresponding to the received voice input. The provided information may be stored in the capsule DB 230. Furthermore, the service server 400 may provide the second external electronic device 200 with result information according to the plan. The service server 400 may include a plurality of service servers 401, 403, and 405. In the above-described integrated intelligence system 10, the first external electronic device 300 may provide the user with various intelligent services in response to a user input. The user input may include, for example, an input through a physical button, a touch input, or a voice input. According to an embodiment, the first external electronic device 300 may provide a speech recognition service via an intelligence app (or a speech recognition app) stored therein. In this case, for example, the first external electronic device 300 may recognize a user utterance or a voice input received through a microphone, and may provide the user with a service corresponding to the recognized voice input. According to an embodiment, the first external electronic device 300 may perform a specified operation based on the received voice input, independently or together with the second external electronic device 200 and/or the service server 400. For example, the first external electronic device 300 may launch an app corresponding to the received voice input and may perform the specified action via the executed app. According to an embodiment, when providing a service together with the second external electronic device 200 and/or the service server 400, the first external electronic device 300 may detect a user utterance by using the microphone 320 and may generate a signal (or voice data) corresponding to the detected user utterance. The first external electronic device 300 may transmit the voice data to the second external electronic device 200 by using the communication interface 310. According to an embodiment, the second external electronic device 200 may generate, as a response to the voice input received from the first external electronic device 300, a plan for performing a task corresponding to the voice input or a result of performing an action according to the plan.
For example, the plan may include a plurality of actions for performing a task corresponding to the voice input of the user and a plurality of concepts associated with the plurality of actions. A concept may define a parameter to be input upon executing the plurality of actions or a result value output by the execution of the plurality of actions. The plan may include relationship information between the plurality of actions and the plurality of concepts. According to an embodiment, the first external electronic device 300 may receive the response by using the communication interface 310. The first external electronic device 300 may output a voice signal generated inside the first external electronic device 300 to the outside by using the speaker 330. FIG. 3 is a diagram illustrating a form in which relationship information between a concept and an action is stored in a database, according to an embodiment of the disclosure. Referring to FIG. 3, a capsule database (e.g., the capsule DB 230) of the second external electronic device 200 may store a capsule in the form of a CAN 700. The capsule DB may store an action for processing a task corresponding to a user's voice input and a parameter necessary for the action in the CAN form. The capsule DB may store a plurality of capsules (a capsule A 701 and a capsule B 704) respectively corresponding to a plurality of domains (e.g., applications). According to an embodiment, one capsule (e.g., the capsule A 701) may correspond to one domain (e.g., a location (geo) or an application). Furthermore, at least one service provider (e.g., CP1 702 or CP2 703) for performing a function for a domain associated with the capsule may correspond to one capsule. According to an embodiment, a single capsule may include at least one action 710 and at least one concept 720 for performing a specified function. The natural language platform 220 may generate a plan for performing a task corresponding to the received voice input, using a capsule stored in the capsule database. For example, the planner module 225 of the natural language platform may generate the plan by using the capsule stored in the capsule database. For example, a plan 707 may be generated by using actions 4011 and 4013 and concepts 4012 and 4014 of the capsule A 701 and an action 4041 and a concept 4042 of the capsule B 704. The capsule B 704 may receive information from CP4 705. The CAN 700 may also include CP3 706. In an embodiment, unlike the illustration of FIG. 2, modules included in the first external electronic device 300 and the second external electronic device 200 may be implemented in the electronic device 601. Hereinafter, the operation of an electronic device according to an embodiment disclosed in this specification will be described with reference to FIG. 4. For clarity of description, details identical to those described above are briefly described or omitted. FIG. 4 is a flowchart for describing a method in which an electronic device (e.g., a processor) provides a user interface, according to an embodiment of the disclosure. Below, it is assumed that the electronic device 601 of FIG. 2 performs the process of FIG. 4. The operation described as being performed by the electronic device 601 may be implemented with instructions capable of being performed (or executed) by the processor 620 of the electronic device 601. The instructions may be stored in, for example, a computer-readable recording medium or the memory 630 of the electronic device illustrated in FIG. 2.
Referring to FIG. 4, in operation 11, an electronic device (e.g., the electronic device 601 and/or the processor 620 of FIG. 2) according to the embodiment disclosed in this specification may receive, from a second external electronic device (e.g., the second external electronic device 200 of FIG. 2), information about a time interval associated with a response to a user utterance input to a first external electronic device (e.g., the first external electronic device 300 of FIG. 2) and user interface information associated with the response. For example, the information about the time interval associated with the response may include information about a response time interval and information about a threshold time interval. For example, the information about the response time interval may include information about the time required for the response to be output from the first external electronic device (e.g., through the speaker 330 in FIG. 2) in response to a user utterance input to the first external electronic device. The electronic device may identify the response time interval by receiving the information about the response time interval from the second external electronic device. For example, the information about the threshold time interval may include information about a specified time after the response time interval has expired. The electronic device may identify the threshold time interval by receiving the information about the threshold time interval from the second external electronic device. For example, the user interface information associated with the response may be associated with a user interface capable of visually providing a user with content included in the response. In operation 13, the electronic device may determine whether a display of the electronic device is activated within the time interval. For example, when the display is activated within the threshold time interval even though the display was inactive within the response time interval, the electronic device may identify that the display is activated within the time interval. In operation 15, the electronic device may provide a user interface corresponding to the user interface information through the display based on the fact that the display is active within the time interval. In an embodiment, the electronic device may display the user interface on a first screen of the electronic device. In an embodiment, the first screen may be a lock screen. For example, when the user sets a lock on the electronic device, the first screen may be a lock screen provided through the display before an authentication input for unlocking the lock is input to the electronic device. In an embodiment, the first screen may be a home screen. For example, when the user does not set a lock on the electronic device, the first screen may be a home screen. In an embodiment, the first screen may be a screen provided before the home screen. For example, when the user has not set a lock on the electronic device, the first screen may be a screen provided, before the home screen is provided, when the user switches the display to an active state. In this case, a user may access the home screen by inputting a touch input and/or a swipe input, without needing to input a separate authentication input to the first screen. In an embodiment, when an application program is executed on the electronic device and the execution screen of the application program is provided through the display, the electronic device may provide the user interface so as to overlap a part of the execution screen.
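The decision flow of operations 11, 13, and 15 might be summarized by the following sketch (Python; the use of plain epoch timestamps and the function name are illustrative assumptions rather than the actual device logic):

import time

def should_show_ui(response_start: float,
                   response_interval: float,
                   threshold_interval: float,
                   display_active_at: float) -> bool:
    # True if the display became active within the response time interval
    # or within the threshold time interval that follows it (operation 13),
    # in which case the user interface would be provided (operation 15).
    response_end = response_start + response_interval
    threshold_end = response_end + threshold_interval
    return response_start <= display_active_at <= threshold_end

# Example: a 6-second spoken response followed by a 10-second threshold interval.
now = time.time()
print(should_show_ui(now, 6.0, 10.0, now + 8.0))   # True: within the threshold interval
print(should_show_ui(now, 6.0, 10.0, now + 20.0))  # False: the time interval has expired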
Hereinafter, an operation of an integrated intelligence system (e.g., the integrated intelligence system 10 of FIG. 2) including an electronic device disclosed in this specification will be described with reference to FIGS. 5A, 5B, 6A, 6B, 6C, 6D, 6E, 7A, 7B, 8A, and 8B. For clarity of description, details identical to those described above are omitted. FIGS. 5A and 5B are sequence diagrams for describing a method for providing a user interface corresponding to a response, according to various embodiments of the disclosure. Hereinafter, it is assumed that the first external electronic device 300, the second external electronic device 200, and the electronic device 601 of FIG. 2 perform the process of FIGS. 5A and 5B. The operations described as being performed by the first external electronic device 300, the second external electronic device 200, and the electronic device 601 may be implemented by using instructions capable of being performed (or executed) by the processor 360 of the first external electronic device 300, the modules included in the second external electronic device 200, and the processor 620 of the electronic device 601, respectively. The instructions may be stored in, for example, a computer-readable recording medium, the memory 350 of the first external electronic device 300, the second external electronic device 200, or the memory 630 of the electronic device 601 illustrated in FIG. 2. Referring to FIG. 5A, in operation 1101, the first external electronic device 300 may receive a user utterance. For example, the first external electronic device 300 may receive a user's voice input entered through the microphone 320. In operation 1102, the first external electronic device 300 may transmit the user utterance to the second external electronic device 200. In operation 1103, the second external electronic device 200 may obtain user identification information based on the received user utterance. For example, the second external electronic device 200 may access a database (e.g., the database 500 of FIG. 2) and may then obtain a user ID (e.g., the user ID 510 of FIG. 2). In operation 1105, the second external electronic device 200 may generate a capsule and a plan which correspond to the user utterance. For example, as described with reference to FIG. 2, the second external electronic device 200 may generate the capsule and the plan. In operation 1107, the second external electronic device 200 may determine a response to the user utterance. The second external electronic device 200 may determine the response to the user utterance based on the generated plan. In operation 1109, the second external electronic device 200 may transmit the response to the first external electronic device 300. In operation 1111, the first external electronic device 300 may output the response. In an embodiment where the response is a voice output, the first external electronic device 300 may output the response through the speaker 330. In operation 1112, the second external electronic device 200 may identify a response time interval associated with the response. The second external electronic device 200 may identify the response time interval, that is, the time required for the response transmitted to the first external electronic device 300 to be output from the first external electronic device 300. In operation 1113, the second external electronic device 200 may determine a threshold time interval associated with the response. The second external electronic device 200 may determine a threshold time interval, that is, a specified time after the response is output from the first external electronic device 300.
In an embodiment, the second external electronic device 200 may determine the threshold time interval based on content included in the response. In an embodiment, the second external electronic device 200 may determine the threshold time interval based on the user interface information corresponding to the response. In an embodiment, the second external electronic device 200 may determine the threshold time interval based on a capsule associated with the generated plan. In an embodiment, the second external electronic device 200 may determine the threshold time interval based on the type of electronic device that will provide a user interface. In operation 1115, the second external electronic device 200 may determine user interface information associated with the response. In an embodiment, the user interface information may be a deep link. In operation 1117, the second external electronic device 200 may identify an electronic device capable of receiving the user interface information from among the electronic devices registered in the same user account as the first external electronic device 300, based on the user identification information obtained in operation 1103. The second external electronic device 200 may identify, from among the electronic devices, an electronic device capable of providing a user interface corresponding to the user interface information through a display. When the identified electronic device is the electronic device 601, in operation 1119, the second external electronic device 200 may transmit the information about the response time interval, the information about the threshold time interval, and the user interface information to the electronic device 601. The electronic device 601 may receive the information about the time interval associated with the response (e.g., the information about the response time interval and the information about the threshold time interval) and the user interface information (e.g., operation 11 of FIG. 4). FIG. 6A is a diagram for describing a time interval associated with a response, according to an embodiment of the disclosure. Referring to FIGS. 5B and 6A, in operation 1301, the electronic device 601 may identify whether a display (e.g., the display 660 of FIG. 2) is in an active state within the response time interval. During the response time interval, the first external electronic device 300 may output a response to a user utterance. For example, the inactive state of the display may be a state where an always-on-display mode is being executed through the display. For example, the inactive state of the display may be a state where no content is provided through the display. For example, the active state of the display may be a state where a first screen is provided through the display. For example, the active state of the display may be a state where the electronic device 601, while the display is in the inactive state, recognizes that a user lifts the electronic device 601 and provides the first screen. For example, the active state of the display may be a state where the first screen is provided through the display when the user applies a touch input to the touch screen display of the electronic device 601, or the user presses an input button of the electronic device 601. In an embodiment where the first screen is a lock screen, for example, the active state of the display may be a state where the lock screen of the electronic device 601 is unlocked and a home screen is then provided through the display, or a specific application program is running.
For example, the user may input a user utterance D1 saying "Hi Bixby. Show me the best laundry detergents in market" into the first external electronic device 300. The first external electronic device 300 may transmit the user utterance to the second external electronic device 200 (e.g., operation 1102 of FIG. 5A), may receive a response to the user utterance from the second external electronic device 200 (e.g., operation 1109 of FIG. 5A), and may output the response (e.g., operation 1111 of FIG. 5A). For example, the first external electronic device 300 may output a response D2 saying "Hi! Here are 8 best laundry detergents for you. The first one is . . . " during the response time interval. In operation 1303, the electronic device 601 may identify whether the display is in an active state within the threshold time interval, based on the fact that the display is in an inactive state within the response time interval. When the display is in an inactive state within the threshold time interval, the process may be terminated. Operation 1301 and operation 1303 may correspond to operation 13 of FIG. 4. Referring to FIG. 5B, in operation 1501, the electronic device 601 may determine whether an application program is being executed in the electronic device 601, based on the fact that the display is in the active state within the response time interval (e.g., 'YES' in operation 1301) or the fact that the display is in the active state within the threshold time interval (e.g., 'YES' in operation 1303). In operation 1503, the electronic device 601 may display a user interface corresponding to the user interface information on the first screen of the electronic device 601, based on the determination that the application program is not being executed (e.g., 'NO' in operation 1501). FIGS. 6B and 6C are diagrams for describing a user interface corresponding to user interface information provided by an electronic device, according to various embodiments of the disclosure. Referring to FIG. 6B, the display of the electronic device 601 may be in an active state, and the electronic device 601 may be in a state where a first screen UIL1 is displayed through the display 660. For example, the electronic device 601 may provide a first user interface UI1 associated with the response D2 of FIG. 6A through the display 660. For example, when the first external electronic device 300 outputs a response (e.g., D2 in FIG. 6A) saying "Hi! Here are 8 best laundry detergents for you. The first one is . . . ", the electronic device 601 may receive user interface information including content associated with the found laundry detergent from the second external electronic device 200. The electronic device 601 may provide the first user interface UI1 corresponding to the user interface information on the first screen UIL1 through the display 660. In the drawings, the electronic device 601 is described as being a smart phone, but is not limited thereto. In an embodiment, when the electronic device 601 is a smart watch, the first user interface UI1 may be provided through a display of the smart watch. Referring to FIG. 6C, the display 660 of the electronic device 601 may be in an active state, and the electronic device 601 may be in a state where a first screen UIL2 is displayed through the display 660. The electronic device 601 may provide, through the display 660, a second user interface UI2 capable of controlling a response output from the first external electronic device 300.
In an embodiment, when media content (e.g., 'Black Mirror') is being played through the first external electronic device 300 as a response to a user utterance, the electronic device 601 may provide, through the display 660, the second user interface UI2 for controlling the media content being played. Referring to FIGS. 6B and 6C, the first screen (UIL1, UIL2) may be either a lock screen or a screen provided before a home screen is provided. Referring to FIG. 5B, in operation 1505, the electronic device 601 may display a user interface corresponding to the user interface information on an execution screen of the application program, based on the determination that the application program is running (e.g., 'YES' in operation 1501). The user interface may include content which corresponds to the user interface information and which is associated with the response. Besides, the user interface may correspond to the running application program and may be provided to overlap a part of the execution screen of the application program. FIGS. 6D and 6E are diagrams for describing a user interface corresponding to user interface information provided by an electronic device, according to various embodiments of the disclosure. Referring to FIG. 6D, the display 660 of the electronic device 601 may be in an active state, and a text message application program may be running in the electronic device 601. The electronic device 601 may provide an execution screen UIE1 of the text message application program through the display 660. The first screen may be the execution screen UIE1 of the text message application program. In an embodiment, the electronic device 601 may provide a third user interface UI3 so as to overlap a part of the execution screen UIE1 of the text message application program. In an embodiment, the electronic device 601 may provide the third user interface UI3 at the upper end of the display 660 of the electronic device 601 in the form of a notification. Referring to FIG. 6E, the first screen may be a home screen. When the first screen is a home screen UIL3, the electronic device 601 may provide the first user interface UI1 through the display 660 so as to partially overlap the home screen UIL3. However, an embodiment is not limited thereto. The electronic device 601 may provide the first user interface UI1 as a part of a screen displayed through the display 660 in response to a swipe input from the upper end to the lower end of the display 660 on the home screen UIL3. In this case, the first user interface UI1 may partially overlap the home screen UIL3 and may be provided through the display 660. Referring to FIG. 5B, after providing the user interface through the display, in operation 1507, the electronic device 601 may determine whether a user input for selecting at least part of the user interface is input. When the user input is not input (e.g., 'NO' in operation 1507), the process may be terminated. In operation 1509, the electronic device 601 according to an embodiment may provide an additional user interface including information corresponding to a user input through the display 660, based on the fact that the user input is input (e.g., 'YES' in operation 1507). For example, the user interface including the information corresponding to the user input may be associated with content included in the response. FIG. 7A is a diagram for describing a user interface corresponding to user interface information provided by an electronic device, according to an embodiment of the disclosure.
FIG. 7B is a diagram for describing a user interface capable of being provided through a display after a user input for selecting at least part of a fourth user interface UI4 of FIG. 7A is input to the electronic device, according to an embodiment of the disclosure. Referring to FIG. 7A, the display 660 of the electronic device 601 may be in an active state, and the electronic device 601 may be in a state where a first screen UIL4 is displayed through the display 660. In an embodiment, when music is playing as a response from the first external electronic device 300, the electronic device 601 may provide, through the display 660, the fourth user interface UI4, which includes information about the music being played and is used to control the music being played. A lyrics part of the music being played from the first external electronic device 300 may be synchronized and displayed (a) on the fourth user interface UI4. For example, the lyrics part of the music being played from the first external electronic device 300 may be displayed (a) on the fourth user interface UI4 so as to be distinguished from the other lyrics parts. In an embodiment, when the music is played and then stopped as a response from the first external electronic device 300, the electronic device 601 may provide, through the display 660, the fourth user interface UI4 including the display (a) of the stopped part. For example, the display 660 of the electronic device 601 may be activated within a specified time after the music is played and then stopped as a response from the first external electronic device 300. For example, as the user takes an action of lifting the electronic device 601 and the electronic device 601 detects the action, the display 660 may be activated. Alternatively, for example, as the user presses a physical button for activating the display 660 of the electronic device 601 or applies a touch input to the display 660, the display 660 may be activated. The electronic device 601 may provide the fourth user interface UI4 through the display 660 based on the fact that the display 660 is activated within the specified time after the music is played and then stopped as a response from the first external electronic device 300. Through the synchronization, the electronic device 601 may display (a) the lyrics of the part where the music was played and then stopped as a response from the first external electronic device 300 so as to be distinguished from the other lyrics. Referring to FIG. 7A, the first screen UIL4 may be either a lock screen or a screen provided before a home screen is provided. However, an embodiment is not limited thereto, and the first screen UIL4 of FIG. 7A may be a home screen. Referring to FIG. 7B, the electronic device 601 may receive a user input for selecting at least part of the fourth user interface UI4. Through the display 660, the electronic device 601 may provide, in response to the user input, a user interface UI4d including details associated with the music that is the response. For example, through the user interface UI4d including details provided in response to the user input, the electronic device 601 may provide detailed information about the music being played from the first external electronic device 300 through the display 660. In the user interface UI4d including details, the lyrics part of the music being played from the first external electronic device 300 may be displayed so as to be synchronized and distinguished from the other lyrics parts.
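One way the lyrics synchronization described above might work is sketched below (Python; the timestamped-lyrics format and values are invented for illustration):

from bisect import bisect_right
from typing import List, Tuple

def current_lyric_index(timed_lyrics: List[Tuple[float, str]],
                        position_sec: float) -> int:
    # Return the index of the lyric line playing (or last played, if playback
    # stopped) at the given position, so it can be displayed distinguished
    # from the other lyrics parts.
    starts = [t for t, _ in timed_lyrics]
    return max(0, bisect_right(starts, position_sec) - 1)

lyrics = [(0.0, "First line"), (12.5, "Second line"), (27.0, "Third line")]
idx = current_lyric_index(lyrics, 15.2)
print(lyrics[idx][1])  # "Second line" would be highlighted in the user interface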
Referring to FIG. 5B, in operation 1511, the electronic device 601 may execute an application program associated with the response based on the fact that a user input is input. FIG. 8A is a diagram for describing a user interface corresponding to user interface information provided by an electronic device, according to an embodiment of the disclosure. FIG. 8B is a diagram for describing a user interface capable of being provided through a display after a user input for selecting at least part of a fifth user interface UI5 of FIG. 8A is input to an electronic device, according to an embodiment of the disclosure. Referring to FIG. 8A, the display 660 of the electronic device 601 may be in an active state, and the electronic device 601 may be in a state where a first screen UIL5 is displayed through the display 660. In an embodiment in which body content included in an e-mail is output as a response by voice from the first external electronic device 300, the electronic device 601 may provide, on the first screen UIL5, a fifth user interface UI5 with information about the e-mail being output from the first external electronic device 300. Referring to FIG. 8A, the first screen UIL5 may be either a lock screen or a screen provided before a home screen is provided. However, an embodiment is not limited thereto, and the first screen UIL5 of FIG. 8A may be a home screen. Referring to FIG. 8B, the electronic device 601 may receive a user input for selecting at least part of the fifth user interface UI5. The electronic device 601 may execute an e-mail application program in the electronic device 601 in response to the user input. The electronic device 601 may execute the e-mail application program and may then provide, through the display 660, an interface UI5d including details of the e-mail being output by voice from the first external electronic device 300. The user may identify the title and/or reception date of the e-mail being output by voice from the first external electronic device 300 through the fifth user interface UI5 on the first screen UIL5. The user may visually identify the body content of the e-mail from the interface UI5d including details of the e-mail by selecting at least part of the fifth user interface UI5 on the first screen UIL5. Operation 1501, operation 1503, operation 1505, operation 1507, operation 1509, and operation 1511 of FIG. 5B may correspond to operation 15 of FIG. 4. Each of the operations of FIGS. 5A and 5B is not limited to the illustrated order. Each operation may be performed simultaneously or may be performed in a different order from that shown. In an embodiment, all of the operations of FIGS. 5A and 5B may be performed in the electronic device 601 (or the processor 620 in FIG. 2). In this case, operation 1102, operation 1109, and operation 1119 may be omitted. Hereinafter, an operation of an integrated intelligence system (e.g., the integrated intelligence system 10 of FIG. 2) including an electronic device disclosed in this specification will be described with reference to FIGS. 5A, 5B, and 9. For clarity of description, details identical to those described above are omitted. FIG. 9 is a sequence diagram for describing a method for providing a user interface corresponding to a response, according to an embodiment of the disclosure. Referring to FIGS. 5A, 5B, and 9, when it is identified that a display is in an active state within a time interval (e.g., a response time interval or a threshold time interval) in operation 1301 and/or operation 1303, in operation 1305, the electronic device 601 may further determine whether the time interval has expired.
For example, the electronic device 601 may transmit a request for determining whether the time interval has expired to one of the first external electronic device 300 or the second external electronic device 200. When the electronic device 601 further determines whether the time interval has expired, unnecessarily providing a user interface for a response whose time interval has already expired by the time the user interface is to be provided may be reduced, even though it was identified that the display is in an active state within the time interval (e.g., the response time interval or the threshold time interval). In operation 1307, the electronic device 601 may receive feedback on the transmitted request from the one of the first external electronic device 300 or the second external electronic device 200 to which the request was transmitted. In operation 1309, the electronic device 601 may determine whether the time interval has expired through the feedback. When the time interval has expired, the electronic device 601 may terminate the process even though it was identified in operation 1301 and/or operation 1303 that the display is in an active state within the time interval. When it is identified that the time interval has not expired, the electronic device 601 may perform operation 1501. Hereinafter, an integrated intelligence system according to an embodiment disclosed in this specification will be described with reference to FIGS. 10A, 10B, and 10C. For clarity of description, details identical to those described above are omitted. FIGS. 10A, 10B, and 10C are diagrams for describing a user interface corresponding to user interface information provided by an electronic device, according to various embodiments of the disclosure. Referring to FIGS. 10A, 10B, and 10C, a second external electronic device (e.g., the second external electronic device 200 of FIG. 2) may further include a history manager module (not shown). The history manager module may store information about user interfaces provided by the electronic device 601 through the display 660. The history manager module may determine whether there is a history related to an input user utterance that precedes the input user utterance. When the related history is present, the history manager module may transmit, to the electronic device 601, the user interfaces for the related history as user interface history information, together with user interface information associated with a response to the input user utterance. After the electronic device 601 provides a first user interface (e.g., UI8) associated with a first response through the display 660, the electronic device 601 may have a history of providing a second user interface (e.g., UI7) associated with a second response through the display 660. When a third user utterance is entered as an input, the history manager module may identify first user interface information and second user interface information as the history associated with the third user utterance. The history manager module may generate user interface history information including information about the first user interface and information about the second user interface. The history manager module may transmit the user interface history information to the electronic device 601, together with user interface information and information about a time interval associated with a response to the third user utterance.
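The backward navigation through prior user interfaces (e.g., via the user inputs IN1 and IN2 described below) might be sketched as follows (Python; the payload shape and names are assumptions for illustration only):

from typing import List, Optional

class UIHistory:
    def __init__(self, current_ui: str, history: List[str]):
        # history is ordered most recent first, e.g., ["UI7", "UI8"]
        self._stack = [current_ui] + history
        self._pos = 0

    def swipe_back(self) -> Optional[str]:
        # Return the next older user interface, if any.
        if self._pos + 1 < len(self._stack):
            self._pos += 1
            return self._stack[self._pos]
        return None

nav = UIHistory("UI6", ["UI7", "UI8"])
print(nav.swipe_back())  # UI7 (after user input IN1)
print(nav.swipe_back())  # UI8 (after user input IN2)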
On the basis of the user interface history information, the electronic device 601 may provide a sixth user interface UI6 corresponding to the user interface information associated with the response to the third user utterance through the display 660. Instead of the sixth user interface UI6, the electronic device 601 may provide the seventh user interface UI7 associated with the response to the second user utterance through the display 660, based on the fact that a user input IN1 is input. Instead of the seventh user interface UI7, the electronic device 601 may provide the eighth user interface UI8 associated with the response to the first user utterance through the display 660, based on the fact that a user input IN2 is input. In an embodiment, the user input IN1 and/or the user input IN2 may be associated with a swipe operation. The electronic device according to various embodiments may be one of various types of electronic devices. The electronic devices may include, for example, a portable communication device (e.g., a smartphone), a computer device, a portable multimedia device, a portable medical device, a camera, a wearable device, or a home appliance. According to an embodiment of the disclosure, the electronic devices are not limited to those described above. It should be appreciated that various embodiments of the disclosure and the terms used therein are not intended to limit the technological features set forth herein to particular embodiments, and include various changes, equivalents, or replacements for a corresponding embodiment. With regard to the description of the drawings, similar reference numerals may be used to refer to similar or related elements. It is to be understood that a singular form of a noun corresponding to an item may include one or more of the things, unless the relevant context clearly indicates otherwise. As used herein, each of such phrases as "A or B", "at least one of A and B", "at least one of A or B", "A, B, or C", "at least one of A, B, and C", and "at least one of A, B, or C" may include any one of, or all possible combinations of, the items enumerated together in a corresponding one of the phrases. As used herein, such terms as "1st" and "2nd", or "first" and "second" may be used to simply distinguish a corresponding component from another, and do not limit the components in other aspects (e.g., importance or order). It is to be understood that if an element (e.g., a first element) is referred to, with or without the term "operatively" or "communicatively", as "coupled with", "coupled to", "connected with", or "connected to" another element (e.g., a second element), it means that the element may be coupled with the other element directly (e.g., wiredly), wirelessly, or via a third element. As used herein, the term "module" may include a unit implemented in hardware, software, or firmware, and may interchangeably be used with other terms, for example, "logic", "logic block", "part", or "circuitry". A module may be a single integral component, or a minimum unit or part thereof, adapted to perform one or more functions. For example, according to an embodiment, the module may be implemented in the form of an application-specific integrated circuit (ASIC). Various embodiments as set forth herein may be implemented as software (e.g., the program 140) including one or more instructions that are stored in a storage medium (e.g., internal memory 136 or external memory 138) that is readable by a machine (e.g., the electronic device 101).
For example, a processor (e.g., the processor 120) of the machine (e.g., the electronic device 101) may invoke at least one of the one or more instructions stored in the storage medium and execute it, with or without using one or more other components under the control of the processor. This allows the machine to be operated to perform at least one function according to the at least one instruction invoked. The one or more instructions may include code generated by a compiler or code executable by an interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. Here, the term "non-transitory" simply means that the storage medium is a tangible device and does not include a signal (e.g., an electromagnetic wave), but this term does not differentiate between where data is semi-permanently stored in the storage medium and where the data is temporarily stored in the storage medium. According to an embodiment, a method according to various embodiments of the disclosure may be included and provided in a computer program product. The computer program product may be traded as a product between a seller and a buyer. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., compact disc read only memory (CD-ROM)), or be distributed (e.g., downloaded or uploaded) online via an application store (e.g., PlayStore™), or between two user devices (e.g., smart phones) directly. If distributed online, at least part of the computer program product may be temporarily generated or at least temporarily stored in the machine-readable storage medium, such as memory of the manufacturer's server, a server of the application store, or a relay server. According to various embodiments, each component (e.g., a module or a program) of the above-described components may include a single entity or multiple entities. According to various embodiments, one or more of the above-described components may be omitted, or one or more other components may be added. Alternatively or additionally, a plurality of components (e.g., modules or programs) may be integrated into a single component. In such a case, according to various embodiments, the integrated component may still perform one or more functions of each of the plurality of components in the same or similar manner as they are performed by a corresponding one of the plurality of components before the integration. According to various embodiments, operations performed by the module, the program, or another component may be carried out sequentially, in parallel, repeatedly, or heuristically, or one or more of the operations may be executed in a different order or omitted, or one or more other operations may be added. While the disclosure has been shown and described with reference to various embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents. | 89,774
11861164 | DETAILED DESCRIPTION An interface 1 according to the present invention, shown frontally in FIG. 1, is arranged on a case portion 2, in particular a front panel, of a dialysis machine. The interface 1 includes a display 3 which is provided to present treatment-specific information and input surfaces or display operating elements. In the vicinity of the (directly adjacent) display 3, base operating elements 4a, 4b, 4c (hereinafter referred to as buttons) are shown. In this embodiment, these are examples of first buttons 4a which are arranged below the display 3 and operate, e.g., basic functions such as switch on and off, acknowledge/enter, etc., second buttons 4b which are arranged on the right of the display 3 and serve as quick-function buttons for functions such as "bypass", "disconnect patient", "empty cartridge", etc., as well as third buttons 4c which are arranged on the left of the display 3 and serve as a mousepad. Said buttons 4a, 4b, 4c are markers printed or glued onto the case portion 2 on which the interface 1 is arranged, said markers characterizing additional input surfaces for operating the dialysis machine. Furthermore, a sensor panel 5 is arranged next to, in this embodiment above, the display 3. Said sensor panel 5 extends in parallel to an edge of the display 3 and has a length that corresponds at least to the length of the display 3 and the adjacent buttons 4c, 4b. That is to say, the sensor panel 5 extends so that both the display 3 and the display operating elements possibly indicated thereon, as well as the buttons 4a, 4b, 4c, are located to the side of the sensor panel 5 and can be detected by it. Optical sensors, in particular infrared sensors and/or emitters, provided in one side of the sensor panel 5 are aligned in the direction of the display 3 and the buttons 4a, 4b, 4c and span a detection surface 6 that extends both over the display 3, in parallel thereto, and over the buttons 4a, 4b, 4c. Thus, the sensor panel 5 can detect when a user passes through the detection surface 6, and at which position this takes place, so as to touch one of the display operating elements of the display 3 or one of the buttons 4a, 4b, 4c and, thus, to operate the interface 1. Accordingly, by comparing the measuring data of the sensor panel 5 to the known or set positions of the display operating elements and buttons 4a, 4b, 4c, a control of the dialysis machine can be performed. FIG. 2 is a schematic cross-sectional view of the interface 1 and illustrates the structure thereof. As afore-described, the display 3 is arranged or embedded on a front side of the case portion 2. Beneath the display 3, the buttons 4a, 4b, 4c are glued or printed onto the case portion 2, but they are not visible in this view due to their two-dimensional configuration. Behind the buttons 4a, 4b, 4c, capacitive touch sensors 7, which serve as redundant sensors to detect touching or operation of the buttons by a user, are arranged in the case portion 2. Further, above the display 3 and beneath the buttons 4a, 4b, 4c, indicator devices 8, in particular vibration elements and/or LEDs, are arranged on or embedded in the case portion 2. They may provide a user with haptic and/or optical feedback in the case of specific inputs or reports. The case portion 2 that includes the interface 1 is completely covered by a safety shield 9, preferably made of Plexiglass. Said safety shield 9 prevents the display 3 and the buttons 4a, 4b, 4c from being stained, prevents the latter from being worn out or damaged by a user's touching, aggressive disinfectants, etc., and moreover protects them against application of excessive force.
Such safety shield9is stable and, in addition, can be attached and cleaned easily and quickly, in particular because it completely covers the case portion2. As already described in the foregoing, the sensor panel5is further arranged above the display3on a front face of the safety shield9in such a way that a detection field6extending directly next to and in parallel to the safety shield9and extending both over the display3and over the buttons4a,4b,4cis spanned by the optical sensors/emitters disposed in the sensor panel5. In other words, the Figures illustrate a possible configuration variant of the adaptive monitor concept. The display3and the operating elements4a,4b,4care realized in printed form on the monitor case2and as elements7behind the monitor case2, for example elements marketed under the trademark CAPSENSE™. The safety shield, in particular a Plexiglass pane9, is placed over the complete monitor front on which the sensor module5is installed. The latter spans, over the monitor front, an optical sensor surface6which can detect objects as well as their movement and shape when they interrupt the sensor surface6. Haptic feedback can be made possible, e.g., via vibration elements (indicator devices)8behind the safety shield/Plexiglass plate9. During normal operation, in this way the complete monitor front can be operated via the optical sensor module5. Auxiliary functionalities such as the mousepad4cor the quick-function buttons4bmay be switched to be inactive and are ignored by the sensor surface6. In the event of a defective display, the touch functionalities of the display surface3can be deactivated and those of the auxiliary keys4a,4b,4ccan be activated to be able to complete the treatment of the patient in a safe and largely comfortable manner. Accordingly, the sensor surface6would evaluate only contacts outside the display area3. Should the sensor module5fail, operation by the auxiliary keys4a,4b,4cis still safeguarded, as their functionalities can now be activated via the redundant capacitive touch sensors7. This means that the machine, similarly to a laptop, can still be transferred to a safe state via the mousepad4crelatively easily, without having to initiate emergency measures. These adaptively and redundantly wired touch surfaces6enable a robust and innovative operating concept which is easy to clean and can be operated precisely even with gloves or with objects. | 5,929 |
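The position-to-element mapping and mode switching described above lend themselves to a compact illustration. The following Python sketch is purely hypothetical (region coordinates, element names, and the two operating modes are illustrative assumptions, not taken from the disclosure) and shows how a touch detected in the optical sensor surface6might be resolved against the currently active input surfaces:

```python
# Hypothetical sketch: resolving a touch in the optical detection
# surface (6) to an operating element; coordinates and names assumed.
from dataclasses import dataclass

@dataclass(frozen=True)
class Region:
    name: str
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

# Assumed layout: display (3), base buttons (4a), quick-function
# buttons (4b), and mousepad (4c) in sensor-panel coordinates.
REGIONS = [
    Region("display", 10.0, 10.0, 110.0, 70.0),
    Region("base_buttons", 10.0, 75.0, 110.0, 90.0),
    Region("quick_buttons", 115.0, 10.0, 130.0, 70.0),
    Region("mousepad", 0.0, 10.0, 8.0, 70.0),
]

def resolve_touch(x: float, y: float, display_ok: bool) -> str | None:
    """Map a detected position to the element at that position.

    During normal operation the display is active and the auxiliary
    surfaces are ignored; with a defective display, only the auxiliary
    keys (4a, 4b, 4c) are evaluated, as described above."""
    if display_ok:
        active = {"display", "base_buttons"}
    else:
        active = {"base_buttons", "quick_buttons", "mousepad"}
    for region in REGIONS:
        if region.name in active and region.contains(x, y):
            return region.name
    return None  # touch outside all currently active input surfaces
```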
11861165 | DETAILED DESCRIPTION All examples and illustrative references are non-limiting and should not be used to limit the claims to specific implementations and embodiments described herein and their equivalents. For simplicity, reference numbers may be repeated between various examples. This repetition is for clarity only and does not dictate a relationship between the respective embodiments, unless noted otherwise. Finally, in view of this disclosure, particular features described in relation to one aspect or embodiment may be applied to other disclosed aspects or embodiments of the disclosure, even though not specifically shown in the drawings or described in the text. A client may store data objects at a storage node, which may then back up some of its data objects at a backing store. If the size of the data object is greater than a threshold, the storage node may partition the data object into a plurality of segments and store the individual segments. Content stored at the storage node may be backed up to the backing store. The term “content” may be used to refer to a “data object” or a “segment of a data object.” The backing store may store individual segments of a data object and transition segments stored at the backing store into different states or storage classes. In some examples, the client sends a metadata request to the storage node for the state of the data object. The metadata request may be a request for metadata of the data object without a request for the return of the actual data object itself. The client may be unaware that the storage node backs up content to a cloud endpoint (e.g., backing store). The storage node may send a request to the appropriate backing store for the segment state of each segment of which the data object is composed. The storage node may determine the state of the data object based on the returned segment states. Rather than request the segment state for each segment of the data object, the storage node may sample segments of the data object. For example, the storage node may select a subset of the plurality of segments and request the segment states for the subset. A segment stored in a backing store may be in one of a plurality of segment states, each segment state indicating whether the respective segment is accessible via a backing store. Different segment states may be associated with different costs. In an example, the more restrictive a segment state of a segment is, the cheaper it may be to store the segment. As an example, a first state may be more restrictive than a second state if more processing cycles are used for returning a segment that is in the first state compared to the second state. For example, a segment that is inaccessible and for which no restore operation has been triggered may be in a more restrictive state than a segment that is accessible. If a segment is inaccessible via the backing store, the entire data object may be inaccessible. The storage node determines a most restrictive state of the selected subset and transmits state information derived from the most restrictive state to the client in response to the client's metadata request. The state information may indicate the state of the entire data object to the client. By sampling a subset of the plurality of segments for their segment states rather than all segments of the data object, latency may be reduced while determining the state of the data object with a reasonable degree of accuracy. 
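As a rough illustration of the sampling scheme just outlined, the sketch below randomly selects a fraction of a data object's segments, asks for each sampled segment's state, and reports the most restrictive state observed as the state of the whole object. The state names, the restrictiveness ordering, and the fetch_state interface are assumptions made for the example; the disclosure does not prescribe an implementation:

```python
# Illustrative sketch of sampling-based state approximation (all names
# and the state ordering are assumptions, not the patented interface).
import random

# Higher value = more restrictive; mirrors the ordering discussed above.
RESTRICTIVENESS = {
    "accessible": 0,           # readable directly, not archived
    "restored": 1,             # readable only for a restore window
    "restore_in_progress": 2,  # restore triggered, not yet complete
    "archived": 3,             # inaccessible, no restore triggered
}

def approximate_object_state(segment_ids, fetch_state, sample_fraction=0.1):
    """Approximate an object's state from a random sample of its segments.

    fetch_state(segment_id) is assumed to ask the segment's backing store
    for its state; sampling a subset avoids one request per segment."""
    if not segment_ids:
        raise ValueError("object has no segments")
    k = max(1, int(len(segment_ids) * sample_fraction))
    sample = random.sample(list(segment_ids), k)
    states = [fetch_state(sid) for sid in sample]
    # The most restrictive observed state is reported for the whole object.
    return max(states, key=RESTRICTIVENESS.__getitem__)
```

Because segments of one object tend to be migrated and restored together, even a small sample fraction tends to report the same state a full scan would, at a fraction of the request cost.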
The segment states of the subset may be a close approximation of the state of the entire data object because segments of the data object are typically migrated together and restored together. Accordingly, the segment states corresponding to the same data object have a high probability of being the same. Additionally, costs may be reduced by leveraging the sampling techniques discussed in the present disclosure due to fewer requests being made to cloud-service providers. The more requests issued to a cloud-service provider regarding a data object, the more expensive it may be to find information on or retrieve the data object. FIG.1is a schematic diagram of a computing architecture100according to aspects of the present disclosure. The computing architecture100includes one or more host systems102(hosts), each of which may interface with a distributed storage system104to store and manipulate data. The distributed storage system104may use any suitable architecture and protocol. For example, in some embodiments, the distributed storage system104is a StorageGRID system, an OpenStack Swift system, a Ceph system, or other suitable system. The distributed storage system104includes one or more storage nodes106over which the data is distributed. The storage nodes106are coupled via a back-end network108, which may include any number of wired and/or wireless networks such as a Local Area Network (LAN), an Ethernet subnet, a PCI or PCIe subnet, a switched PCIe subnet, a Wide Area Network (WAN), a Metropolitan Area Network (MAN), the Internet, or the like. In some exemplary embodiments, the storage nodes106are coupled by a TCP/IP back-end network108, which is local to a rack or datacenter, although additionally or in the alternative, the network108may extend between sites in a WAN configuration or be a virtual network extending throughout a cloud. As can be seen, the storage nodes106may be as physically close or as widely dispersed as the application may warrant. In some examples, the storage nodes106are housed in the same racks. In other examples, storage nodes106are located in different facilities at different sites anywhere in the world. The node arrangement may be determined based on cost, fault tolerance, network infrastructure, geography of the hosts, and other considerations. A technique for preserving and restoring the data contained in these storage nodes106, suitable for use with any of these arrangements, is described with reference to the figures that follow. In the illustrated embodiment, the computing architecture100includes a plurality of storage nodes106in communication with a plurality of hosts102. It is understood that for clarity and ease of explanation, only a limited number of storage nodes106and hosts102are illustrated, although the computing architecture100may include any number of hosts102in communication with a distributed storage system104containing any number of storage nodes106. An exemplary storage system104receives data transactions (e.g., requests to read and/or write data) from the hosts102and takes an action such as reading, writing, or otherwise accessing the requested data so that storage devices110of the storage nodes106appear to be directly connected (local) to the hosts102. This allows an application running on a host102to issue transactions directed to the data of the distributed storage system104and thereby access this data as easily as it can access data on storage devices local to the host102. 
In that regard, the storage devices110of the distributed storage system104and the hosts102may include hard disk drives (HDDs), solid state drives (SSDs), storage class memory (SCM), RAM drives, optical drives, and/or any other suitable volatile or non-volatile data storage medium. Further, one or more of the storage nodes106may be connected to one or more cloud storage providers according to embodiments of the present disclosure, and likewise appear to be directly connected (local) to the hosts102. With respect to the storage nodes106, an exemplary storage node106contains any number of storage devices110in communication with one or more storage controllers112. The storage controllers112exercise low-level control over the storage devices110in order to execute (perform) data transactions on behalf of the hosts102, and in so doing, may group the storage devices for speed and/or redundancy using a protocol such as RAID (Redundant Array of Independent/Inexpensive Disks). The grouping protocol may also provide virtualization of the grouped storage devices110. At a high level, virtualization includes mapping physical addresses of the storage devices into a virtual address space and presenting the virtual address space to the hosts102, other storage nodes106, and other requestors. In this way, the storage node106represents the group of devices as a single device, often referred to as a volume. Thus, a requestor can access data within a volume without concern for how it is distributed among the underlying storage devices110. Further, an exemplary storage node106may be connected to one or more cloud storage providers of varying levels (e.g., standard cloud storage or lower-class cloud storage, or both, for example S3 or GLACIER storage classes). The cloud storage node106may exercise protocol-level control over the allocated cloud storage space available to it on behalf of the hosts102. Such control may be via one or more protocols such as HTTP, HTTPS, etc. In addition to storage nodes, the distributed storage system104may include ancillary systems or devices (e.g., load balancers114). For example, in some embodiments, a host102may initiate a data transaction by providing the transaction to a load balancer114. The load balancer114selects one or more storage nodes106to service the transaction. When more than one alternative is possible, the load balancer114may select a particular storage node106based on any suitable criteria including storage node load, storage node capacity, storage node health, network quality of service factors, and/or other suitable criteria. Upon selecting the storage node(s)106to service the transaction, the load balancer114may respond to the host102with a list of the storage nodes106or may forward the data transaction to the storage nodes106. Additionally, or in the alternative, a host102may initiate a data transaction by contacting one or more of the storage nodes106directly rather than contacting the load balancer114. Turning now to the hosts102, a host102includes any computing resource that is operable to exchange data with the distributed storage system104by providing (initiating) data transactions to the distributed storage system104. In an exemplary embodiment, a host102includes a host bus adapter (HBA)116in communication with the distributed storage system104. The HBA116provides an interface for communicating, and in that regard, may conform to any suitable hardware and/or software protocol. 
In various embodiments, the HBAs116include Serial Attached SCSI (SAS), iSCSI, InfiniBand, Fibre Channel, and/or Fibre Channel over Ethernet (FCoE) bus adapters. Other suitable protocols include SATA, eSATA, PATA, USB, and FireWire. In many embodiments, the host HBAs116are coupled to the distributed storage system104via a front-end network118, which may include any number of wired and/or wireless networks such as a LAN, an Ethernet subnet, a PCI or PCIe subnet, a switched PCIe subnet, a WAN, a MAN, the Internet, or the like. To interact with (e.g., read, write, modify, etc.) remote data, the HBA116of a host102sends one or more data transactions to the load balancer114or to a storage node106directly via the front-end network118. Data transactions may contain fields that encode a command, data (i.e., information read or written by an application), metadata (i.e., information used by a storage system to store, retrieve, or otherwise manipulate the data such as a physical address, a logical address, a current location, data attributes, etc.), and/or any other relevant information. While the load balancers114, storage nodes106, and the hosts102are referred to as singular entities, a storage node106or host102may include any number of computing devices and may range from a single computing system to a system cluster of any size. Accordingly, each load balancer114, storage node106, and host102includes at least one computing system, which in turn includes a processor such as a microcontroller or a central processing unit (CPU) operable to perform various computing instructions. The computing system may also include a memory device such as random access memory (RAM); a non-transitory computer-readable storage medium such as a magnetic hard disk drive (HDD), a solid-state drive (SSD), or an optical memory (e.g., CD-ROM, DVD, BD); a video controller such as a graphics processing unit (GPU); a communication interface such as an Ethernet interface, a Wi-Fi (IEEE 802.11 or other suitable standard) interface, or any other suitable wired or wireless communication interface; and/or a user I/O interface coupled to one or more user I/O devices such as a keyboard, mouse, pointing device, or touchscreen. As described above, the storage system104may distribute the hosts' data across the storage nodes106for performance reasons as well as redundancy. The distributed storage system104is an object-based data system. The storage system104may be a distributed object store that spans multiple storage nodes106and sites. In brief, object-based data systems provide a level of abstraction that allows data of any arbitrary size to be specified by an object identifier. Object-level protocols are similar to file-level protocols in that data is specified via an object identifier that is eventually translated by a computing system into a storage device address. However, objects are more flexible groupings of data and may specify a cluster of data within a file or spread across multiple files. Object-level protocols include CDMI, HTTP, SWIFT, and S3. A data object represents any arbitrary unit of data regardless of whether it is organized as an object, a file, or a set of blocks. FIG.2is a schematic diagram of a storage node106according to aspects of the present disclosure. The storage node106corresponds to storage node106inFIG.1. 
The storage node106may coordinate and manage resources from multiple clouds (e.g., a public and a private cloud) within a single grid or other storage grids and provide access to tiered objects to one or more clients210. Many storage nodes106may be present in the grid and store data backups in the grid. Accordingly, if parts of the grid become unavailable (e.g., a storage node goes down), the client210may still be able to access objects tiered to the cloud by leveraging resources on the multiple storage nodes106that are still available. For example, each storage node106may receive metadata for the objects tiered to one or more clouds. Thus, while a given storage node106may have initially tiered particular data to a cloud storage provider, the corresponding metadata becomes available in the other storage nodes106in the grid and therefore those other storage nodes106now have the appropriate information to access that tiered data independent of the originating storage node106. The storage node106includes a server202, an information lifecycle management (ILM)204, a cloud tier proxy206, and a storage pool208. The storage pool208may be stored in the storage devices110. Additionally, the storage controller112may include the server202, the cloud tier proxy206, and the ILM204. The ILM204may include a policy including a set of prioritized ILM rules that specify the instructions for managing object data over time. The storage pool208uses the cloud tier proxy206to communicate with an external service cloud212. The external service cloud212may be, for example, AWS S3 or Glacier, or an Azure cloud, etc. (just to name a few non-limiting examples). The cloud tier proxy206provides a tiering service that runs on one or more storage nodes106.
Object State
When an object is stored in the cloud, for example tiered to the cloud according to embodiments of the present disclosure, the object may be segmented into content blocks that have their own individual metadata, as well as metadata that identifies the segments as being part of the composite segmented object, referred to herein as a container segment. The cloud tier proxy206determines a state of objects composed of multiple segments in the cloud based on sampling a subset of the segments. The server202has the intelligence to combine the result of sampling and form an approximation of the state of the object. Additionally, the server202may use the cloud tier proxy206to communicate with the external service cloud212. In an example, objects in the distributed storage system104may be composed of multiple segments up to a maximum number of segments (e.g., 10,000 maximum segments). An object may be stored in the grid and archived into a first archive store. Under the ILM control, if the object has not been accessed after a threshold amount of time, the object may be moved from the first archive store and archived into a second archive store (e.g., tiered to the cloud). The object may stay stored in the second archive store for a long time. The cloud tier proxy206may determine the state of an object that has been tiered to the cloud. In current approaches, to determine the state of the object, every segment would be checked. This is burdensome. In some examples of the present disclosure, the cloud tier proxy206may approximate a state of the object by sampling a state of a subset of multiple segments of the object. The cloud tier proxy206may inspect and determine, based on the sampling, the state of the object. 
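The next paragraphs describe sampling in accordance with a configurable percentage, with the container segment checked first because it is typically restored last. A minimal sketch of that ordering follows; the function name, the is_available probe, and the percentage handling are assumptions made for illustration:

```python
# Hypothetical sketch: container-first probing of a segmented object.
def object_restored(container_id, data_segment_ids, is_available,
                    sampling_percentage=10):
    """Probe the container segment first, then a fraction of the rest.

    The container segment is typically restored last, so if it is
    available the object as a whole can be treated as restored."""
    if is_available(container_id):
        return True
    count = max(1, len(data_segment_ids) * sampling_percentage // 100)
    return all(is_available(sid) for sid in data_segment_ids[:count])
```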
The cloud tier proxy206may sample the object in accordance with a sampling percentage. In an example, if the object is composed of 10,000 segments and the sampling percentage is 10%, the cloud tier proxy206samples 1,000 segments of the object in a particular order. The sampling may include the container segment. For example, the cloud tier proxy206may sample the container segment first. If the container segment is available, the cloud tier proxy206may determine that the object in total is restored/available to a host. This is because the container segment is typically restored last out of all the segments of a given object. Accordingly, the cloud tier proxy206may determine, based on the sampling, a state of the segmented object as-a-whole, on the endpoint. In another example, the cloud tier proxy206may determine, based on performing a POST restore operation, a state of the segmented object as-a-whole, on the endpoint. The POST restore operation may be an HTTP POST request to perform a restore operation. The order of restoring segments of an object for a POST restore operation may be used with the approximation scheme. If an object is tiered to the first archive store and then transitioned to the second archive store, the object may not be directly accessible from the grid. For example, if the client210attempts to retrieve the object, the client210may receive an error specifying that the object is in an invalid state and may further receive a request to perform a POST restore operation to retrieve the object. The POST restore operation for multiple-segment objects may involve performing a restore for each object segment that has been tiered to the second archive store in order to move the object segment from the second archive store to the first archive store. Additionally, the client210may be expected to periodically check for the completion of the restore operation. Depending on the resource tier being used, the completion time may vary greatly (e.g., from a couple of minutes to 12 hours or more). A HEAD operation may be used to retrieve metadata from the object without returning the object itself. The client210may execute the HEAD operation to check the object's resource state to determine whether the object restore has completed and whether the object is in a retrievable state. In some examples, the cloud tier proxy206executes the POST restore workflow for tiered objects. The cloud tier proxy206supports the semantics of a POST restore operation and retrieval of content after the POST restore operation. The POST restore operation may trigger a restore for all individual segments of an object tiered to the second archive store but does so in such a way that the HEAD operation may use some information about the order of the POST restore operation to ensure the samplings of the recorded object states have a higher chance of being correct than incorrect. Additionally, the container segment may be restored last. Accordingly, if an error occurs before the restore operation completes, the container segment will not be restored and the object will not be in a restored state. Accordingly, the cloud tier proxy206may approximate the state of an object on the cloud as-a-whole. A first state may be that the object is accessible from the first archive store and has not yet been tiered to the second archive store. A second state may be that the object has been tiered to the second archive store and is inaccessible from the second archive store. 
A third state may be that the object is stored in the second archive store and a restore of the object has been successful. A fourth state may be that the object is stored in the second archive store, a restore of the object has been issued and completed, and the object is accessible for a timeframe (e.g., a couple of days). All of this information may be approximated according to the sampling embodiments discussed herein. FIG.3is a flow diagram of a method300of analyzing a state of a data object according to aspects of the present disclosure. It is understood that additional steps can be provided before, during, and after the steps of method300, and that some of the steps described can be replaced or eliminated for other embodiments of the method. Referring to block302ofFIG.3, a storage node106receives a request for a data object stored in an external service cloud, the data object being composed of a plurality of segments. In a block304, the storage node106samples a subset of the plurality of segments. In a block306, the storage node106determines, based on sampling the subset, the state of the data object. In a block308, in response to the state being in a given state, the storage node106determines that the data object has been tiered to an archive store and is inaccessible from the archive store. This is not intended to be limiting, and the given state may be a different state than that described. For example, in response to the state being in a given state, the storage node106may determine that the data object is accessible from the first archive store and has not yet been tiered to the second archive store. In another example, in response to the state being in a given state, the storage node106may determine that the data object has been tiered to the second archive store and is inaccessible from the second archive store. In another example, in response to the state being in a given state, the storage node106may determine that the data object is stored in the second archive store and a restore of the object has been successful. In another example, in response to the state being in a given state, the storage node106may determine that the data object is stored in the second archive store, a restore of the object has been issued and completed, and the object is accessible for a timeframe. FIG.4is a schematic diagram of a computing architecture400according to aspects of the present disclosure. InFIG.4, the client210may desire to store a data object402at the storage node106. The storage node106may correspond to the grid discussed above. The client210sends a request to the storage node106to store the data object402and sends the data object402to the storage node106. The storage node106includes a segmenting engine404, a sampling engine406, a restore engine408, and a backup engine410. The segmenting engine404may interact with the cloud storage pool208to store the data object402. In some examples, the segmenting engine404is incorporated within the storage controllers112in the storage node106. The segmenting engine404receives the client's request to store the data object402and the data object402. The segmenting engine404may store the data object402in one or more storage devices110(seeFIG.1). In an example, the segmenting engine404may store the data object402as a whole in the storage node106. In another example, the segmenting engine404may partition the data object402into a plurality of segments420a,420b,420c, and420dfor storage at the storage node106. 
The data object402is composed of the plurality of segments420a,420b,420c, and420d. In an example, the segmenting engine404determines whether a size of the data object402exceeds a threshold. If the size of the data object402exceeds the threshold, the segmenting engine404may partition the data object402into the plurality of segments420for storage at the storage node106. In another example, the client210may upload the data object402as a multipart upload to the storage node106. If the segmenting engine404detects that the data object402is part of a multipart upload request, the segmenting engine404may partition the data object402into the plurality of segments420for storage at the storage node106. The content stored at the storage node106may be subject to the ILM rules204. The term “content” may be used to refer to a “data object” or a “segment of a data object.” The backup engine410may track the lifecycle of content stored at the storage node106using the ILM rules204. In an example, the backup engine410tiers content stored at the storage node106out to the cloud in accordance with the ILM rules204. An administrator may configure the ILM rules204in accordance with an enterprise organization's business practices and goals. For example, the administrator may take advantage of lower costs associated with storing the data object402at a backing store432compared to at the storage node106. Storage at the backing store432may be an order of magnitude cheaper than storage at the storage node106. The backing store432may correspond to the first and second archive stores discussed above, as will be further discussed below. If the backup engine410detects, based on the ILM rules204, that content should be backed up to a cloud endpoint, the backup engine410migrates the content to the cloud endpoint. In an example, the cloud endpoint includes the backing store432. If the data object402is stored as a whole (without being partitioned into the plurality of segments420), the backup engine410may migrate the data object402from the storage node106to the backing store432and delete the data object402from the storage node106. The backing store432may receive the data object402from the storage node106and store the data object402as a whole data object at the backing store432. If the segmenting engine404partitioned the data object402and the segments420a,420b,420c, and420dare each individually stored at the storage node106, the backup engine410may migrate each of the individually stored segments420a,420b,420c, and420dto the backing store432. The backup engine410may migrate these segments at different times. Accordingly, at any point in time, one or more segments (e.g., segment420a) of the data object402may be stored at the storage node106and one or more segments (e.g., segment420b) of the data object402may be stored at the backing store432. In response to receiving an individual segment of the data object402from the storage node106, the backing store432stores the individual segment at the backing store432. The backing store432may send a confirmation to the storage node106that content (e.g., individual segments) has been successfully stored at the backing store432. After the storage node106receives the confirmation, the storage node106may delete all copies of the content from the storage node106. In an example, if the storage node106deletes all copies of the content from the storage node106, the only copies of the content may be found at the backing store432. 
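A minimal sketch of the ingest and backup path just described may help: partition an object into segments when it exceeds a size threshold, migrate each stored segment to the backing store, and delete local copies only after the store confirms. The threshold, segment size, and the local_store/backing_store interfaces are assumptions, not taken from the disclosure:

```python
# Hypothetical sketch of threshold-based segmentation and backup.
SEGMENT_THRESHOLD = 64 * 1024 * 1024  # assumed threshold (64 MiB)
SEGMENT_SIZE = 16 * 1024 * 1024       # assumed per-segment size (16 MiB)

def partition(data: bytes) -> list[bytes]:
    """Store small objects whole; split larger ones into segments."""
    if len(data) <= SEGMENT_THRESHOLD:
        return [data]
    return [data[i:i + SEGMENT_SIZE]
            for i in range(0, len(data), SEGMENT_SIZE)]

def backup_segments(segment_ids, local_store, backing_store):
    """Migrate each locally stored segment to the backing store.

    Local copies are deleted only after the backing store confirms a
    successful write, matching the confirmation flow described above."""
    for seg_id in segment_ids:
        payload = local_store.read(seg_id)
        if backing_store.put(seg_id, payload):  # assumed to confirm success
            local_store.delete(seg_id)
```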
For simplicity, segments of the same data object402may be described as being stored at the same backing store (e.g., backing store432), but it should be understood that segments of the same data object402may be stored across multiple backing stores. For example, segment420amay be stored at the backing store432, and segment420bmay be stored at another backing store different from the backing store432. The backing store432may have a plurality of storage classes for the storage of content. The backing store432may correspond to the first and second archive stores discussed above. The first archive store may correspond to a first storage class, and the second archive store may correspond to a second storage class. The backing store432includes a state transitioning engine434that transitions segments stored at the backing store432between the first and second archive stores, which represent different storage classes or tier levels within the backing store432. If a segment is of the first storage class (e.g., stored in the first archive store), the segment may be considered to have not been archived or not tiered in the backing store. The first archive store may be a default store in which the segment is saved. After a condition is satisfied (e.g., time elapses), the state transitioning engine434may archive or tier the segment in the backing store432. If a segment is of the second storage class (e.g., stored in the second archive store), the segment may be considered to have been archived or tiered in the backing store. A storage class associated with a segment may be represented by a segment state of the segment. A “segment state” may also be referred to as a “state.” A segment stored at the backing store432may be in one of a plurality of segment states. The state transitioning engine434transitions segments stored at the backing store432into different segment states. Segments of the data object402, depending on the behavior of the cloud endpoint (e.g., backing store432), can be in different states. The state transitioning engine434may set the segment state of a segment to one segment state of a plurality of segment states. Different segments may be in different segment states for various reasons. In an example, segments are in different segment states because the backing store432tiers various segments of the data object402at different times, depending on the segments' lifecycles. In another example, a restore operation for segments420aand420bmay be triggered. The restore of the segment420amay succeed while the restore of the segment420bfails, causing the segments420aand420bto be in different states. FIG.5illustrates example segment states associated with the backing store432according to aspects of the present disclosure. In the example illustrated inFIG.5, a segment may be in one of four segment states502,504,506, or508. A segment that is in the segment state502is accessible through the backing store432but has not been archived into another storage class yet. In an example, a segment that is in the segment state502may be read directly without a restore operation. The state transitioning engine434may transition a segment that is in the state502to the state504if one or more conditions are satisfied (e.g., time elapses). In an example, the state502may be the default state of a segment that is stored at the backing store432. A segment that is in the segment state504is inaccessible through the backing store432. 
In an example, a segment that is in the segment state504has been archived in the backing store432and no restore operation has been performed for the segment. A segment that is in the segment state506is inaccessible through the backing store432. In an example, a segment that is in the segment state506has been archived in the backing store432and a restore of the segment is in-progress. A restore of a segment that is in-progress indicates that a restore operation for the segment has been requested, but the restore operation has not yet completed. Accordingly, the restore operation has been triggered, but the segment has not been restored yet. A segment that is in the segment state508is accessible through the backing store432. In an example, a segment that is in the segment state508has been archived in the backing store432and a restore operation of the segment has completed. A segment that has been restored is accessible for a restore time period (e.g., seven days). After the restore time period for a segment has elapsed, the state transitioning engine434may set the segment to the state504. Different segment states may be associated with different costs. In an example, the more restrictive a segment state of a segment is, the cheaper it may be to store the segment. As an example, the most restrictive state of the states502,504,506, and508may be the state504because more processing cycles may be used for returning a segment that is set to the state504to a requestor compared to a segment that is set to the state502,506, or508. A segment that is set to the state504is inaccessible and no restore operation for the segment has been performed. To retrieve that segment, a restore operation for the segment is triggered and completed. In an example, the state information may specify that the data object is inaccessible and that no restore operation for the data object has been performed. As another example, the most restrictive state of the remaining states502,506, and508may be the state506because more processing cycles may be used for returning a segment that is set to the state506to a requestor compared to a segment that is set to the state502or508. A segment that is set to the state506is inaccessible and a restore operation for the segment has been triggered but not yet completed. A segment in the state506is retrievable after the restore operation is completed. In an example, the state information may specify that the data object is inaccessible and a restore operation for the data object has been triggered but not yet completed. As another example, the most restrictive state of the remaining states502and508may be the state508because the segment may be accessible for a shorter time period (e.g., the restore time period) than a segment that is set to the state502. A segment that is set to the state508is accessible for a restore time period. In an example, the state information may specify that the data object is accessible and for how long the data object is accessible. After the restore time period has elapsed for a restored segment, the state transitioning engine434may set the state of the segment to the state504. As an example, the least restrictive state of the states502,504,506, and508may be the state502because the segment may be accessible for the longest time period and without consuming more processing cycles, compared to a segment in the state504,506, or508. In an example, the state information may specify that the data object is accessible. 
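The four segment states of FIG.5 and their relative restrictiveness can be summarized in code. In the sketch below, the enum values encode the ordering discussed above (504 most restrictive, then 506, then 508, then 502 least restrictive); the names and the expiry helper are illustrative assumptions:

```python
# Sketch of the four segment states of FIG. 5 (names are assumptions).
from enum import IntEnum

class SegmentState(IntEnum):
    # Higher value = more restrictive, per the ordering discussed above.
    STATE_502 = 0  # accessible, not yet archived (default state)
    STATE_508 = 1  # restore completed; accessible for a restore window
    STATE_506 = 2  # restore in progress; currently inaccessible
    STATE_504 = 3  # archived, no restore triggered; inaccessible

def most_restrictive(states: list[SegmentState]) -> SegmentState:
    """The state reported for an object is the most restrictive state
    observed among its sampled segments."""
    return max(states)

def expire_restore(state: SegmentState) -> SegmentState:
    """After the restore time period elapses, a restored segment (508)
    is set back to the archived state (504)."""
    return SegmentState.STATE_504 if state is SegmentState.STATE_508 else state
```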
An enterprise that uses the backing store432to back up content stored at the storage node106may configure the different segment states and configure the state transitioning engine434to transition one or more segments from one state to another state. In an example, the enterprise may configure the state transitioning engine434to transition segments that have been stored at the backing store432over a threshold time period (e.g., six months) from the state502to the state504. The enterprise may determine that, based on its business practices, content stored at the backing store432is generally requested in high volume within six months from storage at the backing store432, and very infrequently after that time period. In another example, the enterprise may establish a default restore period that may be overridden by users with special privileges (e.g., an administrator). In the example illustrated inFIG.5, four segment states are shown, but it should be understood that other examples including fewer than or more than four segment states are within the scope of the disclosure. For example, the backing store432may have two segment states. In an example, a first segment state may indicate that a segment is accessible, and a second segment state may indicate that a segment is inaccessible. Additionally, different segment states than those discussed in the disclosure may be used and are within the scope of the disclosure. Referring back toFIG.4, after the storage node106has stored the data object402, the client210may send a metadata request440for the data object402to the storage node106. In an example, the metadata request440is a request for metadata of the data object402without a request for the return of the actual data object402itself. The storage node106may receive the metadata request440and retrieve the metadata of the data object402without returning the data object402itself. In an example, the sampling engine406receives the metadata request440for the data object402and determines whether the client210has permission to read the data object402. If the client210has permission to read the data object402, the sampling engine406may perform actions to retrieve the metadata and transmit it to the client210. If the client210does not have permission to read the data object402, the sampling engine406may send the client210an error message. In some examples, the sampling engine406samples one or more segments of which the data object402is composed to obtain the states of the sampled segments. The sampling engine406may have access to a content directory442that stores information about the data objects and their associated segments that have been backed up. The content directory442may store location information and the names for each segment that has been migrated to a backing store. For example, the content directory442may specify that segment420ais located at backing store432. In response to the metadata request440, the sampling engine406may perform a lookup of the data object402in the content directory442and find the plurality of segments420of which the data object402is composed. The plurality of segments420may be stored at one or more cloud endpoints (e.g., the backing store432). The sampling engine406selects a subset of the plurality of segments420for sampling using various techniques. The subset may be a percentage of the segments included in the plurality of segments420(e.g., twenty percent). 
For example, if the segmenting engine404partitions the data object402into one thousand segments and the sampling engine406is configured to sample ten percent of the total number of segments of which the data object402is composed, the sampling engine406may select one hundred segments to sample for their segment states. The subset of segments may be randomly selected in the sense that they are arbitrarily selected and one segment being selected is independent of another segment being selected. In an example, the sampling engine406executes a random number generator (e.g., initializing a random seed), associates each segment with a number, and selects, based on the random number generator and the number associated with each segment, the subset of segments for sampling. In another example, the sampling engine406may be configured to sample a default number of segments (e.g., ten segments). Other techniques for selecting the subset of segments are within the scope of the disclosure. For example, the sampling engine406may sample all segments of which the data object402is composed, sample every tenth segment of the plurality of segments of which the data object402is composed, or sample the first and last segments of which the data object402is composed. The sampling engine406selects the subset of segments and sends a state request to each backing store at which a segment of the selected subset is stored. The state request for a segment is a request for the state of the segment. The appropriate backing store receives the state request for a segment and returns the segment's state to the sampling engine406. The sampling engine406may receive a plurality of segment states from one or more backing stores storing the subset of segments. The sampling engine406may keep track of the segment states by storing the plurality of segment states in the content directory442. The sampling engine406determines a most restrictive state of the plurality of segment states and sends state information indicating aspects of the most restrictive state to the client210in response to the metadata request440. For example, the state information may include information such as for how long a temporary copy of a segment will be accessible. The sampling engine406sends the state information derived from the most restrictive state as the state of the data object402. If a segment420ais in the state508having a first restore time period and a segment420bis in the state508having a second restore time period, the more restrictive state of the two may be the one having the shorter restore time period. In this example, the sampling engine406may transmit state information derived from the state508having the shorter restore time period to the client210, so that the client210knows for how long the data object402is accessible. Although the sampling engine406samples a subset of the plurality of segments420for their segment states rather than all segments of which the data object402is composed, the segment states of the subset may be a close approximation of the state of the entire data object402. Additionally, when the client210sends a restore request for the data object402to the storage node106, the restore request typically triggers the restore of all segments of the data object402that are inaccessible. 
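As elaborated in the following paragraphs, a single restore request for a data object may be translated into per-segment restore requests for the segments that are inaccessible. A hypothetical sketch of that fan-out follows; the directory and store interfaces are assumptions, and the state labels reuse the illustrative names from the earlier sampling sketch:

```python
# Hypothetical sketch: fanning one object restore out to its segments.
def restore_object(object_id, directory, stores, restore_days=7):
    """Issue a restore for every inaccessible segment of the object.

    directory.lookup(object_id) is assumed to yield (segment_id, store_id)
    pairs; directory.last_known_state() returns a previously recorded
    segment state, or None if the segment was never sampled."""
    for segment_id, store_id in directory.lookup(object_id):
        state = directory.last_known_state(segment_id)
        if state in ("accessible", "restored"):
            continue  # already readable; no restore needed
        # Assumed call corresponding to a per-segment POST restore request.
        stores[store_id].post_restore(segment_id, days=restore_days)
```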
The restore of the segments may be triggered and thereafter completed around the same time such that if the segment420ais accessible, it is likely that segments420b,420c, and420dare also accessible, whether these three are sampled or not. Accordingly, it may be unnecessary for the sampling engine406to request the segment states for all segments of which the data object402is composed. The client210receives the state information derived from the most restrictive state of the plurality of segment states from the sampling engine406and determines, based on the state information, whether to request a restore operation for the data object402or request the data object402. In an example, if the state information indicates that the state502or the state508is the most restrictive state of a segment of the data object402, the client210may determine that the data object402is accessible. The client210may be unaware that the storage node106has backed up segments of the data object402at a backing store432. In response to receiving the state information specifying the state of the data object402, the client210may send a request for the data object402to the storage node106, which may then request the data object402from the backing store432. The storage node106receives the data object402from the backing store432and sends the data object402to the client210. In another example, if the sampling engine406transmits state information derived from the state506as being the most restrictive state of the data object402, the client210may determine that the data object402is inaccessible. Based on receiving state information indicating that the state506is the state of the data object402, the client210may determine that a restore operation for the data object402is in-progress but has not yet been completed. The client210may wait for a time period, and after the time period elapses, the client210may send another metadata request for the data object402. In another example, if the sampling engine406transmits state information derived from the state504as being the most restrictive state of the data object402, the client210may determine that the data object402is inaccessible. Based on receiving state information indicating that the state504is the state of the data object402, the client210may determine that a restore operation for the data object402has not been performed. If segment420ais accessible and segment420bis inaccessible through the backing store432, the client210may be unaware of such differences between the individual segments because the client210may determine that the most restrictive state indicated in the state information is the state of the entire data object402. Accordingly, the client210may send a request to perform the restore operation for the entire data object402to the storage node106. The storage node106receives the restore request, and the restore engine408processes the restore request. The restore engine408may translate the single restore request for the data object402into a plurality of restore requests for segments of the data object402that are inaccessible. In an example, each restore request of the plurality of restore requests is a request to restore an inaccessible segment of which the data object402is composed. The restore engine408may search for the state of each segment of the plurality of segments420by performing a lookup in the content directory442. In an example, the sampling engine406stores the state of the sampled segments into the content directory442. 
Accordingly, the restore engine408may request that a restore operation be performed for individual segments that are inaccessible based on the returned segment states of the subset of segments selected by the sampling engine406. In another example, the restore engine408restores each segment of the plurality of segments420, without checking whether the respective segment is already accessible. The segmenting engine404may send the restore requests to the backing store432. The backing store432receives the restore requests for the individual segments and restores the appropriate segments. The backing store432may restore the segment by creating a temporary copy of the segment and providing accessibility to the copy for a restore time period. In an example, the restore period may be configured by the administrator of the storage node106. In another example, the client210specifies the restore period in the request for the restore operation. A segment that has been restored and is accessible is available through the external service cloud212. Although the segmenting engine404has been described as sending the restore requests to the same backing store432, it should be understood that the segmenting engine404may send restore requests to different backing stores if segments of the data object402are stored in different backing stores. In some examples, after sending the restore request to the storage node106, the client210may wait for a time period. After the time period elapses, the client210may send another metadata request to the storage node106. If the storage node106sends a message indicating that the data object402is accessible in response to the metadata request, the client210may send a request for the data object402to the storage node106. In some examples, the backing store432may send the restore engine408confirmation of each segment that has been successfully restored. If the restore engine408receives confirmation that each segment of the data object402has been restored, the restore engine408may send a message to the client210, the message indicating that the data object402has been successfully restored. In response to receiving the message, the client210may send a request for the data object402to the storage node106. If the client210sends a request for the data object402and not all segments of the plurality of segments420are accessible, the storage node106may send the client210a message indicating that the data object402is not accessible. In response to the message, the client210may send a request to restore the data object402. In an example, segment420aof the data object402may be stored at the storage node106, and one or more other segments of the data object402may be stored at the backing store432. In this example, the segment420ahas not been backed up and remains accessible through the storage node106. It may be unnecessary for the sampling engine406to sample the state of the segment420abecause the sampling engine406is already aware that the segment420ais accessible through the storage node106. If a segment remains accessible through the storage node106, this may be considered a fifth state having the same restrictiveness level as the state502inFIG.5. FIG.6is a flow diagram of a method600of analyzing a state of a data object according to aspects of the present disclosure. Steps of the method600can be executed by a computing device (e.g., a processor, processing circuit, and/or other suitable component). 
For example, a storage node, such as the storage node106, may utilize one or more components, such as the segmenting engine404, the sampling engine406, backup engine410, and/or restore engine408, to execute the steps of method600. As illustrated, the method600includes a number of enumerated steps, but embodiments of the method600may include additional steps before, after, and in between the enumerated steps. In some embodiments, one or more of the enumerated steps may be omitted or performed in a different order. At step602, the method600includes receiving, at a storage device, a metadata request for the data object from a client, the data object being composed of a plurality of segments. In an example, the storage node106partitions the data object402and stores the data object as individual segments. The metadata request may be a request for a state of the data object402, the state indicating whether the data object402is accessible or inaccessible. Rather than request the entire data object402as-a-whole, the client210may send the metadata request to save time and costs, in the case where the data object402is inaccessible. For example, if the client210sends a request for the entire data object402, retrieval of the data object402includes retrieving the individual segments of the data object402. At step604, the method600includes selecting a subset of the plurality of segments. The sampling engine406may select the subset using a variety of techniques. For example, the sampling engine406may randomly select the subset such that the selection of one segment is independent of the selection of another segment. At step606, the method600includes obtaining a segment state for each segment of the subset, each segment state indicating whether the respective segment is accessible via a backing store. In an example, a segment stored in the backing store may be in the segment state502,504,506, or508. Although the sampling engine406samples a subset of the plurality of segments420for their segment states rather than all segments of which the data object402is composed, the segment states of the subset may be a close approximation of the state of the entire data object402. At step608, the method600includes determining a most restrictive state of the one or more segment states. As an example, a first state may be more restrictive than a second state if more processing cycles are used for returning a segment that is in the first state compared to the second state. At step610, the method600includes sending state information to the client in response to the metadata request, the state information being derived from the most restrictive state. The client may receive the state information and return a response based on the most restrictive state indicated in the state information.
Fan-In
The cloud tier proxy206may consolidate data from multiple distributed storage system instances running in the field into a single-centralized distributed storage system instance. For example, multiple distributed storage system instances may run in small data centers in a deployment of the client210, and the client210may tier objects to a single-centralized distributed storage system instance running in a large data center, which may be referred to as the fanned-in grid. Each individual small data center may set up its own ILM rules and policies. A rule or policy may include compression, encryption, and tiering rules or policies. 
For example, the tiering policy may specify that an object is to be tiered to a common location at which a larger grid is managed, when one or more conditions have been satisfied. The larger grid may refer to the fanned-in grid. Data sovereignty may be maintained in the sense that even though the data is fanned into the single-centralized distributed storage system instance from multiple distributed storage system instances, the data is still separated. For example, the data from a given smaller distributed storage system instance may have been compressed and/or encrypted. When fanned in to the fanned-in grid, that compressed, encrypted data remains compressed and encrypted there, so that its integrity is maintained. A small distributed storage system instance of the multiple distributed storage system instances may have connectivity to the fanned-in grid and may replicate content in the grid. Accordingly, if the small distributed storage system instance loses connectivity with the fanned-in grid, the client210may still be able to retrieve and manage its content. When the small distributed storage system instance establishes connectivity to the fanned-in grid, the small instance may modify or delete data and work in a federated manner so that individual smaller sites can operate separately, but also use the centralized grid to store data that they access frequently or desire to store with much higher efficiency. In an example, the client210may have many different small grids tiering data to the first and/or second archive stores and may manage its own data.
Binary Format of Tiered Data
Object data in the storage pool208may be packetized and stored on disk with packet checksums and metadata. During packetization, the data is compressed (if enabled) and then encrypted (if enabled), and the packetized data is tiered to the external service cloud212. The compression and encryption of packetized data are carried forward when the data is transitioned from the grid to the external service cloud212; that is, the packetized data retains its compression and encryption. In an example, if an object that is 1 gigabyte (GB) is compressed and stored as a 100 megabyte (MB) representation of the object, then the compressed object that is 100 MB may be moved to the external service cloud212. Additionally, encryption is typically performed when the object is ingested into the grid and not by the cloud service provider or on-the-fly when being transmitted. Such compression and encryption may be advantageous if the object is being stored in a multi-tenant deployment to safeguard the data. Additionally, the packetized data may also contain object metadata that can be used to identify the object by a recovery application. The present embodiments can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment containing both hardware and software elements. Accordingly, it is understood that any operation of the computing systems of computing architecture100may be implemented by the respective computing system using corresponding instructions stored on or in a non-transitory computer readable medium accessible by the processing system. For the purposes of this description, a tangible computer-usable or computer-readable medium can be any apparatus that can store the program for use by or in connection with the instruction execution system, apparatus, or device. 
The medium may include non-volatile memory including magnetic storage, solid-state storage, optical storage, cache memory, and Random Access Memory (RAM). Thus, the present disclosure provides a system, method, and machine-readable storage medium for analyzing a state of a data object in a distributed storage system. In some embodiments, the method includes receiving a request for the data object stored in an external service cloud. The data object is composed of a plurality of segments. The method also includes sampling a subset of the plurality of segments and determining, based on sampling the subset, the state of the data object. The method further includes, in response to the state being a first state, determining that the data object has been tiered to an archive store and is inaccessible from the archive store. In yet further embodiments, the non-transitory machine-readable medium has instructions for performing the method of analyzing a state of a data object, including machine executable code, which when executed by at least one machine, causes the machine to: receive a request for the data object stored in an external service cloud, the data object being composed of a plurality of segments; sample a subset of the plurality of segments; determine, based on sampling the subset, the state of the data object; and in response to the state being a first state, determine that the data object has been tiered to an archive store and is inaccessible from the archive store. In yet further embodiments, the computing device includes a memory containing a machine-readable medium comprising machine executable code having stored thereon instructions for performing a method of analyzing a state of a data object and a processor coupled to the memory. The processor is configured to execute the machine executable code to: receive a request for the data object stored in an external service cloud, the data object being composed of a plurality of segments; sample a subset of the plurality of segments; determine, based on sampling the subset, the state of the data object; and in response to the state being a first state, determine that the data object has been tiered to an archive store and is inaccessible from the archive store. In some embodiments, the method includes receiving, at a storage device, a metadata request for the data object from a client, the data object being composed of a plurality of segments; selecting a subset of the plurality of segments; obtaining a segment state for each segment of the subset, each segment state indicating whether the respective segment is accessible via a backing store; determining a most restrictive state of the one or more segment states; and sending state information to the client in response to the metadata request, the state information being derived from the most restrictive state.
In yet further embodiments, the non-transitory machine-readable medium has instructions for performing the method of analyzing a state of a data object, including machine executable code, which when executed by at least one machine, causes the machine to: receive, at a storage device, a metadata request for a data object from a client, the data object being composed of a plurality of segments; select a subset of the plurality of segments; obtain a segment state for each segment of the subset, each segment state indicating a storage class of the respective segment, a first storage class indicating that the respective segment is accessible, and a second storage class indicating that the respective segment is inaccessible; and send a first message indicating that the data object is inaccessible via a backing store based on at least one segment of the subset being of the second storage class. In yet further embodiments, the computing device includes a memory containing a machine-readable medium comprising machine executable code having stored thereon instructions for performing a method of analyzing a state of a data object and a processor coupled to the memory. The processor is configured to execute the machine executable code to: store, at a storage node, a data object, the data object being composed of a plurality of segments; migrate, at the storage device, the plurality of segments to one or more backing stores; receive, at the storage device, a metadata request for the data object from a client; obtain a segment state for a subset of the plurality of segments, each segment state of the subset indicating whether the respective segment is accessible via the one or more backing stores; and send state information derived from a most restrictive state of the one or more segment states to the client in response to the metadata request. The foregoing outlines features of several embodiments so that those skilled in the art may better understand the aspects of the present disclosure. Those skilled in the art should appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of the embodiments introduced herein. Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the present disclosure, and that they may make various changes, substitutions, and alterations herein without departing from the spirit and scope of the present disclosure.
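Before leaving this topic, a small illustration may help. The following Python sketch shows the sampled-state approach of method 600 end to end: sample a subset of segments, look up each segment state, and report the most restrictive one. It is a minimal sketch under stated assumptions, not an implementation from this disclosure: the four-state model, the ordering in STATE_ORDER, and all names are invented to mirror the segment states 502-508 and steps 602-610 described above.

```python
import random

# Hypothetical segment states, ordered from least to most restrictive
# (patterned on the states 502-508 above; the real ordering is system-defined).
STATE_ORDER = ["resident", "tiered_accessible", "archive_restoring", "archive_inaccessible"]
RESTRICTIVENESS = {state: rank for rank, state in enumerate(STATE_ORDER)}

def handle_metadata_request(segment_ids, get_segment_state, sample_size=8):
    """Approximate an object's state by sampling a subset of its segments.

    segment_ids: identifiers of all segments composing the object (step 602).
    get_segment_state: callable mapping a segment id to one of STATE_ORDER.
    """
    ids = list(segment_ids)

    # Step 604: randomly select a subset of the segments.
    subset = random.sample(ids, min(sample_size, len(ids)))

    # Step 606: obtain a segment state for each sampled segment.
    states = [get_segment_state(seg) for seg in subset]

    # Steps 608-610: report the most restrictive sampled state.
    most_restrictive = max(states, key=RESTRICTIVENESS.__getitem__)
    return {"object_state": most_restrictive,
            "accessible": most_restrictive != "archive_inaccessible"}
```

A client receiving the returned state can then decide whether a full object retrieval is worthwhile, without the storage node touching every segment of the object.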
DETAILED DESCRIPTION Various embodiments and aspects disclosed herein will be described with reference to details discussed below, and the accompanying drawings will illustrate the various embodiments. The following description and drawings are illustrative of the embodiments disclosed herein and are not to be construed as limiting the embodiments disclosed herein. Numerous specific details are described to provide a thorough understanding of various embodiments disclosed herein. However, in certain instances, well-known or conventional details are not described in order to provide a concise discussion of embodiments disclosed herein. Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in conjunction with the embodiment can be included in at least one embodiment disclosed herein. The appearances of the phrase "in one embodiment" in various places in the specification do not necessarily all refer to the same embodiment. References to an "operable connection" or "operably connected" mean that a particular device is able to communicate with one or more other devices. The devices themselves may be directly connected to one another or may be indirectly connected to one another through any number of intermediary devices, such as in a network topology. In general, embodiments disclosed herein relate to methods and systems for managing storage of data in a distributed system. To manage storage of data in a distributed system, a data processing system may include a network interface controller (NIC). The network interface controller may present emulated storage devices that may be used for data storage. The emulated storage devices may utilize storage resources of storage devices. To improve the quantity of data that may be stored in the storage devices, the NIC and the storage devices may implement a distributed deduplication process. The NIC may segment data into chunks and obtain fingerprints of the chunks. The fingerprints may be provided to the storage, which may check the fingerprints against fingerprints of already stored chunks. The storage may request the chunks corresponding to the fingerprints that did not match any fingerprints of the already stored chunks. The NIC may provide only those requested chunks to the storage before discarding all of the chunks. By doing so, the quantity of data transmitted between the NIC and storage may be reduced. Consequently, in a scenario in which the NIC is connected to the storage via a network, network bandwidth may be conserved. In an embodiment, a computer-implemented method for managing data storage in a distributed system is provided. The method may include obtaining, by a Network Interface Controller (NIC) of a data processing system, data for storage; segmenting, by the NIC, the data into chunks; obtaining, by the NIC, fingerprints for the chunks; providing, by the NIC, batches of the fingerprints to a storage; providing, by the NIC and to the storage for storage, a first portion of the chunks corresponding to a first portion of the fingerprints that are new; and discarding, by the NIC, the chunks without providing to the storage a second portion of the chunks corresponding to a second portion of the fingerprints that are not new. The storage may be operably connected to the NIC via a network, and the data is obtained from compute resources of the data processing system via a bus of the data processing system.
The computer-implemented method may also include presenting an emulated storage device to the compute resources; the data may be obtained via communication over the bus, the communication being directed to the emulated storage device. The compute resources may believe that the data is stored in a storage directly connected to the bus. The computer-implemented method may further include determining communication characteristics (e.g., latency, available bandwidth, maximum transmission unit size, etc.) of a connection between the NIC and the storage via the network; and identifying a batch size based on the communication characteristics. Providing the batches may include obtaining a batch of the batches based on the identified batch size. Obtaining the batch may include adding a portion of the fingerprints to the batch so that the batch has a size substantially similar to the identified batch size. Obtaining the fingerprints may include obtaining hashes for the chunks, the hashes being used as the fingerprints, and a hash function used to obtain the hashes being substantially collision free. Obtaining the hashes for the chunks may include sending the chunks to a communication security processor of the NIC; and receiving the hashes from the communication security processor. Providing the first portion of the chunks may include obtaining, from the storage, indications that the first portion of the fingerprints are new; and using the indications to select the first portion of the chunks. The indications may be received from the storage in a response batch that is responsive to a batch of the batches of the fingerprints provided to the storage. A non-transitory media may include instructions that when executed by a processor cause the computer-implemented method to be performed. A data processing system may include the non-transitory media and a processor (e.g., of a NIC), and may perform the computer-implemented method when the computer instructions are executed by the processor. Turning to FIG. 1, a block diagram illustrating a system in accordance with an embodiment is shown. The system shown in FIG. 1 may facilitate performance of workloads (e.g., computer-implemented workloads performed by executing computing instructions with at least one processor of one or more data processing systems). The system may include data processing system 100. To perform the workloads, data processing system 100 may provide computer implemented services to users and/or other computing devices operably connected to data processing system 100. The computer implemented services may include any type and quantity of services including, for example, database services, instant messaging services, video conferencing services, etc. Different systems may provide similar and/or different computer implemented services. To provide the computer implemented services, data processing system 100 may include various hardware resources such as compute resources 102, local storage 104, network interface controller (NIC) 110, and bus 106. Compute resources 102 may include hardware devices such as processors, memory modules, etc. Local storage 104 may include storage devices such as hard disk drives, solid state drives, storage controllers, etc. NIC 110 may facilitate communication with other remote devices. For example, NIC 110 may facilitate communication with network storage 130. Any of the components of data processing system 100 may be operably connected to one another and/or other components (not shown) via bus 106.
When providing the computer implemented services, data may be stored for future use in local storage 104 and/or remote storage devices such as network storage 130 (and/or other remote storages). To facilitate use of network storage 130 (and/or other remote or local storages), NIC 110 may present an emulated storage (e.g., by presenting an emulated storage endpoint) to compute resources 102 via bus 106. Consequently, compute resources 102 may direct access requests (e.g., storage, read, delete) for the emulated storage to NIC 110 via bus 106. To implement the emulated storage, NIC 110 may use the storage resources of network storage 130 (and/or other remote or local storage devices operably connected to it). For example, network storage 130 may also include a NIC (not shown) that may include functionality to secure, format, and use storage resources local to network storage 130. When an access request for the emulated storage is received by NIC 110, NIC 110 may use translation tables, lookup tables, and/or various procedures for servicing the access request via network storage 130. However, from the perspective of compute resources 102, the emulated storage may appear to be a bare metal device operably connected to compute resources 102 via bus 106. Compute resources 102 may be unaware of network storage 130 and/or the processes performed by NIC 110 to service access requests. Due to the distributed nature of the system illustrated in FIG. 1, storing data in the emulated storage device may consume network bandwidth if the to-be-stored data is transmitted by NIC 110 to network storage 130 via communication system 120. Further, the condition of the operable connection between NIC 110 and network storage 130 may impact the quality of the storage services provided by the emulated storage device. For example, network churn or other issues with communication system 120 may introduce latency, periods where communication between NIC 110 and network storage 130 is not possible, etc. In general, embodiments disclosed herein relate to systems, methods, and devices for managing data storage in a distributed system. To manage data storage in a data processing system, NIC 110 may perform deduplication for data to be stored in an emulated storage that it presents to other devices. As used herein, deduplication may refer to a process of avoiding storage of redundant data while ensuring that copies of unique data are stored. Various data structures may be reconstructed using the stored unique data. For example, to deduplicate data, the system of FIG. 1 may (i) segment the data into chunks, (ii) obtain fingerprints for the chunks, (iii) compare the fingerprints for the chunks to fingerprints of other chunks that are already stored, (iv) store the portions of the chunks corresponding to a portion of the fingerprints that do not match the fingerprints of the other chunks that are already stored, (v) discard the chunks without storing a second portion of the chunks corresponding to a second portion of the fingerprints that do match the fingerprints of the other chunks that are already stored, and/or (vi) update counts and/or recipes usable to determine when a stored chunk may be deleted without negatively impacting data reconstruction using stored chunks, and to reconstruct data using the stored chunks, respectively.
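As a concrete illustration of steps (i) and (ii) of this process, the following Python sketch segments a byte buffer into fixed-size chunks and fingerprints each chunk. The fixed chunk size and the use of SHA-256 are assumptions for the example; the disclosure leaves the segmentation technique and hash function open.

```python
import hashlib

CHUNK_SIZE = 4096  # assumed block size; the disclosure permits any segmentation

def segment(data: bytes, chunk_size: int = CHUNK_SIZE) -> list[bytes]:
    """Step (i): segment data into chunks (here, fixed-size blocks)."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def fingerprint(chunk: bytes) -> str:
    """Step (ii): obtain a fingerprint for a chunk.

    SHA-256 stands in for the substantially collision-free hash that the
    NIC's security processor would compute in hardware.
    """
    return hashlib.sha256(chunk).hexdigest()

# Example: two identical blocks yield one unique fingerprint.
data = b"A" * CHUNK_SIZE + b"B" * CHUNK_SIZE + b"A" * CHUNK_SIZE
chunks = segment(data)
prints = [fingerprint(c) for c in chunks]
assert len(set(prints)) == 2  # the duplicate "A" block deduplicates
```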
To provide for deduplication, the responsibilities for performing these processes may be divided across the components of FIG. 1. In an embodiment, NIC 110 is responsible for obtaining chunks and fingerprints for the chunks, while network storage 130 may be responsible for identifying whether any of the fingerprints are new and storing the chunks corresponding to the identified new fingerprints. By doing so, NIC 110 may only need to send the fingerprints for the chunks, and the chunks associated with new fingerprints, to network storage 130. In such a scenario, network storage 130 may store fingerprints and/or metadata (e.g., reference counts). To identify whether a chunk may need to be stored, network storage 130 may receive a corresponding fingerprint, compare it to fingerprints of stored chunks, and request the chunk if the fingerprint does not match the fingerprints of stored chunks. Consequently, the communication bandwidth used for data storage may be reduced when compared to relying on network storage 130 to perform all of the deduplication process, including fingerprint generation. In an embodiment, NIC 110 is responsible for obtaining chunks and fingerprints for the chunks, and for identifying new fingerprints, while network storage 130 may be responsible for storing the chunks corresponding to the new fingerprints. By doing so, NIC 110 may only need to send the chunks associated with the new fingerprints to network storage 130. In such a scenario, NIC 110 may store fingerprints and/or metadata (e.g., reference counts). Network storage 130 may also maintain a copy of the metadata (e.g., reference counts) to identify when stored chunks are no longer needed (e.g., for reconstruction). Consequently, the communication bandwidth used for data storage may be reduced when compared to relying on network storage 130 to perform all of the deduplication process, including fingerprint generation or new fingerprint identification. By doing so, embodiments disclosed herein may facilitate deduplicated storage of data with reduced use of communication bandwidth (e.g., when compared to scenarios in which only remote entities are responsible for deduplication). NIC 110 may be implemented with hardware devices and/or software components hosted by the hardware devices. In an embodiment, NIC 110 is implemented using a hardware device including circuitry. The hardware device may be, for example, a digital signal processor, a field programmable gate array, a system on a chip, or an application specific integrated circuit. The circuitry may be adapted to cause the hardware device to perform the functionality of NIC 110. NIC 110 may be implemented using other types of hardware devices without departing from embodiments disclosed herein. In one embodiment, NIC 110 is implemented using a processor adapted to execute computing code stored on a persistent storage that, when executed by the processor, performs the functionality of NIC 110 discussed throughout this application. The processor may be a hardware processor including circuitry such as, for example, a central processing unit, a processing core, a part of a system on a chip or other type of special purpose hardware device, or a microcontroller. The processor may be other types of hardware devices for processing information without departing from embodiments disclosed herein. Generally, NIC 110 may include functionality to process network data units such as packets. Packets may be exchanged with communication system 120, and devices operably connected to communication system 120 such as network storage 130.
In the context of storage, when NIC 110 obtains access requests that will result in access requests being directed to network storage 130, NIC 110 may perform part of the data deduplication process and cooperate with the remote storage. The communications between NIC 110 and network storage 130 may be encapsulated to obtain packets, and directed between network storage 130 and NIC 110 via communication system 120. Network storage 130 may include similar functionality to provide for transmission of access requests, fingerprints, chunks, etc. Bus 106 may be implemented with one or more communication buses. The communication buses may support various communications standards. In an embodiment, bus 106 comprises a Peripheral Component Interconnect Express (PCIe) bus which connects compute resources 102 to NIC 110. NIC 110 may comply with the Non-Volatile Memory Express (NVMe) specification and support NVMe communications. NIC 110 may also support NVMe over Fabrics (NVMe-oF) communications (or other communication standards) and may communicate with network storage 130 and/or other local storage devices using NVMe-oF communications. To support NVMe communications, NIC 110 may include functionality to present endpoints (e.g., to other devices), establish initiators to facilitate communications between endpoints and the initiators, and/or implement other methods for communicating via bus 106, communication system 120, and/or other communication facilities not illustrated in FIG. 1. Refer to FIG. 2 for additional details regarding NIC 110. Network storage 130 may be implemented using, for example, a network attached storage system. The network attached storage system may include functionality to perform a part of the deduplication process to facilitate storage of deduplicated data. As part of that process, network storage 130 may store metadata such as reference counts (e.g., the number of times a fingerprint for a chunk has been encountered) and/or recipes (e.g., stored chunk identifiers and instructions for combining the chunks to obtain previously stored data) for reconstructing data using stored chunks. In an embodiment, communication system 120 includes one or more networks that facilitate communication between any number of components. The networks may include wired networks and/or wireless networks (e.g., and/or the Internet). The networks may operate in accordance with any number and types of communication protocols (e.g., such as the internet protocol). Communication system 120 may include packetized communication. To convey information via communication system 120, data structures (e.g., payloads) may be encapsulated (e.g., packetized) with control information compliant with the communication schemes supported by communication system 120. For example, communication system 120 may include the Internet and support internet protocol communications. Any of data processing system 100, NIC 110, and network storage 130 may be implemented with a computing device such as a host or server, a personal computer (e.g., desktop, laptop, or tablet), a "thin" client, a personal digital assistant (PDA), a Web enabled appliance, an embedded computing device such as a system on a chip, a mobile phone (e.g., Smartphone), and/or any other type of computing device or system. For additional details regarding computing devices, refer to FIG. 4. While illustrated in FIG. 1 as including a limited number of specific components, a system in accordance with an embodiment may include fewer, additional, and/or different components than those illustrated therein.
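The reference counts and recipes described above can be pictured with a small storage-side sketch in Python. It is illustrative only: the ChunkStore class, its fields, and the recipe format are invented for the example and are not defined by this disclosure.

```python
class ChunkStore:
    """Toy storage-side state: chunks keyed by fingerprint, plus metadata."""

    def __init__(self):
        self.chunks = {}     # fingerprint -> chunk bytes
        self.refcounts = {}  # fingerprint -> times the fingerprint was encountered
        self.recipes = {}    # object id -> ordered fingerprints for reconstruction

    def ingest(self, object_id, fingerprints, fetch_chunk):
        """Record an object, fetching only chunks whose fingerprints are new."""
        for fp in fingerprints:
            if fp not in self.chunks:
                self.chunks[fp] = fetch_chunk(fp)  # ask the NIC for this chunk
            self.refcounts[fp] = self.refcounts.get(fp, 0) + 1
        self.recipes[object_id] = list(fingerprints)

    def reconstruct(self, object_id) -> bytes:
        """Rebuild an object from stored chunks using its recipe."""
        return b"".join(self.chunks[fp] for fp in self.recipes[object_id])

    def delete(self, object_id):
        """Drop an object; free chunks whose reference count reaches zero."""
        for fp in self.recipes.pop(object_id):
            self.refcounts[fp] -= 1
            if self.refcounts[fp] == 0:
                del self.refcounts[fp]
                del self.chunks[fp]
```

In the division of responsibilities described earlier, ingest corresponds to the storage receiving a fingerprint batch and requesting only the unseen chunks back from the NIC, while delete shows how reference counts determine when a stored chunk can be safely freed.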
As discussed above, NIC 110 may facilitate deduplicated data storage in storage devices separate from data processing system 100. Turning to FIG. 2, a diagram of NIC 110 in accordance with an embodiment is shown. As discussed above, data processing system 100 may utilize NIC 110 for storage purposes. To do so, NIC 110 may present an emulated storage device to data processing system 100. Data processing system 100 may send communications, compliant with the emulated storage device, to NIC 110 over bus 106 to have access requests for the emulated storage device be serviced. To communicate with NIC 110, data processing system 100 may host connection manager 144. Connection manager 144 may generate access requests based on requests from applications 142 (and/or other entities hosted by data processing system 100), encapsulate them as necessary to comply with the communication scheme supported by bus 106, and transmit the encapsulated access requests to an emulated storage device endpoint presented by NIC 110. In the context of data storage, the access request may include the data to be stored in the emulated storage. Connection manager 144 may, in the context of NVMe communications, be implemented with an NVMe initiator. The NVMe initiator may be implemented with a driver or other piece of software for sending communications via bus 106. Applications 142 and connection manager 144 may execute via compute resources 102. While not shown in FIG. 2, data processing system 100 may host an operating system (e.g., which may include connection manager 144) that mediates presentation of storage to applications 142. To present the emulated storage device endpoint to compute resources 102, NIC 110 may host connection manager 112. Connection manager 112 may present emulated storage device endpoints to other devices, such as data processing system 100. Connection manager 112 may present any number of such emulated endpoints. By doing so, NIC 110 may present any number of emulated storage devices to the compute resources of data processing system 100. When communications are received by connection manager 112, the connection manager may identify a target emulated storage device and initiate processing of the access requests based on the target. For example, connection manager 112 may pass the access requests to front end deduplication service 114, which may perform a portion of the deduplication process on the data (as noted above, the storage device may also perform some of the deduplication process). The processing may result in the generation of (i) chunks and (ii) fingerprints. In an embodiment, front end deduplication service 114 leverages security processor 118 to obtain the fingerprints. Security processor 118 may be an onboard processor adapted for communication security and may include functionality to perform various hashes on data structures. For example, security processor 118 may be implemented with a special purpose circuit, companion chip, special purpose processing core, or other piece of hardware that may execute hash functions efficiently. Generally, security processor 118 may be used for communication security, which may also utilize hashes or other types of one-way functions useful in cryptographic systems. The hash generation functionality of security processor 118 may be leveraged to efficiently generate hashes of chunks to obtain fingerprints for the chunks.
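One way to picture the hand-off between front end deduplication service 114 and security processor 118 is as a pluggable hashing backend: the service submits chunks and receives fingerprints, regardless of whether a hardware engine or a software fallback produced them. The interface below is an assumption for illustration; the disclosure does not define an API for the security processor.

```python
import hashlib
from typing import Callable, Iterable

# The security processor is modeled as any callable from chunk bytes to a
# fingerprint. A hardware engine would replace this software fallback.
Hasher = Callable[[bytes], bytes]

def software_hasher(chunk: bytes) -> bytes:
    """Software stand-in for the security processor's hash engine."""
    return hashlib.sha256(chunk).digest()

def front_end_fingerprints(chunks: Iterable[bytes], hasher: Hasher) -> list[bytes]:
    """Front end service: send each chunk to the hasher, collect fingerprints.

    Mirrors the described round trip: chunks go to the security processor,
    and hashes come back to be used as fingerprints.
    """
    return [hasher(chunk) for chunk in chunks]

fps = front_end_fingerprints([b"chunk-a", b"chunk-b"], software_hasher)
```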
The chunks (all or a portion) and/or fingerprints may be provided to connection manager 116, which may encapsulate and send the encapsulated fingerprints/chunks to storage devices such as network storage 130, local storage 150, or other storage devices not illustrated herein. The manner of encapsulation of the processed access requests may correspond to the communication medium over which the encapsulated fingerprints/chunks are transmitted. For example, if local storage 150 is operably connected via a PCIe link, then connection manager 116 may encapsulate according to the PCIe standard. Likewise, access requests directed to network storage 130 may be encapsulated for internet protocol based communications. In an embodiment, fingerprints are sent in batches to network storage 130 (or other storages). For example, connection manager 116 may monitor the connectivity to network storage 130, aggregate fingerprints until a sufficient quantity is obtained, and send the aggregated fingerprints in a batch to network storage 130. Similarly, network storage 130 may send requests for chunks in batches (e.g., responsive to or associated with corresponding batches of fingerprints) as well. Connection manager 116 may be implemented with, for example, a PCIe initiator, an NVMe-oF initiator, and/or initiators compliant with other communication protocols to facilitate communications between NIC 110 and storage devices. Either of connection manager 112 and connection manager 116 may be implemented with, for example, a driver or other type of application. When deciding where to direct access requests and how to process them, connection manager 116 may utilize lookup tables or other types of data structures that may relate the emulated storage device to which an access request is directed to how the access request is to be processed and where the corresponding fingerprints/chunks are to be sent. The data included in the lookup tables may be set by an administrator, may be set by a control plane that may manage NIC 110, and may be dynamically updated over time to provide different qualities of storage service. In an embodiment, bus 106 is implemented as a PCIe bus. In such a scenario, the functionality of connection manager 112 may be implemented using a PCIe chipset hosted by NIC 110. The chipset may support both physical and virtual functions. The virtual functions may be used to manage presentation of any number of emulated storage devices. The physical and virtual functions may handle protocol specific requirements such as error handling, doorbells, interrupts, and/or other aspects of sending and receiving information via a physical bus. In an embodiment, any of connection manager 112, front end deduplication service 114, connection manager 116, and security processor 118 is implemented using a hardware device including circuitry. The hardware device may be, for example, a digital signal processor, a field programmable gate array, a system on a chip, or an application specific integrated circuit. The circuitry may be adapted to cause the hardware device to perform the functionality of connection manager 112, front end deduplication service 114, connection manager 116, and/or security processor 118. Connection manager 112, front end deduplication service 114, connection manager 116, and/or security processor 118 may be implemented using other types of hardware devices without departing from embodiments disclosed herein.
In one embodiment, any of connection manager 112, front end deduplication service 114, connection manager 116, and security processor 118 is implemented using a processor adapted to execute computing code stored on a persistent storage that, when executed by the processor, performs the functionality of connection manager 112, front end deduplication service 114, connection manager 116, and/or security processor 118 discussed throughout this application. The processor may be a hardware processor including circuitry such as, for example, a central processing unit, a processing core, or a microcontroller. The processor may be other types of hardware devices for processing information without departing from embodiments disclosed herein. Connection manager 112, front end deduplication service 114, connection manager 116, and/or security processor 118 may perform all, or a portion, of the methods illustrated in FIGS. 3A-3B. While illustrated in FIG. 2 with a limited number of specific components, a NIC may include additional, fewer, and/or different components without departing from embodiments disclosed herein. As discussed above, the components of FIG. 1 may perform various methods to store data. FIGS. 3A-3B illustrate examples of methods that may be performed by the components of FIG. 1 when providing their functionalities. In the diagrams discussed below and shown in FIGS. 3A-3B, any of the operations may be repeated, performed in different orders, performed in parallel with other operations, and/or performed in a manner partially overlapping in time with other operations. Turning to FIG. 3A, a flow diagram illustrating a method of storing data in accordance with an embodiment is shown. The method may be performed by a NIC, a data processing system, a storage, and/or other components. At operation 300, data for storage is obtained. The data for storage may be obtained from compute resources. The data may be part of an access request directed to an emulated storage device presented by a NIC to the compute resources. At operation 302, the data is segmented into chunks. The data may be segmented using any segmentation technique. In an embodiment, the data is segmented based on blocks included in the data. The blocks may be used as the chunks. Any number of chunks may be obtained. The data may be duplicative, in part or entirely, of data stored in a storage used by the NIC to provide the functionality of the emulated storage. At operation 304, fingerprints for the chunks are obtained. The fingerprints may be obtained by generating hashes of the chunks. The hashes may be generated using a security processor, or may be generated using a general purpose processor and a corresponding algorithm. The hash may be substantially collision free so that it is unlikely that fingerprints generated for two chunks that have different bit sequences result in the same hash value. In an embodiment, the hash is collision free. For example, a perfect hash function may be used to generate the hashes. At operation 306, the fingerprints are provided to a storage in batches. To provide the fingerprints to the storage, fingerprints may be aggregated until the size and/or number of aggregated fingerprints reaches a threshold. The threshold may be set or may vary depending on the quality of the connection (e.g., via a network) between the NIC and the storage utilized by the NIC. For example, characteristics of the connection such as latency and available bandwidth may be evaluated.
As the connection characteristics improve (e.g., lower latency, higher bandwidth), the threshold may be reduced. In an embodiment, fingerprints are aggregated until the aggregated fingerprints are of a size that is similar to (but smaller than) the maximum network data unit size supported by a network used to transmit the fingerprints to the storage. By doing so, network data units (e.g., packets) that only use a fraction of the supported payload size may be reduced, thereby improving network communication efficiency. At operation 308, a first portion of the chunks corresponding to a first portion of the fingerprints that are new is provided to the storage. In an embodiment, the first portion of the chunks is provided by receiving, from the storage, a listing indicating that the first portion of the fingerprints are new. In other words, the storage may determine whether any of the fingerprints of a batch are new (e.g., no chunk stored in the storage has the same fingerprint). The first portion of the chunks may be provided by sending them to the storage. In an embodiment, the first portion of the chunks is provided by determining which fingerprints from the batch are new using a fingerprint cache hosted by the NIC and/or the host data processing system. For example, the NIC or data processing system may maintain the fingerprint cache rather than the storage. At operation 310, the chunks are discarded without providing, to the storage, a second portion of the chunks that corresponds to a second portion of the fingerprints that are not new. For example, after providing the chunks in operation 308, the NIC may discard all of the chunks (e.g., immediately, after a period of time, or after a request for some of the chunks, responsive to a provided batch of fingerprints, is received by the NIC). The chunks may be discarded by deleting them, deallocating memory associated with them, etc. The method may end following operation 310. Using the method illustrated in FIG. 3A, a system in accordance with an embodiment may facilitate deduplicated storage of data without imposing the load for generating fingerprints on a storage or the communication load for transmitting duplicative chunks to the storage. Turning to FIG. 3B, a flow diagram illustrating a method of storing data in accordance with an embodiment is shown. The method may be performed by a NIC, a data processing system, a storage, and/or other components. At operation 320, a batch of fingerprints is obtained from a NIC. The batch of fingerprints may include fingerprints that may be duplicative of fingerprints of chunks stored in the storage. At operation 322, requests for chunks corresponding to the fingerprints of the batch of fingerprints that are new are obtained. The requests may be obtained by comparing the fingerprints from the batch to a fingerprint cache in which fingerprints of stored chunks are stored. The comparison may indicate which of the fingerprints of the batch are new and which of the fingerprints of the batch are not new (e.g., duplicative of fingerprints in the cache). The requests may be obtained by populating requests with identifiers of the fingerprints of the batch that are new. At operation 324, the requests for the chunks are provided to the NIC in batches. For example, the requests may be provided in a single batch that is responsive to the batch of fingerprints. At operation 326, the corresponding chunks are obtained from the NIC.
The NIC may provide the corresponding chunks based on the identifiers included in the requests of operation 324. At operation 328, the obtained corresponding chunks are stored. The chunks may be stored via any process and in any format (e.g., in containerized storages, bulk storages, structured with a file system, etc.). The fingerprints for each of the corresponding chunks may also be stored in the fingerprint cache. Reference counts for each of the corresponding chunks may also be established (e.g., set to one, to indicate that only one piece of stored data relies on the corresponding chunks for reconstruction). At operation 330, reference counts for each of the fingerprints that are not new are updated. The reference counts may be updated by incrementing them to indicate that an additional data structure which relies on the chunk corresponding to the fingerprints that are not new has been stored in the storage. The method may end following operation 330. Using the method illustrated in FIG. 3B, storages may store deduplicated data without needing to generate fingerprints or receive copies of duplicative chunks of data, thereby reducing network traffic. Any of the components illustrated in FIGS. 1-2 may be implemented with one or more computing devices. Turning to FIG. 4, a block diagram illustrating an example of a data processing system (e.g., a computing device) in accordance with an embodiment is shown. For example, system 400 may represent any of the data processing systems described above performing any of the processes or methods described above. System 400 can include many different components. These components can be implemented as integrated circuits (ICs), portions thereof, discrete electronic devices, or other modules adapted to a circuit board such as a motherboard or add-in card of the computer system, or as components otherwise incorporated within a chassis of the computer system. Note also that system 400 is intended to show a high level view of many components of the computer system. However, it is to be understood that additional components may be present in certain implementations and, furthermore, different arrangements of the components shown may occur in other implementations. System 400 may represent a desktop, a laptop, a tablet, a server, a mobile phone, a media player, a personal digital assistant (PDA), a personal communicator, a gaming device, a network router or hub, a wireless access point (AP) or repeater, a set-top box, or a combination thereof. Further, while only a single machine or system is illustrated, the term "machine" or "system" shall also be taken to include any collection of machines or systems that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. In one embodiment, system 400 includes processor 401, memory 403, and devices 405-408 coupled via a bus or an interconnect 410. Processor 401 may represent a single processor or multiple processors with a single processor core or multiple processor cores included therein. Processor 401 may represent one or more general-purpose processors such as a microprocessor, a central processing unit (CPU), or the like. More particularly, processor 401 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or processor implementing other instruction sets, or processors implementing a combination of instruction sets.
Processor 401 may also be one or more special-purpose processors such as an application specific integrated circuit (ASIC), a cellular or baseband processor, a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, a graphics processor, a communications processor, a cryptographic processor, a co-processor, an embedded processor, or any other type of logic capable of processing instructions. Processor 401, which may be a low power multi-core processor socket such as an ultra-low voltage processor, may act as a main processing unit and central hub for communication with the various components of the system. Such a processor can be implemented as a system on chip (SoC). Processor 401 is configured to execute instructions for performing the operations discussed herein. System 400 may further include a graphics interface that communicates with optional graphics subsystem 404, which may include a display controller, a graphics processor, and/or a display device. Processor 401 may communicate with memory 403, which in one embodiment can be implemented via multiple memory devices to provide for a given amount of system memory. Memory 403 may include one or more volatile storage (or memory) devices such as random access memory (RAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), static RAM (SRAM), or other types of storage devices. Memory 403 may store information including sequences of instructions that are executed by processor 401, or any other device. For example, executable code and/or data of a variety of operating systems, device drivers, firmware (e.g., basic input/output system or BIOS), and/or applications can be loaded in memory 403 and executed by processor 401. An operating system can be any kind of operating system, such as, for example, the Windows® operating system from Microsoft®, Mac OS®/iOS® from Apple, Android® from Google®, Linux®, Unix®, or other real-time or embedded operating systems such as VxWorks. System 400 may further include IO devices (e.g., devices 405, 406, 407, and 408), including network interface device(s) 405, optional input device(s) 406, and other optional IO device(s) 407. Network interface device(s) 405 may include a wireless transceiver and/or a network interface card (NIC). The wireless transceiver may be a WiFi transceiver, an infrared transceiver, a Bluetooth transceiver, a WiMax transceiver, a wireless cellular telephony transceiver, a satellite transceiver (e.g., a global positioning system (GPS) transceiver), or other radio frequency (RF) transceivers, or a combination thereof. The NIC may be an Ethernet card. Input device(s) 406 may include a mouse, a touch pad, a touch sensitive screen (which may be integrated with a display device of optional graphics subsystem 404), a pointer device such as a stylus, and/or a keyboard (e.g., a physical keyboard or a virtual keyboard displayed as part of a touch sensitive screen). For example, input device(s) 406 may include a touch screen controller coupled to a touch screen. The touch screen and touch screen controller can, for example, detect contact and movement or break thereof using any of a plurality of touch sensitivity technologies, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with the touch screen. IO devices 407 may include an audio device.
An audio device may include a speaker and/or a microphone to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and/or telephony functions. Other IO devices 407 may further include universal serial bus (USB) port(s), parallel port(s), serial port(s), a printer, a network interface, a bus bridge (e.g., a PCI-PCI bridge), sensor(s) (e.g., a motion sensor such as an accelerometer, gyroscope, a magnetometer, a light sensor, compass, a proximity sensor, etc.), or a combination thereof. IO device(s) 407 may further include an imaging processing subsystem (e.g., a camera), which may include an optical sensor, such as a charged coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) optical sensor, utilized to facilitate camera functions, such as recording photographs and video clips. Certain sensors may be coupled to interconnect 410 via a sensor hub (not shown), while other devices such as a keyboard or thermal sensor may be controlled by an embedded controller (not shown), dependent upon the specific configuration or design of system 400. To provide for persistent storage of information such as data, applications, one or more operating systems, and so forth, a mass storage (not shown) may also couple to processor 401. In various embodiments, to enable a thinner and lighter system design as well as to improve system responsiveness, this mass storage may be implemented via a solid state device (SSD). However, in other embodiments, the mass storage may primarily be implemented using a hard disk drive (HDD) with a smaller amount of SSD storage to act as an SSD cache to enable non-volatile storage of context state and other such information during power down events so that a fast power up can occur on re-initiation of system activities. Also, a flash device may be coupled to processor 401, e.g., via a serial peripheral interface (SPI). This flash device may provide for non-volatile storage of system software, including a basic input/output system (BIOS) as well as other firmware of the system. Storage device 408 may include computer-readable storage medium 409 (also known as a machine-readable storage medium or a computer-readable medium) on which is stored one or more sets of instructions or software (e.g., processing module, unit, and/or processing module/unit/logic 428) embodying any one or more of the methodologies or functions described herein. Processing module/unit/logic 428 may represent any of the components described above. Processing module/unit/logic 428 may also reside, completely or at least partially, within memory 403 and/or within processor 401 during execution thereof by system 400, memory 403 and processor 401 also constituting machine-accessible storage media. Processing module/unit/logic 428 may further be transmitted or received over a network via network interface device(s) 405. Computer-readable storage medium 409 may also be used to store some software functionalities described above persistently. While computer-readable storage medium 409 is shown in an exemplary embodiment to be a single medium, the term "computer-readable storage medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
The term "computer-readable storage medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies disclosed herein. The term "computer-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, or any other non-transitory machine-readable medium. Processing module/unit/logic 428, components, and other features described herein can be implemented as discrete hardware components or integrated in the functionality of hardware components such as ASICs, FPGAs, DSPs, or similar devices. In addition, processing module/unit/logic 428 can be implemented as firmware or functional circuitry within hardware devices. Further, processing module/unit/logic 428 can be implemented in any combination of hardware devices and software components. Note that while system 400 is illustrated with various components of a data processing system, it is not intended to represent any particular architecture or manner of interconnecting the components, as such details are not germane to embodiments disclosed herein. It will also be appreciated that network computers, handheld computers, mobile phones, servers, and/or other data processing systems, which have fewer components or perhaps more components, may also be used with embodiments disclosed herein. Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as those set forth in the claims below refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission, or display devices. Embodiments disclosed herein also relate to an apparatus for performing the operations herein. Such an apparatus may be implemented using a computer program stored in a non-transitory computer readable medium. A non-transitory machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium (e.g., read only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory devices).
The processes or methods depicted in the preceding figures may be performed by processing logic that comprises hardware (e.g., circuitry, dedicated logic, etc.), software (e.g., embodied on a non-transitory computer readable medium), or a combination of both. Although the processes or methods are described above in terms of some sequential operations, it should be appreciated that some of the operations described may be performed in a different order. Moreover, some operations may be performed in parallel rather than sequentially. Embodiments disclosed herein are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of embodiments as described herein. In the foregoing specification, embodiments have been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of the embodiments disclosed herein as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
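As a closing illustration tying the NIC-side method of FIG. 3A to the storage-side method of FIG. 3B, the following Python sketch walks one batched round trip: the NIC aggregates fingerprints under a byte budget derived from the maximum transmission unit, sends each batch, learns which fingerprints are new, forwards only the corresponding chunks, and then discards everything. The batch-size heuristic, the ToyStorage stand-in, and all names are assumptions for illustration, not an implementation from this disclosure.

```python
import hashlib

CHUNK_SIZE = 4096
FP_SIZE = 32  # bytes in a SHA-256 fingerprint

class ToyStorage:
    """Storage-side stand-in: reports which fingerprints are new, stores chunks."""
    def __init__(self):
        self.chunks = {}  # fingerprint -> chunk

    def check(self, batch):
        # FIG. 3B, operations 320-324: request chunks for new fingerprints only.
        return [fp for fp in batch if fp not in self.chunks]

    def put(self, mapping):
        # FIG. 3B, operation 328: store the requested chunks.
        self.chunks.update(mapping)

def batch_budget(mtu: int, header_overhead: int = 100) -> int:
    """Assumed heuristic: fit as many fingerprints as possible under the MTU."""
    return max(1, (mtu - header_overhead) // FP_SIZE)

def nic_store(data: bytes, storage: ToyStorage, mtu: int = 1500) -> None:
    """FIG. 3A sketch: segment, fingerprint, batch, and send only new chunks."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    fps = [hashlib.sha256(c).digest() for c in chunks]
    by_fp = dict(zip(fps, chunks))  # duplicate fingerprints collapse onto one entry

    per_batch = batch_budget(mtu)
    for start in range(0, len(fps), per_batch):
        batch = fps[start:start + per_batch]
        new_fps = storage.check(batch)                     # operation 308
        storage.put({fp: by_fp[fp] for fp in new_fps})
    del chunks, by_fp  # operation 310: discard all chunks on the NIC side

storage = ToyStorage()
nic_store(b"x" * 10000, storage)  # first write transfers chunk data
nic_store(b"x" * 10000, storage)  # repeat write transfers fingerprints only
```

On the second call, every fingerprint matches, so only the small fingerprint batches cross the link and no chunk data is retransmitted.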
DETAILED DESCRIPTION Aspects of the present disclosure are directed to maintenance operations for memory systems. A memory sub-system can be a storage device, a memory module, or a hybrid of a storage device and memory module. Examples of storage devices and memory modules are described with reference to FIG. 1. In general, a host system can utilize a memory sub-system that includes one or more memory components. The host system can provide data to be stored at the memory sub-system and can request data to be retrieved from the memory sub-system. The memory components can include non-volatile and volatile memory devices. A non-volatile memory device is a package of one or more dice. The dice in the packages can be assigned to one or more channels for communicating with a memory sub-system controller. The non-volatile memory devices include cells (i.e., electronic circuits that store information) that are grouped into pages to store bits of data. The non-volatile memory devices can include three-dimensional cross-point ("3D cross-point") memory devices that are a cross-point array of non-volatile memory that can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Although non-volatile memory components such as 3D cross-point type memory are described, the memory device can be based on any other type of non-volatile memory, such as negative-and (NAND), and other examples as described below in conjunction with FIG. 1. Access operations can be performed by a memory sub-system on memory devices and can include read operations, erase operations, write operations, and re-write operations. Access operations can cause wear in the memory cells. In some cases, wear of some memory cells can be different from that of other memory cells within the memory device. Unevenness in the wearing of the memory cells can be due to some memory cells being accessed more frequently than other memory cells. In this example, the more frequently accessed memory cells within the memory device can have a lower read/write life. As such, the overall life of the memory device can be affected negatively by the more frequently accessed memory cells. Aspects of the present disclosure address the above and other deficiencies by having a memory sub-system that includes a counter configured to count the number of access operations. The memory sub-system can coordinate the number of access operations on specific sets of memory cells, thereby mitigating the decreased life of the memory device. For example, the memory sub-system can count the number of access operations that sets of memory cells undergo and, when a threshold number (e.g., a number limit) of access operations has been performed on a set, change subsequent access operations to memory cells in a different set. Additionally, in some cases, maintenance operations, such as wear leveling operations, can be initiated on the memory cells of the set that has reached the threshold. In some examples, the threshold number can be changeable to accommodate different numbers of access operations, which can be based upon the age of the memory device. The maintenance operations can level the wear throughout the memory device and increase the life of the memory device. In some cases, the counters can be implemented using a global counter, an offset counter, and a set-specific counter (a minimal sketch of this counter scheme appears below, after the description of the computing environment). Some non-volatile memory devices, such as 3D cross-point memory devices, can group pages across dice and channels to form management units (MUs).
An MU can include user data and corresponding metadata. A memory sub-system controller can send and receive user data and corresponding metadata as management units to and from memory devices. A super management unit (SMU) may be a group of one or more MUs that are managed together. For example, a memory sub-system controller can perform media management operations (e.g., wear level operations, refresh operations, etc.) on SMUs. Other types of non-volatile memory devices can be composed of one or more planes. Planes can be grouped into logical units (LUNs). For some types of non-volatile memory devices (e.g., NAND devices), each plane may consist of a set of physical blocks, which may be the smallest area that can be erased. A set-specific counter can be an MU-specific counter, an SMU-specific counter, or a block-specific counter. A memory sub-system can be configured to use a value of the global counter, a value of the offset counter, and a value of the set-specific counter to determine a count of access operations performed on each set of memory cells. A set of memory cells can be an MU, an SMU, or a memory bank. The counting system can be configured to allow a global least-significant value to be updated without having to update each of the set-specific counters. Features of the disclosure are initially described in the context of a computing environment as described with reference to FIG. 1. Features of the disclosure are described in the context of systems and timing diagrams as described with reference to FIGS. 2, 3, and 4. These and other features of the disclosure are further illustrated by and described with reference to an apparatus diagram, a computer diagram, and flowcharts that relate to maintenance operations for memory systems as described with reference to FIGS. 5-10. FIG. 1 illustrates an example of computing environment 100 in accordance with examples as disclosed herein. The computing environment can include a host system 105 and a memory sub-system 110. The memory sub-system 110 can include media, such as one or more non-volatile memory devices (e.g., memory device 130), one or more volatile memory devices (e.g., memory device 140), or a combination thereof. A memory sub-system 110 can be a storage device, a memory module, or a hybrid of a storage device and memory module. Examples of a storage device include a solid-state drive (SSD), a flash drive, a universal serial bus (USB) flash drive, an embedded Multi-Media Controller (eMMC) drive, a Universal Flash Storage (UFS) drive, and a hard disk drive (HDD). Examples of memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), and a non-volatile dual in-line memory module (NVDIMM). The computing environment 100 can include a host system 105 that is coupled with a memory system. The memory system can be one or more memory sub-systems 110. In some examples, the host system 105 is coupled with different types of memory sub-systems 110. FIG. 1 illustrates one example of a host system 105 coupled with one memory sub-system 110. The host system 105 uses the memory sub-system 110, for example, to write data to the memory sub-system 110 and read data from the memory sub-system 110. As used herein, "coupled to" or "coupled with" generally refers to a connection between components, which can be an indirect communicative connection or a direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc.
The host system 105 can be a computing device such as a desktop computer, laptop computer, network server, mobile device, a vehicle (e.g., airplane, drone, train, automobile, or other conveyance), embedded system, Internet of Things (IoT) device, or any such computing device that includes a memory and a processing device. The host system 105 can be coupled to the memory sub-system 110 using a physical host interface. Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, a universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), etc. The physical host interface can be used to transmit data between the host system 105 and the memory sub-system 110. The host system 105 can further utilize an NVM Express (NVMe) interface to access the memory components (e.g., memory devices 130) when the memory sub-system 110 is coupled with the host system 105 by the PCIe interface. The physical host interface can provide an interface for passing control, address, data, and other signals between the memory sub-system 110 and the host system 105. The memory devices can include any combination of the different types of non-volatile memory devices and/or volatile memory devices. The volatile memory devices (e.g., memory device 140) can be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM). An example of non-volatile memory devices (e.g., memory device 130) includes a three-dimensional (3D) cross-point ("3D cross-point") memory, which is a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. Although non-volatile memory components such as 3D cross-point type memory are described, the memory device 130 can be based on any other type of non-volatile memory, such as negative-and (NAND), read-only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide based memories, ferroelectric random access memory (FeRAM), magneto random access memory (MRAM), negative-or (NOR) flash memory, and electrically erasable programmable read-only memory (EEPROM). In some embodiments, each of the memory devices 130 can include one or more arrays of memory cells such as single level cells (SLCs), multi-level cells (MLCs), triple level cells (TLCs), quad-level cells (QLCs), or a combination of such. In some examples, a particular memory component can include an SLC portion and an MLC portion, a TLC portion, or a QLC portion of memory cells. Each of the memory cells can store one or more bits of data used by the host system 105. Furthermore, the memory cells of the memory devices 130 can be grouped as memory pages or a set of memory cells that can refer to a unit of the memory component used to store data. Pages can be grouped across dice and channels to form management units (MUs). An MU can include user data and corresponding metadata. A super management unit (SMU) is a group of one or more MUs that are managed together.
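To make this grouping concrete, the following sketch models pages being collected into MUs and MUs into SMUs. It is a minimal illustration rather than a structure defined by this disclosure: the Python class names, fields, and group sizes are assumptions chosen for readability.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Page:
    die: int          # die index within the package (illustrative)
    channel: int      # channel the die communicates on (illustrative)
    data: bytes = b""

@dataclass
class ManagementUnit:
    """An MU holds user data and the corresponding metadata."""
    pages: List[Page] = field(default_factory=list)
    metadata: dict = field(default_factory=dict)

@dataclass
class SuperManagementUnit:
    """An SMU groups one or more MUs that are managed together,
    e.g., as the unit on which media management operations run."""
    mus: List[ManagementUnit] = field(default_factory=list)

# usage sketch: four pages spread across two dice/channels form one MU
mu = ManagementUnit(pages=[Page(die=d, channel=c) for d in (0, 1) for c in (0, 1)])
smu = SuperManagementUnit(mus=[mu])
```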
The memory sub-system controller 115 can communicate with the memory devices 130 to perform operations such as reading data, writing data, or erasing data at the memory devices 130 and other such operations. The memory sub-system controller 115 can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The memory sub-system controller 115 can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor. The memory sub-system controller 115 can include a processor 120 (e.g., a processing device) configured to execute instructions stored in a local memory 125. In the illustrated example, the local memory 125 of the memory sub-system controller 115 includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory sub-system 110, including handling communications between the memory sub-system 110 and the host system 105. In some examples, the local memory 125 can include memory registers storing memory pointers, fetched data, etc. The local memory 125 can also include read-only memory (ROM) for storing micro-code. While the example memory sub-system 110 in FIG. 1 has been illustrated as including the memory sub-system controller 115, in another example of the present disclosure, a memory sub-system 110 may not include a memory sub-system controller 115, and can instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory sub-system). In general, the memory sub-system controller 115 can receive commands or operations from the host system 105 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory devices 130. The memory sub-system controller 115 can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical address (e.g., logical block address (LBA)) and a physical address that are associated with the memory devices 130. The memory sub-system controller 115 can further include host interface circuitry to communicate with the host system 105 via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access the memory devices 130 as well as convert responses associated with the memory devices 130 into information for the host system 105. The memory sub-system 110 can also include additional circuitry or components that are not illustrated. In some examples, the memory sub-system 110 can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the memory sub-system controller 115 and decode the address to access the memory devices 130. In some embodiments, the memory devices 130 include local media controllers 135 that operate in conjunction with the memory sub-system controller 115 to execute operations on one or more memory cells of the memory devices 130. An external controller (e.g., memory sub-system controller 115) can externally manage the media device 130 (e.g., perform media management operations on the media device 130).
In some embodiments, the memory devices 130 can be locally managed memory devices, each of which can be a raw memory device combined with a local media controller 135 that performs memory management operations on the memory device 130 within the same memory device package. The memory sub-system 110 includes a counter 150 that can count the number of access operations performed on sets of memory cells of a memory device and initiate maintenance operations on the memory cells, which can be based upon a threshold number of access operations that may be modifiable. Coordinating (e.g., counting) the number of access operations on specific sets of memory cells can mitigate the decreased life of the memory sub-system 110. For example, counting the number of access operations that sets of memory cells undergo can allow for wear leveling operations to be performed on certain sets of memory cells (e.g., sets of memory cells that have reached a threshold number of access operations). Such wear leveling operations can increase the life of the memory sub-system 110. In some cases, the counter 150 may count any number of access operations performed on the memory cells until the threshold number of access operations is reached. The counter 150 can also count any number of maintenance operations performed on the memory cells. The number of times that maintenance operations have been performed on the memory cells may be based on the number of times the counter 150 has counted the number of access operations from 0 to the threshold number of access operations. In some examples, the memory sub-system controller 115 includes at least a portion of the counter 150. For example, the memory sub-system controller 115 can include a processor 120 (e.g., a processing device) configured to execute instructions stored in local memory 125 for performing the operations described herein. In some examples, the counter 150 is part of the host system 105, an application, or an operating system. The counter 150 can count the number of access operations performed on the memory cells and can initiate wear leveling operations based upon a criterion (e.g., a threshold value of access operations). Counting the number of access operations can be accomplished by multiple parts of the counter 150. In some embodiments, the counter includes two parts. The first part of the counter 150 can count the number of access operations until a criterion (e.g., threshold value) is satisfied. The first part can restart the count once the threshold has been satisfied. The second part of the counter 150 can increment once the first part satisfies the threshold. The second part can then initiate switching access operations to a different part of the memory device (e.g., different cells), and/or trigger maintenance operations (e.g., wear leveling operations) on the memory cells. The second part of the counter 150 can store the number of times the criterion (e.g., threshold value) has been satisfied. In some examples, the threshold is configurable, user/system defined, and/or can be changed. Further details with regard to the operations of the counter 150 are described below. FIG. 2 illustrates an example of a method 200 for determining wear leveling operation coordination in a memory sub-system, according to an embodiment of the present disclosure. The method 200 can be performed by a memory sub-system, which can be an example of a memory sub-system 110 described with reference to FIG. 1.
The method 200 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 200 may be performed by the counter 150 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other methods are possible. The memory sub-system can receive access commands from the host system. Such access commands can be read commands, write commands, or erase commands, which can read, erase, write, and/or re-write data to specific memory cells in memory devices of the memory sub-system. In some cases, accessing, erasing, or writing data in the memory cells can cause the memory cells to wear, which can limit the life of the memory cells within the memory sub-system. In some cases, if the wear of the memory sub-system is left unchecked, the life of the memory sub-system can be decreased. To reduce the impact of wear caused by access operations on the memory cells in the memory sub-system, maintenance operations can be used. Maintenance operations can be a variety of operations that lessen wear on the memory sub-system, including wear leveling operations. In some examples, wear leveling can limit the impact of wear caused by erasing, writing, and rewriting data to the memory cells in the memory sub-system. Wear leveling can be a process that helps reduce premature wear in memory devices by distributing write operations across the memory devices. Wear leveling can include a set of operations to determine which physical media (e.g., which set of memory cells) to use each time data is programmed, to help ensure that certain physical sets of memory cells are not written and erased more often than others. Wear leveling can allow for the wear of a specific memory cell to be similar to the wear experienced by the other memory cells within the memory sub-system (e.g., the wear across different cells can be leveled). In some examples, even distribution of the access operations across the different cells of the memory sub-system can ensure that specific memory cells are not erased and re-written more often than other memory cells. In some cases, wear leveling operations can be performed after a set number of access operations have been performed on the memory cells. Thus, the wear experienced in the accessed memory cells of the memory sub-system can be evenly distributed across the different sets of memory cells within the memory sub-system. Even distribution of the wear experienced by the memory cells can extend the life of the memory sub-system. A host system can send access requests to the memory sub-system, such as to store data at the memory sub-system and to read data from the memory sub-system. The data to be read and written are hereinafter referred to as "user data". A host request can include a logical address (e.g., logical block address (LBA)) for the user data, which is the location the host system associates with the user data.
The logical address (e.g., LBA) can be part of metadata for the user data. The requests can be in the form of access operation commands (e.g., read command, write command). For example, at operation 215, the memory sub-system can receive access operation commands, for example, from a host system. The access operation commands can initiate access operations on memory cells within a specific set of the memory sub-system. The access operations can include read operations, erase operations, write operations, re-write operations, other operations, or combinations thereof, which can cause wear in the memory cells. The memory sub-system can continuously receive successive access operations for the same memory cells within the same set of cells on the memory sub-system, which can cause additional wear. In some cases, successive access operations on the memory cells within the set can occur before the memory sub-system has performed access operations on memory cells within other sets of the memory sub-system. Additionally or alternatively, the access operations can be performed without periodic wear leveling operations on the accessed memory cells. This can lead to the premature wear of the accessed memory cells compared to non-accessed memory cells in other sets of the memory sub-system. Wear leveling operations can be performed on memory cells within a set of the memory sub-system, for example, after a specific number of access operations have been performed on the memory cells. Counting the number of access operations performed on the individual memory cells within the set can allow the memory sub-system to coordinate the wear leveling operations performed on the different sets of memory cells. To reduce wear on specific memory cells in the set, the number of access operations performed on the memory cells can be counted. Additionally, the memory sub-system can determine a threshold number of access operations performed on the memory cells before performing wear leveling operations on the previously accessed cells. Counting the number of access operations performed on the memory cells can prevent a higher wear rate on the accessed memory cells compared to other less-accessed cells in the memory sub-system. Therefore, the memory sub-system can cease access operations, and in some examples, initiate wear leveling operations, on the accessed cells within the memory sub-system after a certain count of access operations has occurred. The memory sub-system can identify the limit on the number of access operations performed on the memory cells within the specific set. The number of access operations (e.g., counts/values) that the set of the memory sub-system can undergo before wear leveling operations occur can be a fixed number. In some examples, this number can be regarded as a threshold. The threshold of access operations that a set can undergo before initiating wear leveling operations can be in the hundreds, thousands, tens of thousands, or hundreds of thousands of access operations. The threshold can be stored in a register of the memory sub-system. In some cases, the value of the threshold can be configurable such that the memory sub-system can be configured to modify the threshold. In some embodiments, the memory sub-system receives one or more threshold values from the host system and can store the value in a register of the memory sub-system.
For example, at operation 220, the memory sub-system determines a first threshold, for example, by accessing the threshold value in a register. The memory sub-system can use the first threshold value as the number of access operations to occur upon the memory cells within the set before switching access operations to a new set and initiating wear leveling operations. In some cases, the threshold value can be changed, which can accommodate different values to reduce or increase the number of access operations performed on the memory cells within the set. For example, when the memory sub-system is new (e.g., low in the number of access operations performed on the memory cells), the threshold value can be high. However, as the memory sub-system ages (e.g., the memory cells undergo more access operations), the threshold value can be reduced to allow for more frequent wear leveling operations to take place in the memory sub-system. The reduction in the threshold can prolong the life of the memory sub-system. However, in some examples, the threshold can be the same throughout the life of the memory sub-system. In other examples, the threshold can be increased. Therefore, in some examples, the memory sub-system determines a second threshold, which can be different from the first threshold, at any time. The second threshold value can be stored in a register. The second threshold can replace the first threshold and can be used for the subsequent wear leveling coordination operations (e.g., operations 235-255). In some cases, the first threshold value can be used in all wear leveling operations without using any other (e.g., second) threshold values. The memory sub-system can identify the count of the number of access operations performed on the memory cells within the sets of the memory sub-system. In some examples, the count can be checked for divisibility by the threshold to determine if the count has met the threshold (which can initiate wear leveling operations). In some cases, this can be accomplished by a divider circuit. The divider circuit can utilize a divisor corresponding to the threshold value. However, the divider circuit can be limited to certain divisors. For example, the divisor can be limited to a power of two (e.g., 2^n). In this case, a divisor that is not a power of two cannot be used by the divider circuit. Thus, the divider circuit can be inflexible regarding the thresholds that can be used. Additionally, the divider circuit can be expensive, which can be due to the complexity of its circuitry. For example, the divider circuit can use a large number of gates, thereby increasing its area, or it can consume larger amounts of power. As such, a count of access operations determined by other methods can be advantageous. The use of a divider circuit to count the number of access operations performed on memory cells within a set of the memory sub-system can be avoided by utilizing a combined counter. Combined counter 305, as described with reference to FIG. 3, can be an example of such a combined counter. Each set of memory cells within the memory sub-system can include its own combined counter, which can count the number of access operations performed on the memory cells within that set. For example, the combined counter can identify (e.g., determine) the count of the number of access operations performed on the memory cells within the set.
The combined counter can have a first counter and a second counter. The first counter can be an example of the first counter 315 and the second counter can be an example of the second counter 345, described in more detail with reference to FIG. 3. The first counter can count the number of access operations performed on the memory cells in the set. The first counter can count the number of access operations from zero to a defined number (e.g., the threshold of access operations) and determine if the threshold of access operations has been reached. For example, a comparator can be used to compare the count of access operations performed on the memory cells within the set to the threshold. Comparator 330, as described with reference to FIG. 3, can be an example of such a comparator. At operation 235, the first counter can increment the count by an integer (e.g., 1) and communicate the incremented value to the comparator. At operation 240, the comparator can compare the incremented value to the threshold value. In some examples, at operation 240, the comparator can determine that the incremented count has not yet satisfied (e.g., is less than) the threshold. In this example, at operation 245, the first counter can receive a returned incremented count value, which can be selected based upon the incremented count not satisfying (e.g., being less than) the threshold. However, in some cases, at operation 240, the comparator can determine that the incremented value satisfies (e.g., matches or exceeds) the threshold. In this case, at operation 245, the first counter can be reset to a zero value. The zero value can then be incremented, which can be based upon the subsequent access operations performed on the memory cells in the set. The first counter can then iteratively perform incrementations of the count until the threshold value is reached again. The second counter of the combined counter can trigger wear leveling operations on the set and additionally or alternatively count the number of wear leveling operations performed on the set. The wear leveling operation count can begin at 0, which can indicate that no previous wear leveling operations have been performed upon the memory cells in the set (e.g., the threshold has not yet been satisfied). At operation 250, the second counter can increment the value from 0 to 1. However, in some examples, when the threshold has previously been reached, the incrementation can add to a non-zero number. In the example where the comparator has determined that the incremented first counter's value (e.g., the number of access operations) is less than the threshold value, at operation 255, the second counter can receive a returned selected non-incremented count (e.g., a 0). However, in the example where the comparator has determined that the first counter's count value matches the threshold value, at operation 255, the second counter can receive a returned selected incremented count (e.g., a 1). In the case where the second counter has received an incremented count, maintenance operations can be triggered, which can include wear leveling operations. Maintenance operations can be performed by the memory sub-system. The operations can be initiated by the firmware of the memory sub-system. Triggering maintenance operations can be initiated by the second counter indicating the second counter's incrementation in value (e.g., indicating that the comparator has determined the first counter has reached the threshold).
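The two-part behavior of operations 235 through 255 can be summarized in a short behavioral sketch. The following Python model is illustrative only: the class name, the callback mechanism, and the use of ">=" are assumptions made for readability, not the firmware implementation described above.

```python
class CombinedCounter:
    """Behavioral sketch of the two-part counter: the first part counts
    access operations up to a threshold and then resets; the second part
    counts how many times the threshold has been satisfied."""

    def __init__(self, threshold, on_threshold=None):
        self.threshold = threshold        # configurable (e.g., written by the host)
        self.first = 0                    # access operations since the last reset
        self.second = 0                   # times the threshold has been satisfied
        self.on_threshold = on_threshold  # e.g., trigger wear leveling / set switch

    def record_access(self):
        incremented = self.first + 1                # operation 235
        if incremented >= self.threshold:           # operation 240 (comparison)
            self.first = 0                          # operation 245 (reset to zero)
            self.second += 1                        # operations 250/255 (increment)
            if self.on_threshold is not None:
                self.on_threshold()                 # e.g., initiate maintenance
        else:
            self.first = incremented                # operation 245 (keep counting)

# usage sketch: request wear leveling after every 5,000 access operations
counter = CombinedCounter(5_000, on_threshold=lambda: print("wear leveling requested"))
for _ in range(5_000):
    counter.record_access()
```

Using ">=" rather than an exact match in this sketch mirrors the case described below, in which a newly lowered second threshold can already be exceeded by the current access operation count.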
To initiate the maintenance operations, at operation 260, the combined counter can communicate an indicator to the memory sub-system. An example of such an indication can be the specific physical address of the set where the second counter has indicated an incrementation in value (e.g., the first counter has satisfied the threshold). This can trigger the memory sub-system, at operation 265, to initiate maintenance operations, which can be wear leveling operations. Additionally or alternatively, the memory sub-system can cease access operations on memory cells within the set and move access operations to a different set of memory cells in the memory sub-system. At operation 270, the memory sub-system can perform the maintenance operations on the memory cells within the set. The maintenance operation can be a wear leveling operation or another type of maintenance operation. The maintenance operation can prevent access operations from being performed on the memory cells within the set. The previously described process of triggering maintenance operations can iteratively be performed once access operations are performed on the memory cells, which can be after wear leveling has been completed. As discussed previously, the memory sub-system can change the first threshold to a second threshold, which can be different from the first threshold. The change in threshold can be based upon a variety of factors including the age of the memory sub-system, the number of access operations performed on the memory sub-system, or other considerations. At operation 280, the memory sub-system can receive the second threshold. In some examples, the threshold can be stored in a register of the memory sub-system. In some cases, the memory sub-system can receive the new threshold from the host system via a message. Similar to operation 220, the second threshold can be used as the threshold value for the combined counter. The successive operations (e.g., 235-270) can be again performed utilizing the second threshold value. In one example, the second threshold value can be less than the first threshold value, and can be identified by the memory sub-system at a time when the number of access operations performed on the set of memory cells has previously exceeded the second threshold. In this case, the comparator can determine that the threshold has been satisfied. For example, the matching of the threshold and the access operation count cannot be used in the case where the modified second threshold is less than the current access operation count value. The comparator can instead determine that the access operation count is greater than the threshold, which can satisfy the threshold. In some examples, operation 280 can occur at different positions than illustrated in FIG. 2. For example, at any point during method 200, the first threshold value can be changed to a second threshold value and determined by the memory sub-system (e.g., operation 280). Therefore, a changed threshold value (whether a second threshold value, as illustrated in FIG. 2, a third threshold value, or any other number of subsequent threshold values) can be determined by the memory sub-system and used as the threshold value for subsequent wear leveling operation coordination steps. FIG. 3 illustrates an example of a counting system 300 that supports maintenance operations for memory sub-systems in accordance with examples as disclosed herein.
Counting system 300 can be an example of a counting system for an individual set of memory cells within a memory sub-system. The memory sub-system described with reference to FIG. 2 can be an example of such a memory sub-system. In some examples, each set of memory cells can utilize a corresponding counting system (e.g., each set of memory cells can have a corresponding combined counter 305). As such, many counting systems can be present in the memory sub-system. Counting system 300 can include a combined counter 305 and a memory sub-system controller 310. The memory sub-system controller 115, described with reference to FIG. 1, can be an example of such a memory sub-system controller. The memory sub-system controller 310 can receive commands (e.g., access operation commands and wear leveling operation commands) from a host system. Combined counter 305 can be coupled with memory sub-system controller 310, which can allow the memory sub-system controller 310 and the combined counter 305 to communicate information with each other. Examples of the communicated information can be access operation commands, wear leveling operation commands, thresholds, incrementation indications, and/or other information. The memory sub-system containing the counting system 300 can receive access operation commands from a host system. Memory sub-system controller 310 can carry out the access operations (e.g., read, erase, write, and/or re-write data) upon the specific memory cells corresponding to the set of memory associated with the combined counter 305. In some examples, combined counter 305 can receive an access operation command from memory sub-system controller 310, which can be directed towards memory cells within the combined counter's corresponding set. Combined counter 305 can include a first counter 315 and a second counter 345. First counter 315 can count the number of access operation commands received by combined counter 305. For example, first counter 315 can receive the access operation command from memory sub-system controller 310. After receiving the access operation command, first counter 315 can output the current access operation count to first incrementer 325. First incrementer 325 can increment the value of the number of access operations by a fixed number (e.g., by 1) over the current count of access operations (e.g., incremented to a value of 2 when the number of access operations is 1). First incrementer 325 can output the incremented access operation count to comparator 330 and first selector 335. The memory sub-system can determine one or more thresholds. As described with reference to FIG. 2, thresholds can correspond to the number of access operations performed on the memory cells in the set before maintenance operations can be performed. Wear leveling operations are an example of such maintenance operations. When wear leveling operations are described herein, any other type of maintenance operation could additionally or alternatively be performed. Memory sub-system controller 310 can communicate a threshold to the combined counter 305. Threshold 320 can be an example of such a threshold. In some examples, threshold 320 can be a variety of threshold values. For example, threshold 320 can be 1,000, 5,000, 30,000, or other higher or lower amounts. Therefore, in some examples, threshold 320 can set the number of bits that the first counter 315 uses to count the number of access operations. For example, fifteen bits can be used for counting to a corresponding threshold value of 30,000.
However, in some examples, a lower threshold value (e.g., 5,000) can use fewer bits. In some examples, however, the number of bits used for the threshold value can be set by the highest threshold value (e.g., 30,000). Memory sub-system controller 310 can communicate the threshold 320 to comparator 330. Comparator 330 can be an example of a variety of comparator circuits. Comparator 330 can use threshold 320 to compare against the incremented count of access operations received from first incrementer 325. Comparator 330 can compare the two values (e.g., the threshold and the incremented count) and determine whether the incremented value matches the threshold 320. In some examples, threshold 320 can be greater than the incremented count. In this example, comparator 330 can output a low signal (e.g., a 0) to two selector components: first selector 335 and second selector 340. However, in some examples, the incremented count can satisfy (e.g., be the same number as, or greater than) threshold 320. In this example, comparator 330 can output a high signal (e.g., a 1) to first selector 335 and second selector 340. First selector 335 can select a value of the count to output to first counter 315. First selector 335 can be an example of a variety of selector-type circuits, such as a multiplexer, a switch, or another type of selector circuit. The count output by first selector 335 can be based upon the received high or low signal (e.g., 1 or 0, respectively) from comparator 330. For example, first selector 335 can receive the incremented count from first incrementer 325 at second input 337. First selector 335 can also receive a null (e.g., 0) value at first input 336. First selector 335 can also receive the output from comparator 330 at selector input 338. Selector input 338 can be used to determine which count value (e.g., which of first input 336 or second input 337) is output from the first selector 335 (e.g., by the output 339) and returned to the first counter 315. The count value can be 0 (e.g., from the null value input) or it can be the incremented count from first incrementer 325 (e.g., second input 337). The selection between 0 or the incremented count can depend on the value received at selector input 338. For example, when selector input 338 (e.g., the output of comparator 330) is a low value (e.g., a 0), first selector 335 can output the incremented count from output 339 to first counter 315. This incremented count can then be used as the current count for the number of access operations performed on the memory cells within the set by first counter 315. The count value can be iteratively incremented utilizing similar steps, as described previously, until comparator 330 determines that the incremented count matches the threshold 320. In another case, when selector input 338 is a high value (e.g., a 1), first selector 335 can output a 0 value from output 339 to first counter 315. The 0 value can be used to reset the first counter 315's count for the number of access operations performed on the memory cells within the set. The resetting of first counter 315's count to 0 can signify that wear leveling operations can occur. For example, resetting the current count to 0 can signify that the count has been incremented in value from 0 to threshold 320, and wear leveling operations can be initiated. The current count (e.g., 0) can be iteratively incremented utilizing similar steps, as described previously, until the threshold is reached again (e.g., comparator 330 determines that the incremented count matches the threshold 320).
In some cases, first selector 335 and second selector 340 can additionally or alternatively include more inputs than the first inputs 336 and 341, the second inputs 337 and 342, and the selector inputs 338 and 343, respectively. For example, more inputs can be received by first selector 335 and second selector 340 from other components such as incrementers, comparators, counters, or other types of counting components. Combined counter 305 can include second counter 345, second incrementer 350, and second selector 340. These components can be used to count the number of maintenance operations that have been performed on the memory cells in the set. In some examples, second counter 345 can count the current value of maintenance operations (e.g., the number of times maintenance operations have been performed). Second counter 345 can output the count of wear leveling operations to second incrementer 350 and second selector 340. Second incrementer 350 can increment the count of the number of wear leveling operations by a fixed number (e.g., by 1) over the current count of wear leveling operations (e.g., incremented to 2 when the number of wear leveling operations is 1). Second incrementer 350 can output the incremented wear leveling operation count to second selector 340. Second selector 340 can select a value of the wear leveling operation count to output to second counter 345. Similar to first selector 335, second selector 340 can be a variety of selector-type circuits, such as a multiplexer, a switch, or another type of selector circuit. The count output by second selector 340 can be based upon the received high or low signal (e.g., 1 or 0, respectively) from comparator 330, as discussed previously. For example, second selector 340 can receive the incremented count from second incrementer 350 at second input 342. Second selector 340 can also receive the current count from second counter 345 at first input 341. Second selector 340 can also receive the output from comparator 330 at selector input 343. Selector input 343 can be used to determine which count value (e.g., which of first input 341 or second input 342) is returned to second counter 345. The wear leveling operation count value can be the current count (e.g., from first input 341) or it can be the incremented count (from second input 342). The selection between the current or the incremented count can depend on the value received at selector input 343. For example, when selector input 343 (e.g., the output of comparator 330) is a low value (e.g., a 0), second selector 340 can output the current count from output 344 to second counter 345. This count can be used as the current count for the number of wear leveling operations performed on the memory cells in the set. This count value is not incremented because the count has not passed through second incrementer 350. As such, in this example, the count value can remain the same when selector input 343 receives a low signal (e.g., a 0). In another case, when selector input 343 is a high value (e.g., a 1), second selector 340 can output the incremented count at output 344 to second counter 345. The incrementation of the current count to a higher value (e.g., a 0 to a 1 at second selector 340) can signify that wear leveling operations can occur for the memory cells in the set. In this example, second counter 345 can communicate with memory sub-system controller 310, including information regarding triggering wear leveling operations, and the memory sub-system controller 310 can initiate maintenance operations in the memory cells of the set, as described with reference to FIG. 2.
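As a software analogue of the datapath just described, the sketch below models one access-operation cycle through the incrementers, comparator, and selectors of combined counter 305. It is an assumption-laden model rather than a circuit description; in particular, the function name and the representation of the comparator output as a shared select signal are illustrative choices.

```python
def combined_counter_step(first_count, second_count, threshold):
    """One access-operation cycle of a FIG. 3 style datapath (sketch).

    Returns the next values for first counter 315 and second counter 345,
    plus the comparator output that can trigger wear leveling.
    """
    incremented_first = first_count + 1                   # first incrementer 325
    select = 1 if incremented_first >= threshold else 0   # comparator 330 output

    # first selector 335: high select -> null input 336 (reset);
    # low select -> incremented count on input 337
    next_first = 0 if select else incremented_first

    # second selector 340: high select -> incremented count from second
    # incrementer 350 (input 342); low select -> current count (input 341)
    next_second = (second_count + 1) if select else second_count

    return next_first, next_second, select

# usage sketch: the total access operations so far can be reconstructed as
# second_count * threshold + first_count, as described next
first, second = 0, 0
for _ in range(12):
    first, second, hit = combined_counter_step(first, second, threshold=5)
assert second * 5 + first == 12
```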
In some examples, the total count of the number of access operations performed on the memory cells within the set corresponding to combined counter 305 can be determined. In some cases, first counter 315 and first incrementer 325 can increment the count of access operations, as discussed previously, until comparator 330 determines that the incremented count matches the threshold 320. In some examples, when comparator 330 compares the incremented count value and threshold 320 and determines that the value satisfies threshold 320, first counter 315's count can return to 0. In this case, second selector 340 can return an incremented count of wear leveling operations to second counter 345. Second counter 345 can output the address of the set to memory sub-system controller 310, which can initiate wear leveling operations. In some examples, second counter 345 can use this incremented value of wear leveling operations to again increment when comparator 330 subsequently determines that the incremented count of access operations (e.g., first incrementer 325's output) satisfies threshold 320. However, the current count of wear leveling operations (e.g., second counter 345's count) can additionally or alternatively allow for the determination of the total count of access operations performed on the memory cells in the set. For example, first counter 315 can increment from 0 to threshold 320. Once the threshold 320 is satisfied (which can indicate the current number of access operations), second counter 345 can increment in count. The incremented count of second counter 345 can be regarded as the number of times that the count of access operations has reached the threshold 320. Thus, second counter 345's count can, in some examples, be the number of iterations of threshold 320's access operations value. In other words, the number of access operations represented by second counter 345's count can be viewed as the wear leveling operation count multiplied by threshold 320. In this example, the overall count of combined counter 305 can be determined by adding first counter 315's count (e.g., the current number of access operations under threshold 320) to the product of second counter 345's count and threshold 320 (e.g., the number of wear leveling operations times the threshold for wear leveling operations). Thus, the overall number of access operations can be determined for combined counter 305 by utilizing first counter 315's count, second counter 345's count, and threshold 320. FIG. 4 illustrates an example of a counting system 400 that supports maintenance operations for memory sub-systems in accordance with examples as disclosed herein. The counting system 400 can be configured to track access operations on sets of memory cells as part of performing maintenance operations, such as wear leveling operations. The counting system 400 can be implemented by a controller, software, firmware, hardware, or a combination thereof. In some counting systems, wear leveling operations for sets of memory cells can be tracked using a global minimum counter and one or more set-specific counters tracking the differences between the global minimum value and the specific set. The global minimum counter can track the value of access operations on the set of memory cells with the fewest access operations. A set-specific counter can track the difference between a set's specific count and the global minimum. Using such a type of counting system can reduce the total number of bits used to implement the counters.
For example, instead of maintaining a sixteen-bit counter for each set, the system can maintain a sixteen-bit counter for the global minimum and much smaller counters (e.g., two, three, four, five, six, seven, or eight bits) for the set-specific difference counters. In such systems, when the global minimum counter is updated or incremented, each of the set-specific difference counters can also be updated. Such operations can consume power and computing resources. Aspects of the present disclosure address the above and other deficiencies by having a memory sub-system that includes a counter configured to track the access operations on sets using a global counter 405, an offset counter 410, and one or more set-specific counters 415. An algorithm 420 can use the value of the global counter 405, the offset counter 410, and each of the one or more set-specific counters 415 to determine a count 425 of access operations performed on each set. The counting system 400 can be configured to allow the global counter 405 (e.g., the global minimum counter) to be updated without having to update each of the set-specific counters 415. There can be any number of set-specific counters (e.g., set-specific counter 415-a, set-specific counter 415-b, set-specific counter 415-c, through set-specific counter 415-N) to identify any number of counts (e.g., count 425-a, count 425-b, count 425-c, through count 425-N) for any number of sets of memory cells. The global counter 405 can be an example of a global minimum counter. The global counter 405 can track the value of access operations on the set of memory cells with the fewest access operations. A controller can identify which set has the fewest access operations and set the global counter 405 to that value of access operations. In some cases, the global counter 405 may need to be updated as the number of access operations performed on the sets of memory cells increases. The offset counter 410 can be configured to allow the global counter 405 to be updated without affecting the set-specific counters 415. The offset counter 410 can be an example of a global counter and can be used to determine the counts 425 of one or more sets. The value of the global counter 405 and the value of the offset counter 410 can be used to determine a global least-significant value of access operations performed on the sets of memory cells. In some cases, the global least-significant value is equivalent to the value of the global minimum counter. In some implementations, the controller can update or modify the value of the offset counter 410 instead of modifying or updating the value of the global counter 405 when the global least-significant value changes. In this manner, the controller can update the global least-significant value without updating the values of the set-specific counters 415, at least in some instances. The set-specific counters 415 can be examples of counters that track at least some aspects of the differences between an actual count 425 of access operations performed on a specific set and the global counter 405. The set-specific counters 415 can be configured to cooperate with the offset counter 410 and the global counter 405 to determine the counts 425 associated with the specific sets. The algorithm 420 can be configured to use the values of the global counter 405, the offset counter 410, and the set-specific counters 415 to determine the count 425 of access operations performed on each set. Equation 1 illustrates an example of a procedure that can be used as part of the algorithm 420.
Count = α + ((β − γ) mod δ)  (1) In Equation 1, the term α can refer to the value of the global counter 405; the term β can refer to the value of a set-specific counter 415; the term γ can refer to the value of the offset counter 410; and the term δ can refer to a modifier value. A controller implementing the algorithm 420 to determine counts 425 of access operations on sets can be configured to identify a value of the global counter 405. The value of the global counter 405 can indicate a baseline quantity of access operations performed on the sets of memory cells. The controller can determine a difference between a value of a set-specific counter 415-a and a value of the offset counter 410 as part of determining the count 425-a associated with the specific set of memory cells. The set-specific counter 415-a can be associated with a first set of the sets of memory cells. The offset counter 410 can be for indicating a global offset value relative to the value of the global counter 405. The controller implementing the algorithm 420 can be configured to identify a remainder using a modulo operation. For example, the controller can apply a modulo operation using the difference between the value of the set-specific counter 415-a and the value of the offset counter 410 and a modifier value or parameter. The modifier value can be any value. In some cases, the modifier value can be related to an upper limit on the difference between the value of the global counter 405 and a value of a set-specific counter 415 that the counting system will tolerate. In some cases, the modifier value can be related to a number of bits associated with the value of the set-specific counters 415. The controller implementing the algorithm 420 can be configured to add the value of the global counter 405 to the remainder determined earlier. The sum of the value of the global counter 405 and the remainder can be a count 425-a (e.g., a quantity) of access operations performed on the specific set. The controller can use the counts 425 to determine whether to perform a maintenance operation (such as a wear leveling operation). The controller can compare the count 425 with a threshold (as described with reference to FIGS. 2 and 3) and can initiate a maintenance operation based on that comparison.
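Read literally, Equation 1 and the offset-based update can be transcribed into a few lines of Python, as in the sketch below. The function names, the power-of-two modifier, and the wrap-around handling of the offset counter are assumptions made for illustration, not requirements of the disclosure.

```python
def set_count(global_count, set_counter, offset, modifier):
    """Equation 1: Count = alpha + ((beta - gamma) mod delta)."""
    return global_count + ((set_counter - offset) % modifier)

def raise_global_minimum(global_count, offset, amount, modifier):
    """Raise the global least-significant value by `amount` access operations
    without touching any set-specific counter: the global counter absorbs the
    increase and the offset counter advances by the same amount (mod delta)."""
    return global_count + amount, (offset + amount) % modifier

# usage sketch with illustrative values
modifier = 256                            # e.g., eight-bit set-specific counters
alpha, gamma = 10_000, 7                  # global counter 405, offset counter 410
beta = 42                                 # one set-specific counter 415
print(set_count(alpha, beta, gamma, modifier))  # 10_000 + ((42 - 7) % 256) = 10_035

alpha, gamma = raise_global_minimum(alpha, gamma, 20, modifier)
print(set_count(alpha, beta, gamma, modifier))  # still 10_035; beta was not touched
```

In this sketch, raising the global least-significant value adjusts only the two global values and leaves every set-specific counter untouched, which is the property the counting system 400 is described as providing.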
FIG. 5 shows a flowchart illustrating a method or methods 500 that supports maintenance operations for memory sub-systems in accordance with aspects of the present disclosure. The method 500 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 500 can be performed by counter 150 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other methods are possible. At 505, the processing device can perform an access operation on a memory cell. At 510, the processing device can increment a value of a first counter based on performing the access operation on the memory cell. At 515, the processing device can determine that the incremented value of the first counter satisfies a threshold. At 520, the processing device can increment a value of a second counter based on determining that the incremented value of the first counter satisfies the threshold. At 525, the processing device can perform a maintenance operation on the memory cell based on determining that the incremented value of the first counter satisfies the threshold. In some examples, an apparatus as described herein can perform a method or methods, such as the method 500. The apparatus may include a controller that is operable to cause the apparatus to perform the methods described herein. For example, the controller may cause the apparatus to perform an access operation on a memory cell, increment a value of a first counter based on performing the access operation on the memory cell, determine that the incremented value of the first counter satisfies a threshold, increment a value of a second counter based on determining that the incremented value of the first counter satisfies the threshold, and perform a maintenance operation on the memory cell based on determining that the incremented value of the first counter satisfies the threshold. In other examples, the apparatus can include features, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for performing the features of the methods described herein. FIG. 6 shows a flowchart illustrating a method or methods 600 that supports maintenance operations for memory sub-systems in accordance with aspects of the present disclosure. The method 600 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 600 can be performed by counter 150 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other methods are possible. At 605, the processing device can perform an access operation on a memory cell. At 610, the processing device can increment a value of a first counter based on performing the access operation on the memory cell. At 615, the processing device can determine whether the incremented value of the first counter satisfies a threshold. At 620, the processing device can set the value of the first counter to the incremented value based on determining that the incremented value of the first counter fails to satisfy the threshold. At 625, the processing device can increment a value of a second counter based on determining that the incremented value of the first counter satisfies the threshold. At 630, the processing device can perform a maintenance operation on the memory cell based on determining that the incremented value of the first counter satisfies the threshold. FIG. 7 shows a flowchart illustrating a method or methods 700 that supports maintenance operations for memory sub-systems in accordance with aspects of the present disclosure.
The method 700 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 700 can be performed by counter 150 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other methods are possible. At 705, the processing device can determine that a first quantity of access operations performed on a memory cell satisfies a first threshold. At 710, the processing device can perform a first wear leveling operation on the memory cell based on determining that the first quantity of access operations performed on the memory cell satisfies the first threshold. At 715, the processing device can determine a second threshold. At 720, the processing device can determine that a second quantity of access operations performed on the memory cell after performing the first wear leveling operation satisfies the second threshold. At 725, the processing device can perform a second wear leveling operation on the memory cell based on determining that the second quantity of access operations satisfies the second threshold. The apparatus may include a controller that is operable to cause the apparatus to perform the methods described herein. For example, the controller may cause the apparatus to determine that a first quantity of access operations performed on a memory cell satisfies a first threshold, perform a first wear leveling operation on the memory cell based on determining that the first quantity of access operations performed on the memory cell satisfies the first threshold, determine a second threshold, determine that a second quantity of access operations performed on the memory cell after performing the first wear leveling operation satisfies the second threshold, and perform a second wear leveling operation on the memory cell based on determining that the second quantity of access operations satisfies the second threshold. In other examples, the apparatus can include features, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for performing the features of the methods described herein. FIG. 8 shows a flowchart illustrating a method or methods 800 that supports maintenance operations for memory sub-systems in accordance with aspects of the present disclosure. The operations of method 800 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 800 can be performed by counter 150 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified.
Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other methods are possible. At 805, the processing device can identify a global counter associated with performing a wear-leveling procedure on sets of memory cells, a value of the global counter indicating a baseline quantity of access operations performed on the sets of memory cells. At 810, the processing device can determine a difference between a value of a set-specific counter and a value of an offset counter, the set-specific counter associated with a first set of the sets of memory cells, the offset counter for indicating a global offset value relative to the value of the global counter. At 815, the processing device can identify a remainder of the difference and a parameter. At 820, the processing device can identify a quantity of access operations performed on the first set based on adding the remainder to the value of the global counter. At 825, the processing device can perform a wear-leveling operation on the first set based on the quantity of access operations performed on the first set satisfying a threshold. The apparatus may include a controller that is operable to cause the apparatus to perform the methods described herein. For example, the controller may cause the apparatus to identify a global counter associated with performing a wear-leveling procedure on sets of memory cells in a memory device, a value of the global counter indicating a baseline quantity of access operations performed on the sets of memory cells, determine a difference between a value of a set-specific counter and a value of an offset counter, the set-specific counter associated with a first set of the sets of memory cells, the offset counter for indicating a global offset value relative to the value of the global counter, identify a remainder of the difference and a parameter, identify a quantity of access operations performed on the first set based on adding the remainder to the value of the global counter, and perform a wear-leveling operation on the first set based on the quantity of access operations performed on the first set satisfying a threshold. In other examples, the apparatus can include features, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for performing the features of the methods described herein. FIG. 10 illustrates an example machine of a computer system 900 that supports maintenance operations for memory sub-systems in accordance with examples as disclosed herein. The computer system 900 can include a set of instructions for causing the machine to perform any one or more of the techniques described herein. In some examples, the computer system 900 can correspond to a host system (e.g., the host system 105 described with reference to FIG. 1) that includes, is coupled with, or utilizes a memory sub-system (e.g., the memory sub-system 110 described with reference to FIG. 1) or can be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to the counter 150 described with reference to FIG. 1). In some examples, the machine can be connected (e.g., networked) with other machines in a LAN, an intranet, an extranet, and/or the Internet.
FIG. 10 illustrates an example machine of a computer system 900 that supports maintenance operations for memory sub-systems in accordance with examples as disclosed herein. The computer system 900 can include a set of instructions for causing the machine to perform any one or more of the techniques described herein. In some examples, the computer system 900 can correspond to a host system (e.g., the host system 105 described with reference to FIG. 1) that includes, is coupled with, or utilizes a memory sub-system (e.g., the memory sub-system 110 described with reference to FIG. 1) or can be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to the counter 150 described with reference to FIG. 1). In some examples, the machine can be connected (e.g., networked) with other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment. The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” can also include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The example computer system 900 can include a processing device 905, a main memory 910 (e.g., read-only memory (ROM), flash memory, DRAM such as synchronous DRAM (SDRAM) or RDRAM, etc.), a static memory 915 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system 925, which communicate with each other via a bus 945. Processing device 905 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 905 can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device 905 is configured to execute instructions 935 for performing the operations and steps discussed herein. The computer system 900 can further include a network interface device 920 to communicate over the network 940. The data storage system 925 can include a machine-readable storage medium 930 (also known as a computer-readable medium) on which is stored one or more sets of instructions 935 or software embodying any one or more of the methodologies or functions described herein. The instructions 935 can also reside, completely or at least partially, within the main memory 910 and/or within the processing device 905 during execution thereof by the computer system 900, the main memory 910 and the processing device 905 also constituting machine-readable storage media. The machine-readable storage medium 930, data storage system 925, and/or main memory 910 can correspond to a memory sub-system. In one example, the instructions 935 include instructions to implement functionality corresponding to a counting device 950 (e.g., the counting device 950 described with reference to FIG. 1). While the machine-readable storage medium 930 is shown as a single medium, the term “machine-readable storage medium” can include a single medium or multiple media that store the one or more sets of instructions. The term “machine-readable storage medium” can also include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure.
The term “machine-readable storage medium” can include, but not be limited to, solid-state memories, optical media, and magnetic media. Information and signals described herein can be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that can be referenced throughout the above description can be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Some drawings can illustrate signals as a single signal; however, it will be understood by a person of ordinary skill in the art that the signal can represent a bus of signals, where the bus can have a variety of bit widths. As used herein, the term “virtual ground” refers to a node of an electrical circuit that is held at a voltage of approximately zero volts (0V) but that is not directly coupled with ground. Accordingly, the voltage of a virtual ground can temporarily fluctuate and return to approximately 0V at steady state. A virtual ground can be implemented using various electronic circuit elements, such as a voltage divider consisting of operational amplifiers and resistors. Other implementations are also possible. “Virtual grounding” or “virtually grounded” means connected to approximately 0V. The terms “electronic communication,” “conductive contact,” “connected,” and “coupled” can refer to a relationship between components that supports the flow of signals between the components. Components are considered in electronic communication with (or in conductive contact with or connected with or coupled with) one another if there is any conductive path between the components that can, at any time, support the flow of signals between the components. At any given time, the conductive path between components that are in electronic communication with each other (or in conductive contact with or connected with or coupled with) can be an open circuit or a closed circuit based on the operation of the device that includes the connected components. The conductive path between connected components can be a direct conductive path between the components, or the conductive path between connected components can be an indirect conductive path that can include intermediate components, such as switches, transistors, or other components. In some cases, the flow of signals between the connected components can be interrupted for a time, for example, using one or more intermediate components such as switches or transistors. The term “coupling” refers to a condition of moving from an open-circuit relationship between components, in which signals are not presently capable of being communicated between the components over a conductive path, to a closed-circuit relationship between components, in which signals are capable of being communicated between components over the conductive path. When a component, such as a controller, couples other components together, the component initiates a change that allows signals to flow between the other components over a conductive path that previously did not permit signals to flow. The term “isolated” refers to a relationship between components in which signals are not presently capable of flowing between the components. Components are isolated from each other if there is an open circuit between them.
For example, two components separated by a switch that is positioned between the components are isolated from each other when the switch is open. When a controller isolates two components, the controller effects a change that prevents signals from flowing between the components using a conductive path that previously permitted signals to flow. As used herein, the term “electrode” can refer to an electrical conductor, and in some cases, can be employed as an electrical contact to a memory cell or other component of a memory array. An electrode can include a trace, wire, conductive line, conductive layer, or the like that provides a conductive path between elements or components of a memory array. The devices discussed herein, including a memory array, can be formed on a semiconductor substrate, such as silicon, germanium, silicon-germanium alloy, gallium arsenide, gallium nitride, etc. In some cases, the substrate is a semiconductor wafer. In other cases, the substrate can be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOP), or epitaxial layers of semiconductor materials on another substrate. The conductivity of the substrate, or sub-regions of the substrate, can be controlled through doping using various chemical species including, but not limited to, phosphorous, boron, or arsenic. Doping can be performed during the initial formation or growth of the substrate, by ion-implantation, or by any other doping means. A switching component or a transistor discussed herein can represent a field-effect transistor (FET) and comprise a three-terminal device including a source, drain, and gate. The terminals can be connected to other electronic elements through conductive materials, e.g., metals. The source and drain can be conductive and can comprise a heavily-doped, e.g., degenerate, semiconductor region. The source and drain can be separated by a lightly-doped semiconductor region or channel. If the channel is n-type (i.e., majority carriers are electrons), then the FET can be referred to as an n-type FET. If the channel is p-type (i.e., majority carriers are holes), then the FET can be referred to as a p-type FET. The channel can be capped by an insulating gate oxide. The channel conductivity can be controlled by applying a voltage to the gate. For example, applying a positive voltage or negative voltage to an n-type FET or a p-type FET, respectively, can result in the channel becoming conductive. A transistor can be “on” or “activated” when a voltage greater than or equal to the transistor's threshold voltage is applied to the transistor gate. The transistor can be “off” or “deactivated” when a voltage less than the transistor's threshold voltage is applied to the transistor gate. The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that can be implemented or that are within the scope of the claims. The term “exemplary” used herein means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details to provide an understanding of the described techniques. These techniques, however, can be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form to avoid obscuring the concepts of the described examples.
In the appended figures, similar components or features can have the same reference label. Further, various components of the same type can be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label. The various illustrative blocks and modules described in connection with the disclosure herein can be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor can be a microprocessor, but in the alternative, the processor can be any processor, controller, microcontroller, or state machine. A processor can also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). The functions described herein can be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions can be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions can also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Also, as used herein, including in the claims, “or” as used in a list of items (for example, a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as “based on condition A” can be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.” Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium can be any available medium that can be accessed by a general purpose or special purpose computer. 
By way of example, and not limitation, non-transitory computer-readable media can comprise RAM, ROM, electrically erasable programmable read-only memory (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media. The description herein is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be apparent to those skilled in the art, and the generic principles defined herein can be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein, but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
DETAILED DESCRIPTION Reference will now be made in detail to embodiments of the inventive concept, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth to enable a thorough understanding of the inventive concept. It should be understood, however, that persons having ordinary skill in the art may practice the inventive concept without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments. It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first module could be termed a second module, and, similarly, a second module could be termed a first module, without departing from the scope of the inventive concept. The terminology used in the description of the inventive concept herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the inventive concept. As used in the description of the inventive concept and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The components and features of the drawings are not necessarily drawn to scale. U.S. patent application Ser. No. 15/256,495, filed Sep. 2, 2016, which claims the benefit of U.S. Provisional Patent Application Ser. No. 62/366,622, filed Jul. 26, 2016, both of which are incorporated by reference herein for all purposes, describes a self-discovery process by which Non-Volatile Memory (NVM) devices may perform self-discovery. This process may be extended to Baseboard Management Controllers (BMCs) that may perform self-discovery to get “Chassis Personality” information to complement self-configuring Solid-State Drives (SSDs). The new BMC may perform the self-discovery process during boot-up initialization. By reading “Chassis Personality” information from a known location of an Electrically Erasable Programmable Read Only Memory (EEPROM)—such as Vital Product Data (VPD) on the mid-plane—chassis-specific data may be obtained, and the BMC may respond appropriately. The BMC may discover, for example, whether it is in an NVM Express (NVMe) or NVMe over Fabric (NVMeoF) chassis. If the BMC is in an NVMeoF chassis, the BMC may enable appropriate NVMeoF functionalities such as Discovery Services, robust error reporting, and management capabilities, as well as multi-pathing BMCs in high availability configurations. If the BMC self-discovery reveals that it is in an NVMe chassis, then the BMC may operate as a conventional BMC: i.e., no NVMeoF support.
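A minimal sketch of this self-discovery branch, assuming a one-byte “Chassis Personality” code at a known VPD offset; the offset, the encoding, and the helper names are illustrative assumptions, not taken from the disclosure.

```c
#include <stdint.h>

/* Chassis personality codes; the encoding is an assumption for illustration. */
enum chassis_type { CHASSIS_NVME = 0, CHASSIS_NVMEOF = 1 };

/* Hypothetical platform hooks; real firmware would use its own EEPROM/I2C API. */
extern uint8_t vpd_read_byte(uint16_t offset);  /* read VPD on the mid-plane   */
extern void enable_nvmeof_services(void);       /* Discovery Services, etc.    */
extern void run_as_conventional_bmc(void);      /* plain NVMe behavior         */

#define VPD_CHASSIS_PERSONALITY_OFFSET 0x10u    /* assumed "known location"    */

/* Self-discovery during boot-up initialization, sketched. */
void bmc_self_discover(void)
{
    uint8_t personality = vpd_read_byte(VPD_CHASSIS_PERSONALITY_OFFSET);

    if (personality == CHASSIS_NVMEOF)
        enable_nvmeof_services();   /* NVMeoF: discovery, error reporting,
                                     * management, multi-pathing support      */
    else
        run_as_conventional_bmc();  /* NVMe: no NVMeoF support                */
}
```

Reading a single byte from a known location is what lets the BMC commit to a personality without waiting on a host processor or operating system.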
In NVMe mode, the drive discovery may be done through the in-band PCI Express initialization/link training process. Thus, the new BMC may be used in both NVMe-based and NVMeoF-based systems. In a large NVMeoF storage system, a BMC that may perform self-discovery may shorten the enumeration/discovery process significantly because: (1) all Network-attached SSD (NASSD) devices present in the system may perform self-discovery (as disclosed in U.S. patent application Ser. No. 15/256,495, filed Sep. 2, 2016, which claims the benefit of U.S. Provisional Patent Application Ser. No. 62/366,622, filed Jul. 26, 2016, both of which are incorporated by reference herein for all purposes) independently by reading from a known location in the system, much quicker than a host CPU may; and (2) the new BMC may perform self-discovery by reading from a known location for BMCs only, and be ready to behave appropriately in a much shorter period of time than that required by having a remote host/local processor ping/discover each device in the chassis, including the BMC. Newer storage devices (or other devices, such as Network Interface Cards (NICs)) may use transport protocols such as NVMeoF to communicate with a chassis (also termed a host machine), and may support multiple transport protocols. When such devices are installed in a chassis, these devices may perform self-discovery during boot up and initialization. These devices may read VPD from a known location in an EEPROM: U.S. patent application Ser. No. 15/256,495, filed Sep. 2, 2016, which claims the benefit of U.S. Provisional Patent Application Ser. No. 62/366,622, filed Jul. 26, 2016, both of which are incorporated by reference herein for all purposes, describes such a self-discovery process. Once self-discovery has started, these devices may then discover that they are installed in an NVMeoF chassis. These devices may then configure themselves to enable, for example, the Ethernet ports and disable other unnecessary/unused/unsupported transport protocol support. In this way, the operating system and host processor overhead related to multiple transport protocol discovery and management may be avoided. In a large storage system, using such self-configuring devices may shorten the enumeration process significantly because all devices may perform self-discovery independently by reading from known location(s) in the system. The host processors and operating systems are not required to be present. A BMC is a low-power controller embedded in servers or switches. A BMC may connect to sensors to read environmental conditions and to control devices. A BMC has all the connections to all NVMeoF devices via the control plane/path. Therefore, it is advantageous to use a BMC as a proxy for providing discovery services to the host or initiator. Due to its interaction with many devices, a BMC may serve as a Discovery Controller to provide a list of NVM subsystems that are accessible to the host. The BMC presented herein may have firmware to perform discovery of eSSDs, Network-Attached Solid State Drives, or other devices inserted into the system. Network-Attached SSDs may include Ethernet SSDs, InfiniBand SSDs, Fibre-Channel SSDs, or SSDs that offer a combination of these transport protocols (Ethernet, InfiniBand, and Fibre-Channel). Ethernet, InfiniBand, and Fibre-Channel transport protocols are merely exemplary, and embodiments of the inventive concept may include Network-Attached SSDs that support other transport protocols.
The BMC may directly access each device through a private bus and a Complex Programmable Logic Device (CPLD). The BMC may also read a known non-volatile memory location on the mid-plane where each device reports its information. This method may shorten the enumeration process. The BMC may store each device's information as a Discovery Log Page in its non-volatile memory. The BMC may communicate with devices using a control plane. The control plane, the data plane, and the management plane are the three basic components of telecommunications products. The control plane is the part of a network that carries signaling traffic and is responsible for routing. Functions of the control plane include system configuration and management. The control plane and management plane serve the data plane, which bears the traffic that the network exists to carry. The management plane, which carries administrative traffic, is considered a subset of the control plane. A new Intelligent Platform Management Interface (IPMI) command (System Discovery) may be supported by the BMC's firmware for local or remote hosts to retrieve this Discovery Log Page. Remote hosts may connect to the BMC through its Local Area Network (LAN) interface if they are in the same network. Remote hosts may also connect to the BMC's local host. Each entry in the Discovery Log Page may specify information necessary for the host to connect to an NVM subsystem via an NVMe Transport. The NVMeoF standard specifies that Discovery service may be performed via Ethernet links or via the data plane. In contrast, embodiments of the inventive concept use the BMC as a proxy, which enables discovery services to be performed via the control plane. In networking, the control plane is typically limited to only a system administrator and, in terms of security, is better protected than the data plane, which may be accessed by many people/nodes. In addition, system administrators may issue one command to a BMC to get all discovery log files from all NVMeoF devices instead of issuing one command per device, as specified by the standard.
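One way the Discovery Log Page service described above might look in BMC firmware is sketched below. The entry layout is heavily simplified relative to the Discovery Log Page defined by the NVMe over Fabrics specification, and the structure names, function names, and slot count are assumptions.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define MAX_DEVICES 24

/* Simplified entry; a real Discovery Log Page follows the layout defined by
 * the NVMe over Fabrics specification and carries more fields than this. */
struct log_entry {
    char     transport_addr[64];  /* e.g., the device's IP address */
    uint16_t port_id;
};

static struct log_entry g_log[MAX_DEVICES];
static uint8_t g_log_count;

/* Hypothetical hook: read one device's report from the known location on the
 * mid-plane (or over the private bus through the CPLD). */
extern int read_device_entry(int slot, struct log_entry *out);

/* Build the Discovery Log Page once, during initialization. */
void build_discovery_log(void)
{
    g_log_count = 0;
    for (int slot = 0; slot < MAX_DEVICES; slot++)
        if (read_device_entry(slot, &g_log[g_log_count]) == 0)
            g_log_count++;
}

/* Handler for the new "System Discovery" IPMI command: one request over the
 * control plane returns entries for every device in the BMC's domain. */
int handle_system_discovery(uint8_t *resp, size_t resp_len)
{
    size_t need = (size_t)g_log_count * sizeof(struct log_entry);

    if (resp_len < need)
        return -1;                 /* response buffer too small */
    memcpy(resp, g_log, need);
    return (int)need;              /* bytes written             */
}
```

The point of the single handler is that one control-plane request returns every entry, rather than one discovery command per device.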
FIG. 1 shows a chassis with a self-configuring Baseboard Management Controller (BMC) installed therein that may perform discovery of Non-Volatile Memory (NVM) devices, according to an embodiment of the inventive concept. In FIG. 1, chassis 105 is shown as a tower server, but chassis 105 may just as easily be a rack server. Chassis 105 may include processor 110, memory 115, storage device 120, and BMC 125. Processor 110 may be any variety of processor: for example, an Intel Xeon, Celeron, Itanium, or Atom processor, an AMD Opteron processor, an ARM processor, etc. While FIG. 1 shows a single processor, chassis 105 may include any number of processors. Memory 115 may be any variety of memory, such as flash memory, Static Random Access Memory (SRAM), Persistent Random Access Memory, Ferroelectric Random Access Memory (FRAM), or Non-Volatile Random Access Memory (NVRAM), such as Magnetoresistive Random Access Memory (MRAM), etc., but is typically DRAM. Memory 115 may also be any desired combination of different memory types. Storage device 120 may be any variety of storage device. Examples of such devices may include Solid State Drives (SSDs), but other storage forms, such as hard disk drives or other long-term storage devices, are also viable. BMC 125, as described above, may operate as a conventional BMC, but may also be self-configuring based on the configuration of chassis 105. For example, chassis 105 may be an NVMe chassis or an NVMeoF chassis. With chassis 105 as an NVMe chassis, BMC 125 may operate as a conventional NVMe BMC after self-configuration. With chassis 105 as an NVMeoF chassis, BMC 125 may also operate as a conventional BMC, but it may also perform discovery of other devices within chassis 105, such as storage device 120, Network Interface Cards (NICs), and any other devices that may, like BMC 125, be subject to discovery. While BMC 125 is described as being able to perform discovery of other devices in chassis 105, BMC 125 is only one possible proxy through which processor 110 may perform the discovery. Other possible proxies may include a Redundant Array of Independent Disks (RAID) controller, another processor (typically different from processor 110, which would be involved in performing start-up operations), or even a software proxy. For the remainder of this document, any reference to BMC 125 is intended to also refer to these other proxy devices, as well as any other devices that may act as a proxy for processor 110. FIG. 2 shows additional details of the chassis of FIG. 1. Referring to FIG. 2, typically, chassis 105 includes one or more processors 110, which may include memory controller 205 and clock 210, which may be used to coordinate the operations of the components of chassis 105. Processors 110 may also be coupled to memory 115, which may include random access memory (RAM), read-only memory (ROM), or other state preserving media, as examples. Processors 110 may also be coupled to storage devices 120, and to network connector 215, which may be, for example, an Ethernet connector or a wireless connector. Processors 110 may also be connected to a bus 220, to which may be attached user interface 225 and input/output interface ports that may be managed using input/output engine 230, among other components. FIG. 3 shows BMC 125 of FIG. 1 communicating with devices on a mid-plane of chassis 105 of FIG. 1. In FIG. 3, BMC 125 and Complex Programmable Logic Device (CPLD) 305 may be situated on motherboard 310 within chassis 105 of FIG. 1. Chassis 105 of FIG. 1 may also include midplane 315. Midplane 315 may include other components, such as various Network-Attached SSDs 320, 325, and 330, which are examples of storage device 120 of FIG. 1. Network-Attached SSDs 320, 325, and 330 may support using any of a number of different transport protocols, such as Ethernet, Fibre Channel, InfiniBand, or Non-Volatile Memory Express (NVMe), to name a few possibilities, but in some embodiments of the inventive concept Network-Attached SSDs 320, 325, and/or 330 may be limited to a subset of these transport protocols (possibly one: for example, an Ethernet SSD). While FIG. 3 shows three Network-Attached SSDs 320, 325, and 330, embodiments of the inventive concept may support any desired number of devices. In addition, while FIG. 3 shows only Network-Attached SSDs 320, 325, and 330, other devices, such as Ethernet SSDs or NICs, may be substituted for or included in addition to Network-Attached SSDs 320, 325, and 330. In the remainder of this document, any reference to Network-Attached SSDs 320, 325, and 330 is intended to encompass any alternative device that may be subject to discovery as an NVMeoF device and may be substituted for Network-Attached SSDs 320, 325, and 330. BMC 125 may communicate with Network-Attached SSDs 320, 325, and 330 over I2C bus 335 and SMBus 340. Network-Attached SSDs 320, 325, and 330 may also communicate with EEPROM 345 and NVM 350. NVM 350 may act as memory 115 of FIG. 1; EEPROM 345 may store information for use by various devices in chassis 105 of FIG. 1.
For example, EEPROM 345 may store VPD 355. VPD 355 may be used by Network-Attached SSDs 320, 325, and 330, and by BMC 125, to store information pertinent to those devices. More particularly, EEPROM 345 may store separate VPD 355 for each such device. VPD 355 has several uses. In some embodiments of the inventive concept, VPD 355 may be used to store pertinent information for each device, which may be used in self-configuration. Thus, VPD 355 may store information used by Network-Attached SSDs 320, 325, and 330 to self-configure, as described in U.S. patent application Ser. No. 15/256,495, filed Sep. 2, 2016, which claims the benefit of U.S. Provisional Patent Application Ser. No. 62/366,622, filed Jul. 26, 2016, both of which are incorporated by reference herein for all purposes. But in other embodiments of the inventive concept, VPD 355 may also store information used by BMC 125 to perform its own self-configuration, as described below. In addition, in yet other embodiments of the inventive concept, Network-Attached SSDs 320, 325, and 330 may write information to VPD 355, which BMC 125 may then read. For example, Network-Attached SSDs 320, 325, and 330 may write their IP addresses to VPD 355, which BMC 125 may then read from VPD 355. Then, when host 110 of FIG. 1 queries BMC 125 for information, BMC 125 may provide the configuration information for Network-Attached SSDs 320, 325, and 330.
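The write-then-read exchange through VPD 355 might look like the following sketch; the per-slot layout, the offsets, and the EEPROM accessor names are assumptions for illustration.

```c
#include <stdint.h>

/* Hypothetical VPD layout: one fixed-size record per device slot, at a known
 * EEPROM location. Offsets, sizes, and function names are assumptions. */
#define VPD_SLOT_SIZE  32u
#define VPD_IP_OFFSET   0u
#define IP_STR_LEN     16u

extern void eeprom_write(uint16_t addr, const void *buf, uint16_t len);
extern void eeprom_read(uint16_t addr, void *buf, uint16_t len);

/* A Network-Attached SSD publishes its IP address to its VPD record... */
void device_publish_ip(uint8_t slot, const char ip[IP_STR_LEN])
{
    eeprom_write((uint16_t)(slot * VPD_SLOT_SIZE + VPD_IP_OFFSET), ip, IP_STR_LEN);
}

/* ...and the BMC later reads it back when answering the host's queries. */
void bmc_read_device_ip(uint8_t slot, char ip[IP_STR_LEN])
{
    eeprom_read((uint16_t)(slot * VPD_SLOT_SIZE + VPD_IP_OFFSET), ip, IP_STR_LEN);
}
```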
While FIG. 3 shows EEPROM 345 on midplane 315 and NVM 350 on motherboard 310, embodiments of the inventive concept may support these components (and other components as well) being placed anywhere desired. For example, in some embodiments of the inventive concept, EEPROM 345 and NVM 350 may both be located on midplane 315, in other embodiments of the inventive concept they may both be located on motherboard 310, and in yet other embodiments of the inventive concept NVM 350 may be on midplane 315 and EEPROM 345 on motherboard 310. Other embodiments of the inventive concept may place such components in yet other locations: for example, on another board within chassis 105 of FIG. 1, or possibly in another chassis entirely. FIG. 4 shows details of BMC 125 of FIG. 1. In FIG. 4, BMC 125 is shown divided into two portions 405 and 410. Portion 405 relates to BMC 125 performing self-configuration in some embodiments of the inventive concept; portion 410 relates to BMC 125 acting as a proxy for host 110 of FIG. 1 in other embodiments of the inventive concept. Note that embodiments of the inventive concept may include one or both of portions 405 and 410, as desired. To perform self-configuration, BMC 125 may include access logic 415, built-in self-configuration logic 420, and error reporting logic 425. Access logic 415 may access information about how BMC 125 is to configure itself. Access logic 415 is described further with reference to FIG. 5 below. Built-in self-configuration logic 420 may configure BMC 125 to use the appropriate driver based on the configuration of chassis 105 of FIG. 1. Built-in self-configuration logic 420 is described further with reference to FIG. 8 below. Error reporting logic 425 may report an error to host 110 of FIG. 1 when there is a problem. Examples of problems that BMC 125 might report to host 110 of FIG. 1 may include when chassis 105 of FIG. 1 is a High Availability chassis but BMC 125 may not access or load a High Availability driver, or when BMC 125 may not communicate with its pairing partner as a High Availability system. To act as a discovery proxy for host 110 of FIG. 1, BMC 125 may include device communication logic 430, Log Page creation logic 435, reception logic 440, and transmission logic 445. Device communication logic 430 may enable BMC 125 to communicate with devices, such as Network-Attached SSDs 320, 325, and 330 of FIG. 3, to learn about their configuration. Device communication logic 430 is described further with reference to FIG. 10 below. Log Page creation logic 435 may take the information received from Network-Attached SSDs 320, 325, and 330 of FIG. 3 and create a Discovery Log Page that may be reported to host 110 of FIG. 1 at an appropriate time. Log Page creation logic 435 may either simply collate the information received from Network-Attached SSDs 320, 325, and 330 of FIG. 3, or it may eliminate repeated information from Network-Attached SSDs 320, 325, and 330 of FIG. 3 by assembling the Log Page. The structure of a Log Page is described in the NVM Express over Fabrics specification, revision 1.0, dated Jun. 5, 2016, which is hereby incorporated by reference for all purposes. In some embodiments of the inventive concept, BMC 125 may have its own storage: for example, in NVM 350 of FIG. 3 or in EEPROM 345 of FIG. 3, among other possibilities. Network-Attached SSDs 320, 325, and 330 of FIG. 3 may write their configuration information directly into a Log Page maintained in this storage for BMC 125. Reception logic 440 and transmission logic 445 enable communication with host 110 of FIG. 1. For example, reception logic 440 may receive a query from host 110 of FIG. 1 regarding Network-Attached SSDs 320, 325, and 330 of FIG. 3; transmission logic 445 may send a response back to host 110 of FIG. 1 responsive to such a query. Note that reception logic 440 and transmission logic 445 are not required to be dedicated to the purposes described above: they may be used for other purposes as well. For example, as described below with reference to FIG. 10, device communication logic 430 may send messages to Network-Attached SSDs 320, 325, and 330 of FIG. 3: these messages may be sent using transmission logic 445 (and responses to these messages may be received using reception logic 440). FIG. 5 shows details of access logic 415 of FIG. 4. In FIG. 5, access logic 415 may include VPD reading logic 505 and pin reading logic 510. VPD reading logic 505 may read information from VPD 355 of FIG. 3, which may be a VPD specific to BMC 125. The information in VPD 355 of FIG. 3 may include the configuration of chassis 105 of FIG. 1. Pin reading logic 510, on the other hand, may determine the configuration of chassis 105 of FIG. 1 by reading one or more signals on one or more pins of BMC 125 of FIG. 1. These pins of BMC 125 of FIG. 1 may be dedicated to specifying the configuration of chassis 105 of FIG. 1. FIG. 6 shows an example of BMC 125 of FIG. 1 with pins for signaling. In FIG. 6, BMC 125 is shown as including a variety of pins. Pins 605 and 610 may be used to specify the configuration of chassis 105 of FIG. 1: based on the values signaled on these pins, BMC 125 may determine the configuration of chassis 105 of FIG. 1. Pins 605 and 610 may be general purpose input/output (GPIO) pins, among other possibilities. Returning to FIG. 5, pin reading logic 510 may use the information read from pins 605 and 610 of FIG. 6 to determine the configuration of chassis 105 of FIG. 1 and load the appropriate driver. For example, as described below with reference to FIG. 8, there may be three different configurations of chassis 105 of FIG. 1: NVMe, NVMeoF, and High Availability. Choosing among three different possibilities may require two bits, which could require signals to be sent on two pins. If the two pins specify the value 00, that combination may specify that chassis 105 of FIG. 1 is an NVMe chassis. If the two pins specify the value 01, that combination may specify that chassis 105 of FIG. 1 is an NVMeoF chassis.
And if the two pins specify the value10, that combination may specify that chassis105ofFIG.1is a High Availability chassis. Alternatively, three possibilities could be managed by a single pin. For example, a 0 value could specify an NVMe chassis, a 1 value could specify an NVMeoF chassis, and an oscillation between 0 and 1 could specify a High Availability chassis. But if there are more than three combinations, it is likely that more than one pin would be needed to specify the chassis configuration. While the above example describes three possibilities—NVMe, NVMeoF, and High Availability—in other embodiments of the inventive concept there may be four driver configurations—NVMe, NVMeoF, NVMe High Availability, and NVMeoF High Availability. In such an embodiment of the inventive concept, for example, a high value on pin605may indicate that chassis105ofFIG.1is a High Availability chassis and a low value on pin605may indicate that chassis105ofFIG.1is not a High Availability chassis, whereas a high value on pin610may indicate that chassis105ofFIG.1uses NVMeoF and a low value on pin610may indicate that chassis105ofFIG.1uses NVMe. And in yet other embodiments of the inventive concept there may be even more different driver types. Embodiments of the inventive concept are may encompass any number of driver types as desired. InFIG.5, VPD reading logic505and pin reading logic510represent alternative ways for BMC125ofFIG.1to determine the configuration of chassis105ofFIG.1. Thus, access logic415might include one or the other, and not necessarily both. However, embodiments of the inventive concept could include both VPD reading logic505and pin reading logic510, to support BMC125ofFIG.1being able to determine the configuration of chassis105ofFIG.1in different ways. High Availability chassis have now been mentioned a couple of times.FIG.7shows chassis105ofFIG.1in a High Availability configuration. Processor110and BMC125may be paired with another processor705and another BMC710. In some embodiments of the inventive concept, processor705may be in chassis105ofFIG.1, and in other embodiments of the inventive concept processor705may be in a different chassis. Processor110may maintain communication with processor705, and BMC125may maintain communication with BMC710. This communication may include a heartbeat: if one of BMC125and BMC710do not respond, then the other BMC knows that there is an error. Pairing partners may communicate, for example, over Peripheral Component Interconnect Express (PCIe) or Ethernet, among other possibilities. If a pairing partner fails—for example, one of the chassis loses power—the remaining processor may enable the take-over path, permitting the remaining BMC to establish communication and cross the domain. Since the BMC in the failed chassis may run on standby power, the surviving processor may talk to the BMC of the failed chassis. The surviving processor may try to reset the failed processor, in the hopes that the failed processor may be restarted. If the failed processor may not be reset, the surviving processor may send an alert or interrupt to the host that oversees the failed chassis. A third party software or agent may then elect an available working node to become the new pairing partner of the surviving node. Because of the need for heartbeat communication and for the surviving node to take over for the failed node, the driver needed for a High Availability chassis is different from the driver used in a non-High Availability chassis. 
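The two-pin encoding described above can be decoded in a few lines; the pin numbers (borrowed from reference numerals 605 and 610), the polarity, and the GPIO accessor are assumptions for illustration.

```c
#include <stdbool.h>

enum chassis_cfg { CFG_NVME, CFG_NVMEOF, CFG_NVME_HA, CFG_NVMEOF_HA };

/* Hypothetical GPIO accessor; a real BMC would go through its GPIO driver. */
extern bool gpio_read(int pin);

#define PIN_HA     605   /* high: High Availability chassis (assumed)      */
#define PIN_NVMEOF 610   /* high: NVMeoF chassis; low: NVMe (assumed)      */

/* Pin reading logic 510, sketched for the four-configuration encoding. */
enum chassis_cfg read_chassis_cfg(void)
{
    bool ha     = gpio_read(PIN_HA);
    bool nvmeof = gpio_read(PIN_NVMEOF);

    if (ha)
        return nvmeof ? CFG_NVMEOF_HA : CFG_NVME_HA;
    return nvmeof ? CFG_NVMEOF : CFG_NVME;
}
```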
In FIG. 5, VPD reading logic 505 and pin reading logic 510 represent alternative ways for BMC 125 of FIG. 1 to determine the configuration of chassis 105 of FIG. 1. Thus, access logic 415 might include one or the other, and not necessarily both. However, embodiments of the inventive concept could include both VPD reading logic 505 and pin reading logic 510, to support BMC 125 of FIG. 1 being able to determine the configuration of chassis 105 of FIG. 1 in different ways. High Availability chassis have now been mentioned a couple of times. FIG. 7 shows chassis 105 of FIG. 1 in a High Availability configuration. Processor 110 and BMC 125 may be paired with another processor 705 and another BMC 710. In some embodiments of the inventive concept, processor 705 may be in chassis 105 of FIG. 1, and in other embodiments of the inventive concept processor 705 may be in a different chassis. Processor 110 may maintain communication with processor 705, and BMC 125 may maintain communication with BMC 710. This communication may include a heartbeat: if one of BMC 125 and BMC 710 does not respond, then the other BMC knows that there is an error. Pairing partners may communicate, for example, over Peripheral Component Interconnect Express (PCIe) or Ethernet, among other possibilities. If a pairing partner fails—for example, one of the chassis loses power—the remaining processor may enable the take-over path, permitting the remaining BMC to establish communication and cross the domain. Since the BMC in the failed chassis may run on standby power, the surviving processor may talk to the BMC of the failed chassis. The surviving processor may try to reset the failed processor, in the hopes that the failed processor may be restarted. If the failed processor may not be reset, the surviving processor may send an alert or interrupt to the host that oversees the failed chassis. A third-party software or agent may then elect an available working node to become the new pairing partner of the surviving node. Because of the need for heartbeat communication and for the surviving node to take over for the failed node, the driver needed for a High Availability chassis is different from the driver used in a non-High Availability chassis. Thus, BMC 125 operates differently in a High Availability chassis than in a non-High Availability chassis. Until the High Availability driver is loaded into BMC 125, it might happen that BMC 125 may not see its pairing partner. Thus, in some embodiments of the inventive concept, the High Availability driver should be loaded even though BMC 125 might not yet be able to communicate with its pairing partner, and checking for the pairing partner should occur after the High Availability driver is loaded. FIG. 8 shows built-in self-configuration logic 420 of FIG. 4. As described above, built-in self-configuration logic 420 may take the information determined by access logic 415 of FIG. 4 and configure BMC 125 of FIG. 1 accordingly. Built-in self-configuration logic 420 may include driver downloader 805 and driver loader 810. Driver downloader 805 may download an appropriate driver, such as NVMe driver 815, NVMeoF driver 820, or High Availability driver 825, from a driver source. Note that the driver source might be within the firmware of BMC 125 of FIG. 1, in which case the driver does not need to be “downloaded” at all, but rather just read from the firmware. Once downloaded or otherwise located, driver loader 810 may then load the selected driver into BMC 125. FIG. 9 shows various sources for the drivers of FIG. 8. In FIG. 9, chassis 105 may include EEPROM 345, which may be a driver source. In such embodiments of the inventive concept, the appropriate driver may be stored in EEPROM 345 and read from there as needed. Chassis 105 is also shown as connected to network 905. Network 905 may permit communication between chassis 105 and machines 910 and 915. Machine 910 may be a machine on a Local Area Network (LAN), whereas machine 915 may be a machine on a global network, such as the Internet. Regardless of what source exists for the selected driver, however, driver downloader 805 of FIG. 8 may download (or read) the appropriate driver from the driver source, enabling driver loader 810 of FIG. 8 to then load the driver into BMC 125 of FIG. 1.
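Continuing the sketch, driver selection and loading might look like this; the driver names and hook signatures are assumptions standing in for driver downloader 805, driver loader 810, and error reporting logic 425.

```c
enum chassis_cfg { CFG_NVME, CFG_NVMEOF, CFG_NVME_HA, CFG_NVMEOF_HA };

/* Hypothetical hooks; a "download" may equally be a read from BMC firmware,
 * from EEPROM 345, or from a machine on the LAN or the Internet. */
extern int  driver_download(const char *name);
extern int  driver_load(const char *name);
extern void report_error(const char *msg);

/* Built-in self-configuration, sketched: map the discovered configuration to
 * a driver, then download (or read) and load it; report an error on failure. */
void self_configure(enum chassis_cfg cfg)
{
    const char *drv;

    switch (cfg) {
    case CFG_NVMEOF:    drv = "nvmeof_driver";    break;
    case CFG_NVME_HA:   drv = "nvme_ha_driver";   break;
    case CFG_NVMEOF_HA: drv = "nvmeof_ha_driver"; break;
    default:            drv = "nvme_driver";      break;
    }

    if (driver_download(drv) != 0 || driver_load(drv) != 0)
        report_error("selected driver not available");
}
```

Note that, consistent with the text above, a High Availability driver would be loaded before the pairing-partner check is attempted.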
FIG. 10 shows device communication logic 430 of FIG. 4. In FIG. 10, device communication logic 430 may include read logic 1005 and polling logic 1010. BMC 125 of FIG. 1 may use read logic 1005 to read information, such as the configuration of one of Network-Attached SSDs 320, 325, and 330 of FIG. 3, from VPD 355 of FIG. 3. BMC 125 may also use read logic 1005 to read new information from VPD 355, if Network-Attached SSDs 320, 325, and 330 send a message to BMC 125 of FIG. 1 indicating that new information is available. In contrast, BMC 125 of FIG. 1 may use polling logic 1010 to poll Network-Attached SSDs 320, 325, and 330 of FIG. 3 periodically. In embodiments of the inventive concept where Network-Attached SSDs 320, 325, and 330 of FIG. 3 do not notify BMC 125 of FIG. 1 about changes in their configuration, BMC 125 of FIG. 1 may use polling logic 1010 to query Network-Attached SSDs 320, 325, and 330 about their current information, and whether any information has changed. Network-Attached SSDs 320, 325, and 330 may then reply, indicating whether their configurations have changed and, if so, how they have changed. FIG. 11 shows host 110 of FIG. 1 requesting a Discovery Log Page from BMC 125 of FIG. 1. In FIG. 11, host 110 may send query 1105 to BMC 125. Typically, query 1105 is sent when host 110 is ready to receive the configuration information of BMC 125, which may be some interval of time after BMC 125 has collected and assembled the configuration information for Network-Attached SSDs 320, 325, and 330. BMC 125 may then respond with response 1110, which may include Log Page 1115. In this manner, BMC 125 may provide host 110 with information about all the devices installed in chassis 105, or at least all the devices in the domain of BMC 125. For example, a single chassis 105 of FIG. 1 might have two motherboards (either two half-width motherboards or two stacked full-width motherboards, for example), each with its own BMC and attached devices. In such a scenario, each BMC is responsible for collecting the information about the devices in its domain, but not for collecting the information about the devices in the other BMC's domain, even though they are all within the same chassis. Embodiments of the inventive concept have a technical advantage over conventional systems in that they may expedite the process of starting the machine. In conventional systems, the host must query each device in turn for its configuration information, but it may not do so until after it has done a number of other start-up operations. The BMC, in contrast, may start up much more quickly, and may act as a proxy for the host, querying the various devices for their configuration (while the host is busy performing other start-up procedures). Then, when the host is ready, the host may query the BMC for the configuration information, and may learn about all attached devices much more quickly. In addition, as compared with conventional Discovery Services performed using the data plane, performing discovery services via BMC 125 on the control plane is more secure and does not consume any bandwidth on the data plane. Another technical advantage that embodiments of the inventive concept have over conventional systems is that the host only needs to issue one command to the BMC to perform discovery of all devices present in the chassis. For example, if the chassis includes 24 devices, the host may issue a “discover all devices” command to the BMC: the BMC may discover the 24 devices. This approach avoids the host issuing 24 discovery commands to the 24 devices, as in the conventional system. FIGS. 12A-12D show a flowchart of an example procedure for BMC 125 of FIG. 1 to self-configure, according to an embodiment of the inventive concept. In FIG. 12A, at block 1203, access logic 415 of FIG. 4 may determine a configuration of chassis 105 of FIG. 1: NVMe, NVMeoF, or High Availability. At block 1206, BMC 125 of FIG. 1 may determine if chassis 105 of FIG. 1 is a High Availability chassis (which may include multiple flavors, such as NVMe or NVMeoF). If chassis 105 of FIG. 1 is not a High Availability chassis, then processing may continue at block 1236 (FIG. 12C). Continuing with FIG. 12A, if chassis 105 of FIG. 1 is a High Availability chassis, then at block 1209 built-in self-configuration logic 420 of FIG. 4 may select High Availability driver 825 of FIG. 8. At block 1212, built-in self-configuration logic 420 of FIG. 4 may determine if High Availability driver 825 of FIG. 8 is available. If not, then at block 1215, error reporting logic 425 of FIG. 4 may report an error. If High Availability driver 825 of FIG. 8 is available, then at block 1218 (FIG. 12B) driver downloader 805 of FIG. 8 may download High Availability driver 825 of FIG. 8, and at block 1221 driver loader 810 of FIG. 8 may load High Availability driver 825 of FIG. 8. At block 1224, BMC 125 of FIG. 1 may attempt to communicate with its pairing partner (BMC 710 of FIG. 7). At block 1227, BMC 125 of FIG. 1 may determine if its pairing partner is available. If the pairing partner of BMC 125 of FIG. 1 is available, then at block 1230, BMC 125 of FIG. 1 may determine whether the devices installed in chassis 105 of FIG. 1 are dual-path devices.
If BMC 125 of FIG. 1 may not determine that its pairing partner is available (block 1227) or the devices installed in chassis 105 of FIG. 1 are not dual-path devices (block 1230), then at block 1233 BMC 125 of FIG. 1 may report that it is not operating as a High Availability device. Regardless of whether chassis 105 of FIG. 1 is operating as a High Availability device (in blocks 1227, 1230, and 1233) or not (in block 1206), at block 1236, BMC 125 of FIG. 1 may determine if the configuration of chassis 105 of FIG. 1 is an NVMeoF chassis. If so, then at block 1239, BMC 125 of FIG. 1 may select NVMeoF driver 820 of FIG. 8, at block 1242 driver downloader 805 of FIG. 8 may download the NVMeoF driver 820 of FIG. 8, and at block 1245 driver loader 810 of FIG. 8 may load the NVMeoF driver 820 of FIG. 8. Additionally, at block 1248, BMC 125 of FIG. 1 may collect information about other devices installed in chassis 105 of FIG. 1, thereby acting as a proxy for host 110 of FIG. 1. If chassis 105 of FIG. 1 is not an NVMeoF chassis, then at block 1251 (FIG. 12D), BMC 125 of FIG. 1 may determine if the configuration of chassis 105 of FIG. 1 is an NVMe chassis. If so, then at block 1254, BMC 125 of FIG. 1 may select NVMe driver 815 of FIG. 8, at block 1257 driver downloader 805 of FIG. 8 may download NVMe driver 815 of FIG. 8, and at block 1260 driver loader 810 of FIG. 8 may load NVMe driver 815 of FIG. 8, after which processing ends. If chassis 105 of FIG. 1 is not an NVMe chassis at block 1251, then control may return to block 1215 to report an error. FIGS. 12A-12D show an example embodiment of the inventive concept. In other embodiments of the inventive concept, there may be more than two chassis configurations. And in yet other embodiments of the inventive concept, BMC 125 of FIG. 1 may use an NVMe driver as a default when no other driver may be loaded. Other variations on FIGS. 12A-12D are also possible. FIG. 13 shows a flowchart of an example procedure for access logic 415 of FIG. 4 to determine the configuration of the chassis of FIG. 1. In FIG. 13, at block 1305, VPD reading logic 505 of FIG. 5 may read the configuration of chassis 105 of FIG. 1 from VPD 355 of FIG. 3. Alternatively, at block 1310, pin reading logic 510 may read the configuration of chassis 105 of FIG. 1 from signals sent on one or more pins 605 and 610 of FIG. 6 on BMC 125 of FIG. 1. FIG. 14 shows a flowchart of an example procedure for BMC 125 of FIG. 1 to perform discovery of NVM devices in chassis 105 of FIG. 1, according to an embodiment of the inventive concept. In FIG. 14, at block 1405, BMC 125 may receive data about the configuration of Network-Attached SSDs 320, 325, and 330 of FIG. 3. Block 1405 may be repeated as often as necessary for all devices, as shown by dashed arrow 1410. At block 1415, BMC 125 of FIG. 1 may compile a record (such as Log Page 1115 of FIG. 11) from the information received from Network-Attached SSDs 320, 325, and 330 of FIG. 3. At block 1420, host 110 of FIG. 1 may send, and BMC 125 of FIG. 1 may receive, a request for the configurations of Network-Attached SSDs 320, 325, and 330 of FIG. 3. At block 1425, BMC 125 of FIG. 1 may send to host 110 of FIG. 1 the record of the device configurations. FIG. 15 shows a flowchart of an example procedure for device communication logic 430 of FIG. 4 to obtain discovery information about NVM devices 320, 325, and 330 of FIG. 3 in chassis 105 of FIG. 1. In FIG. 15, at block 1505, read logic 1005 of FIG. 10 may read configuration data about a device from VPD 355 of FIG. 3. Alternatively, at block 1510, polling logic 1010 of FIG. 10 may poll the device for its configuration data, and at block 1515 BMC 125 of FIG. 1 may receive the configuration data from the device.
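The polling path of FIG. 15 (blocks 1510 and 1515) might be sketched as below; the hook names and the fixed device count are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_DEVICES 24

struct dev_config { uint8_t raw[64]; };  /* opaque per-device configuration */

/* Hypothetical hooks for the paths of FIG. 15; the names are assumptions. */
extern void vpd_read_config(int slot, struct dev_config *out); /* read logic 1005  */
extern bool device_poll_changed(int slot);                     /* polling logic    */

/* Polling path (blocks 1510/1515), sketched: query each device periodically
 * and re-read its VPD entry only when the device reports a change. */
void poll_devices(struct dev_config cfgs[MAX_DEVICES])
{
    for (int slot = 0; slot < MAX_DEVICES; slot++)
        if (device_poll_changed(slot))
            vpd_read_config(slot, &cfgs[slot]);
}
```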
FIG. 16 shows a flowchart of an example procedure for BMC 125 of FIG. 1 to build a record of the device(s) configurations. In FIG. 16, at block 1605, BMC 125 of FIG. 1 may simply compile the collected information from VPD 355 for the various devices into a record. Alternatively, at block 1610, Log Page creation logic 435 of FIG. 4 may create Log Page 1115 of FIG. 11 from the collected device(s) configurations. FIG. 17 shows a flowchart of an example procedure for an NVM device in chassis 105 of FIG. 1 to inform BMC 125 of FIG. 1 about a change in the configuration of the NVM device, according to an embodiment of the inventive concept. In FIG. 17, at block 1705, the device—for example, Network-Attached SSDs 320, 325, and/or 330 of FIG. 3—may determine that its configuration has changed. At block 1710, the device may write the change to VPD 355, and at block 1715 the device may notify a proxy device—such as BMC 125 of FIG. 1—that the change was written to VPD 355. Alternatively, at block 1720, the device may wait until it receives a query from the proxy device about the device's current configuration, at which time (in block 1725) the device may send its current configuration to the proxy device. In FIGS. 12A-17, some embodiments of the inventive concept are shown. But a person skilled in the art will recognize that other embodiments of the inventive concept are also possible, by changing the order of the blocks, by omitting blocks, or by including links not shown in the drawings. All such variations of the flowcharts are considered to be embodiments of the inventive concept, whether expressly described or not.
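The device-side path of FIG. 17 (blocks 1705 through 1715) might look like the following sketch; the hooks and the notification mechanism are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

struct dev_config { uint8_t raw[64]; };  /* opaque device configuration */

/* Hypothetical device-side hooks for blocks 1705-1715; names are assumptions. */
extern bool config_changed(const struct dev_config *current);
extern void vpd_write_config(int my_slot, const struct dev_config *cfg);
extern void notify_proxy(int my_slot);   /* e.g., message BMC 125 over SMBus */

/* A device detects that its configuration changed, writes the change to its
 * VPD record, and then notifies the proxy device that new data is available. */
void device_report_change(int my_slot, const struct dev_config *current)
{
    if (!config_changed(current))
        return;                          /* 1705: no change detected      */

    vpd_write_config(my_slot, current);  /* 1710: write change to VPD     */
    notify_proxy(my_slot);               /* 1715: tell the proxy (BMC)    */
}
```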
The following discussion is intended to provide a brief, general description of a suitable machine or machines in which certain aspects of the inventive concept may be implemented. The machine or machines may be controlled, at least in part, by input from conventional input devices, such as keyboards, mice, etc., as well as by directives received from another machine, interaction with a virtual reality (VR) environment, biometric feedback, or other input signal. As used herein, the term “machine” is intended to broadly encompass a single machine, a virtual machine, or a system of communicatively coupled machines, virtual machines, or devices operating together. Exemplary machines include computing devices such as personal computers, workstations, servers, portable computers, handheld devices, telephones, tablets, etc., as well as transportation devices, such as private or public transportation, e.g., automobiles, trains, cabs, etc. The machine or machines may include embedded controllers, such as programmable or non-programmable logic devices or arrays, Application Specific Integrated Circuits (ASICs), embedded computers, smart cards, and the like. The machine or machines may utilize one or more connections to one or more remote machines, such as through a network interface, modem, or other communicative coupling. Machines may be interconnected by way of a physical and/or logical network, such as an intranet, the Internet, local area networks, wide area networks, etc. One skilled in the art will appreciate that network communication may utilize various wired and/or wireless short range or long range carriers and protocols, including radio frequency (RF), satellite, microwave, Institute of Electrical and Electronics Engineers (IEEE) 802.11, Bluetooth®, optical, infrared, cable, laser, etc. Embodiments of the present inventive concept may be described by reference to or in conjunction with associated data including functions, procedures, data structures, application programs, etc. which when accessed by a machine results in the machine performing tasks or defining abstract data types or low-level hardware contexts. Associated data may be stored in, for example, the volatile and/or non-volatile memory, e.g., RAM, ROM, etc., or in other storage devices and their associated storage media, including hard-drives, floppy-disks, optical storage, tapes, flash memory, memory sticks, digital video disks, biological storage, etc. Associated data may be delivered over transmission environments, including the physical and/or logical network, in the form of packets, serial data, parallel data, propagated signals, etc., and may be used in a compressed or encrypted format. Associated data may be used in a distributed environment, and stored locally and/or remotely for machine access. Embodiments of the inventive concept may include a tangible, non-transitory machine-readable medium comprising instructions executable by one or more processors, the instructions comprising instructions to perform the elements of the inventive concepts as described herein. Having described and illustrated the principles of the inventive concept with reference to illustrated embodiments, it will be recognized that the illustrated embodiments may be modified in arrangement and detail without departing from such principles, and may be combined in any desired manner. And, although the foregoing discussion has focused on particular embodiments, other configurations are contemplated. In particular, even though expressions such as “according to an embodiment of the inventive concept” or the like are used herein, these phrases are meant to generally reference embodiment possibilities, and are not intended to limit the inventive concept to particular embodiment configurations. As used herein, these terms may reference the same or different embodiments that are combinable into other embodiments. The foregoing illustrative embodiments are not to be construed as limiting the inventive concept thereof. Although a few embodiments have been described, those skilled in the art will readily appreciate that many modifications are possible to those embodiments without materially departing from the novel teachings and advantages of the present disclosure. Accordingly, all such modifications are intended to be included within the scope of this inventive concept as defined in the claims. Embodiments of the inventive concept may extend to the following statements, without limitation: Statement 1. An embodiment of the inventive concept includes a Baseboard Management Controller (BMC), comprising: an access logic to determine a configuration of a chassis; and a built-in self-configuration logic to configure the BMC responsive to the configuration of the chassis, wherein the BMC may self-configure without using any BIOS, device drivers, or operating systems. Statement 2. An embodiment of the inventive concept includes a BMC according to statement 1, wherein the built-in self-configuration logic is operative to configure the BMC to use either a Non-Volatile Memory Express (NVMe) driver or a Non-Volatile Memory Express Over Fabric (NVMeoF) driver responsive to the configuration for the BMC. Statement 3.
An embodiment of the inventive concept includes a BMC according to statement 2, wherein using the NVMeoF driver enables the BMC to determine the configuration of at least one device in a chassis including the BMC.

Statement 4.
An embodiment of the inventive concept includes a BMC according to statement 2, wherein the access logic includes a Vital Product Data (VPD) reading logic to read the configuration of the chassis from a VPD.

Statement 5.
An embodiment of the inventive concept includes a BMC according to statement 4, wherein the VPD is stored in an Electrically Erasable Programmable Read Only Memory (EEPROM).

Statement 6.
An embodiment of the inventive concept includes a BMC according to statement 2, wherein the access logic includes a pin reading logic to determine the configuration of the chassis from a signal on at least one pin on the BMC.

Statement 7.
An embodiment of the inventive concept includes a BMC according to statement 2, wherein the built-in self-configuration logic includes a driver loader to load the NVMe driver or the NVMeoF driver responsive to the configuration of the chassis.

Statement 8.
An embodiment of the inventive concept includes a BMC according to statement 7, wherein the built-in self-configuration logic further includes a driver downloader to download the NVMe driver or the NVMeoF driver from a driver source.

Statement 9.
An embodiment of the inventive concept includes a BMC according to statement 8, wherein the driver source is drawn from a set including storage in an EEPROM, a first site on a local computer network, and a second site on a global computer network.

Statement 10.
An embodiment of the inventive concept includes a BMC according to statement 2, wherein the access logic is operative to determine whether the configuration of the chassis includes a High Availability (HA) chassis.

Statement 11.
An embodiment of the inventive concept includes a BMC according to statement 10, wherein the built-in self-configuration logic is operative to load an HA driver.

Statement 12.
An embodiment of the inventive concept includes a BMC according to statement 11, wherein the built-in self-configuration logic is operative to load the HA driver before the BMC has determined whether a pairing partner is available.

Statement 13.
An embodiment of the inventive concept includes a BMC according to statement 11, further comprising an error reporting logic to report an error if the HA driver is not available.

Statement 14.
An embodiment of the inventive concept includes a BMC according to statement 10, further comprising an error reporting logic to report an error if the BMC may not communicate with a pairing partner.

Statement 15.
An embodiment of the inventive concept includes a method, comprising: determining by a Baseboard Management Controller (BMC) a configuration of a chassis including the BMC; selecting a driver responsive to the configuration of the chassis; and loading the selected driver, wherein the BMC may self-configure without using any BIOS, device drivers, or operating systems.

Statement 16.
An embodiment of the inventive concept includes a method according to statement 15, wherein: the configuration of the chassis is drawn from a set including a Non-Volatile Memory Express (NVMe) chassis and a Non-Volatile Memory Express Over Fabric (NVMeoF) chassis; and selecting a driver responsive to the configuration of the chassis includes selecting one of an NVMe driver and an NVMeoF driver for the BMC according to the configuration of the chassis.

Statement 17.
An embodiment of the inventive concept includes a method according to statement 16, wherein: determining by a Baseboard Management Controller (BMC) a configuration of a chassis including the BMC includes determining by the BMC that the configuration of the chassis is the NVMeoF chassis; and the method further comprises determining, by the BMC, the configuration of at least one device in the chassis including the BMC.

Statement 18.
An embodiment of the inventive concept includes a method according to statement 16, wherein determining by a Baseboard Management Controller (BMC) a configuration of a chassis including the BMC includes reading the configuration of the chassis from a Vital Product Data (VPD).

Statement 19.
An embodiment of the inventive concept includes a method according to statement 18, wherein the VPD is stored in an Electrically Erasable Programmable Read Only Memory (EEPROM).

Statement 20.
An embodiment of the inventive concept includes a method according to statement 16, wherein determining by a Baseboard Management Controller (BMC) a configuration of a chassis including the BMC includes accessing a signal from at least one pin on the BMC to determine the configuration of the chassis.

Statement 21.
An embodiment of the inventive concept includes a method according to statement 16, further comprising downloading the selected driver from a driver source.

Statement 22.
An embodiment of the inventive concept includes a method according to statement 21, wherein the driver source is drawn from a set including storage in an EEPROM, a first site on a local computer network, and a second site on a global computer network.

Statement 23.
An embodiment of the inventive concept includes a method according to statement 16, wherein determining by a Baseboard Management Controller (BMC) a configuration of a chassis includes determining by the BMC that the configuration of the chassis is a High Availability (HA) chassis.

Statement 24.
An embodiment of the inventive concept includes a method according to statement 23, wherein selecting a driver for the BMC according to the configuration of the chassis includes selecting an HA driver.

Statement 25.
An embodiment of the inventive concept includes a method according to statement 24, further comprising reporting an error if the HA driver is not available.

Statement 26.
An embodiment of the inventive concept includes a method according to statement 24, further comprising attempting to communicate with a pairing partner for the BMC.

Statement 27.
An embodiment of the inventive concept includes a method according to statement 26, further comprising reporting an error if the BMC may not communicate with the pairing partner.

Statement 28.
An embodiment of the inventive concept includes a method according to statement 26, wherein attempting to communicate with a pairing partner for the BMC includes attempting to communicate with the pairing partner for the BMC after loading the HA driver.

Statement 29.
An embodiment of the inventive concept includes an article, comprising a tangible storage medium, the tangible storage medium having stored thereon non-transitory instructions that, when executed by a machine, result in: determining by a Baseboard Management Controller (BMC) a configuration of a chassis including the BMC; selecting a driver responsive to the configuration of the chassis; and loading the selected driver, wherein the BMC may self-configure without using any BIOS, device drivers, or operating systems.

Statement 30.
An embodiment of the inventive concept includes an article according to statement 29, wherein: the configuration of the chassis is drawn from a set including a Non-Volatile Memory Express (NVMe) chassis and a Non-Volatile Memory Express Over Fabric (NVMeoF) chassis; selecting a driver responsive to the configuration of the chassis includes selecting one of an NVMe driver and an NVMeoF driver for the BMC according to the configuration of the chassis.

Statement 31.
An embodiment of the inventive concept includes an article according to statement 30, wherein: determining by a Baseboard Management Controller (BMC) a configuration of a chassis including the BMC includes determining by the BMC that the configuration of the chassis is the NVMeoF chassis; and the tangible storage medium having stored thereon further non-transitory instructions that, when executed by the machine, result in determining, by the BMC, the configuration of at least one device in the chassis including the BMC.

Statement 32.
An embodiment of the inventive concept includes an article according to statement 30, wherein determining by a Baseboard Management Controller (BMC) a configuration of a chassis including the BMC includes reading the configuration of the chassis from a Vital Product Data (VPD).

Statement 33.
An embodiment of the inventive concept includes an article according to statement 32, wherein the VPD is stored in an Electrically Erasable Programmable Read Only Memory (EEPROM).

Statement 34.
An embodiment of the inventive concept includes an article according to statement 30, wherein determining by a Baseboard Management Controller (BMC) a configuration of a chassis including the BMC includes accessing a signal from at least one pin on the BMC to determine the configuration of the chassis.

Statement 35.
An embodiment of the inventive concept includes an article according to statement 30, the tangible storage medium having stored thereon further non-transitory instructions that, when executed by the machine, result in downloading the selected driver from a driver source.

Statement 36.
An embodiment of the inventive concept includes an article according to statement 35, wherein the driver source is drawn from a set including storage in an EEPROM, a first site on a local computer network, and a second site on a global computer network.

Statement 37.
An embodiment of the inventive concept includes an article according to statement 30, wherein determining by a Baseboard Management Controller (BMC) a configuration of a chassis includes determining by the BMC that the chassis is a High Availability (HA) chassis.

Statement 38.
An embodiment of the inventive concept includes an article according to statement 37, wherein selecting a driver for the BMC according to the configuration of the chassis includes selecting an HA driver.

Statement 39.
An embodiment of the inventive concept includes an article according to statement 38, the tangible storage medium having stored thereon further non-transitory instructions that, when executed by the machine, result in reporting an error if the HA driver is not available.

Statement 40.
An embodiment of the inventive concept includes an article according to statement 38, the tangible storage medium having stored thereon further non-transitory instructions that, when executed by the machine, result in attempting to communicate with a pairing partner for the BMC.

Statement 41.
An embodiment of the inventive concept includes an article according to statement 40, the tangible storage medium having stored thereon further non-transitory instructions that, when executed by the machine, result in reporting an error if the BMC may not communicate with the pairing partner.

Statement 42.
An embodiment of the inventive concept includes an article according to statement 40, wherein attempting to communicate with a pairing partner for the BMC includes attempting to communicate with the pairing partner for the BMC after loading the HA driver.

Statement 43.
An embodiment of the inventive concept includes a proxy device in a chassis, comprising: a device communication logic to communicate with at least one device over a control plane about data regarding the at least one device; a reception logic to receive a query from a host, the query requesting information about the at least one device; and a transmission logic to send a response to the host, the response including data about the at least one device.

Statement 44.
An embodiment of the inventive concept includes a proxy device according to statement 43, wherein the proxy device is drawn from a set including a Baseboard Management Controller (BMC), a Redundant Array of Independent Disks (RAID) controller, and a processor.

Statement 45.
An embodiment of the inventive concept includes a proxy device according to statement 43, wherein the at least one device is drawn from a set including a storage device and a Network Interface Card (NIC).

Statement 46.
An embodiment of the inventive concept includes a proxy device according to statement 43, wherein the device communication logic includes a read logic to read the data regarding the at least one device from a Vital Product Data for the at least one device.

Statement 47.
An embodiment of the inventive concept includes a proxy device according to statement 46, wherein the Vital Product Data is stored in an Electrically Erasable Programmable Read Only Memory (EEPROM).

Statement 48.
An embodiment of the inventive concept includes a proxy device according to statement 43, wherein: the device communication logic includes a polling logic to poll the at least one device for the data regarding the at least one device; and the reception logic is operative to receive the data regarding the at least one device from the at least one device.

Statement 49.
An embodiment of the inventive concept includes a proxy device according to statement 43, wherein the chassis includes permanent storage associated with the proxy device in which the proxy device may create a Log Page from the data regarding the at least one device.

Statement 50.
An embodiment of the inventive concept includes a proxy device according to statement 49, further comprising a Log Page creation logic to create a Log Page from the data about the at least one device.

Statement 51.
An embodiment of the inventive concept includes a proxy device according to statement 49, wherein the transmission logic is operative to send the Log Page to the host responsive to the query.

Statement 52.
An embodiment of the inventive concept includes a method, comprising: receiving, at a proxy device, at least one data from at least one device about configurations of the at least one device, the data from the at least one device received over a control plane; compiling the at least one data into a record; receiving, at the proxy device, a query from a host for the configurations of the at least one device; and sending the record from the proxy device to the host, wherein the proxy device may receive the at least one data from the at least one device and compile the at least one data into a record before receiving the query from the host.

Statement 53.
An embodiment of the inventive concept includes a method according to statement 52, wherein the proxy device is drawn from a set including a Baseboard Management Controller (BMC), a Redundant Array of Independent Disks (RAID) controller, a processor, or a software proxy device.

Statement 54.
An embodiment of the inventive concept includes a method according to statement 52, wherein the at least one device is drawn from a set including a storage device and a Network Interface Card (NIC).

Statement 55.
An embodiment of the inventive concept includes a method according to statement 52, wherein receiving, at a proxy device, at least one data from at least one device about configurations of the at least one device includes receiving, at the proxy device, the at least one data from the at least one device about configurations of the at least one device along a control plane.

Statement 56.
An embodiment of the inventive concept includes a method according to statement 52, wherein receiving, at a proxy device, at least one data from at least one device about configurations of the at least one device includes polling the at least one device for the configurations of the at least one device.

Statement 57.
An embodiment of the inventive concept includes a method according to statement 52, wherein receiving, at a proxy device, at least one data from at least one device about configurations of the at least one device includes receiving a datum from one of the at least one device when a configuration of the one of the at least one device changes.

Statement 58.
An embodiment of the inventive concept includes a method according to statement 52, wherein receiving, at a proxy device, at least one data from at least one device about configurations of the at least one device includes reading the at least one data from at least one Vital Product Data.

Statement 59.
An embodiment of the inventive concept includes a method according to statement 58, wherein the at least one Vital Product Data are stored in an Electrically Erasable Programmable Read Only Memory (EEPROM).

Statement 60.
An embodiment of the inventive concept includes a method according to statement 52, wherein compiling the at least one data into a record includes creating a Log Page from the at least one data.

Statement 61.
An embodiment of the inventive concept includes a method according to statement 60, wherein sending the configurations of the at least one device from the proxy device to the host includes sending the Log Page from the proxy device to the host.

Statement 62.
An embodiment of the inventive concept includes a method according to statement 52, wherein sending the configurations of the at least one device from the proxy device to the host includes sending the at least one data from the proxy device to the host.

Statement 63.
An embodiment of the inventive concept includes a method, comprising: determining, by a device, a change in a configuration of the device; and notifying a proxy device over a control plane about the change in the configuration of the device.

Statement 64.
An embodiment of the inventive concept includes a method according to statement 63, wherein the proxy device is drawn from a set including a Baseboard Management Controller (BMC), a Redundant Array of Independent Disks (RAID) controller, a processor, or a software proxy device.

Statement 65.
An embodiment of the inventive concept includes a method according to statement 63, wherein the at least one device is drawn from a set including a storage device and a Network Interface Card (NIC).

Statement 66.
An embodiment of the inventive concept includes a method according to statement 63, wherein notifying a proxy device about the change in the configuration of the device includes writing the change in the configuration of the device to a Vital Product Data that may be read by the proxy device.

Statement 67.
An embodiment of the inventive concept includes a method according to statement 66, wherein notifying a proxy device about the change in the configuration of the device further includes notifying the proxy device that the change in the configuration of the device was written to the Vital Product Data.

Statement 68.
An embodiment of the inventive concept includes a method according to statement 63, wherein notifying a proxy device about the change in the configuration of the device includes: receiving a query from the proxy device about a current status of the configuration of the device; and sending a response to the proxy device including the change in the configuration of the device.

Statement 69.
An embodiment of the inventive concept includes an article, comprising a tangible storage medium, the tangible storage medium having stored thereon non-transitory instructions that, when executed by a machine, result in: receiving, at a proxy device, at least one data from at least one device about configurations of the at least one device, the data from the at least one device received over a control plane; compiling the at least one data into a record; receiving, at the proxy device, a query from a host for the configurations of the at least one device; and sending the record from the proxy device to the host, wherein the proxy device may receive the at least one data from the at least one device and compile the at least one data into a record before receiving the query from the host.

Statement 70.
An embodiment of the inventive concept includes an article according to statement 69, wherein the proxy device is drawn from a set including a Baseboard Management Controller (BMC), a Redundant Array of Independent Disks (RAID) controller, a processor, or a software proxy device.

Statement 71.
An embodiment of the inventive concept includes an article according to statement 69, wherein the at least one device is drawn from a set including a storage device and a Network Interface Card (NIC).

Statement 72.
An embodiment of the inventive concept includes an article according to statement 69, wherein receiving, at a proxy device, at least one data from at least one device about configurations of the at least one device includes receiving, at the proxy device, the at least one data from the at least one device about configurations of the at least one device along a control plane.

Statement 73.
An embodiment of the inventive concept includes an article according to statement 69, wherein receiving, at a proxy device, at least one data from at least one device about configurations of the at least one device includes polling the at least one device for the configurations of the at least one device.

Statement 74.
An embodiment of the inventive concept includes an article according to statement 69, wherein receiving, at a proxy device, at least one data from at least one device about configurations of the at least one device includes receiving a datum from one of the at least one device when a configuration of the one of the at least one device changes.

Statement 75.
An embodiment of the inventive concept includes an article according to statement 69, wherein receiving, at a proxy device, at least one data from at least one device about configurations of the at least one device includes reading the at least one data from at least one Vital Product Data.

Statement 76.
An embodiment of the inventive concept includes an article according to statement 75, wherein the at least one Vital Product Data are stored in an Electrically Erasable Programmable Read Only Memory (EEPROM).

Statement 77.
An embodiment of the inventive concept includes an article according to statement 69, wherein compiling the at least one data into a record includes creating a Log Page from the at least one data.

Statement 78.
An embodiment of the inventive concept includes an article according to statement 77, wherein sending the configurations of the at least one device from the proxy device to the host includes sending the Log Page from the proxy device to the host.

Statement 79.
An embodiment of the inventive concept includes an article according to statement 69, wherein sending the configurations of the at least one device from the proxy device to the host includes sending the at least one data from the proxy device to the host.

Statement 80.
An embodiment of the inventive concept includes an article, comprising a tangible storage medium, the tangible storage medium having stored thereon non-transitory instructions that, when executed by a machine, result in: determining, by a device, a change in a configuration of the device; and notifying a proxy device over a control plane about the change in the configuration of the device.

Statement 81.
An embodiment of the inventive concept includes an article according to statement 80, wherein the proxy device is drawn from a set including a Baseboard Management Controller (BMC), a Redundant Array of Independent Disks (RAID) controller, a processor, or a software proxy device.

Statement 82.
An embodiment of the inventive concept includes an article according to statement 80, wherein the at least one device is drawn from a set including a storage device and a Network Interface Card (NIC).

Statement 83.
An embodiment of the inventive concept includes an article according to statement 80, wherein notifying a proxy device about the change in the configuration of the device includes writing the change in the configuration of the device to a Vital Product Data that may be read by the proxy device.

Statement 84.
An embodiment of the inventive concept includes an article according to statement 83, wherein notifying a proxy device about the change in the configuration of the device further includes notifying the proxy device that the change in the configuration of the device was written to the Vital Product Data.

Statement 85.
An embodiment of the inventive concept includes an article according to statement 80, wherein notifying a proxy device about the change in the configuration of the device includes: receiving a query from the proxy device about a current status of the configuration of the device; and sending a response to the proxy device including the change in the configuration of the device.

Consequently, in view of the wide variety of permutations to the embodiments described herein, this detailed description and accompanying material is intended to be illustrative only, and should not be taken as limiting the scope of the inventive concept. What is claimed as the inventive concept, therefore, is all such modifications as may come within the scope and spirit of the following claims and equivalents thereto.
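As an illustration only, the self-configuration sequence recited in statements 15 through 28 could be read as the following Python sketch: determine the chassis configuration (for example, from a VPD in an EEPROM or from a signal on a pin), select an NVMe, NVMeoF, or HA driver accordingly, download it from a driver source if needed, and report errors when an HA driver or a pairing partner is unavailable. Every name here is a hypothetical stand-in; the statements do not prescribe any particular API.

    # Hedged sketch of statements 15-28; helper callables are assumed inputs.
    def self_configure_bmc(read_chassis_config, download_driver, load_driver, ping_partner):
        config = read_chassis_config()        # e.g., VPD read or pin sensing
        if config.get("high_availability"):
            driver = download_driver("HA")    # EEPROM, local site, or global site
            if driver is None:
                raise RuntimeError("HA driver is not available")        # statement 25
            load_driver(driver)               # may load before the partner check (statement 12)
            if not ping_partner():
                raise RuntimeError("cannot communicate with pairing partner")  # statement 27
        elif config.get("fabric"):
            load_driver(download_driver("NVMeoF"))
        else:
            load_driver(download_driver("NVMe"))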
11861169

DETAILED DESCRIPTION

Some examples of the claimed subject matter are now described with reference to the drawings, where like reference numerals are generally used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide an understanding of the claimed subject matter. It may be evident, however, that the claimed subject matter may be practiced without these specific details. Nothing in this detailed description is admitted as prior art.

The techniques described herein are directed to a format and layout for compressed data. This format provides for container level compression, such as compression of a container as opposed to compression of a single file at a file level. The format allows for data blocks to be compressed in variable chunk sizes and using different compression algorithms without impacting any prior deduplication storage savings, and while being transparent to data management capabilities of a file system associated with the data being compressed. With this format, different compression algorithms can be applied for frequently accessed (hot) data, infrequently accessed (cold) data, backup/secondary data, etc. Furthermore, this format allows for the combination of similar types of data blocks to be compressed together in order to achieve better compression savings and reduced storage utilization.

With this format, metadata, used to interpret compressed data and describing how to correctly decompress the compressed data, is stored within disk blocks at the start of the compressed data. In this way, a file does not need to maintain any metadata corresponding to compression since such metadata is stored within the disk blocks comprising the compressed data. Thus, each data block has enough information to read and decompress compressed data of the data block without having to maintain additional information describing other data blocks compressed with the data block. Because each compressed data block is self-sufficient to read and decompress the compressed data block, storage savings from prior deduplication are retained since each compressed data block can be deduplicated individually at a logical data level. In this way, additional storage savings can be achieved from performing compression using this format and layout, along with retaining storage savings from prior deduplication.

A node, such as a server, a computing device, a virtual machine, a storage service, hardware, software, or combination thereof, may store data on behalf of client devices. The node may also provide various types of storage efficiency functionality, such as deduplication, compression, etc. As provided herein, a layout format is provided for container level compression of data blocks. The layout format allows data blocks to be compressed in variable chunk sizes using different compression algorithms without impacting deduplication savings (e.g., storage savings from reducing duplicate data) while being transparent to data management capabilities of a file system hosted by the node. With this layout format, different compression algorithms can be applied to different types of data, such as frequently/recently accessed data, infrequently accessed data, backup data (e.g., a backup copy of primary data being accessed by a client device), user data, metadata, etc.
In this way, the layout format allows for the combination of similar types of data blocks to be grouped and compressed together in order to achieve better compression savings. The layout format combines a set of data blocks together as a group, and compresses the group together as a compression group comprising compression data stored within disk blocks of storage. The compression group is stored in a variable number of disk blocks. Metadata, used to interpret the compressed data and describing how to correctly decompress the compressed data within the compression group, is stored within the disk blocks themselves at the start of compressed data. In this way, a data file that has been compressed does not need to keep any metadata corresponding to compression. Thus, the layout format is independent from data management operations and is transparent. This means that each data block has enough information to read and decompress the data within the data block without knowing which other data blocks are compressed together. So even if some of the data blocks are overwritten, there is no need to decompress the data and write other blocks in an uncompressed state. Accordingly, the layout format solves the read-modify-write problem for compressed data, where the compressed data otherwise would have to be decompressed and read, modified, and then written back.

Since each data block, which is compressed, is self-sufficient to read and decompress the data within that data block, the data block can be deduplicated individually at a logical data level. In this way, even if a variable number of data blocks are combined and compressed together as a compression group or are compressed with different compression algorithms, deduplication storage savings are not impacted.

With the layout format, each data block will point to a set of disk blocks of storage holding the compressed data. In an example, one logical block of data could be pointing to more than one physical disk block. These physical disk blocks may be located anywhere on storage. In an example, the representation of the physical disk block locations within file system metadata may be simplified by applying an encoding. In an example, the physical disk blocks where compressed data is stored may be assumed to be contiguous disk blocks. In this way, a disk location of the data can be easily represented in a file system indirect block by encoding the disk block information as a starting disk block number and a count of blocks from the start disk block number where a particular disk block is located (an offset).

In an example, the layout format can have one logical block pointing to multiple disk blocks. So, when a file block is overwritten or some of the file blocks are deduplicated, it is possible that eventually more disk space is consumed as compared to actual data. This can be solved by performing garbage collection to free extra disk blocks so that the freed disk blocks are available to store other data. In an example, garbage collection work is done asynchronously. With this layout format, each logical block takes/consumes a reference count on the disk blocks containing compressed data. By comparing the number of logical blocks, pointing to a set of disk blocks, with a count of disk blocks, a determination may be made that the disk blocks are to be freed by garbage collection. If the number of logical blocks, pointing to disk blocks, is less than the count of disk blocks, then more disk space is being consumed as compared to logical data.
Thus, garbage collection may be performed because some of the disk blocks are no longer referenced and can be freed for storing other data. In an example, the garbage collection work is done asynchronously in the background (e.g., a background process may perform the garbage collection work in the background with respect to client I/O processing). Garbage collection can be tuned such that it has a minimal impact on I/O performance. As part of garbage collection, compressed data will be decompressed, and extra disk blocks will be freed. The remaining data blocks can be combined with other garbage collected data blocks and compressed again in order to obtain storage efficiency savings. This garbage collection will not impact deduplication savings, and so the garbage collection process can compress the data independent of how the blocks were previously deduplicated or compressed.

Since the disk space has yet to be garbage collected and is thus pending to be freed, the disk space can be effectively counted as free space by the file system. As long as there is enough space available in the file system for future use, the garbage collection work can be deferred for a longer time. A threshold can be maintained to trigger the garbage collection work. In an example, the threshold can be a percentage of disk space (e.g., garbage collection work can be triggered when disk blocks to be garbage collected are more than 2% of file system space), as sketched below.
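The trigger condition just described can be stated compactly in code. The following Python sketch compares logical references against the disk blocks actually held by each compression extent and applies the example 2% threshold; the dictionary shape and names are illustrative assumptions, not structures from the specification.

    # Hedged sketch of the garbage-collection trigger described above.
    # extents: {extent_id: {"disk_blocks": int, "logical_refs": int}}  (assumed shape)
    GC_THRESHOLD_PERCENT = 2.0   # example threshold from the text

    def blocks_pending_gc(extents):
        # An extent has reclaimable space when fewer logical blocks reference
        # it than it occupies on disk.
        return sum(e["disk_blocks"] - e["logical_refs"]
                   for e in extents.values()
                   if e["logical_refs"] < e["disk_blocks"])

    def gc_needed(extents, total_fs_blocks):
        pending = blocks_pending_gc(extents)
        return 100.0 * pending / total_fs_blocks > GC_THRESHOLD_PERCENT

Because pending blocks are counted as free space anyway, gc_needed can stay False for long stretches without harming the file system's view of available space.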
The layout format may implement compression at a container file layer. In this way, deduplication can be performed before compression of the data blocks. Also, compression remains independent of whether data blocks are part of a snapshot of the file system or not. To compress the data at a container file layer, multiple data blocks of data are combined in a group, and then this combined group of data will be compressed using some compression algorithm. The compressed data is then stored within the disk blocks of storage. The disk blocks where the compressed data is stored for a group of container file blocks may be referred to as a compression data extent. The group of blocks on the container file will be pointing to the compression data extent in file system indirect block metadata.

In an example, in order to reduce the footprint of representing the compression data extent in a file system indirect block, an assumption is made that the disk blocks where compressed data is stored on physical storage are contiguous blocks within the storage. In this way, a file system indirect block can point to a compression data extent by encoding a list of disk blocks in much smaller space. This assumption can be relaxed by encoding the disk blocks of a compression data extent in an indirect block, or by having a completely different format for indirect blocks, such as by maintaining an overflow block for an indirect block where additional disk block information can be stored within the overflow block. Since the metadata, specifying how to interpret the compressed data and how to correctly decompress the compressed data, is maintained in the disk blocks themselves at the start of compressed data, a container file does not need to keep any metadata corresponding to compression. This keeps data management operations independent and transparent of this layout format.

Since this layout format does not require any specific metadata to be kept at a container file level, each group of file blocks which are compressed together can have independent group sizes and/or can be compressed with any desirable compression algorithm. Thus, this layout format provides the flexibility of compressing frequently accessed data, infrequently accessed data, user data, metadata, and secondary backup data independently.

The layout format of compression at a container file level provides for the ability to perform deduplication before compression. In an example, deduplication of data is done at a file level, such as at a logical 4K block boundary. If a block is deduplicated, then the block will point to the same duplicate block on the container file. So, by doing compression at the container file level with this layout format, deduplication can be done on a logical block boundary without worrying about how (e.g., in which chunk size and with what algorithm) the data will be subsequently compressed after being deduplicated. A minimal sketch of this extent layout follows.
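One possible reading of this layout is sketched below in Python: a group of logical blocks is compressed as one unit, a small self-describing header (algorithm, block count, block size) is placed at the start of the compressed extent, and the indirect block records only a starting disk block number and a count of contiguous blocks. The header fields, the JSON encoding, and zlib are all illustrative assumptions; the description requires only that the decompression metadata lead the extent.

    # Hedged sketch of a self-describing compression data extent.
    import json, zlib
    from dataclasses import dataclass

    BLOCK_SIZE = 4096   # assumed logical block size

    @dataclass
    class ExtentPointer:
        start_pvbn: int   # starting disk block number
        count: int        # count of contiguous disk blocks (the offset encoding)

    def compress_group(blocks, algorithm="zlib"):
        payload = zlib.compress(b"".join(blocks))
        # Metadata stored at the start of the extent, so no per-file
        # compression metadata is needed anywhere else.
        header = json.dumps({"algorithm": algorithm,
                             "block_count": len(blocks),
                             "block_size": BLOCK_SIZE}).encode()
        return len(header).to_bytes(4, "big") + header + payload

    def decompress_group(extent):
        header_len = int.from_bytes(extent[:4], "big")
        header = json.loads(extent[4:4 + header_len])
        data = zlib.decompress(extent[4 + header_len:])
        size = header["block_size"]
        return [data[i * size:(i + 1) * size] for i in range(header["block_count"])]

A round trip such as decompress_group(compress_group([b"a" * 4096, b"b" * 4096])) recovers the original blocks using nothing but the extent itself, which is the property the text relies on for self-sufficient, deduplication-friendly blocks.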
FIG. 1 is a diagram illustrating an example operating environment 100 in which an embodiment of the techniques described herein may be implemented. In one example, the techniques described herein may be implemented within a client device 128, such as a laptop, a tablet, a personal computer, a mobile device, a server, a virtual machine, a wearable device, etc. In another example, the techniques described herein may be implemented within one or more nodes, such as a first node 130 and/or a second node 132 within a first cluster 134, a third node 136 within a second cluster 138, etc. A node may comprise a storage controller, a server, an on-premise device, a virtual machine such as a storage virtual machine, hardware, software, or combination thereof. The one or more nodes may be configured to manage the storage and access to data on behalf of the client device 128 and/or other client devices. In another example, the techniques described herein may be implemented within a distributed computing platform 102 such as a cloud computing environment (e.g., a cloud storage environment, a multi-tenant platform, a hyperscale infrastructure comprising scalable server architectures and virtual networking, etc.) configured to manage the storage and access to data on behalf of client devices and/or nodes.

In yet another example, at least some of the techniques described herein are implemented across one or more of the client device 128, the one or more nodes 130, 132, and/or 136, and/or the distributed computing platform 102. For example, the client device 128 may transmit operations, such as data operations to read data and write data and metadata operations (e.g., a create file operation, a rename directory operation, a resize operation, a set attribute operation, etc.), over a network 126 to the first node 130 for implementation by the first node 130 upon storage. The first node 130 may store data associated with the operations within volumes or other data objects/structures hosted within locally attached storage, remote storage hosted by other computing devices accessible over the network 126, storage provided by the distributed computing platform 102, etc. The first node 130 may replicate the data and/or the operations to other computing devices, such as to the second node 132, the third node 136, a storage virtual machine executing within the distributed computing platform 102, etc., so that one or more replicas of the data are maintained. For example, the third node 136 may host a destination storage volume that is maintained as a replica of a source storage volume of the first node 130. Such replicas can be used for disaster recovery and failover.

In an embodiment, the techniques described herein are implemented by a storage operating system or are implemented by a separate module that interacts with the storage operating system. The storage operating system may be hosted by the client device 128, a node, the distributed computing platform 102, or across a combination thereof. In an example, the storage operating system may execute within a storage virtual machine, a hyperscaler, or other computing environment. The storage operating system may implement one or more file systems to logically organize data within storage devices as one or more storage objects and provide a logical/virtual representation of how the storage objects are organized on the storage devices (e.g., a file system tailored for block-addressable storage, a file system tailored for byte-addressable storage such as persistent memory). A storage object may comprise any logically definable storage element stored by the storage operating system (e.g., a volume stored by the first node 130, a cloud object stored by the distributed computing platform 102, etc.). Each storage object may be associated with a unique identifier that uniquely identifies the storage object. For example, a volume may be associated with a volume identifier uniquely identifying that volume from other volumes. The storage operating system also manages client access to the storage objects.

The storage operating system may implement a file system for logically organizing data. For example, the storage operating system may implement a write anywhere file layout for a volume where modified data for a file may be written to any available location as opposed to a write-in-place architecture where modified data is written to the original location, thereby overwriting the previous data. In an example, the file system may be implemented through a file system layer that stores data of the storage objects in an on-disk format representation that is block-based (e.g., data is stored within 4 kilobyte blocks and inodes are used to identify files and file attributes such as creation time, access permissions, size and block location, etc.).

In an example, deduplication may be implemented by a deduplication module associated with the storage operating system. Deduplication is performed to improve storage efficiency. One type of deduplication is inline deduplication that ensures blocks are deduplicated before being written to a storage device. Inline deduplication uses a data structure, such as an incore hash store, which maps fingerprints of data to data blocks of the storage device storing the data. Whenever data is to be written to the storage device, a fingerprint of that data is calculated and the data structure is looked up using the fingerprint to find duplicates (e.g., potentially duplicate data already stored within the storage device). If duplicate data is found, then the duplicate data is loaded from the storage device and a byte by byte comparison may be performed to ensure that the duplicate data is an actual duplicate of the data to be written to the storage device. If the data to be written is a duplicate of the loaded duplicate data, then the data to be written to disk is not redundantly stored to the storage device. Instead, a pointer or other reference is stored in the storage device in place of the data to be written to the storage device. The pointer points to the duplicate data already stored in the storage device. A reference count for the data may be incremented to indicate that the pointer now references the data. If at some point the pointer no longer references the data (e.g., the deduplicated data is deleted and thus no longer references the data in the storage device), then the reference count is decremented. In this way, inline deduplication is able to deduplicate data before the data is written to disk. This improves the storage efficiency of the storage device.
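The inline path described above amounts to a fingerprint lookup, a collision-guarding byte comparison, and reference counting. The Python sketch below shows that flow; sha256 and the in-memory dictionaries are illustrative stand-ins for the incore hash store, not the module's actual implementation.

    # Hedged sketch of the inline deduplication check described above.
    import hashlib

    class InlineDedupStore:
        def __init__(self):
            self.fingerprints = {}   # fingerprint -> physical volume block number
            self.blocks = {}         # physical volume block number -> data
            self.refcounts = {}      # physical volume block number -> reference count
            self.next_pvbn = 0

        def write(self, data):
            fp = hashlib.sha256(data).hexdigest()
            pvbn = self.fingerprints.get(fp)
            # Byte-by-byte comparison confirms the candidate is a true duplicate.
            if pvbn is not None and self.blocks[pvbn] == data:
                self.refcounts[pvbn] += 1     # store a reference instead of the data
                return pvbn
            pvbn, self.next_pvbn = self.next_pvbn, self.next_pvbn + 1
            self.fingerprints[fp] = pvbn
            self.blocks[pvbn] = data
            self.refcounts[pvbn] = 1
            return pvbn

Writing the same block twice returns the same physical volume block number with a reference count of 2, mirroring the pointer-plus-reference-count behavior described above.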
Background deduplication is another type of deduplication that deduplicates data already written to a storage device. Various types of background deduplication may be implemented. In an example of background deduplication, data blocks that are duplicated between files are rearranged within storage units such that one copy of the data occupies physical storage. References to the single copy can be inserted into a file system structure such that all files or containers that contain the data refer to the same instance of the data. Deduplication can be performed on a data storage device block basis. In an example, data blocks on a storage device can be identified using a physical volume block number. The physical volume block number uniquely identifies a particular block on the storage device. Additionally, blocks within a file can be identified by a file block number. The file block number is a logical block number that indicates the logical position of a block within a file relative to other blocks in the file. For example, file block number 0 represents the first block of a file, file block number 1 represents the second block, etc. File block numbers can be mapped to a physical volume block number that is the actual data block on the storage device.

During deduplication operations, blocks in a file that contain the same data are deduplicated by mapping the file block number for the block to the same physical volume block number, and maintaining a reference count of the number of file block numbers that map to the physical volume block number. For example, assume that file block number 0 and file block number 5 of a file contain the same data, while file block numbers 1-4 contain unique data. File block numbers 1-4 are mapped to different physical volume block numbers. File block number 0 and file block number 5 may be mapped to the same physical volume block number, thereby reducing storage requirements for the file. Similarly, blocks in different files that contain the same data can be mapped to the same physical volume block number. For example, if file block number 0 of file A contains the same data as file block number 3 of file B, file block number 0 of file A may be mapped to the same physical volume block number as file block number 3 of file B.

In another example of background deduplication, a changelog is utilized to track blocks that are written to the storage device. Background deduplication also maintains a fingerprint database (e.g., a flat metafile) that tracks all unique block data such as by tracking a fingerprint and other filesystem metadata associated with block data. Background deduplication can be periodically executed or triggered based upon an event such as when the changelog fills beyond a threshold. As part of background deduplication, data in both the changelog and the fingerprint database is sorted based upon fingerprints.
This ensures that all duplicates are sorted next to each other. The duplicates are moved to a dup file. The unique changelog entries are moved to the fingerprint database, which will serve as duplicate data for a next deduplication operation. In order to optimize certain filesystem operations needed to deduplicate a block, duplicate records in the dup file are sorted in certain filesystem semantic order (e.g., inode number and block number). Next, the duplicate data is loaded from the storage device and a whole block byte by byte comparison is performed to make sure duplicate data is an actual duplicate of the data to be written to the storage device. After, the block in the changelog is modified to point directly to the duplicate data as opposed to redundantly storing data of the block.

In an example, deduplication operations performed by a data deduplication layer of a node can be leveraged for use on another node during data replication operations. For example, the first node 130 may perform deduplication operations to provide for storage efficiency with respect to data stored on a storage volume. The benefit of the deduplication operations performed on the first node 130 can be provided to the second node 132 with respect to the data on the first node 130 that is replicated to the second node 132. In some aspects, a data transfer protocol, referred to as the LRSE (Logical Replication for Storage Efficiency) protocol, can be used as part of replicating consistency group differences from the first node 130 to the second node 132. In the LRSE protocol, the second node 132 maintains a history buffer that keeps track of data blocks that it has previously received. The history buffer tracks the physical volume block numbers and file block numbers associated with the data blocks that have been transferred from the first node 130 to the second node 132. A request can be made of the first node 130 to not transfer blocks that have already been transferred. Thus, the second node 132 can receive deduplicated data from the first node 130, and will not need to perform deduplication operations on the deduplicated data replicated from the first node 130.

In an example, the first node 130 may preserve deduplication of data that is transmitted from the first node 130 to the distributed computing platform 102. For example, the first node 130 may create an object comprising deduplicated data. The object is transmitted from the first node 130 to the distributed computing platform 102 for storage. In this way, the object within the distributed computing platform 102 maintains the data in a deduplicated state. Furthermore, deduplication may be preserved when deduplicated data is transmitted/replicated/mirrored between the client device 128, the first node 130, the distributed computing platform 102, and/or other nodes or devices.

In an example, compression may be implemented by a compression module associated with the storage operating system. The compression module may utilize various types of compression techniques to replace longer sequences of data (e.g., frequently occurring and/or redundant sequences) with shorter sequences, such as by using Huffman coding, arithmetic coding, compression dictionaries, etc. For example, a decompressed portion of a file may comprise “ggggnnnnnnqqqqqqqqqq”, which is compressed to become “4g6n10q”. In this way, the size of the file can be reduced to improve storage efficiency.
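The “4g6n10q” example above is a run-length encoding. A toy Python version is shown below to make the transformation explicit; production compression modules would use Huffman coding, arithmetic coding, or dictionary coders as the text notes, so this is purely illustrative.

    # Toy run-length encoder matching the example above.
    def rle_encode(text):
        out, i = [], 0
        while i < len(text):
            j = i
            while j < len(text) and text[j] == text[i]:
                j += 1                       # extend the current run
            out.append(f"{j - i}{text[i]}")  # emit "<count><symbol>"
            i = j
        return "".join(out)

    assert rle_encode("ggggnnnnnnqqqqqqqqqq") == "4g6n10q"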
Compression may be implemented for compression groups. A compression group may correspond to a compressed group of blocks. The compression group may be represented by virtual volume block numbers. The compression group may comprise contiguous or non-contiguous blocks. Compression may be preserved when compressed data is transmitted/replicated/mirrored between the client device 128, a node, the distributed computing platform 102, and/or other nodes or devices. For example, an object may be created by the first node 130 to comprise compressed data. The object is transmitted from the first node 130 to the distributed computing platform 102 for storage. In this way, the object within the distributed computing platform 102 maintains the data in a compressed state.

In an example, various types of synchronization may be implemented by a synchronization module associated with the storage operating system. In an example, synchronous replication may be implemented, such as between the first node 130 and the second node 132. It may be appreciated that the synchronization module may implement synchronous replication between any devices within the operating environment 100, such as between the first node 130 of the first cluster 134 and the third node 136 of the second cluster 138 and/or between a node of a cluster and an instance of a node or virtual machine in the distributed computing platform 102.

As an example, during synchronous replication, the first node 130 may receive a write operation from the client device 128. The write operation may target a file stored within a volume managed by the first node 130. The first node 130 replicates the write operation to create a replicated write operation. The first node 130 locally implements the write operation upon the file within the volume. The first node 130 also transmits the replicated write operation to a synchronous replication target, such as the second node 132 that maintains a replica volume as a replica of the volume maintained by the first node 130. The second node 132 will execute the replicated write operation upon the replica volume so that the file within the volume and the replica volume comprise the same data. After, the second node 132 will transmit a success message to the first node 130. With synchronous replication, the first node 130 does not respond with a success message to the client device 128 for the write operation until both the write operation is executed upon the volume and the first node 130 receives the success message that the second node 132 executed the replicated write operation upon the replica volume.
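The ordering constraint in this synchronous path (acknowledge the client only after both the local write and the replicated write succeed) can be sketched as follows. The node and volume objects are hypothetical stand-ins; real synchronous replication involves considerably more state handling.

    # Hedged sketch of the synchronous write path described above.
    class SyncReplicatingNode:
        def __init__(self, volume, replication_target):
            self.volume = volume              # local volume (e.g., on the first node 130)
            self.target = replication_target  # e.g., the second node 132 with the replica volume

        def handle_write(self, file_id, offset, data):
            replicated_op = (file_id, offset, data)    # replicate the write operation
            self.volume.write(file_id, offset, data)   # execute locally
            ok = self.target.execute(replicated_op)    # execute on the replica volume
            if not ok:
                raise IOError("replica did not acknowledge the replicated write")
            return "success"   # only now is the client device answered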
In another example, asynchronous replication may be implemented, such as between the first node 130 and the third node 136. It may be appreciated that the synchronization module may implement asynchronous replication between any devices within the operating environment 100, such as between the first node 130 of the first cluster 134 and the distributed computing platform 102. In an example, the first node 130 may establish an asynchronous replication relationship with the third node 136. The first node 130 may capture a baseline snapshot of a first volume as a point in time representation of the first volume. The first node 130 may utilize the baseline snapshot to perform a baseline transfer of the data within the first volume to the third node 136 in order to create a second volume within the third node 136 comprising data of the first volume as of the point in time at which the baseline snapshot was created. After the baseline transfer, the first node 130 may subsequently create snapshots of the first volume over time.

As part of asynchronous replication, an incremental transfer is performed between the first volume and the second volume. In particular, a snapshot of the first volume is created. The snapshot is compared with a prior snapshot that was previously used to perform the last asynchronous transfer (e.g., the baseline transfer or a prior incremental transfer) of data to identify a difference in data of the first volume between the snapshot and the prior snapshot (e.g., changes to the first volume since the last asynchronous transfer). Accordingly, the difference in data is incrementally transferred from the first volume to the second volume. In this way, the second volume will comprise the same data as the first volume as of the point in time when the snapshot was created for performing the incremental transfer. It may be appreciated that other types of replication may be implemented, such as semi-sync replication.
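The incremental step of this asynchronous mode reduces to a snapshot diff. In the Python sketch below, snapshots are modeled as simple block-number-to-bytes maps and send_block stands in for the transfer mechanism; both are illustrative assumptions rather than the actual snapshot format.

    # Hedged sketch of the incremental transfer described above.
    def incremental_transfer(new_snapshot, prior_snapshot, send_block):
        # Compare against the snapshot used for the last transfer and send
        # only the blocks that differ (the changes since that transfer).
        for block_number, data in new_snapshot.items():
            if prior_snapshot.get(block_number) != data:
                send_block(block_number, data)

After the call completes, the destination volume matches the source volume as of the point in time at which new_snapshot was captured.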
In an embodiment, the first node 130 may store data or a portion thereof within storage hosted by the distributed computing platform 102 by transmitting the data within objects to the distributed computing platform 102. In one example, the first node 130 may locally store frequently accessed data within locally attached storage. Less frequently accessed data may be transmitted to the distributed computing platform 102 for storage within a data storage tier 108. The data storage tier 108 may store data within a service data store 120, and may store client specific data within client data stores assigned to such clients such as a client (1) data store 122 used to store data of a client (1) and a client (N) data store 124 used to store data of a client (N). The data stores may be physical storage devices or may be defined as logical storage, such as a virtual volume, LUNs, or other logical organizations of data that can be defined across one or more physical storage devices. In another example, the first node 130 transmits and stores all client data to the distributed computing platform 102. In yet another example, the client device 128 transmits and stores the data directly to the distributed computing platform 102 without the use of the first node 130.

The management of storage and access to data can be performed by one or more storage virtual machines (SVMs) or other storage applications that provide software as a service (SaaS) such as storage software services. In one example, an SVM may be hosted within the client device 128, within the first node 130, or within the distributed computing platform 102 such as by the application server tier 106. In another example, one or more SVMs may be hosted across one or more of the client device 128, the first node 130, and the distributed computing platform 102. The one or more SVMs may host instances of the storage operating system.

In an example, the storage operating system may be implemented for the distributed computing platform 102. The storage operating system may allow client devices to access data stored within the distributed computing platform 102 using various types of protocols, such as a Network File System (NFS) protocol, a Server Message Block (SMB) protocol and Common Internet File System (CIFS), and Internet Small Computer Systems Interface (iSCSI), and/or other protocols.

The storage operating system may provide various storage services, such as disaster recovery (e.g., the ability to non-disruptively transition client devices from accessing a primary node that has failed to a secondary node that is taking over for the failed primary node), backup and archive function, replication such as asynchronous and/or synchronous replication, deduplication, compression, high availability storage, cloning functionality (e.g., the ability to clone a volume, such as a space efficient flex clone), snapshot functionality (e.g., the ability to create snapshots and restore data from snapshots), data tiering (e.g., migrating infrequently accessed data to slower/cheaper storage), encryption, managing storage across various platforms such as between on-premise storage systems and multiple cloud systems, etc.

In one example of the distributed computing platform 102, one or more SVMs may be hosted by the application server tier 106. For example, a server (1) 116 is configured to host SVMs used to execute applications such as storage applications that manage the storage of data of the client (1) within the client (1) data store 122. Thus, an SVM executing on the server (1) 116 may receive data and/or operations from the client device 128 and/or the first node 130 over the network 126. The SVM executes a storage application and/or an instance of the storage operating system to process the operations and/or store the data within the client (1) data store 122. The SVM may transmit a response back to the client device 128 and/or the first node 130 over the network 126, such as a success message or an error message. In this way, the application server tier 106 may host SVMs, services, and/or other storage applications using the server (1) 116, the server (N) 118, etc.

A user interface tier 104 of the distributed computing platform 102 may provide the client device 128 and/or the first node 130 with access to user interfaces associated with the storage and access of data and/or other services provided by the distributed computing platform 102. In an example, a service user interface 110 may be accessible from the distributed computing platform 102 for accessing services subscribed to by clients and/or nodes, such as data replication services, application hosting services, data security services, human resource services, warehouse tracking services, accounting services, etc. For example, client user interfaces may be provided to corresponding clients, such as a client (1) user interface 112, a client (N) user interface 114, etc. The client (1) can access various services and resources subscribed to by the client (1) through the client (1) user interface 112, such as access to a web service, a development environment, a human resource application, a warehouse tracking application, and/or other services and resources provided by the application server tier 106, which may use data stored within the data storage tier 108.

The client device 128 and/or the first node 130 may subscribe to certain types and amounts of services and resources provided by the distributed computing platform 102. For example, the client device 128 may establish a subscription to have access to three virtual machines, a certain amount of storage, a certain type/amount of data redundancy, a certain type/amount of data security, certain service level agreements (SLAs) and service level objectives (SLOs), latency guarantees, bandwidth guarantees, access to execute or host certain applications, etc.
Similarly, the first node130can establish a subscription to have access to certain services and resources of the distributed computing platform102. As shown, a variety of clients, such as the client device128and the first node130, incorporating and/or incorporated into a variety of computing devices may communicate with the distributed computing platform102through one or more networks, such as the network126. For example, a client may incorporate and/or be incorporated into a client application (e.g., software) implemented at least in part by one or more of the computing devices. Examples of suitable computing devices include personal computers, server computers, desktop computers, nodes, storage servers, laptop computers, notebook computers, tablet computers or personal digital assistants (PDAs), smart phones, cell phones, and consumer electronic devices incorporating one or more computing device components, such as one or more electronic processors, microprocessors, central processing units (CPU), or controllers. Examples of suitable networks include networks utilizing wired and/or wireless communication technologies and networks operating in accordance with any suitable networking and/or communication protocol (e.g., the Internet). In use cases involving the delivery of customer support services, the computing devices noted represent the endpoint of the customer support delivery process, i.e., the consumer's device. The distributed computing platform102, such as a multi-tenant business data processing platform or cloud computing environment, may include multiple processing tiers, including the user interface tier104, the application server tier106, and the data storage tier108. The user interface tier104may maintain multiple user interfaces, including graphical user interfaces and/or web-based interfaces. The user interfaces may include the service user interface110for a service to provide access to applications and data for a client (e.g., a “tenant”) of the service, as well as one or more user interfaces that have been specialized/customized in accordance with user specific requirements (e.g., as discussed above), which may be accessed via one or more APIs. The service user interface110may include components enabling a tenant to administer the tenant's participation in the functions and capabilities provided by the distributed computing platform102, such as accessing data, causing execution of specific data processing operations, etc. Each processing tier may be implemented with a set of computers, virtualized computing environments such as a storage virtual machine or storage virtual server, and/or computer components including computer servers and processors, and may perform various functions, methods, processes, or operations as determined by the execution of a software application or set of instructions. The data storage tier108may include one or more data stores, which may include the service data store120and one or more client data stores122-124. Each client data store may contain tenant-specific data that is used as part of providing a range of tenant-specific business and storage services or functions, including but not limited to ERP, CRM, eCommerce, Human Resources management, payroll, storage services, etc. Data stores may be implemented with any suitable data storage technology, including structured query language (SQL) based relational database management systems (RDBMS), file systems hosted by operating systems, object storage, etc.
The distributed computing platform102may be a multi-tenant and service platform operated by an entity in order to provide multiple tenants with a set of business related applications, data storage, and functionality. These applications and functionality may include ones that a business uses to manage various aspects of its operations. For example, the applications and functionality may include providing web-based access to business information systems, thereby allowing a user with a browser and an Internet or intranet connection to view, enter, process, or modify certain types of business information or any other type of information. A clustered network environment200that may implement one or more aspects of the techniques described and illustrated herein is shown inFIG.2. The clustered network environment200includes data storage apparatuses202(1)-202(n) that are coupled over a cluster or cluster fabric204that includes one or more communication network(s) and facilitates communication between the data storage apparatuses202(1)-202(n) (and one or more modules, components, etc. therein, such as, node computing devices206(1)-206(n), for example), although any number of other elements or components can also be included in the clustered network environment200in other examples. This technology provides a number of advantages including methods, non-transitory computer readable media, and computing devices that implement the techniques described herein. In this example, node computing devices206(1)-206(n) can be primary or local storage controllers or secondary or remote storage controllers that provide client devices208(1)-208(n) with access to data stored within data storage devices210(1)-210(n) and cloud storage device(s)236(also referred to as cloud storage node(s)). The node computing devices206(1)-206(n) may be implemented as hardware, software (e.g., a storage virtual machine), or combination thereof. The data storage apparatuses202(1)-202(n) and/or node computing devices206(1)-206(n) of the examples described and illustrated herein are not limited to any particular geographic areas and can be clustered locally and/or remotely via a cloud network, or not clustered in other examples. Thus, in one example the data storage apparatuses202(1)-202(n) and/or node computing device206(1)-206(n) can be distributed over a plurality of storage systems located in a plurality of geographic locations (e.g., located on-premise, located within a cloud computing environment, etc.); while in another example a clustered network can include data storage apparatuses202(1)-202(n) and/or node computing device206(1)-206(n) residing in a same geographic location (e.g., in a single on-site rack). In the illustrated example, one or more of the client devices208(1)-208(n), which may be, for example, personal computers (PCs), computing devices used for storage (e.g., storage servers), or other computers or peripheral devices, are coupled to the respective data storage apparatuses202(1)-202(n) by network connections212(1)-212(n). 
Network connections212(1)-212(n) may include a local area network (LAN) or wide area network (WAN) (i.e., a cloud network), for example, that utilize TCP/IP and/or one or more Network Attached Storage (NAS) protocols, such as a Common Internet Filesystem (CIFS) protocol or a Network Filesystem (NFS) protocol to exchange data packets, a Storage Area Network (SAN) protocol, such as Small Computer System Interface (SCSI) or Fiber Channel Protocol (FCP), an object protocol, such as simple storage service (S3), and/or non-volatile memory express (NVMe), for example. Illustratively, the client devices208(1)-208(n) may be general-purpose computers running applications and may interact with the data storage apparatuses202(1)-202(n) using a client/server model for exchange of information. That is, the client devices208(1)-208(n) may request data from the data storage apparatuses202(1)-202(n) (e.g., data on one of the data storage devices210(1)-210(n) managed by a network storage controller configured to process I/O commands issued by the client devices208(1)-208(n)), and the data storage apparatuses202(1)-202(n) may return results of the request to the client devices208(1)-208(n) via the network connections212(1)-212(n). The node computing devices206(1)-206(n) of the data storage apparatuses202(1)-202(n) can include network or host nodes that are interconnected as a cluster to provide data storage and management services, such as to an enterprise having remote locations, cloud storage (e.g., a storage endpoint may be stored within cloud storage device(s)236), etc., for example. Such node computing devices206(1)-206(n) can be attached to the cluster fabric204at a connection point, redistribution point, or communication endpoint, for example. One or more of the node computing devices206(1)-206(n) may be capable of sending, receiving, and/or forwarding information over a network communications channel, and could comprise any type of device that meets any or all of these criteria. In an example, the node computing devices206(1) and206(n) may be configured according to a disaster recovery configuration whereby a surviving node provides switchover access to the storage devices210(1)-210(n) in the event a disaster occurs at a disaster storage site (e.g., the node computing device206(1) provides client device208(n) with switchover data access to data storage devices210(n) in the event a disaster occurs at the second storage site). In other examples, the node computing device206(n) can be configured according to an archival configuration and/or the node computing devices206(1)-206(n) can be configured based on another type of replication arrangement (e.g., to facilitate load sharing). Additionally, while two node computing devices are illustrated inFIG.2, any number of node computing devices or data storage apparatuses can be included in other examples in other types of configurations or arrangements. As illustrated in the clustered network environment200, node computing devices206(1)-206(n) can include various functional components that coordinate to provide a distributed storage architecture. For example, the node computing devices206(1)-206(n) can include network modules214(1)-214(n) and disk modules216(1)-216(n).
Network modules214(1)-214(n) can be configured to allow the node computing devices206(1)-206(n) (e.g., network storage controllers) to connect with client devices208(1)-208(n) over the storage network connections212(1)-212(n), for example, allowing the client devices208(1)-208(n) to access data stored in the clustered network environment200. Further, the network modules214(1)-214(n) can provide connections with one or more other components through the cluster fabric204. For example, the network module214(1) of node computing device206(1) can access the data storage device210(n) by sending a request via the cluster fabric204through the disk module216(n) of node computing device206(n) when the node computing device206(n) is available. Alternatively, when the node computing device206(n) fails, the network module214(1) of node computing device206(1) can access the data storage device210(n) directly via the cluster fabric204. The cluster fabric204can include one or more local and/or wide area computing networks (i.e., cloud networks) embodied as Infiniband, Fibre Channel (FC), or Ethernet networks, for example, although other types of networks supporting other protocols can also be used. Disk modules216(1)-216(n) can be configured to connect data storage devices210(1)-210(n), such as disks or arrays of disks, SSDs, flash memory, or some other form of data storage, to the node computing devices206(1)-206(n). Often, disk modules216(1)-216(n) communicate with the data storage devices210(1)-210(n) according to the SAN protocol, such as SCSI or FCP, for example, although other protocols can also be used. Thus, as seen from an operating system on node computing devices206(1)-206(n), the data storage devices210(1)-210(n) can appear as locally attached. In this manner, different node computing devices206(1)-206(n), etc. may access data blocks, files, or objects through the operating system, rather than expressly requesting abstract files. While the clustered network environment200illustrates an equal number of network modules214(1)-214(n) and disk modules216(1)-216(n), other examples may include a differing number of these modules. For example, there may be a plurality of network and disk modules interconnected in a cluster that do not have a one-to-one correspondence between the network and disk modules. That is, different node computing devices can have a different number of network and disk modules, and the same node computing device can have a different number of network modules than disk modules. Further, one or more of the client devices208(1)-208(n) can be networked with the node computing devices206(1)-206(n) in the cluster, over the storage connections212(1)-212(n). As an example, respective client devices208(1)-208(n) that are networked to a cluster may request services (e.g., exchanging of information in the form of data packets) of node computing devices206(1)-206(n) in the cluster, and the node computing devices206(1)-206(n) can return results of the requested services to the client devices208(1)-208(n). In one example, the client devices208(1)-208(n) can exchange information with the network modules214(1)-214(n) residing in the node computing devices206(1)-206(n) (e.g., network hosts) in the data storage apparatuses202(1)-202(n). In one example, the storage apparatuses202(1)-202(n) host aggregates corresponding to physical local and remote data storage devices, such as local flash or disk storage in the data storage devices210(1)-210(n), for example. 
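The routing behavior just described, in which a network module reaches remote storage through the owning node's disk module and falls back to direct fabric access when that node fails, can be sketched as follows. This is a simplified model under assumed class and method names, not the actual module interfaces.

# Simplified sketch of network-module request routing. The class and
# method names are illustrative stand-ins, not real module interfaces.

class DiskModule:
    def __init__(self, storage):
        self.storage = storage           # block number -> data

    def read(self, block):
        return self.storage[block]

class Fabric:
    def __init__(self):
        self.disk_modules = {}           # node id -> DiskModule
        self.up = set()                  # nodes currently available

    def is_available(self, node):
        return node in self.up

    def disk_module(self, node):
        return self.disk_modules[node]

    def direct_read(self, node, block):
        # Direct access to the storage behind a failed node.
        return self.disk_modules[node].storage[block]

def route_read(fabric, owner_node, block):
    if fabric.is_available(owner_node):
        # Normal path: forward the request through the owning node's
        # disk module via the cluster fabric.
        return fabric.disk_module(owner_node).read(block)
    # Failover path: the owner is down, so access the storage
    # device directly via the cluster fabric.
    return fabric.direct_read(owner_node, block)

fabric = Fabric()
fabric.disk_modules["node-n"] = DiskModule({7: b"payload"})
fabric.up.add("node-n")
assert route_read(fabric, "node-n", 7) == b"payload"
fabric.up.discard("node-n")              # simulate node failure
assert route_read(fabric, "node-n", 7) == b"payload"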
One or more of the data storage devices210(1)-210(n) can include mass storage devices, such as disks of a disk array. The disks may comprise any type of mass storage devices, including but not limited to magnetic disk drives, flash memory, and any other similar media adapted to store information, including, for example, data and/or parity information. The aggregates include volumes218(1)-218(n) in this example, although any number of volumes can be included in the aggregates. The volumes218(1)-218(n) are virtual data stores or storage objects that define an arrangement of storage and one or more filesystems within the clustered network environment200. Volumes218(1)-218(n) can span a portion of a disk or other storage device, a collection of disks, or portions of disks, for example, and typically define an overall logical arrangement of data storage. In one example, volumes218(1)-218(n) can include stored user data as one or more files, blocks, or objects that may reside in a hierarchical directory structure within the volumes218(1)-218(n). Volumes218(1)-218(n) are typically configured in formats that may be associated with particular storage systems, and respective volume formats typically comprise features that provide functionality to the volumes218(1)-218(n), such as providing the ability for volumes218(1)-218(n) to form clusters, among other functionality. Optionally, one or more of the volumes218(1)-218(n) can be in composite aggregates and can extend between one or more of the data storage devices210(1)-210(n) and one or more of the cloud storage device(s)236to provide tiered storage, for example, and other arrangements can also be used in other examples. In one example, to facilitate access to data stored on the disks or other structures of the data storage devices210(1)-210(n), a filesystem may be implemented that logically organizes the information as a hierarchical structure of directories and files. In this example, respective files may be implemented as a set of disk blocks of a particular size that are configured to store information, whereas directories may be implemented as specially formatted files in which information about other files and directories is stored. Data can be stored as files or objects within a physical volume and/or a virtual volume, which can be associated with respective volume identifiers. The physical volumes correspond to at least a portion of physical storage devices, such as the data storage devices210(1)-210(n) (e.g., a Redundant Array of Independent (or Inexpensive) Disks (RAID system)) whose address, addressable space, location, etc. does not change. Typically, the location of the physical volumes does not change in that the range of addresses used to access them generally remains constant. Virtual volumes, in contrast, can be stored over an aggregate of disparate portions of different physical storage devices. Virtual volumes may be a collection of different available portions of different physical storage device locations, such as some available space from disks, for example. It will be appreciated that since the virtual volumes are not “tied” to any one particular storage device, virtual volumes can be said to include a layer of abstraction or virtualization, which allows them to be resized and/or otherwise flexible in some regards. Further, virtual volumes can include one or more logical unit numbers (LUNs), directories, Qtrees, files, and/or other storage objects, for example.
Among other things, these features, and more particularly the LUNs, allow the disparate memory locations within which data is stored to be identified, for example, and grouped as a data storage unit. As such, the LUNs may be characterized as constituting a virtual disk or drive upon which data within the virtual volumes is stored within an aggregate. For example, LUNs are often referred to as virtual drives, such that they emulate a hard drive, while they actually comprise data blocks stored in various parts of a volume. In one example, the data storage devices210(1)-210(n) can have one or more physical ports, wherein each physical port can be assigned a target address (e.g., SCSI target address). To represent respective volumes, a target address on the data storage devices210(1)-210(n) can be used to identify one or more of the LUNs. Thus, for example, when one of the node computing devices206(1)-206(n) connects to a volume, a connection between the one of the node computing devices206(1)-206(n) and one or more of the LUNs underlying the volume is created. Respective target addresses can identify multiple of the LUNs, such that a target address can represent multiple volumes. The I/O interface, which can be implemented as circuitry and/or software in a storage adapter or as executable code residing in memory and executed by a processor, for example, can connect to volumes by using one or more addresses that identify the one or more of the LUNs. Referring toFIG.3, node computing device206(1) in this particular example includes processor(s)300, a memory302, a network adapter304, a cluster access adapter306, and a storage adapter308interconnected by a system bus310. In other examples, the node computing device206(1) comprises a virtual machine, such as a virtual storage machine. The node computing device206(1) also includes a storage operating system312installed in the memory302that can, for example, implement a RAID data loss protection and recovery scheme to optimize reconstruction of data of a failed disk or drive in an array, along with other functionality such as deduplication, compression, snapshot creation, data mirroring, synchronous replication, asynchronous replication, encryption, etc. In some examples, the node computing device206(n) is substantially the same in structure and/or operation as node computing device206(1), although the node computing device206(n) can also include a different structure and/or operation in one or more aspects than the node computing device206(1). In an example, a file system may be implemented for persistent memory. The network adapter304in this example includes the mechanical, electrical and signaling circuitry needed to connect the node computing device206(1) to one or more of the client devices208(1)-208(n) over network connections212(1)-212(n), which may comprise, among other things, a point-to-point connection or a shared medium, such as a local area network. In some examples, the network adapter304further communicates (e.g., using TCP/IP) via the cluster fabric204and/or another network (e.g., a WAN) (not shown) with cloud storage device(s)236to process storage operations associated with data stored thereon. The storage adapter308cooperates with the storage operating system312executing on the node computing device206(1) to access information requested by one of the client devices208(1)-208(n) (e.g., to access data on a data storage device210(1)-210(n) managed by a network storage controller).
The information may be stored on any type of attached array of writeable media such as magnetic disk drives, flash memory, and/or any other similar media adapted to store information. In the exemplary data storage devices210(1)-210(n), information can be stored in data blocks on disks. The storage adapter308can include I/O interface circuitry that couples to the disks over an I/O interconnect arrangement, such as a storage area network (SAN) protocol (e.g., Small Computer System Interface (SCSI), Internet SCSI (iSCSI), hyperSCSI, Fiber Channel Protocol (FCP)). The information is retrieved by the storage adapter308and, if necessary, processed by the processor(s)300(or the storage adapter308itself) prior to being forwarded over the system bus310to the network adapter304(and/or the cluster access adapter306if sending to another node computing device in the cluster) where the information is formatted into a data packet and returned to a requesting one of the client devices208(1)-208(n) and/or sent to another node computing device attached via the cluster fabric204. In some examples, a storage driver314in the memory302interfaces with the storage adapter to facilitate interactions with the data storage devices210(1)-210(n). The storage operating system312can also manage communications for the node computing device206(1) among other devices that may be in a clustered network, such as attached to a cluster fabric204. Thus, the node computing device206(1) can respond to client device requests to manage data on one of the data storage devices210(1)-210(n) or cloud storage device(s)236(e.g., or additional clustered devices) in accordance with the client device requests. The file system module318of the storage operating system312can establish and manage one or more filesystems including software code and data structures that implement a persistent hierarchical namespace of files and directories, for example. As an example, when a new data storage device (not shown) is added to a clustered network system, the file system module318is informed where, in an existing directory tree, new files associated with the new data storage device are to be stored. This is often referred to as “mounting” a filesystem. In the example node computing device206(1), memory302can include storage locations that are addressable by the processor(s)300and adapters304,306, and308for storing related software application code and data structures. The processor(s)300and adapters304,306, and308may, for example, include processing elements and/or logic circuitry configured to execute the software code and manipulate the data structures. In the example, the node computing device206(1) comprises persistent memory320. The persistent memory320comprises a plurality of pages within which data can be stored. The plurality of pages may be indexed by page block numbers. The storage operating system312, portions of which are typically resident in the memory302and executed by the processor(s)300, invokes storage operations in support of a file service implemented by the node computing device206(1). Other processing and memory mechanisms, including various computer readable media, may be used for storing and/or executing application instructions pertaining to the techniques described and illustrated herein. For example, the storage operating system312can also utilize one or more control files (not shown) to aid in the provisioning of virtual machines. 
In this particular example, the memory302also includes a module configured to implement the techniques described herein, as discussed above and further below. The examples of the technology described and illustrated herein may be embodied as one or more non-transitory computer or machine readable media, such as the memory302, having machine or processor-executable instructions stored thereon for one or more aspects of the present technology, which when executed by processor(s), such as processor(s)300, cause the processor(s) to carry out the steps necessary to implement the methods of this technology, as described and illustrated with the examples herein. In some examples, the executable instructions are configured to perform one or more steps of a method described and illustrated later. One embodiment of implementing a layout format for compressed data is illustrated by an exemplary method400ofFIG.4, which is further described in conjunction with system500ofFIGS.5A-5D. A node502may store data509on behalf of client devices within storage, such as solid state drives, disk drives, memory, cloud storage, or a variety of other types of storage. The data509may be stored within disk blocks of physical storage, and may be organized by a file system for access by the client devices. As provided herein, the node502may provide storage efficiency functionality for more efficiently storing the data509, along with a layout format for compressed data. In an embodiment, the node502may provide deduplication functionality506used to deduplicate the data509in order to improve storage efficiency by eliminating instances of redundantly stored data, as illustrated byFIG.5A. In an example of implementing the deduplication functionality506, data blocks that are duplicated between files within the data509are rearranged within storage units such that one copy of the data occupies physical storage. References to the single copy of the data can be inserted into a file system structure such that all files or containers that contain the data refer to the same instance of the data within the physical storage. Deduplication can be performed on a data storage device block basis. In an example, data blocks on a storage device can be identified using a physical volume block number. The physical volume block number uniquely identifies a particular block on the storage device. Additionally, blocks within a file can be identified by a file block number. The file block number is a logical block number that indicates the logical position of a block within a file relative to other blocks in the file. For example, file block number0represents the first block of a file, file block number1represents the second block, etc. File block numbers can be mapped to a physical volume block number that is the actual data block on the storage device. During deduplication operations, blocks in a file that contain the same data are deduplicated by mapping the file block number for the block to the same physical volume block number, and maintaining a reference count of the number of file block numbers that map to the physical volume block number. For example, assume that file block number0and file block number5of a file contain the same data, while file block numbers1-4contain unique data. File block numbers1-4are mapped to different physical volume block numbers. File block number0and file block number5may be mapped to the same physical volume block number, thereby reducing storage requirements for the file.
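A minimal sketch of this file-block-number bookkeeping, using plain dictionaries and hypothetical names, might look like the following; a real file system would maintain these maps in on-disk structures rather than in memory. The same mapping applies across files, as described next.

# Minimal sketch of block-level deduplication: file block numbers
# (FBNs) map to physical volume block numbers (PVBNs), and a
# reference count tracks how many FBNs share each PVBN.

class DedupStore:
    def __init__(self):
        self.pvbn_data = {}              # PVBN -> block contents
        self.refcount = {}               # PVBN -> number of FBNs mapped
        self.by_content = {}             # block contents -> PVBN
        self.next_pvbn = 0

    def write(self, file_map, fbn, data):
        pvbn = self.by_content.get(data)
        if pvbn is None:
            # New contents: allocate a fresh physical block.
            pvbn = self.next_pvbn
            self.next_pvbn += 1
            self.pvbn_data[pvbn] = data
            self.by_content[data] = pvbn
            self.refcount[pvbn] = 0
        # Duplicate contents map the FBN to the existing PVBN.
        file_map[fbn] = pvbn
        self.refcount[pvbn] += 1

store = DedupStore()
file_a = {}
for fbn, data in enumerate([b"X", b"p", b"q", b"r", b"s", b"X"]):
    store.write(file_a, fbn, data)

# FBN 0 and FBN 5 contain the same data, so they share one PVBN.
assert file_a[0] == file_a[5]
assert store.refcount[file_a[0]] == 2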
Similarly, blocks in different files that contain the same data can be mapped to the same physical volume block number. For example, if file block number0of file A contains the same data as file block number3of file B, file block number0of file A may be mapped to the same physical volume block number as file block number3of file B. In an embodiment of implementing the deduplication functionality506using the layout format, a set of the data509may be deduplicated by the deduplication functionality506to create a deduplicated set of data. The set of the data509may correspond to one or more files within a container (e.g., a volume or other container within which files can be organized). In an embodiment, the set of the data509may be deduplicated by the deduplication functionality506at logical block boundaries. For example, the set of the data509may be deduplicated at logical block boundaries of 4 kb or any other data block size used by a file system of the node to store the data509. In an embodiment, the set of the data509may be deduplicated by the deduplication functionality506at a file level. For example, one or more select files within the container may be deduplicated, but other files may not be deduplicated by the deduplication functionality506. In an example of deduplicating a data block within the set of the data509, the data block may be deduplicated to point to a duplicate data block within the container based upon the data block comprising the same data as the duplicate data block. In an example, a single logical block within the set of the data509may point to multiple disk blocks after deduplication is performed by the deduplication functionality506. As will be further described in conjunction withFIG.5BandFIG.5C, one or more sets/groups of data within the data509may be deduplicated prior to compressing the one or more sets/groups of data within the data509. In this way, container level compression may be performed by compression functionality504of the node502upon the deduplicated set of data at a container level. That is, the deduplication functionality506deduplicates the data509at a file level such that individual files can be selectively deduplicated. In contrast, the compression functionality504compresses the data509at a container level, where a container (e.g., a volume or other data structure within which files can be organized and stored) comprises/contains one or more files. In an embodiment, deduplication of the data509may be optionally performed, and thus the data509may be compressed using the compression functionality504without deduplicating the data509. That is, the compression functionality504and the deduplication functionality506of the node502may be separate and independent, such that the data509may be deduplicated by the deduplication functionality506or not, and similarly, the data509may be compressed by the compression functionality504or not. In an embodiment of implementing the compression functionality504, the data509may be grouped into one or more groups of data blocks, at402. For example, the data509may be grouped by the compression functionality504into a first group of data blocks510, a second group of data blocks512, a third group of data blocks514, and/or other data blocks, as illustrated byFIG.5B. In this way, data blocks (e.g., data blocks storing at least some of the data509) may be grouped into a group of data blocks by the compression functionality504. The data blocks may be grouped based upon various criteria. 
In an embodiment, the data blocks may be grouped together by the compression functionality504based upon the data blocks comprising user data as opposed to other types of data such as metadata, because blocks of user data may have access patterns similar to one another but different from those of metadata. Similarly, the data blocks may be grouped together by the compression functionality504based upon the data blocks comprising metadata as opposed to other types of data such as user data, because blocks of metadata may have access patterns similar to one another but different from those of user data. In an embodiment, the data blocks may be grouped together by the compression functionality504based upon the data blocks having similar access frequencies. For example, infrequently accessed data blocks may be grouped together into a group of data blocks, while frequently/recently accessed data blocks may be grouped together into a different group of data blocks than the infrequently accessed data blocks. In an embodiment, the data blocks may be grouped together into a group of data blocks by the compression functionality504based upon the data blocks comprising primary data that is actively available for access by client devices, while secondary backup data (e.g., data maintained as a replica of the primary data or a snapshot of a file system associated with the primary data) may be separately grouped together into a different group of data blocks than the primary data. In this way, similar types of data blocks may be grouped together into the same group of data blocks. At404, container level compression may be performed by the compression functionality504upon a group of data blocks to compress the group of data blocks as compressed data blocks within a compression group. In an example, the compression functionality504may implement a first compression algorithm to compress the group of data blocks to create the compressed data blocks. In an example, the compression functionality504may compress the first group of data blocks510using the first compression algorithm to create the first compression group520, as illustrated byFIG.5C. The compression functionality504may compress the second group of data blocks512using a compression algorithm (e.g., the first compression algorithm or a different compression algorithm) to create a second compression group522. The compression functionality504may compress the third group of data blocks514using a compression algorithm (e.g., the first compression algorithm or a different compression algorithm) to create a third compression group524. In an embodiment, the compression functionality504may compress the group of data blocks using variable chunk size compression algorithms. In an embodiment, the compression functionality504may store a reference within a data block of a compression group (e.g., a data block maintained by the file system) to point to one or more disk blocks (e.g., disk blocks within a storage device) comprising compressed data of the data block. The reference within the data block may point to a single disk block or multiple disk blocks comprising the compressed data of the data block, and thus a data block of the file system may point to multiple disk blocks of a storage device. In an embodiment, the compression functionality504may represent a disk location of a disk block (e.g., a physical location within the storage device) comprising compressed data of a data block of the compression group using an encoding, as described below.
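As a rough illustration of the grouping criteria above, the sketch below partitions blocks by data type and access frequency before compression. The threshold value and the field names are assumptions made for the example, not values prescribed by the techniques described herein.

# Illustrative grouping policy: partition blocks into compression
# groups by data type and access frequency.

ACCESS_THRESHOLD = 10                    # accesses per day, assumed

def group_blocks(blocks):
    groups = {"metadata": [], "hot_user": [], "cold_user": []}
    for block in blocks:
        if block["kind"] == "metadata":
            groups["metadata"].append(block)
        elif block["accesses"] >= ACCESS_THRESHOLD:
            groups["hot_user"].append(block)   # frequently accessed
        else:
            groups["cold_user"].append(block)  # infrequently accessed
    return groups

blocks = [
    {"id": 1, "kind": "user", "accesses": 40},
    {"id": 2, "kind": "user", "accesses": 2},
    {"id": 3, "kind": "metadata", "accesses": 15},
]
groups = group_blocks(blocks)
assert [b["id"] for b in groups["hot_user"]] == [1]
assert [b["id"] for b in groups["cold_user"]] == [2]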
The disk location may be encoded within a file system indirect block (e.g., an indirect block of a file system that points to the data block whose data is stored within the disk block). The disk location may be encoded by encoding disk block information as a starting disk block number and a count of disk blocks (e.g., a number of disk blocks, from a starting disk block, at which the disk block is located within the storage) within the file system indirect block. At406, compressed data blocks of a compression group are stored within a variable number of disk blocks by the compression functionality504. That is, the compressed data blocks are not limited to being stored within a set number of disk blocks, but can be stored in a variable number of disk blocks. The compression functionality504may generate metadata for the compressed data blocks of the compression group. For example, the compression functionality504may generate first metadata for a first compressed data block within the first compression group520. The first metadata may comprise information for decompressing the first compressed data block. The first metadata may comprise information for how to read the first compressed data block. The compression functionality504may store the first metadata in a disk block within which the first compressed data block is stored, at408. In this way, the disk block comprises the first compressed data block, along with the information of the first metadata used to read and decompress the first compressed data block. In response to receiving a request to access the first compressed data block within the disk block, the first metadata within the disk block is utilized to decompress and read the first compressed data block. In an embodiment of implementing the compression functionality504, a first set of data blocks are grouped into a first group based upon a first frequency of access to the first set of data blocks. For example, the first set of data blocks are accessed less than a threshold frequency, and thus are grouped together into the first group. A second set of data blocks are grouped into a second group based upon a second frequency of access to the second set of data blocks. For example, the second set of data blocks are accessed greater than the threshold frequency, and thus are grouped together into the second group. In an embodiment, a first group size of the first group is independent of a second group size of the second group, such that the first group size and the second group size may be the same size or different sizes. The compression functionality504may compress the first set of data blocks within the first group into a first compression group using a first compression algorithm. The compression functionality504may compress the second set of data blocks within the second group into a second compression group using a second compression algorithm. The first compression algorithm and the second compression algorithm may be the same compression algorithm or different compression algorithms. The first compression algorithm may utilize a first compression size. The second compression algorithm may utilize a second compression size that is the same or different than the first compression size. In an embodiment, a third set of data blocks are grouped into a third group based upon the third set of data blocks comprising secondary backup data (e.g., a backup of primary data actively accessible to client devices).
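One way to picture the per-block metadata described above is the following sketch, in which zlib stands in for whichever compression algorithm is chosen, and the on-disk layout (a small record holding the algorithm name, the uncompressed length, and the compressed payload) is an assumption made for illustration.

# Sketch of storing a compressed block together with the metadata
# needed to read it back; the metadata lives in the same disk block
# as the compressed data it describes.

import zlib

def write_compressed_block(disk, disk_block_no, data):
    compressed = zlib.compress(data)
    disk[disk_block_no] = {
        "algorithm": "zlib",             # how to decompress
        "uncompressed_len": len(data),   # how to validate a read
        "payload": compressed,
    }

def read_compressed_block(disk, disk_block_no):
    block = disk[disk_block_no]
    # The metadata in the block tells the reader how to decompress.
    assert block["algorithm"] == "zlib"
    data = zlib.decompress(block["payload"])
    assert len(data) == block["uncompressed_len"]
    return data

disk = {}
write_compressed_block(disk, 1000, b"some user data" * 100)
assert read_compressed_block(disk, 1000) == b"some user data" * 100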
The third set of data blocks within the third group may be compressed into a third compression group using a third compression algorithm. The third compression algorithm may be the same or different than the first compression algorithm and/or the second compression algorithm. The third compression algorithm may utilize a third compression size. The third compression size may be the same or different than the first compression size and/or the second compression size. In an embodiment, garbage collection functionality508may be implemented by the node502for the data509, such as for deduplicated data and/or compressed data, as illustrated byFIG.5D. The garbage collection functionality508may identify the third compression group524as comprising data that can be freed from storage. The garbage collection functionality508may decompress the compressed data blocks of the third compression group524as uncompressed data blocks. The garbage collection functionality508may free one or more of the uncompressed data blocks as freed data blocks that become available to store other data. The remaining uncompressed data blocks may be recompressed to create a recompressed group530. In an embodiment, the garbage collection functionality508may asynchronously perform garbage collection to free disk blocks of storage based upon the disk blocks having a number of logical blocks pointing to the disk blocks that is less than a count of the disk blocks. Otherwise, the disk blocks may not be freed if the disk blocks have a number of logical blocks pointing to the disk blocks that is greater than a count of the disk blocks. In an example of the layout format used to compress data blocks, a node may store data within storage as uncompressed data blocks, such as 8 uncompressed data blocks or any other number of uncompressed data blocks. Virtual volume block numbers100to107may correspond to 8 user data blocks that will be compressed together. The uncompressed data blocks may be compressed using a compression algorithm to create compressed data blocks, such as 5 data blocks (e.g., the 8 uncompressed data blocks are compressed into 5 data blocks). That is, the 8 user data blocks are compressed together such that the resulting on-disk storage is 5 blocks. In an example, the compressed data blocks are stored on-disk at 5 physical volume block numbers starting at physical volume block number1000. Disk locations of disk blocks comprising the compressed data blocks may be encoded as an encoding. The encoding may comprise an encoding of a physical volume block number. The encoding may comprise information corresponding to a number of compressed blocks, which may be stored within one or more bits. The encoding may comprise an actual physical volume block number. One or more bits are used to represent the number of physical volume block numbers where compressed data is stored. For example, the compressed data is stored at a physical volume block number starting at 1000 and uses 5 physical volume block numbers. So, three bits are set to ‘101’ for binary 5, and the actual physical volume block number bits are set to 1000. This encoded physical volume block number value is written as P:1000,5. In an example, a user file may be represented by a user file L1 format (e.g., a level 1 within a file system comprising the user file). The user file L1 format may comprise information for file block numbers of the user file. The user file L1 format may represent a format for a user file indirect block within a file system volume.
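The encoded physical volume block number in this example can be modeled with simple bit packing. In the sketch below, the field width for the count is an assumption (the example above uses three bits holding ‘101’); only the shape of the encoding, a starting block number packed together with a count, follows the description.

# Sketch of the encoded physical volume block number: the starting
# PVBN in the high bits and the count of physical blocks holding the
# compressed data in the low bits.

COUNT_BITS = 8                           # assumed field width

def encode_pvbn(start_pvbn, block_count):
    return (start_pvbn << COUNT_BITS) | block_count

def decode_pvbn(encoded):
    start_pvbn = encoded >> COUNT_BITS
    block_count = encoded & ((1 << COUNT_BITS) - 1)
    return start_pvbn, block_count

# Eight user blocks compressed into five physical blocks starting at
# physical volume block number 1000 are recorded as "P:1000,5".
encoded = encode_pvbn(1000, 5)
start, count = decode_pvbn(encoded)
assert (start, count) == (1000, 5)
print(f"P:{start},{count}")              # -> P:1000,5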
Each slot within the user file L1 format represents a file block. Each slot points to a virtual volume block number value and a physical volume block number value for a corresponding file block. In this example, there are 8 virtual volume block numbers (100to107), which are compressed together to create compressed data that is stored at 5 physical volume block numbers starting at physical volume block number1000. These file blocks point to the virtual volume block numbers100-107and are encoded as physical volume block number P:1000,5. In an example, a container comprising the user file and/or other user files may be represented by a container file (L1) format. The container file (L1) format may comprise information for virtual volume block numbers associated with the container. The container file (L1) format may represent a format for a container file within the file system volume. Each slot represents a container file mapping from virtual volume block number values to physical volume block number values. In this example, virtual volume block numbers100-107are compressed and the data is stored in 5 physical volume block numbers starting at physical volume block number1000. So, each of the virtual volume block numbers100-107points to the encoded physical volume block number P:1000,5. Still another embodiment involves a computer-readable medium600comprising processor-executable instructions configured to implement one or more of the techniques presented herein. An example embodiment of a computer-readable medium or a computer-readable device that is devised in these ways is illustrated inFIG.6, wherein the implementation comprises a computer-readable medium608, such as a compact disc-recordable (CD-R), a digital versatile disc-recordable (DVD-R), flash drive, a platter of a hard disk drive, etc., on which is encoded computer-readable data606. This computer-readable data606, such as binary data comprising at least one of a zero or a one, in turn comprises processor-executable computer instructions604configured to operate according to one or more of the principles set forth herein. In some embodiments, the processor-executable computer instructions604are configured to perform a method602, such as at least some of the exemplary method400ofFIG.4, for example. In some embodiments, the processor-executable computer instructions604are configured to implement a system, such as at least some of the exemplary system500ofFIGS.5A-5D, for example. Many such computer-readable media are contemplated to operate in accordance with the techniques presented herein. In an embodiment, the described methods and/or their equivalents may be implemented with computer executable instructions. Thus, in an embodiment, a non-transitory computer readable/storage medium is configured with stored computer executable instructions of an algorithm/executable application that when executed by a machine(s) cause the machine(s) (and/or associated components) to perform the method. Example machines include but are not limited to a processor, a computer, a server operating in a cloud computing system, a server configured in a Software as a Service (SaaS) architecture, a smart phone, and so on. In an embodiment, a computing device is implemented with one or more executable algorithms that are configured to perform any of the disclosed methods. It will be appreciated that processes, architectures and/or procedures described herein can be implemented in hardware, firmware and/or software.
It will also be appreciated that the provisions set forth herein may apply to any type of special-purpose computer (e.g., file host, storage server and/or storage serving appliance) and/or general-purpose computer, including a standalone computer or portion thereof, embodied as or including a storage system. Moreover, the teachings herein can be configured to a variety of storage system architectures including, but not limited to, a network-attached storage environment and/or a storage area network and disk assembly directly attached to a client or host computer. Storage system should therefore be taken broadly to include such arrangements in addition to any subsystems configured to perform a storage function and associated with other equipment or systems. In some embodiments, methods described and/or illustrated in this disclosure may be realized in whole or in part on computer-readable media. Computer readable media can include processor-executable instructions configured to implement one or more of the methods presented herein, and may include any mechanism for storing this data that can be thereafter read by a computer system. Examples of computer readable media include (hard) drives (e.g., accessible via network attached storage (NAS)), Storage Area Networks (SAN), volatile and non-volatile memory, such as read-only memory (ROM), random-access memory (RAM), electrically erasable programmable read-only memory (EEPROM) and/or flash memory, compact disk read only memory (CD-ROM)s, CD-Rs, compact disk re-writeable (CD-RW)s, DVDs, cassettes, magnetic tape, magnetic disk storage, optical or non-optical data storage devices and/or any other medium which can be used to store data. Although the subject matter has been described in language specific to structural features or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing at least some of the claims. Various operations of embodiments are provided herein. The order in which some or all of the operations are described should not be construed to imply that these operations are necessarily order dependent. Alternative ordering will be appreciated given the benefit of this description. Further, it will be understood that not all operations are necessarily present in each embodiment provided herein. Also, it will be understood that not all operations are necessary in some embodiments. Furthermore, the claimed subject matter is implemented as a method, apparatus, or article of manufacture using standard application or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term “article of manufacture” as used herein is intended to encompass a computer application accessible from any computer-readable device, carrier, or media. Of course, many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter. As used in this application, the terms “component”, “module,” “system”, “interface”, and the like are generally intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. 
For example, a component includes a process running on a processor, a processor, an object, an executable, a thread of execution, an application, or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components residing within a process or thread of execution and a component may be localized on one computer or distributed between two or more computers. Moreover, “exemplary” is used herein to mean serving as an example, instance, illustration, etc., and not necessarily as advantageous. As used in this application, “or” is intended to mean an inclusive “or” rather than an exclusive “or”. In addition, “a” and “an” as used in this application are generally construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Also, at least one of A and B and/or the like generally means A or B and/or both A and B. Furthermore, to the extent that “includes”, “having”, “has”, “with”, or variants thereof are used, such terms are intended to be inclusive in a manner similar to the term “comprising”. Many modifications may be made to the instant disclosure without departing from the scope or spirit of the claimed subject matter. Unless specified otherwise, “first,” “second,” or the like are not intended to imply a temporal aspect, a spatial aspect, an ordering, etc. Rather, such terms are merely used as identifiers, names, etc. for features, elements, items, etc. For example, a first set of information and a second set of information generally correspond to set of information A and set of information B or two different or two identical sets of information or the same set of information. Also, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art based upon a reading and understanding of this specification and the annexed drawings. The disclosure includes all such modifications and alterations and is limited only by the scope of the following claims. In particular regard to the various functions performed by the above described components (e.g., elements, resources, etc.), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., that is functionally equivalent), even though not structurally equivalent to the disclosed structure. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application.
11861170 | DESCRIPTION OF EMBODIMENTS Example methods, apparatus, and products for determining effective space utilization in a storage system in accordance with embodiments of the present disclosure are described with reference to the accompanying drawings, beginning withFIG.1A.FIG.1Aillustrates an example system for data storage, in accordance with some implementations. System100(also referred to as “storage system” herein) includes numerous elements for purposes of illustration rather than limitation. It may be noted that system100may include the same, more, or fewer elements configured in the same or different manner in other implementations. System100includes a number of computing devices164A-B. Computing devices (also referred to as “client devices” herein) may be embodied as, for example, a server in a data center, a workstation, a personal computer, a notebook, or the like. Computing devices164A-B may be coupled for data communications to one or more storage arrays102A-B through a storage area network (‘SAN’)158or a local area network (‘LAN’)160. The SAN158may be implemented with a variety of data communications fabrics, devices, and protocols. For example, the fabrics for SAN158may include Fibre Channel, Ethernet, Infiniband, Serial Attached Small Computer System Interface (‘SAS’), or the like. Data communications protocols for use with SAN158may include Advanced Technology Attachment (‘ATA’), Fibre Channel Protocol, Small Computer System Interface (‘SCSI’), Internet Small Computer System Interface (‘iSCSI’), HyperSCSI, Non-Volatile Memory Express (‘NVMe’) over Fabrics, or the like. It may be noted that SAN158is provided for illustration, rather than limitation. Other data communication couplings may be implemented between computing devices164A-B and storage arrays102A-B. The LAN160may also be implemented with a variety of fabrics, devices, and protocols. For example, the fabrics for LAN160may include Ethernet (802.3), wireless (802.11), or the like. Data communication protocols for use in LAN160may include Transmission Control Protocol (‘TCP’), User Datagram Protocol (‘UDP’), Internet Protocol (‘IP’), HyperText Transfer Protocol (‘HTTP’), Wireless Access Protocol (‘WAP’), Handheld Device Transport Protocol (‘HDTP’), Session Initiation Protocol (‘SIP’), Real Time Protocol (‘RTP’), or the like. Storage arrays102A-B may provide persistent data storage for the computing devices164A-B. Storage array102A may be contained in a chassis (not shown), and storage array102B may be contained in another chassis (not shown), in some implementations. Storage array102A and102B may include one or more storage array controllers110A-D (also referred to as “controller” herein). A storage array controller110A-D may be embodied as a module of automated computing machinery comprising computer hardware, computer software, or a combination of computer hardware and software. In some implementations, the storage array controllers110A-D may be configured to carry out various storage tasks. Storage tasks may include writing data received from the computing devices164A-B to storage array102A-B, erasing data from storage array102A-B, retrieving data from storage array102A-B and providing data to computing devices164A-B, monitoring and reporting of storage device utilization and performance, performing redundancy operations, such as Redundant Array of Independent Drives (‘RAID’) or RAID-like data redundancy operations, compressing data, encrypting data, and so forth.
Storage array controller110A-D may be implemented in a variety of ways, including as a Field Programmable Gate Array (‘FPGA’), a Programmable Logic Chip (‘PLC’), an Application Specific Integrated Circuit (‘ASIC’), System-on-Chip (‘SOC’), or any computing device that includes discrete components such as a processing device, central processing unit, computer memory, or various adapters. Storage array controller110A-D may include, for example, a data communications adapter configured to support communications via the SAN158or LAN160. In some implementations, storage array controller110A-D may be independently coupled to the LAN160. In some implementations, storage array controller110A-D may include an I/O controller or the like that couples the storage array controller110A-D for data communications, through a midplane (not shown), to a persistent storage resource170A-B (also referred to as a “storage resource” herein). The persistent storage resource170A-B may include any number of storage drives171A-F (also referred to as “storage devices” herein) and any number of non-volatile Random Access Memory (‘NVRAM’) devices (not shown). In some implementations, the NVRAM devices of a persistent storage resource170A-B may be configured to receive, from the storage array controller110A-D, data to be stored in the storage drives171A-F. In some examples, the data may originate from computing devices164A-B. In some examples, writing data to the NVRAM device may be carried out more quickly than directly writing data to the storage drive171A-F. In some implementations, the storage array controller110A-D may be configured to utilize the NVRAM devices as a quickly accessible buffer for data destined to be written to the storage drives171A-F. Latency for write requests using NVRAM devices as a buffer may be improved relative to a system in which a storage array controller110A-D writes data directly to the storage drives171A-F. In some implementations, the NVRAM devices may be implemented with computer memory in the form of high bandwidth, low latency RAM. The NVRAM device is referred to as “non-volatile” because the NVRAM device may receive or include a unique power source that maintains the state of the RAM after main power loss to the NVRAM device. Such a power source may be a battery, one or more capacitors, or the like. In response to a power loss, the NVRAM device may be configured to write the contents of the RAM to a persistent storage, such as the storage drives171A-F. In some implementations, storage drive171A-F may refer to any device configured to record data persistently, where “persistently” or “persistent” refers to a device's ability to maintain recorded data after loss of power. In some implementations, storage drive171A-F may correspond to non-disk storage media. For example, the storage drive171A-F may be one or more solid-state drives (‘SSDs’), flash memory based storage, any type of solid-state non-volatile memory, or any other type of non-mechanical storage device. In other implementations, storage drive171A-F may include mechanical or spinning hard disk, such as hard-disk drives (‘HDD’). In some implementations, the storage array controllers110A-D may be configured for offloading device management responsibilities from storage drive171A-F in storage array102A-B. For example, storage array controllers110A-D may manage control information that may describe the state of one or more memory blocks in the storage drives171A-F.
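The NVRAM write-buffering behavior described above can be pictured with the following sketch, in which a write is acknowledged once staged in NVRAM and destaged to the drives later; the class and method names are illustrative stand-ins rather than an actual controller interface.

# Sketch of the NVRAM write path: acknowledge once the write is
# staged in low-latency NVRAM, then destage to the slower drives.

class Controller:
    def __init__(self):
        self.nvram = {}                  # staged writes: address -> data
        self.drives = {}                 # persistent backing store

    def write(self, address, data):
        # Buffer in NVRAM and acknowledge immediately; latency is
        # bounded by NVRAM, not by the storage drives.
        self.nvram[address] = data
        return "ack"

    def destage(self):
        # Later (or on power loss), flush NVRAM contents to drives.
        self.drives.update(self.nvram)
        self.nvram.clear()

ctrl = Controller()
assert ctrl.write(42, b"client data") == "ack"
ctrl.destage()
assert ctrl.drives[42] == b"client data"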
The control information may indicate, for example, that a particular memory block has failed and should no longer be written to, that a particular memory block contains boot code for a storage array controller110A-D, the number of program-erase (‘P/E’) cycles that have been performed on a particular memory block, the age of data stored in a particular memory block, the type of data that is stored in a particular memory block, and so forth. In some implementations, the control information may be stored with an associated memory block as metadata. In other implementations, the control information for the storage drives171A-F may be stored in one or more particular memory blocks of the storage drives171A-F that are selected by the storage array controller110A-D. The selected memory blocks may be tagged with an identifier indicating that the selected memory block contains control information. The identifier may be utilized by the storage array controllers110A-D in conjunction with storage drives171A-F to quickly identify the memory blocks that contain control information. For example, the storage controllers110A-D may issue a command to locate memory blocks that contain control information. It may be noted that control information may be so large that parts of the control information may be stored in multiple locations, that the control information may be stored in multiple locations for purposes of redundancy, for example, or that the control information may otherwise be distributed across multiple memory blocks in the storage drives171A-F. In some implementations, storage array controllers110A-D may offload device management responsibilities from storage drives171A-F of storage array102A-B by retrieving, from the storage drives171A-F, control information describing the state of one or more memory blocks in the storage drives171A-F. Retrieving the control information from the storage drives171A-F may be carried out, for example, by the storage array controller110A-D querying the storage drives171A-F for the location of control information for a particular storage drive171A-F. The storage drives171A-F may be configured to execute instructions that enable the storage drives171A-F to identify the location of the control information. The instructions may be executed by a controller (not shown) associated with or otherwise located on the storage drive171A-F and may cause the storage drive171A-F to scan a portion of each memory block to identify the memory blocks that store control information for the storage drives171A-F. The storage drives171A-F may respond by sending a response message to the storage array controller110A-D that includes the location of control information for the storage drive171A-F. Responsive to receiving the response message, storage array controllers110A-D may issue a request to read data stored at the address associated with the location of control information for the storage drives171A-F. In other implementations, the storage array controllers110A-D may further offload device management responsibilities from storage drives171A-F by performing, in response to receiving the control information, a storage drive management operation. A storage drive management operation may include, for example, an operation that is typically performed by the storage drive171A-F (e.g., the controller (not shown) associated with a particular storage drive171A-F). 
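A minimal sketch of the tagged-block scheme just described, under the assumption that a control block can be recognized by a leading identifier; the tag value, function names, and block layout are all illustrative, not the patented format.

```python
# Illustrative only: a drive-side scan locates memory blocks whose header
# carries the control-information tag, and the controller then reads the
# control data from the reported locations.
CONTROL_TAG = b"CTRL"

def locate_control_blocks(blocks):
    """Return indices of memory blocks tagged as holding control information."""
    return [i for i, blk in enumerate(blocks) if blk.startswith(CONTROL_TAG)]

def read_control_info(blocks):
    locations = locate_control_blocks(blocks)                 # drive-side scan
    return [blocks[i][len(CONTROL_TAG):] for i in locations]  # controller reads

drive_blocks = [b"user data", CONTROL_TAG + b"{p/e: 1500, failed: false}", b"more data"]
print(read_control_info(drive_blocks))   # [b'{p/e: 1500, failed: false}']
```

Tagging and scanning in this way lets control information be distributed across multiple blocks (for size or redundancy) while remaining quickly discoverable.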
A storage drive management operation may include, for example, ensuring that data is not written to failed memory blocks within the storage drive171A-F, ensuring that data is written to memory blocks within the storage drive171A-F in such a way that adequate wear leveling is achieved, and so forth. In some implementations, storage array102A-B may implement two or more storage array controllers110A-D. For example, storage array102A may include storage array controller110A and storage array controller110B. At a given instant, a single storage array controller110A-D (e.g., storage array controller110A) of a storage system100may be designated with primary status (also referred to as “primary controller” herein), and other storage array controllers110A-D (e.g., storage array controller110B) may be designated with secondary status (also referred to as “secondary controller” herein). The primary controller may have particular rights, such as permission to alter data in persistent storage resource170A-B (e.g., writing data to persistent storage resource170A-B). At least some of the rights of the primary controller may supersede the rights of the secondary controller. For instance, the secondary controller may not have permission to alter data in persistent storage resource170A-B when the primary controller has that right. The status of storage array controllers110A-D may change. For example, storage array controller110A may be designated with secondary status, and storage array controller110B may be designated with primary status. In some implementations, a primary controller, such as storage array controller110A, may serve as the primary controller for one or more storage arrays102A-B, and a second controller, such as storage array controller110B, may serve as the secondary controller for the one or more storage arrays102A-B. For example, storage array controller110A may be the primary controller for storage array102A and storage array102B, and storage array controller110B may be the secondary controller for storage array102A and102B. In some implementations, storage array controllers110C and110D (also referred to as “storage processing modules”) may have neither primary nor secondary status. Storage array controllers110C and110D, implemented as storage processing modules, may act as a communication interface between the primary and secondary controllers (e.g., storage array controllers110A and110B, respectively) and storage array102B. For example, storage array controller110A of storage array102A may send a write request, via SAN158, to storage array102B. The write request may be received by both storage array controllers110C and110D of storage array102B. Storage array controllers110C and110D facilitate the communication, e.g., send the write request to the appropriate storage drive171A-F. It may be noted that in some implementations storage processing modules may be used to increase the number of storage drives controlled by the primary and secondary controllers. In some implementations, storage array controllers110A-D are communicatively coupled, via a midplane (not shown), to one or more storage drives171A-F and to one or more NVRAM devices (not shown) that are included as part of a storage array102A-B. The storage array controllers110A-D may be coupled to the midplane via one or more data communication links and the midplane may be coupled to the storage drives171A-F and the NVRAM devices via one or more data communications links.
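A hedged sketch of the primary/secondary rights model described above: only the controller currently holding primary status may alter persistent data, and designations can be swapped on failover. The class and function names here are invented for illustration and do not reflect any particular controller firmware.

```python
# Illustrative only: enforce that a secondary controller may not alter data,
# and show a status swap such as the one described in the text.
class ArrayController:
    def __init__(self, name, primary=False):
        self.name = name
        self.primary = primary

    def write(self, storage, key, value):
        if not self.primary:
            raise PermissionError(f"{self.name}: secondary may not alter data")
        storage[key] = value

def fail_over(old_primary, new_primary):
    """Swap designations, e.g., when the primary becomes unavailable."""
    old_primary.primary, new_primary.primary = False, True

storage = {}
c110a = ArrayController("110A", primary=True)
c110b = ArrayController("110B")
c110a.write(storage, "blk0", b"data")
fail_over(c110a, c110b)
c110b.write(storage, "blk0", b"new data")   # now permitted
```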
The data communications links described herein are collectively illustrated by data communications links108A-D and may include a Peripheral Component Interconnect Express (‘PCIe’) bus, for example. FIG.1Billustrates an example system for data storage, in accordance with some implementations. Storage array controller101illustrated inFIG.1Bmay be similar to the storage array controllers110A-D described with respect toFIG.1A. In one example, storage array controller101may be similar to storage array controller110A or storage array controller110B. Storage array controller101includes numerous elements for purposes of illustration rather than limitation. It may be noted that storage array controller101may include the same, more, or fewer elements configured in the same or different manner in other implementations. It may be noted that elements ofFIG.1Amay be included below to help illustrate features of storage array controller101. Storage array controller101may include one or more processing devices104and random access memory (‘RAM’)111. Processing device104(or controller101) represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device104(or controller101) may be a complex instruction set computing (‘CISC’) microprocessor, reduced instruction set computing (‘RISC’) microprocessor, very long instruction word (‘VLIW’) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processing device104(or controller101) may also be one or more special-purpose processing devices such as an ASIC, an FPGA, a digital signal processor (‘DSP’), network processor, or the like. The processing device104may be connected to the RAM111via a data communications link106, which may be embodied as a high speed memory bus such as a Double-Data Rate 4 (‘DDR4’) bus. Stored in RAM111is an operating system112. In some implementations, instructions113are stored in RAM111. Instructions113may include computer program instructions for performing operations in a direct-mapped flash storage system. In one embodiment, a direct-mapped flash storage system is one that addresses data blocks within flash drives directly and without an address translation performed by the storage controllers of the flash drives. In some implementations, storage array controller101includes one or more host bus adapters103A-C that are coupled to the processing device104via a data communications link105A-C. In some implementations, host bus adapters103A-C may be computer hardware that connects a host system (e.g., the storage array controller) to other network and storage arrays. In some examples, host bus adapters103A-C may be a Fibre Channel adapter that enables the storage array controller101to connect to a SAN, an Ethernet adapter that enables the storage array controller101to connect to a LAN, or the like. Host bus adapters103A-C may be coupled to the processing device104via a data communications link105A-C such as, for example, a PCIe bus. In some implementations, storage array controller101may include a host bus adapter114that is coupled to an expander115. The expander115may be used to attach a host system to a larger number of storage drives. The expander115may, for example, be a SAS expander utilized to enable the host bus adapter114to attach to storage drives in an implementation where the host bus adapter114is embodied as a SAS controller.
In some implementations, storage array controller101may include a switch116coupled to the processing device104via a data communications link109. The switch116may be a computer hardware device that can create multiple endpoints out of a single endpoint, thereby enabling multiple devices to share a single endpoint. The switch116may, for example, be a PCIe switch that is coupled to a PCIe bus (e.g., data communications link109) and presents multiple PCIe connection points to the midplane. In some implementations, storage array controller101includes a data communications link107for coupling the storage array controller101to other storage array controllers. In some examples, data communications link107may be a QuickPath Interconnect (QPI) interconnect. A traditional storage system that uses traditional flash drives may implement a process across the flash drives that are part of the traditional storage system. For example, a higher level process of the storage system may initiate and control a process across the flash drives. However, a flash drive of the traditional storage system may include its own storage controller that also performs the process. Thus, for the traditional storage system, a higher level process (e.g., initiated by the storage system) and a lower level process (e.g., initiated by a storage controller of the storage system) may both be performed. To resolve various deficiencies of a traditional storage system, operations may be performed by higher level processes and not by the lower level processes. For example, the flash storage system may include flash drives that do not include storage controllers that provide the process. Thus, the operating system of the flash storage system itself may initiate and control the process. This may be accomplished by a direct-mapped flash storage system that addresses data blocks within the flash drives directly and without an address translation performed by the storage controllers of the flash drives. In some implementations, storage drive171A-F may be one or more zoned storage devices. In some implementations, the one or more zoned storage devices may be a shingled HDD. In some implementations, the one or more storage devices may be a flash-based SSD. In a zoned storage device, a zoned namespace on the zoned storage device can be addressed by groups of blocks that are grouped and aligned by a natural size, forming a number of addressable zones. In some implementations utilizing an SSD, the natural size may be based on the erase block size of the SSD. In some implementations, the zones of the zoned storage device may be defined during initialization of the zoned storage device. In some implementations, the zones may be defined dynamically as data is written to the zoned storage device. In some implementations, zones may be heterogeneous, with some zones each being a page group and other zones being multiple page groups. In some implementations, some zones may correspond to an erase block and other zones may correspond to multiple erase blocks. In an implementation, zones may be any combination of differing numbers of pages in page groups and/or erase blocks, for heterogeneous mixes of programming modes, manufacturers, product types and/or product generations of storage devices, as applied to heterogeneous assemblies, upgrades, distributed storages, etc. 
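Since zones are described as groups of blocks aligned to a natural size such as the SSD erase block, a short illustrative calculation may help; the zone size here is a made-up assumption, not a value from the disclosure.

```python
# Sketch: map a block address to its zone under the assumption that zones
# are aligned to an erase-block-sized natural boundary (size is illustrative).
ERASE_BLOCK = 4096          # blocks per zone, chosen for illustration

def zone_of(block_addr, zone_size=ERASE_BLOCK):
    """Return (zone number, offset within the zone) for a block address."""
    return divmod(block_addr, zone_size)

print(zone_of(0))        # (0, 0): first block of zone 0
print(zone_of(9000))     # (2, 808): third zone, partway in
```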
In some implementations, zones may be defined as having usage characteristics, such as a property of supporting data with particular kinds of longevity (very short lived or very long lived, for example). These properties could be used by a zoned storage device to determine how the zone will be managed over the zone's expected lifetime. It should be appreciated that a zone is a virtual construct. Any particular zone may not have a fixed location at a storage device. Until allocated, a zone may not have any location at a storage device. A zone may correspond to a number representing a chunk of virtually allocatable space that is the size of an erase block or other block size in various implementations. When the system allocates or opens a zone, zones get allocated to flash or other solid-state storage memory and, as the system writes to the zone, pages are written to that mapped flash or other solid-state storage memory of the zoned storage device. When the system closes the zone, the associated erase block(s) or other sized block(s) are completed. At some point in the future, the system may delete a zone, which will free up the zone's allocated space. During its lifetime, a zone may be moved around to different locations of the zoned storage device, e.g., as the zoned storage device does internal maintenance. In some implementations, the zones of the zoned storage device may be in different states. A zone may be in an empty state in which data has not been stored at the zone. An empty zone may be opened explicitly, or implicitly by writing data to the zone. This is the initial state for zones on a fresh zoned storage device, but may also be the result of a zone reset. In some implementations, an empty zone may have a designated location within the flash memory of the zoned storage device. In an implementation, the location of the empty zone may be chosen when the zone is first opened or first written to (or later if writes are buffered into memory). A zone may be in an open state either implicitly or explicitly, where a zone that is in an open state may be written to store data with write or append commands. In an implementation, a zone that is in an open state may also be written to using a copy command that copies data from a different zone. In some implementations, a zoned storage device may have a limit on the number of open zones at a particular time. A zone in a closed state is a zone that has been partially written to, but has entered a closed state after issuing an explicit close operation. A zone in a closed state may be left available for future writes, but may reduce some of the run-time overhead consumed by keeping the zone in an open state. In some implementations, a zoned storage device may have a limit on the number of closed zones at a particular time. A zone in a full state is a zone that is storing data and can no longer be written to. A zone may be in a full state either after writes have written data to the entirety of the zone or as a result of a zone finish operation. Prior to a finish operation, a zone may or may not have been completely written. After a finish operation, however, the zone may not be opened and written to further without first performing a zone reset operation. The mapping from a zone to an erase block (or to a shingled track in an HDD) may be arbitrary, dynamic, and hidden from view.
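The zone states just described form a small state machine; the following is a minimal sketch of it. Real zoned devices enforce additional rules (sequential write pointers, open-zone limits), and the capacity and method names here are illustrative assumptions.

```python
# Sketch of the empty -> open -> closed/full lifecycle, with reset back to empty.
class Zone:
    def __init__(self):
        self.state = "empty"
        self.data = []

    def write(self, payload, capacity=4):
        if self.state == "full":
            raise IOError("full zone must be reset before further writes")
        self.state = "open"              # writing implicitly opens a zone
        self.data.append(payload)
        if len(self.data) >= capacity:
            self.state = "full"          # written to the entirety of the zone

    def close(self):
        if self.state == "open":
            self.state = "closed"        # frees run-time open-zone overhead

    def finish(self):
        self.state = "full"              # no more writes until a reset

    def reset(self):
        self.state, self.data = "empty", []   # deletes the zone's content

z = Zone()
z.write(b"a"); z.close(); z.write(b"b"); z.finish()
try:
    z.write(b"c")
except IOError as e:
    print(e)                             # full zone must be reset...
z.reset()                                # zone is reusable again
```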
The process of opening a zone may be an operation that allows a new zone to be dynamically mapped to underlying storage of the zoned storage device, and then allows data to be written through appending writes into the zone until the zone reaches capacity. The zone can be finished at any point, after which further data may not be written into the zone. When the data stored at the zone is no longer needed, the zone can be reset, which effectively deletes the zone's content from the zoned storage device, making the physical storage held by that zone available for the subsequent storage of data. Once a zone has been written and finished, the zoned storage device ensures that the data stored at the zone is not lost until the zone is reset. In the time between writing the data to the zone and the resetting of the zone, the zone may be moved around between shingle tracks or erase blocks as part of maintenance operations within the zoned storage device, such as by copying data to keep the data refreshed or to handle memory cell aging in an SSD. In some implementations utilizing an HDD, the resetting of the zone may allow the shingle tracks to be allocated to a new, opened zone that may be opened at some point in the future. In some implementations utilizing an SSD, the resetting of the zone may cause the associated physical erase block(s) of the zone to be erased and subsequently reused for the storage of data. In some implementations, the zoned storage device may have a limit on the number of open zones at a point in time to reduce the amount of overhead dedicated to keeping zones open. The operating system of the flash storage system may identify and maintain a list of allocation units across multiple flash drives of the flash storage system. The allocation units may be entire erase blocks or multiple erase blocks. The operating system may maintain a map or address range that directly maps addresses to erase blocks of the flash drives of the flash storage system. Direct mapping to the erase blocks of the flash drives may be used to rewrite data and erase data. For example, the operations may be performed on one or more allocation units that include a first data and a second data where the first data is to be retained and the second data is no longer being used by the flash storage system. The operating system may initiate the process to write the first data to new locations within other allocation units, and to erase the second data and mark the allocation units as being available for use for subsequent data. Thus, the process may only be performed by the higher level operating system of the flash storage system without an additional lower level process being performed by controllers of the flash drives. Advantages of the process being performed only by the operating system of the flash storage system include increased reliability of the flash drives of the flash storage system as unnecessary or redundant write operations are not being performed during the process. One possible point of novelty here is the concept of initiating and controlling the process at the operating system of the flash storage system. In addition, the process can be controlled by the operating system across multiple flash drives. This is in contrast to the process being performed by a storage controller of a flash drive.
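A hedged sketch of that higher-level process: the operating system itself relocates retained data out of an allocation unit, erases the unit, and marks it available, with no drive-level controller involved. The data model (a dict per allocation unit, `None` marking data no longer in use) is an assumption made purely for illustration.

```python
# Illustrative OS-level reclamation of an allocation unit.
def collect_allocation_unit(unit, free_units):
    """unit: dict block -> data, where data may be None (no longer used)."""
    live = {blk: data for blk, data in unit.items() if data is not None}
    destination = free_units.pop()       # another allocation unit for live data
    destination.update(live)             # the first data is retained...
    unit.clear()                         # ...the second data is erased
    return destination, unit             # unit is now available for reuse

unit = {0: b"keep", 1: None, 2: b"keep too"}     # mixed live and dead data
dest, recycled = collect_allocation_unit(unit, free_units=[{}])
print(dest, recycled)   # {0: b'keep', 2: b'keep too'} {}
```

Because only this single pass runs, no drive-internal process repeats the copying, which is the reliability advantage the text attributes to the direct-mapped approach.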
A storage system can consist of two storage array controllers that share a set of drives for failover purposes, or it could consist of a single storage array controller that provides a storage service that utilizes multiple drives, or it could consist of a distributed network of storage array controllers each with some number of drives or some amount of Flash storage where the storage array controllers in the network collaborate to provide a complete storage service and collaborate on various aspects of a storage service including storage allocation and garbage collection. FIG.1Cillustrates a third example system117for data storage in accordance with some implementations. System117(also referred to as “storage system” herein) includes numerous elements for purposes of illustration rather than limitation. It may be noted that system117may include the same, more, or fewer elements configured in the same or different manner in other implementations. In one embodiment, system117includes a dual Peripheral Component Interconnect (‘PCI’) flash storage device118with separately addressable fast write storage. System117may include a storage device controller119. In one embodiment, storage device controller119A-D may be a CPU, ASIC, FPGA, or any other circuitry that may implement control structures necessary according to the present disclosure. In one embodiment, system117includes flash memory devices (e.g., including flash memory devices120a-n), operatively coupled to various channels of the storage device controller119. Flash memory devices120a-n, may be presented to the controller119A-D as an addressable collection of Flash pages, erase blocks, and/or control elements sufficient to allow the storage device controller119A-D to program and retrieve various aspects of the Flash. In one embodiment, storage device controller119A-D may perform operations on flash memory devices120a-nincluding storing and retrieving data content of pages, arranging and erasing any blocks, tracking statistics related to the use and reuse of Flash memory pages, erase blocks, and cells, tracking and predicting error codes and faults within the Flash memory, controlling voltage levels associated with programming and retrieving contents of Flash cells, etc. In one embodiment, system117may include RAM121to store separately addressable fast-write data. In one embodiment, RAM121may be one or more separate discrete devices. In another embodiment, RAM121may be integrated into storage device controller119A-D or multiple storage device controllers. The RAM121may be utilized for other purposes as well, such as temporary program memory for a processing device (e.g., a CPU) in the storage device controller119. In one embodiment, system117may include a stored energy device122, such as a rechargeable battery or a capacitor. Stored energy device122may store energy sufficient to power the storage device controller119, some amount of the RAM (e.g., RAM121), and some amount of Flash memory (e.g., Flash memory120a-120n) for sufficient time to write the contents of RAM to Flash memory. In one embodiment, storage device controller119A-D may write the contents of RAM to Flash Memory if the storage device controller detects loss of external power. In one embodiment, system117includes two data communications links123a,123b. In one embodiment, data communications links123a,123bmay be PCI interfaces. In another embodiment, data communications links123a,123bmay be based on other communications standards (e.g., HyperTransport, InfiniBand, etc.). 
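The stored energy device described above exists to ride out a power loss just long enough to persist RAM contents; a minimal sketch follows. The energy budget expressed as an operation count, and every name here, are simplifying assumptions rather than firmware behavior.

```python
# Illustrative power-loss handler: flush RAM pages to flash while the
# capacitor/battery budget lasts.
def on_power_loss(ram, flash, energy_budget_ops):
    for i, (addr, page) in enumerate(list(ram.items())):
        if i >= energy_budget_ops:
            raise RuntimeError("stored energy exhausted before flush finished")
        flash[addr] = page               # persist the page
        del ram[addr]                    # page no longer at risk

ram = {0x0: b"dirty page", 0x1: b"another"}
flash = {}
on_power_loss(ram, flash, energy_budget_ops=10)
print(flash)   # both pages persisted; RAM may now lose state safely
```

This is also why the text notes that as available stored energy degrades, the effective fast-write capacity may be reduced: the budget must always cover everything that could be in RAM.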
Data communications links123a,123bmay be based on non-volatile memory express (‘NVMe’) or NVMe over fabrics (‘NVMf’) specifications that allow external connection to the storage device controller119A-D from other components in the storage system117. It should be noted that data communications links may be interchangeably referred to herein as PCI buses for convenience. System117may also include an external power source (not shown), which may be provided over one or both data communications links123a,123b, or which may be provided separately. An alternative embodiment includes a separate Flash memory (not shown) dedicated for use in storing the content of RAM121. The storage device controller119A-D may present a logical device over a PCI bus which may include an addressable fast-write logical device, or a distinct part of the logical address space of the storage device118, which may be presented as PCI memory or as persistent storage. In one embodiment, operations to store into the device are directed into the RAM121. On power failure, the storage device controller119A-D may write stored content associated with the addressable fast-write logical storage to Flash memory (e.g., Flash memory120a-n) for long-term persistent storage. In one embodiment, the logical device may include some presentation of some or all of the content of the Flash memory devices120a-n, where that presentation allows a storage system including a storage device118(e.g., storage system117) to directly address Flash memory pages and directly reprogram erase blocks from storage system components that are external to the storage device through the PCI bus. The presentation may also allow one or more of the external components to control and retrieve other aspects of the Flash memory including some or all of: tracking statistics related to use and reuse of Flash memory pages, erase blocks, and cells across all the Flash memory devices; tracking and predicting error codes and faults within and across the Flash memory devices; controlling voltage levels associated with programming and retrieving contents of Flash cells; etc. In one embodiment, the stored energy device122may be sufficient to ensure completion of in-progress operations to the Flash memory devices120a-120n; the stored energy device122may power storage device controller119A-D and associated Flash memory devices (e.g.,120a-n) for those operations, as well as for the storing of fast-write RAM to Flash memory. Stored energy device122may be used to store accumulated statistics and other parameters kept and tracked by the Flash memory devices120a-nand/or the storage device controller119. Separate capacitors or stored energy devices (such as smaller capacitors near or embedded within the Flash memory devices themselves) may be used for some or all of the operations described herein. Various schemes may be used to track and optimize the life span of the stored energy component, such as adjusting voltage levels over time, partially discharging the stored energy device122to measure corresponding discharge characteristics, etc. If the available energy decreases over time, the effective available capacity of the addressable fast-write storage may be decreased to ensure that it can be written safely based on the currently available stored energy. FIG.1Dillustrates a fourth example storage system124for data storage in accordance with some implementations. In one embodiment, storage system124includes storage controllers125a,125b.
In one embodiment, storage controllers125a,125bare operatively coupled to Dual PCI storage devices. Storage controllers125a,125bmay be operatively coupled (e.g., via a storage network130) to some number of host computers127a-n. In one embodiment, two storage controllers (e.g.,125aand125b) provide storage services, such as a SCSI block storage array, a file server, an object server, a database or data analytics service, etc. The storage controllers125a,125bmay provide services through some number of network interfaces (e.g.,126a-d) to host computers127a-noutside of the storage system124. Storage controllers125a,125bmay provide integrated services or an application entirely within the storage system124, forming a converged storage and compute system. The storage controllers125a,125bmay utilize the fast write memory within or across storage devices119a-dto journal in-progress operations to ensure the operations are not lost on a power failure, storage controller removal, storage controller or storage system shutdown, or some fault of one or more software or hardware components within the storage system124. In one embodiment, storage controllers125a,125boperate as PCI masters to one or the other PCI buses128a,128b. In another embodiment,128aand128bmay be based on other communications standards (e.g., HyperTransport, InfiniBand, etc.). Other storage system embodiments may operate storage controllers125a,125bas multi-masters for both PCI buses128a,128b. Alternately, a PCI/NVMe/NVMf switching infrastructure or fabric may connect multiple storage controllers. Some storage system embodiments may allow storage devices to communicate with each other directly rather than communicating only with storage controllers. In one embodiment, a storage device controller119amay be operable under direction from a storage controller125ato synthesize and transfer data to be stored into Flash memory devices from data that has been stored in RAM (e.g., RAM121ofFIG.1C). For example, a recalculated version of RAM content may be transferred after a storage controller has determined that an operation has fully committed across the storage system, or when fast-write memory on the device has reached a certain used capacity, or after a certain amount of time, to improve safety of the data or to release addressable fast-write capacity for reuse. This mechanism may be used, for example, to avoid a second transfer over a bus (e.g.,128a,128b) from the storage controllers125a,125b. In one embodiment, a recalculation may include compressing data, attaching indexing or other metadata, combining multiple data segments together, performing erasure code calculations, etc. In one embodiment, under direction from a storage controller125a,125b, a storage device controller119a,119bmay be operable to calculate and transfer data to other storage devices from data stored in RAM (e.g., RAM121ofFIG.1C) without involvement of the storage controllers125a,125b. This operation may be used to mirror data stored in one storage controller125ato another storage controller125b, or it could be used to offload compression, data aggregation, and/or erasure coding calculations and transfers to storage devices to reduce load on storage controllers or the storage controller interface129a,129bto the PCI bus128a,128b. A storage device controller119A-D may include mechanisms for implementing high availability primitives for use by other parts of a storage system external to the Dual PCI storage device118.
For example, reservation or exclusion primitives may be provided so that, in a storage system with two storage controllers providing a highly available storage service, one storage controller may prevent the other storage controller from accessing or continuing to access the storage device. This could be used, for example, in cases where one controller detects that the other controller is not functioning properly or where the interconnect between the two storage controllers may itself not be functioning properly. In one embodiment, a storage system for use with Dual PCI direct mapped storage devices with separately addressable fast write storage includes systems that manage erase blocks or groups of erase blocks as allocation units for storing data on behalf of the storage service, or for storing metadata (e.g., indexes, logs, etc.) associated with the storage service, or for proper management of the storage system itself. Flash pages, which may be a few kilobytes in size, may be written as data arrives or as the storage system is to persist data for long intervals of time (e.g., above a defined threshold of time). To commit data more quickly, or to reduce the number of writes to the Flash memory devices, the storage controllers may first write data into the separately addressable fast write storage on one or more storage devices. In one embodiment, the storage controllers125a,125bmay initiate the use of erase blocks within and across storage devices (e.g.,118) in accordance with an age and expected remaining lifespan of the storage devices, or based on other statistics. The storage controllers125a,125bmay initiate garbage collection and data migration between storage devices in accordance with pages that are no longer needed as well as to manage Flash page and erase block lifespans and to manage overall system performance. In one embodiment, the storage system124may utilize mirroring and/or erasure coding schemes as part of storing data into addressable fast write storage and/or as part of writing data into allocation units associated with erase blocks. Erasure codes may be used across storage devices, as well as within erase blocks or allocation units, or within and across Flash memory devices on a single storage device, to provide redundancy against single or multiple storage device failures or to protect against internal corruptions of Flash memory pages resulting from Flash memory operations or from degradation of Flash memory cells. Mirroring and erasure coding at various levels may be used to recover from multiple types of failures that occur separately or in combination. The embodiments depicted with reference toFIGS.2A-Gillustrate a storage cluster that stores user data, such as user data originating from one or more user or client systems or other sources external to the storage cluster. The storage cluster distributes user data across storage nodes housed within a chassis, or across multiple chassis, using erasure coding and redundant copies of metadata. Erasure coding refers to a method of data protection or reconstruction in which data is stored across a set of different locations, such as disks, storage nodes or geographic locations. Flash memory is one type of solid-state memory that may be integrated with the embodiments, although the embodiments may be extended to other types of solid-state memory or other storage medium, including non-solid state memory.
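To make the erasure-coding idea concrete, here is a toy single-parity stripe (RAID-5-like XOR). It is a sketch of the principle only: production systems typically use codes such as Reed-Solomon that tolerate multiple simultaneous failures, as the text notes.

```python
# XOR parity: the parity shard equals the XOR of all data shards, so any one
# lost shard can be rebuilt from the survivors plus parity.
def xor_bytes(chunks):
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

data_shards = [b"AAAA", b"BBBB", b"CCCC"]      # striped across three devices
parity = xor_bytes(data_shards)                # stored on a fourth device

# Device holding shard 1 fails; rebuild its shard from survivors + parity.
rebuilt = xor_bytes([data_shards[0], data_shards[2], parity])
assert rebuilt == data_shards[1]
print("rebuilt:", rebuilt)
```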
Control of storage locations and workloads is distributed across the storage locations in a clustered peer-to-peer system. Tasks such as mediating communications between the various storage nodes, detecting when a storage node has become unavailable, and balancing I/Os (inputs and outputs) across the various storage nodes, are all handled on a distributed basis. Data is laid out or distributed across multiple storage nodes in data fragments or stripes that support data recovery in some embodiments. Ownership of data can be reassigned within a cluster, independent of input and output patterns. This architecture, described in more detail below, allows a storage node in the cluster to fail, with the system remaining operational, since the data can be reconstructed from other storage nodes and thus remain available for input and output operations. In various embodiments, a storage node may be referred to as a cluster node, a blade, or a server. The storage cluster may be contained within a chassis, i.e., an enclosure housing one or more storage nodes. A mechanism to provide power to each storage node, such as a power distribution bus, and a communication mechanism, such as a communication bus that enables communication between the storage nodes, are included within the chassis. The storage cluster can run as an independent system in one location according to some embodiments. In one embodiment, a chassis contains at least two instances of both the power distribution and the communication bus which may be enabled or disabled independently. The internal communication bus may be an Ethernet bus; however, other technologies, such as PCIe, InfiniBand, and others, are equally suitable. The chassis provides a port for an external communication bus for enabling communication between multiple chassis, directly or through a switch, and with client systems. The external communication may use a technology such as Ethernet, InfiniBand, Fibre Channel, etc. In some embodiments, the external communication bus uses different communication bus technologies for inter-chassis and client communication. If a switch is deployed within or between chassis, the switch may act as a translation between multiple protocols or technologies. When multiple chassis are connected to define a storage cluster, the storage cluster may be accessed by a client using either proprietary interfaces or standard interfaces such as network file system (‘NFS’), common internet file system (‘CIFS’), small computer system interface (‘SCSI’) or hypertext transfer protocol (‘HTTP’). Translation from the client protocol may occur at the switch, chassis external communication bus or within each storage node. In some embodiments, multiple chassis may be coupled or connected to each other through an aggregator switch. A portion and/or all of the coupled or connected chassis may be designated as a storage cluster. As discussed above, each chassis can have multiple blades, each blade has a media access control (‘MAC’) address, but the storage cluster is presented to an external network as having a single cluster IP address and a single MAC address in some embodiments. Each storage node may be one or more storage servers and each storage server is connected to one or more non-volatile solid state memory units, which may be referred to as storage units or storage devices.
One embodiment includes a single storage server in each storage node and between one and eight non-volatile solid state memory units, however, this one example is not meant to be limiting. The storage server may include a processor, DRAM and interfaces for the internal communication bus and power distribution for each of the power buses. Inside the storage node, the interfaces and storage unit share a communication bus, e.g., PCI Express, in some embodiments. The non-volatile solid state memory units may directly access the internal communication bus interface through a storage node communication bus, or request the storage node to access the bus interface. The non-volatile solid state memory unit contains an embedded CPU, solid state storage controller, and a quantity of solid state mass storage, e.g., between 2 and 32 terabytes (‘TB’) in some embodiments. An embedded volatile storage medium, such as DRAM, and an energy reserve apparatus are included in the non-volatile solid state memory unit. In some embodiments, the energy reserve apparatus is a capacitor, super-capacitor, or battery that enables transferring a subset of DRAM contents to a stable storage medium in the case of power loss. In some embodiments, the non-volatile solid state memory unit is constructed with a storage class memory, such as phase change or magnetoresistive random access memory (‘MRAM’) that substitutes for DRAM and enables a reduced power hold-up apparatus. One of many features of the storage nodes and non-volatile solid state storage is the ability to proactively rebuild data in a storage cluster. The storage nodes and non-volatile solid state storage can determine when a storage node or non-volatile solid state storage in the storage cluster is unreachable, independent of whether there is an attempt to read data involving that storage node or non-volatile solid state storage. The storage nodes and non-volatile solid state storage then cooperate to recover and rebuild the data in at least partially new locations. This constitutes a proactive rebuild, in that the system rebuilds data without waiting until the data is needed for a read access initiated from a client system employing the storage cluster. These and further details of the storage memory and operation thereof are discussed below. FIG.2Ais a perspective view of a storage cluster161, with multiple storage nodes150and internal solid-state memory coupled to each storage node to provide network attached storage or storage area network, in accordance with some embodiments. A network attached storage, storage area network, or a storage cluster, or other storage memory, could include one or more storage clusters161, each having one or more storage nodes150, in a flexible and reconfigurable arrangement of both the physical components and the amount of storage memory provided thereby. The storage cluster161is designed to fit in a rack, and one or more racks can be set up and populated as desired for the storage memory. The storage cluster161has a chassis138having multiple slots142. It should be appreciated that chassis138may be referred to as a housing, enclosure, or rack unit. In one embodiment, the chassis138has fourteen slots142, although other numbers of slots are readily devised. For example, some embodiments have four slots, eight slots, sixteen slots, thirty-two slots, or other suitable number of slots. Each slot142can accommodate one storage node150in some embodiments. Chassis138includes flaps148that can be utilized to mount the chassis138on a rack.
Fans144provide air circulation for cooling of the storage nodes150and components thereof, although other cooling components could be used, or an embodiment could be devised without cooling components. A switch fabric146couples storage nodes150within chassis138together and to a network for communication to the memory. In the embodiment depicted herein, the slots142to the left of the switch fabric146and fans144are shown occupied by storage nodes150, while the slots142to the right of the switch fabric146and fans144are empty and available for insertion of a storage node150for illustrative purposes. This configuration is one example, and one or more storage nodes150could occupy the slots142in various further arrangements. The storage node arrangements need not be sequential or adjacent in some embodiments. Storage nodes150are hot pluggable, meaning that a storage node150can be inserted into a slot142in the chassis138, or removed from a slot142, without stopping or powering down the system. Upon insertion or removal of storage node150from slot142, the system automatically reconfigures in order to recognize and adapt to the change. Reconfiguration, in some embodiments, includes restoring redundancy and/or rebalancing data or load. Each storage node150can have multiple components. In the embodiment shown here, the storage node150includes a printed circuit board159populated by a CPU156, i.e., processor, a memory154coupled to the CPU156, and a non-volatile solid state storage152coupled to the CPU156, although other mountings and/or components could be used in further embodiments. The memory154has instructions which are executed by the CPU156and/or data operated on by the CPU156. As further explained below, the non-volatile solid state storage152includes flash or, in further embodiments, other types of solid-state memory. Referring toFIG.2A, storage cluster161is scalable, meaning that storage capacity with non-uniform storage sizes is readily added, as described above. One or more storage nodes150can be plugged into or removed from each chassis and the storage cluster self-configures in some embodiments. Plug-in storage nodes150, whether installed in a chassis as delivered or later added, can have different sizes. For example, in one embodiment a storage node150can have any multiple of 4 TB, e.g., 8 TB, 12 TB, 16 TB, 32 TB, etc. In further embodiments, a storage node150could have any multiple of other storage amounts or capacities. Storage capacity of each storage node150is broadcast, and influences decisions of how to stripe the data. For maximum storage efficiency, an embodiment can self-configure as wide as possible in the stripe, subject to a predetermined requirement of continued operation with loss of up to one, or up to two, non-volatile solid state storage152units or storage nodes150within the chassis.
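The stripe-width rule just stated ("as wide as possible, subject to surviving the loss of one or two units") reduces to simple arithmetic; the sketch below makes it explicit. The function name and parameters are illustrative assumptions, not the system's configuration interface.

```python
# Sketch: widest usable stripe given the populated units and the configured
# failure-tolerance requirement.
def stripe_geometry(populated_units, tolerate_failures):
    """Return (data_shards, parity_shards) for the widest usable stripe."""
    if populated_units <= tolerate_failures:
        raise ValueError("not enough units to meet the redundancy requirement")
    return populated_units - tolerate_failures, tolerate_failures

print(stripe_geometry(populated_units=10, tolerate_failures=2))  # (8, 2)
print(stripe_geometry(populated_units=4, tolerate_failures=1))   # (3, 1)
```

Wider stripes spend proportionally less capacity on parity, which is why self-configuring to the maximum width yields the best storage efficiency.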
External power port178is coupled to power distribution bus172. Storage nodes150may include varying amounts and differing capacities of non-volatile solid state storage152as described with reference toFIG.2A. In addition, one or more storage nodes150may be a compute only storage node as illustrated inFIG.2B. Authorities168are implemented on the non-volatile solid state storage152, for example as lists or other data structures stored in memory. In some embodiments the authorities are stored within the non-volatile solid state storage152and supported by software executing on a controller or other processor of the non-volatile solid state storage152. In a further embodiment, authorities168are implemented on the storage nodes150, for example as lists or other data structures stored in the memory154and supported by software executing on the CPU156of the storage node150. Authorities168control how and where data is stored in the non-volatile solid state storage152in some embodiments. This control assists in determining which type of erasure coding scheme is applied to the data, and which storage nodes150have which portions of the data. Each authority168may be assigned to a non-volatile solid state storage152. Each authority may control a range of inode numbers, segment numbers, or other data identifiers which are assigned to data by a file system, by the storage nodes150, or by the non-volatile solid state storage152, in various embodiments. Every piece of data, and every piece of metadata, has redundancy in the system in some embodiments. In addition, every piece of data and every piece of metadata has an owner, which may be referred to as an authority. If that authority is unreachable, for example through failure of a storage node, there is a plan of succession for how to find that data or that metadata. In various embodiments, there are redundant copies of authorities168. Authorities168have a relationship to storage nodes150and non-volatile solid state storage152in some embodiments. Each authority168, covering a range of data segment numbers or other identifiers of the data, may be assigned to a specific non-volatile solid state storage152. In some embodiments the authorities168for all of such ranges are distributed over the non-volatile solid state storage152of a storage cluster. Each storage node150has a network port that provides access to the non-volatile solid state storage(s)152of that storage node150. Data can be stored in a segment, which is associated with a segment number and that segment number is an indirection for a configuration of a RAID (redundant array of independent disks) stripe in some embodiments. The assignment and use of the authorities168thus establishes an indirection to data. Indirection may be referred to as the ability to reference data indirectly, in this case via an authority168, in accordance with some embodiments. A segment identifies a set of non-volatile solid state storage152and a local identifier into the set of non-volatile solid state storage152that may contain data. In some embodiments, the local identifier is an offset into the device and may be reused sequentially by multiple segments. In other embodiments the local identifier is unique for a specific segment and never reused. The offsets in the non-volatile solid state storage152are applied to locating data for writing to or reading from the non-volatile solid state storage152(in the form of a RAID stripe). 
Data is striped across multiple units of non-volatile solid state storage152, which may include or be different from the non-volatile solid state storage152having the authority168for a particular data segment. If there is a change in where a particular segment of data is located, e.g., during a data move or a data reconstruction, the authority168for that data segment should be consulted, at that non-volatile solid state storage152or storage node150having that authority168. In order to locate a particular piece of data, embodiments calculate a hash value for a data segment or apply an inode number or a data segment number. The output of this operation points to a non-volatile solid state storage152having the authority168for that particular piece of data. In some embodiments there are two stages to this operation. The first stage maps an entity identifier (ID), e.g., a segment number, inode number, or directory number, to an authority identifier. This mapping may include a calculation such as a hash or a bit mask. The second stage is mapping the authority identifier to a particular non-volatile solid state storage152, which may be done through an explicit mapping. The operation is repeatable, so that when the calculation is performed, the result of the calculation repeatably and reliably points to a particular non-volatile solid state storage152having that authority168. The operation may include the set of reachable storage nodes as input. If the set of reachable non-volatile solid state storage units changes, the optimal set changes. In some embodiments, the persisted value is the current assignment (which is always true) and the calculated value is the target assignment the cluster will attempt to reconfigure towards. This calculation may be used to determine the optimal non-volatile solid state storage152for an authority in the presence of a set of non-volatile solid state storage152that are reachable and constitute the same cluster. The calculation also determines an ordered set of peer non-volatile solid state storage152that will also record the authority to non-volatile solid state storage mapping so that the authority may be determined even if the assigned non-volatile solid state storage is unreachable. A duplicate or substitute authority168may be consulted if a specific authority168is unavailable in some embodiments. With reference toFIGS.2A and2B, two of the many tasks of the CPU156on a storage node150are to break up write data and reassemble read data. When the system has determined that data is to be written, the authority168for that data is located as above. When the segment ID for data is already determined, the request to write is forwarded to the non-volatile solid state storage152currently determined to be the host of the authority168determined from the segment. The host CPU156of the storage node150, on which the non-volatile solid state storage152and corresponding authority168reside, then breaks up or shards the data and transmits the data out to various non-volatile solid state storage152. The transmitted data is written as a data stripe in accordance with an erasure coding scheme. In some embodiments, data is requested to be pulled, and in other embodiments, data is pushed. In reverse, when data is read, the authority168for the segment ID containing the data is located as described above.
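The two-stage lookup described above can be sketched as follows. The hash choice, the authority count, and the explicit map are all assumptions made for illustration; the point is the repeatability: the same entity ID always yields the same authority and storage unit.

```python
# Stage 1: calculate (hash/bit-mask) an entity ID to an authority identifier.
# Stage 2: map the authority identifier to a storage unit via an explicit map.
import hashlib

NUM_AUTHORITIES = 128                     # illustrative fixed set of authorities

def authority_for(entity_id: str) -> int:
    """Stage 1: hash an entity ID (segment, inode, directory) to an authority."""
    digest = hashlib.sha256(entity_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_AUTHORITIES

def storage_for(authority_id: int, authority_map: dict) -> str:
    """Stage 2: explicit mapping from authority to a solid state storage unit."""
    return authority_map[authority_id]

authority_map = {a: f"nvsss-{a % 4}" for a in range(NUM_AUTHORITIES)}
aid = authority_for("inode:42")
print(aid, storage_for(aid, authority_map))   # same result on every node, every time
```

Because stage two is an explicit table, the cluster can repoint an authority to a new storage unit (the "target assignment") without changing how entities hash to authorities.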
The host CPU156of the storage node150on which the non-volatile solid state storage152and corresponding authority168reside requests the data from the non-volatile solid state storage and corresponding storage nodes pointed to by the authority. In some embodiments the data is read from flash storage as a data stripe. The host CPU156of storage node150then reassembles the read data, correcting any errors (if present) according to the appropriate erasure coding scheme, and forwards the reassembled data to the network. In further embodiments, some or all of these tasks can be handled in the non-volatile solid state storage152. In some embodiments, the segment host requests the data be sent to storage node150by requesting pages from storage and then sending the data to the storage node making the original request. In embodiments, authorities168operate to determine how operations will proceed against particular logical elements. Each of the logical elements may be operated on through a particular authority across a plurality of storage controllers of a storage system. The authorities168may communicate with the plurality of storage controllers so that the plurality of storage controllers collectively perform operations against those particular logical elements. In embodiments, logical elements could be, for example, files, directories, object buckets, individual objects, delineated parts of files or objects, other forms of key-value pair databases, or tables. In embodiments, performing an operation can involve, for example, ensuring consistency, structural integrity, and/or recoverability with other operations against the same logical element, reading metadata and data associated with that logical element, determining what data should be written durably into the storage system to persist any changes for the operation, or where metadata and data can be determined to be stored across modular storage devices attached to a plurality of the storage controllers in the storage system. In some embodiments the operations are token based transactions to efficiently communicate within a distributed system. Each transaction may be accompanied by or associated with a token, which gives permission to execute the transaction. The authorities168are able to maintain a pre-transaction state of the system until completion of the operation in some embodiments. The token based communication may be accomplished without a global lock across the system, and also enables restart of an operation in case of a disruption or other failure. In some systems, for example in UNIX-style file systems, data is handled with an index node or inode, which specifies a data structure that represents an object in a file system. The object could be a file or a directory, for example. Metadata may accompany the object, as attributes such as permission data and a creation timestamp, among other attributes. A segment number could be assigned to all or a portion of such an object in a file system. In other systems, data segments are handled with a segment number assigned elsewhere. For purposes of discussion, the unit of distribution is an entity, and an entity can be a file, a directory or a segment. That is, entities are units of data or metadata stored by a storage system. Entities are grouped into sets called authorities. Each authority has an authority owner, which is a storage node that has the exclusive right to update the entities in the authority. 
In other words, a storage node contains the authority, and the authority, in turn, contains entities. A segment is a logical container of data in accordance with some embodiments. A segment is an address space between medium address space and physical flash locations; data segment numbers are in this address space. Segments may also contain meta-data, which enable data redundancy to be restored (rewritten to different flash locations or devices) without the involvement of higher level software. In one embodiment, an internal format of a segment contains client data and medium mappings to determine the position of that data. Each data segment is protected, e.g., from memory and other failures, by breaking the segment into a number of data and parity shards, where applicable. The data and parity shards are distributed, i.e., striped, across non-volatile solid state storage152coupled to the host CPUs156(SeeFIGS.2E and2G) in accordance with an erasure coding scheme. Usage of the term segments refers to the container and its place in the address space of segments in some embodiments. Usage of the term stripe refers to the same set of shards as a segment and includes how the shards are distributed along with redundancy or parity information in accordance with some embodiments. A series of address-space transformations takes place across an entire storage system. At the top are the directory entries (file names) which link to an inode. Inodes point into medium address space, where data is logically stored. Medium addresses may be mapped through a series of indirect mediums to spread the load of large files, or implement data services like deduplication or snapshots. Segment addresses are then translated into physical flash locations. Physical flash locations have an address range bounded by the amount of flash in the system in accordance with some embodiments. Medium addresses and segment addresses are logical containers, and in some embodiments use a 128 bit or larger identifier so as to be practically infinite, with a likelihood of reuse calculated as longer than the expected life of the system. Addresses from logical containers are allocated in a hierarchical fashion in some embodiments. Initially, each non-volatile solid state storage152unit may be assigned a range of address space. Within this assigned range, the non-volatile solid state storage152is able to allocate addresses without synchronization with other non-volatile solid state storage152. Data and metadata are stored by a set of underlying storage layouts that are optimized for varying workload patterns and storage devices. These layouts incorporate multiple redundancy schemes, compression formats and index algorithms. Some of these layouts store information about authorities and authority masters, while others store file metadata and file data. The redundancy schemes include error correction codes that tolerate corrupted bits within a single storage device (such as a NAND flash chip), erasure codes that tolerate the failure of multiple storage nodes, and replication schemes that tolerate data center or regional failures. In some embodiments, low density parity check (‘LDPC’) code is used within a single storage unit. Reed-Solomon encoding is used within a storage cluster, and mirroring is used within a storage grid in some embodiments.
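A compressed sketch of the address-space transformation chain just described, from directory entry down to physical flash. Every table here is a stand-in for the real structures (an assumption), and the indirect-medium layers are collapsed into one hop for brevity.

```python
# directory entry -> inode -> medium address -> segment -> physical flash
directory = {"/home/a.txt": 7}                   # file name -> inode
inodes = {7: ("medium", 0x10)}                   # inode -> medium address
medium_to_segment = {0x10: ("segment", 3)}       # may pass through indirections
segment_to_flash = {3: ("flash-die-2", 0x9000)}  # segment -> physical location

def resolve(path):
    inode = directory[path]
    _, medium = inodes[inode]
    _, segment = medium_to_segment[medium]
    return segment_to_flash[segment]             # bounded by installed flash

print(resolve("/home/a.txt"))   # ('flash-die-2', 0x9000)
```

The medium and segment layers are the "practically infinite" logical containers; only the final table is constrained by the amount of flash actually installed.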
Metadata may be stored using an ordered log structured index (such as a Log Structured Merge Tree), and large data may not be stored in a log structured layout. In order to maintain consistency across multiple copies of an entity, the storage nodes agree implicitly on two things through calculations: (1) the authority that contains the entity, and (2) the storage node that contains the authority. The assignment of entities to authorities can be done by pseudo randomly assigning entities to authorities, by splitting entities into ranges based upon an externally produced key, or by placing a single entity into each authority. Examples of pseudorandom schemes are linear hashing and the Replication Under Scalable Hashing (‘RUSH’) family of hashes, including Controlled Replication Under Scalable Hashing (‘CRUSH’). In some embodiments, pseudo-random assignment is utilized only for assigning authorities to nodes because the set of nodes can change. The set of authorities cannot change so any subjective function may be applied in these embodiments. Some placement schemes automatically place authorities on storage nodes, while other placement schemes rely on an explicit mapping of authorities to storage nodes. In some embodiments, a pseudorandom scheme is utilized to map from each authority to a set of candidate authority owners. A pseudorandom data distribution function related to CRUSH may assign authorities to storage nodes and create a list of where the authorities are assigned. Each storage node has a copy of the pseudorandom data distribution function, and can arrive at the same calculation for distributing, and later finding or locating an authority. Each of the pseudorandom schemes requires the reachable set of storage nodes as input in some embodiments in order to conclude the same target nodes. Once an entity has been placed in an authority, the entity may be stored on physical devices so that no expected failure will lead to unexpected data loss. In some embodiments, rebalancing algorithms attempt to store the copies of all entities within an authority in the same layout and on the same set of machines. Examples of expected failures include device failures, stolen machines, datacenter fires, and regional disasters, such as nuclear or geological events. Different failures lead to different levels of acceptable data loss. In some embodiments, a stolen storage node impacts neither the security nor the reliability of the system, while depending on system configuration, a regional event could lead to no loss of data, a few seconds or minutes of lost updates, or even complete data loss. In the embodiments, the placement of data for storage redundancy is independent of the placement of authorities for data consistency. In some embodiments, storage nodes that contain authorities do not contain any persistent storage. Instead, the storage nodes are connected to non-volatile solid state storage units that do not contain authorities. The communications interconnect between storage nodes and non-volatile solid state storage units consists of multiple communication technologies and has non-uniform performance and fault tolerance characteristics. In some embodiments, as mentioned above, non-volatile solid state storage units are connected to storage nodes via PCI express, storage nodes are connected together within a single chassis using Ethernet backplane, and chassis are connected together to form a storage cluster. 
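As an illustration of the pseudorandom placement schemes described above, the following Python sketch uses highest-random-weight (rendezvous) hashing as a generic stand-in for the RUSH/CRUSH-family functions named in the preceding paragraph; it is not the patented scheme itself. Given the same reachable set of storage nodes as input, every node computes the same candidate owners for an authority.

    # Deterministic pseudorandom placement sketch (illustrative stand-in).
    import hashlib

    def candidate_owners(authority_id, reachable_nodes, replicas=3):
        def weight(node):
            digest = hashlib.sha256(f"{authority_id}:{node}".encode()).digest()
            return int.from_bytes(digest, "big")
        # Highest-random-weight ordering; identical on every node that sees
        # the same reachable node set, with no synchronization required.
        return sorted(reachable_nodes, key=weight, reverse=True)[:replicas]

    nodes = ["node-A", "node-B", "node-C", "node-D"]
    print(candidate_owners(authority_id=7, reachable_nodes=nodes))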
Storage clusters are connected to clients using Ethernet or Fibre Channel in some embodiments. If multiple storage clusters are configured into a storage grid, the multiple storage clusters are connected using the Internet or other long-distance networking links, such as a “metro scale” link or private link that does not traverse the Internet. Authority owners have the exclusive right to modify entities, to migrate entities from one non-volatile solid state storage unit to another non-volatile solid state storage unit, and to add and remove copies of entities. This allows for maintaining the redundancy of the underlying data. When an authority owner fails, is going to be decommissioned, or is overloaded, the authority is transferred to a new storage node. Transient failures make it non-trivial to ensure that all non-faulty machines agree upon the new authority location. The ambiguity that arises due to transient failures can be resolved automatically by a consensus protocol such as Paxos, by hot-warm failover schemes, via manual intervention by a remote system administrator, or by a local hardware administrator (such as by physically removing the failed machine from the cluster, or pressing a button on the failed machine). In some embodiments, a consensus protocol is used, and failover is automatic. If too many failures or replication events occur in too short a time period, the system goes into a self-preservation mode and halts replication and data movement activities until an administrator intervenes in accordance with some embodiments. As authorities are transferred between storage nodes and authority owners update entities in their authorities, the system transfers messages between the storage nodes and non-volatile solid state storage units. With regard to persistent messages, messages that have different purposes are of different types. Depending on the type of the message, the system maintains different ordering and durability guarantees. As the persistent messages are being processed, the messages are temporarily stored in multiple durable and non-durable storage hardware technologies. In some embodiments, messages are stored in RAM, NVRAM and on NAND flash devices, and a variety of protocols are used in order to make efficient use of each storage medium. Latency-sensitive client requests may be persisted in replicated NVRAM, and then later NAND, while background rebalancing operations are persisted directly to NAND. Persistent messages are persistently stored prior to being transmitted. This allows the system to continue to serve client requests despite failures and component replacement. Although many hardware components contain unique identifiers that are visible to system administrators, the manufacturer, the hardware supply chain, and ongoing quality control monitoring infrastructure, applications running on top of the infrastructure address virtualized addresses. These virtualized addresses do not change over the lifetime of the storage system, regardless of component failures and replacements. This allows each component of the storage system to be replaced over time without reconfiguration or disruptions of client request processing, i.e., the system supports non-disruptive upgrades. In some embodiments, the virtualized addresses are stored with sufficient redundancy. A continuous monitoring system correlates hardware and software status and the hardware identifiers. This allows detection and prediction of failures due to faulty components and manufacturing details.
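Returning to the persistent messages described above, the following Python sketch illustrates routing messages to storage media by message type, with latency-sensitive requests landing in replicated NVRAM and background rebalancing persisted directly to NAND; the media here are simulated lists, and the type names are assumptions for the example only.

    # Sketch: route persistent messages to media classes by message type.
    def persist_message(msg_type, payload, nvram_replicas, nand):
        if msg_type == "latency_sensitive":
            # Persist to replicated NVRAM first; destage to NAND later.
            for replica in nvram_replicas:
                replica.append(payload)
        elif msg_type == "background_rebalance":
            # Background work is persisted directly to NAND.
            nand.append(payload)
        else:
            raise ValueError(f"unknown message type: {msg_type}")

    nvram = [[], [], []]      # e.g., NVRAM replicas on separate devices
    flash = []
    persist_message("latency_sensitive", b"client write", nvram, flash)
    persist_message("background_rebalance", b"rebalance op", nvram, flash)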
This continuous monitoring system also enables the proactive transfer of authorities and entities away from impacted devices before failure occurs by removing the component from the critical path in some embodiments. FIG.2Cis a multiple level block diagram, showing contents of a storage node150and contents of a non-volatile solid state storage152of the storage node150. Data is communicated to and from the storage node150by a network interface controller (‘NIC’)202in some embodiments. Each storage node150has a CPU156, and one or more non-volatile solid state storage152, as discussed above. Moving down one level inFIG.2C, each non-volatile solid state storage152has a relatively fast non-volatile solid state memory, such as nonvolatile random access memory (‘NVRAM’)204, and flash memory206. In some embodiments, NVRAM204may be a component that does not require program/erase cycles (DRAM, MRAM, PCM), and can be a memory that can support being written vastly more often than the memory is read from. Moving down another level inFIG.2C, the NVRAM204is implemented in one embodiment as high speed volatile memory, such as dynamic random access memory (DRAM)216, backed up by energy reserve218. Energy reserve218provides sufficient electrical power to keep the DRAM216powered long enough for contents to be transferred to the flash memory206in the event of power failure. In some embodiments, energy reserve218is a capacitor, super-capacitor, battery, or other device, that supplies energy sufficient to enable the transfer of the contents of DRAM216to a stable storage medium in the case of power loss. The flash memory206is implemented as multiple flash dies222, which may be referred to as packages of flash dies222or an array of flash dies222. It should be appreciated that the flash dies222could be packaged in any number of ways, with a single die per package, multiple dies per package (i.e., multichip packages), in hybrid packages, as bare dies on a printed circuit board or other substrate, as encapsulated dies, etc. In the embodiment shown, the non-volatile solid state storage152has a controller212or other processor, and an input output (I/O) port210coupled to the controller212. I/O port210is coupled to the CPU156and/or the network interface controller202of the flash storage node150. Flash input output (I/O) port220is coupled to the flash dies222, and a direct memory access unit (DMA)214is coupled to the controller212, the DRAM216and the flash dies222. In the embodiment shown, the I/O port210, controller212, DMA unit214and flash I/O port220are implemented on a programmable logic device (‘PLD’)208, e.g., an FPGA. In this embodiment, each flash die222has pages, organized as sixteen kB (kilobyte) pages224, and a register226through which data can be written to or read from the flash die222. In further embodiments, other types of solid-state memory are used in place of, or in addition to flash memory illustrated within flash die222. Storage clusters161, in various embodiments as disclosed herein, can be contrasted with storage arrays in general. The storage nodes150are part of a collection that creates the storage cluster161. Each storage node150owns a slice of data and computing required to provide the data. Multiple storage nodes150cooperate to store and retrieve the data. Storage memory or storage devices, as used in storage arrays in general, are less involved with processing and manipulating the data.
Storage memory or storage devices in a storage array receive commands to read, write, or erase data. The storage memory or storage devices in a storage array are not aware of a larger system in which they are embedded, or what the data means. Storage memory or storage devices in storage arrays can include various types of storage memory, such as RAM, solid state drives, hard disk drives, etc. The non-volatile solid state storage152units described herein have multiple interfaces active simultaneously and serving multiple purposes. In some embodiments, some of the functionality of a storage node150is shifted into a storage unit152, transforming the storage unit152into a combination of storage unit152and storage node150. Placing computing (relative to storage data) into the storage unit152places this computing closer to the data itself. The various system embodiments have a hierarchy of storage node layers with different capabilities. By contrast, in a storage array, a controller owns and knows everything about all of the data that the controller manages in a shelf or storage devices. In a storage cluster161, as described herein, multiple controllers in multiple non-volatile solid state storage152units and/or storage nodes150cooperate in various ways (e.g., for erasure coding, data sharding, metadata communication and redundancy, storage capacity expansion or contraction, data recovery, and so on). FIG.2Dshows a storage server environment, which uses embodiments of the storage nodes150and storage152units ofFIGS.2A-C. In this version, each non-volatile solid state storage152unit has a processor such as controller212(seeFIG.2C), an FPGA, flash memory206, and NVRAM204(which is super-capacitor backed DRAM216, seeFIGS.2B and2C) on a PCIe (peripheral component interconnect express) board in a chassis138(seeFIG.2A). The non-volatile solid state storage152unit may be implemented as a single board containing storage, and may be the largest tolerable failure domain inside the chassis. In some embodiments, up to two non-volatile solid state storage152units may fail and the device will continue with no data loss. The physical storage is divided into named regions based on application usage in some embodiments. The NVRAM204is a contiguous block of reserved memory in the non-volatile solid state storage152DRAM216, and is backed by NAND flash. NVRAM204is logically divided into multiple memory regions written as spools (e.g., spool_region). Space within the NVRAM204spools is managed by each authority168independently. Each device provides an amount of storage space to each authority168. That authority168further manages lifetimes and allocations within that space. Examples of a spool include distributed transactions or notions. When the primary power to a non-volatile solid state storage152unit fails, onboard super-capacitors provide a short duration of power hold up. During this holdup interval, the contents of the NVRAM204are flushed to flash memory206. On the next power-on, the contents of the NVRAM204are recovered from the flash memory206. As for the storage unit controller, the responsibility of the logical “controller” is distributed across each of the blades containing authorities168. This distribution of logical control is shown inFIG.2Das a host controller242, mid-tier controller244and storage unit controller(s)246. Management of the control plane and the storage plane are treated independently, although parts may be physically co-located on the same blade.
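As an illustration of the power-fail sequence just described, the following Python sketch models an NVRAM device whose DRAM contents are flushed to flash on loss of primary power and recovered at the next power-on; the class and method names are hypothetical and the media are simulated byte buffers.

    # Sketch: NVRAM as energy-reserve-backed DRAM with flush-to-flash.
    class NvramDevice:
        def __init__(self):
            self.dram = bytearray()    # fast volatile contents
            self.flash = bytearray()   # stable storage

        def write(self, data):
            self.dram += data          # normal path: absorb writes in DRAM

        def on_power_failure(self):
            # Runs on reserve energy (e.g., super-capacitor holdup):
            # flush DRAM contents to flash before the reserve drains.
            self.flash = bytearray(self.dram)

        def on_power_restore(self):
            # Recover NVRAM contents from flash on the next power-on.
            self.dram = bytearray(self.flash)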
Each authority168effectively serves as an independent controller. Each authority168provides its own data and metadata structures, its own background workers, and maintains its own lifecycle. FIG.2Eis a blade252hardware block diagram, showing a control plane254, compute and storage planes256,258, and authorities168interacting with underlying physical resources, using embodiments of the storage nodes150and storage units152ofFIGS.2A-Cin the storage server environment ofFIG.2D. The control plane254is partitioned into a number of authorities168which can use the compute resources in the compute plane256to run on any of the blades252. The storage plane258is partitioned into a set of devices, each of which provides access to flash206and NVRAM204resources. In one embodiment, the compute plane256may perform the operations of a storage array controller, as described herein, on one or more devices of the storage plane258(e.g., a storage array). In the compute and storage planes256,258ofFIG.2E, the authorities168interact with the underlying physical resources (i.e., devices). From the point of view of an authority168, its resources are striped over all of the physical devices. From the point of view of a device, it provides resources to all authorities168, irrespective of where the authorities happen to run. Each authority168has allocated or has been allocated one or more partitions260of storage memory in the storage units152, e.g., partitions260in flash memory206and NVRAM204. Each authority168uses those allocated partitions260that belong to it, for writing or reading user data. Authorities can be associated with differing amounts of physical storage of the system. For example, one authority168could have a larger number of partitions260or larger sized partitions260in one or more storage units152than one or more other authorities168. FIG.2Fdepicts elasticity software layers in blades252of a storage cluster, in accordance with some embodiments. In the elasticity structure, elasticity software is symmetric, i.e., each blade's compute module270runs the three identical layers of processes depicted inFIG.2F. Storage managers274execute read and write requests from other blades252for data and metadata stored in local storage unit152NVRAM204and flash206. Authorities168fulfill client requests by issuing the necessary reads and writes to the blades252on whose storage units152the corresponding data or metadata resides. Endpoints272parse client connection requests received from switch fabric146supervisory software, relay the client connection requests to the authorities168responsible for fulfillment, and relay the authorities'168responses to clients. The symmetric three-layer structure enables the storage system's high degree of concurrency. Elasticity scales out efficiently and reliably in these embodiments. In addition, elasticity implements a unique scale-out technique that balances work evenly across all resources regardless of client access pattern, and maximizes concurrency by eliminating much of the need for inter-blade coordination that typically occurs with conventional distributed locking. Still referring toFIG.2F, authorities168running in the compute modules270of a blade252perform the internal operations required to fulfill client requests. 
One feature of elasticity is that authorities168are stateless, i.e., they cache active data and metadata in their own blades'252DRAMs for fast access, but the authorities store every update in their NVRAM204partitions on three separate blades252until the update has been written to flash206. All the storage system writes to NVRAM204are in triplicate to partitions on three separate blades252in some embodiments. With triple-mirrored NVRAM204and persistent storage protected by parity and Reed-Solomon RAID checksums, the storage system can survive concurrent failure of two blades252with no loss of data, metadata, or access to either. Because authorities168are stateless, they can migrate between blades252. Each authority168has a unique identifier. NVRAM204and flash206partitions are associated with authorities'168identifiers, not with the blades252on which they are running, in some embodiments. Thus, when an authority168migrates, the authority168continues to manage the same storage partitions from its new location. When a new blade252is installed in an embodiment of the storage cluster, the system automatically rebalances load by: partitioning the new blade's252storage for use by the system's authorities168, migrating selected authorities168to the new blade252, starting endpoints272on the new blade252and including them in the switch fabric's146client connection distribution algorithm. From their new locations, migrated authorities168persist the contents of their NVRAM204partitions on flash206, process read and write requests from other authorities168, and fulfill the client requests that endpoints272direct to them. Similarly, if a blade252fails or is removed, the system redistributes its authorities168among the system's remaining blades252. The redistributed authorities168continue to perform their original functions from their new locations. FIG.2Gdepicts authorities168and storage resources in blades252of a storage cluster, in accordance with some embodiments. Each authority168is exclusively responsible for a partition of the flash206and NVRAM204on each blade252. The authority168manages the content and integrity of its partitions independently of other authorities168. Authorities168compress incoming data and preserve it temporarily in their NVRAM204partitions, and then consolidate, RAID-protect, and persist the data in segments of the storage in their flash206partitions. As the authorities168write data to flash206, storage managers274perform the necessary flash translation to optimize write performance and maximize media longevity. In the background, authorities168“garbage collect,” or reclaim space occupied by data that clients have made obsolete by overwriting the data. It should be appreciated that since authorities'168partitions are disjoint, there is no need for distributed locking to execute client reads and writes or to perform background functions. The embodiments described herein may utilize various software, communication and/or networking protocols. In addition, the configuration of the hardware and/or software may be adjusted to accommodate various protocols. For example, the embodiments may utilize Active Directory, which is a database based system that provides authentication, directory, policy, and other services in a WINDOWS™ environment. In these embodiments, LDAP (Lightweight Directory Access Protocol) is one example application protocol for querying and modifying items in directory service providers such as Active Directory.
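Returning to the triple-mirrored NVRAM204writes described above, the following Python sketch shows an update being mirrored to NVRAM partitions on three separate blades before the write is acknowledged; the blade partitions are simulated lists and all names are illustrative assumptions.

    # Sketch: mirror every update to NVRAM partitions on three blades
    # before acknowledging; destaging to flash happens later.
    def write_update(update, nvram_partitions):
        if len(nvram_partitions) < 3:
            raise RuntimeError("need NVRAM partitions on three separate blades")
        for partition in nvram_partitions[:3]:
            partition.append(update)   # triplicate the update
        return True                    # now safe to acknowledge the client

    blade_partitions = [[], [], []]    # partitions on three separate blades
    write_update(b"metadata update", blade_partitions)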
In some embodiments, a network lock manager (‘NLM’) is utilized as a facility that works in cooperation with the Network File System (‘NFS’) to provide a System V style of advisory file and record locking over a network. The Server Message Block (‘SMB’) protocol, one version of which is also known as Common Internet File System (‘CIFS’), may be integrated with the storage systems discussed herein. SMB operates as an application-layer network protocol typically used for providing shared access to files, printers, and serial ports and miscellaneous communications between nodes on a network. SMB also provides an authenticated inter-process communication mechanism. AMAZON™ S3 (Simple Storage Service) is a web service offered by Amazon Web Services, and the systems described herein may interface with Amazon S3 through web services interfaces (REST (representational state transfer), SOAP (simple object access protocol), and BitTorrent). A RESTful API (application programming interface) breaks down a transaction to create a series of small modules. Each module addresses a particular underlying part of the transaction. The control or permissions provided with these embodiments, especially for object data, may include utilization of an access control list (‘ACL’). The ACL is a list of permissions attached to an object and the ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects. The systems may utilize Internet Protocol version 6 (‘IPv6’), as well as IPv4, for the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. The routing of packets between networked systems may include Equal-cost multi-path routing (‘ECMP’), which is a routing strategy where next-hop packet forwarding to a single destination can occur over multiple “best paths” which tie for top place in routing metric calculations. Multi-path routing can be used in conjunction with most routing protocols, because it is a per-hop decision limited to a single router. The software may support Multi-tenancy, which is an architecture in which a single instance of a software application serves multiple customers. Each customer may be referred to as a tenant. Tenants may be given the ability to customize some parts of the application, but may not customize the application's code, in some embodiments. The embodiments may maintain audit logs. An audit log is a document that records an event in a computing system. In addition to documenting what resources were accessed, audit log entries typically include destination and source addresses, a timestamp, and user login information for compliance with various regulations. The embodiments may support various key management policies, such as encryption key rotation. In addition, the system may support dynamic root passwords or some variation of dynamically changing passwords. FIG.3Asets forth a diagram of a storage system306that is coupled for data communications with a cloud services provider302in accordance with some embodiments of the present disclosure. Although depicted in less detail, the storage system306depicted inFIG.3Amay be similar to the storage systems described above with reference toFIGS.1A-1DandFIGS.2A-2G.
In some embodiments, the storage system306depicted inFIG.3Amay be embodied as a storage system that includes imbalanced active/active controllers, as a storage system that includes balanced active/active controllers, as a storage system that includes active/active controllers where less than all of each controller's resources are utilized such that each controller has reserve resources that may be used to support failover, as a storage system that includes fully active/active controllers, as a storage system that includes dataset-segregated controllers, as a storage system that includes dual-layer architectures with front-end controllers and back-end integrated storage controllers, as a storage system that includes scale-out clusters of dual-controller arrays, as well as combinations of such embodiments. In the example depicted inFIG.3A, the storage system306is coupled to the cloud services provider302via a data communications link304. Such a data communications link304may be fully wired, fully wireless, or some aggregation of wired and wireless data communications pathways. In such an example, digital information may be exchanged between the storage system306and the cloud services provider302via the data communications link304using one or more data communications protocols. For example, digital information may be exchanged between the storage system306and the cloud services provider302via the data communications link304using the handheld device transfer protocol (‘HDTP’), hypertext transfer protocol (‘HTTP’), internet protocol (‘IP’), real-time transfer protocol (‘RTP’), transmission control protocol (‘TCP’), user datagram protocol (‘UDP’), wireless application protocol (‘WAP’), or other protocol. The cloud services provider302depicted inFIG.3Amay be embodied, for example, as a system and computing environment that provides a vast array of services to users of the cloud services provider302through the sharing of computing resources via the data communications link304. The cloud services provider302may provide on-demand access to a shared pool of configurable computing resources such as computer networks, servers, storage, applications and services, and so on. In the example depicted inFIG.3A, the cloud services provider302may be configured to provide a variety of services to the storage system306and users of the storage system306through the implementation of various service models. For example, the cloud services provider302may be configured to provide services through the implementation of an infrastructure as a service (‘IaaS’) service model, through the implementation of a platform as a service (‘PaaS’) service model, through the implementation of a software as a service (‘SaaS’) service model, through the implementation of an authentication as a service (‘AaaS’) service model, through the implementation of a storage as a service model where the cloud services provider302offers access to its storage infrastructure for use by the storage system306and users of the storage system306, and so on. In the example depicted inFIG.3A, the cloud services provider302may be embodied, for example, as a private cloud, as a public cloud, or as a combination of a private cloud and public cloud. In an embodiment in which the cloud services provider302is embodied as a private cloud, the cloud services provider302may be dedicated to providing services to a single organization rather than providing services to multiple organizations. 
In an embodiment where the cloud services provider302is embodied as a public cloud, the cloud services provider302may provide services to multiple organizations. In still alternative embodiments, the cloud services provider302may be embodied as a mix of private and public cloud services with a hybrid cloud deployment. Although not explicitly depicted inFIG.3A, readers will appreciate that a vast number of additional hardware components and additional software components may be necessary to facilitate the delivery of cloud services to the storage system306and users of the storage system306. For example, the storage system306may be coupled to (or even include) a cloud storage gateway. Such a cloud storage gateway may be embodied, for example, as a hardware-based or software-based appliance that is located on premises with the storage system306. Such a cloud storage gateway may operate as a bridge between local applications that are executing on the storage system306and remote, cloud-based storage that is utilized by the storage system306. Through the use of a cloud storage gateway, organizations may move primary iSCSI or NAS to the cloud services provider302, thereby enabling the organization to save space on their on-premises storage systems. Such a cloud storage gateway may be configured to emulate a disk array, a block-based device, a file server, or other storage system that can translate the SCSI commands, file server commands, or other appropriate commands into REST-space protocols that facilitate communications with the cloud services provider302. In order to enable the storage system306and users of the storage system306to make use of the services provided by the cloud services provider302, a cloud migration process may take place during which data, applications, or other elements from an organization's local systems (or even from another cloud environment) are moved to the cloud services provider302. In order to successfully migrate data, applications, or other elements to the cloud services provider's302environment, middleware such as a cloud migration tool may be utilized to bridge gaps between the cloud services provider's302environment and an organization's environment. In order to further enable the storage system306and users of the storage system306to make use of the services provided by the cloud services provider302, a cloud orchestrator may also be used to arrange and coordinate automated tasks in pursuit of creating a consolidated process or workflow. Such a cloud orchestrator may perform tasks such as configuring various components, whether those components are cloud components or on-premises components, as well as managing the interconnections between such components. In the example depicted inFIG.3A, and as described briefly above, the cloud services provider302may be configured to provide services to the storage system306and users of the storage system306through the usage of a SaaS service model. For example, the cloud services provider302may be configured to provide access to data analytics applications to the storage system306and users of the storage system306. Such data analytics applications may be configured, for example, to receive vast amounts of telemetry data phoned home by the storage system306.
Such telemetry data may describe various operating characteristics of the storage system306and may be analyzed for a vast array of purposes including, for example, to determine the health of the storage system306, to identify workloads that are executing on the storage system306, to predict when the storage system306will run out of various resources, to recommend configuration changes, hardware or software upgrades, workflow migrations, or other actions that may improve the operation of the storage system306. The cloud services provider302may also be configured to provide access to virtualized computing environments to the storage system306and users of the storage system306. Examples of such virtualized environments can include virtual machines that are created to emulate an actual computer, virtualized desktop environments that separate a logical desktop from a physical machine, virtualized file systems that allow uniform access to different types of concrete file systems, and many others. Although the example depicted inFIG.3Aillustrates the storage system306being coupled for data communications with the cloud services provider302, in other embodiments the storage system306may be part of a hybrid cloud deployment in which private cloud elements (e.g., private cloud services, on-premises infrastructure, and so on) and public cloud elements (e.g., public cloud services, infrastructure, and so on that may be provided by one or more cloud services providers) are combined to form a single solution, with orchestration among the various platforms. Such a hybrid cloud deployment may leverage hybrid cloud management software such as, for example, Azure™ Arc from Microsoft™, that centralizes the management of the hybrid cloud deployment to any infrastructure and enables the deployment of services anywhere. In such an example, the hybrid cloud management software may be configured to create, update, and delete resources (both physical and virtual) that form the hybrid cloud deployment, to allocate compute and storage to specific workloads, to monitor workloads and resources for performance, policy compliance, updates and patches, and security status, and to perform a variety of other tasks. Readers will appreciate that by pairing the storage systems described herein with one or more cloud services providers, various offerings may be enabled. For example, disaster recovery as a service (‘DRaaS’) may be provided where cloud resources are utilized to protect applications and data from disruption caused by disaster, including in embodiments where the storage systems may serve as the primary data store. In such embodiments, a total system backup may be taken that allows for business continuity in the event of system failure. In such embodiments, cloud data backup techniques (by themselves or as part of a larger DRaaS solution) may also be integrated into an overall solution that includes the storage systems and cloud services providers described herein. The storage systems described herein, as well as the cloud services providers, may be utilized to provide a wide array of security features. For example, the storage systems may encrypt data at rest (and data may be sent to and from the storage systems encrypted) and may make use of Key Management-as-a-Service (‘KMaaS’) to manage encryption keys, keys for locking and unlocking storage devices, and so on.
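Returning to the telemetry analysis described at the start of this passage, the following Python sketch shows one hypothetical way an analytics application might project when a storage system will run out of capacity by fitting a linear trend to phoned-home usage samples; the data and function are invented for the example and are not part of any embodiment.

    # Hypothetical sketch: project days until capacity is exhausted.
    def predict_exhaustion(samples, capacity):
        # samples: list of (day, bytes_used) pairs; least-squares slope.
        n = len(samples)
        mean_t = sum(t for t, _ in samples) / n
        mean_u = sum(u for _, u in samples) / n
        slope = (sum((t - mean_t) * (u - mean_u) for t, u in samples)
                 / sum((t - mean_t) ** 2 for t, _ in samples))
        if slope <= 0:
            return None                # usage flat or shrinking
        latest_t, latest_u = samples[-1]
        return (capacity - latest_u) / slope

    usage = [(0, 10e12), (7, 11e12), (14, 12.1e12), (21, 13e12)]
    print(predict_exhaustion(usage, capacity=20e12))   # approx. days left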
Likewise, as another security feature, cloud data security gateways or similar mechanisms may be utilized to ensure that data stored within the storage systems does not improperly end up being stored in the cloud as part of a cloud data backup operation. Furthermore, microsegmentation or identity-based-segmentation may be utilized in a data center that includes the storage systems or within the cloud services provider, to create secure zones in data centers and cloud deployments that enable the isolation of workloads from one another. For further explanation,FIG.3Bsets forth a diagram of a storage system306in accordance with some embodiments of the present disclosure. Although depicted in less detail, the storage system306depicted inFIG.3Bmay be similar to the storage systems described above with reference toFIGS.1A-1DandFIGS.2A-2Gas the storage system may include many of the components described above. The storage system306depicted inFIG.3Bmay include a vast amount of storage resources308, which may be embodied in many forms. For example, the storage resources308can include nano-RAM or another form of nonvolatile random access memory that utilizes carbon nanotubes deposited on a substrate, 3D crosspoint non-volatile memory, flash memory including single-level cell (‘SLC’) NAND flash, multi-level cell (‘MLC’) NAND flash, triple-level cell (‘TLC’) NAND flash, quad-level cell (‘QLC’) NAND flash, or others. Likewise, the storage resources308may include non-volatile magnetoresistive random-access memory (‘MRAM’), including spin transfer torque (‘STT’) MRAM. The example storage resources308may alternatively include non-volatile phase-change memory (‘PCM’), quantum memory that allows for the storage and retrieval of photonic quantum information, resistive random-access memory (‘ReRAM’), storage class memory (‘SCM’), or other forms of storage resources, including any combination of resources described herein. Readers will appreciate that other forms of computer memories and storage devices may be utilized by the storage systems described above, including DRAM, SRAM, EEPROM, universal memory, and many others. The storage resources308depicted inFIG.3Bmay be embodied in a variety of form factors, including but not limited to, dual in-line memory modules (‘DIMMs’), non-volatile dual in-line memory modules (‘NVDIMMs’), M.2, U.2, and others. The storage resources308depicted inFIG.3Bmay include various forms of SCM. SCM may effectively treat fast, non-volatile memory (e.g., NAND flash) as an extension of DRAM such that an entire dataset may be treated as an in-memory dataset that resides entirely in DRAM. SCM may include non-volatile media such as, for example, NAND flash. Such NAND flash may be accessed utilizing NVMe that can use the PCIe bus as its transport, providing for relatively low access latencies compared to older protocols. In fact, the network protocols used for SSDs in all-flash arrays can include NVMe using Ethernet (ROCE, NVME TCP), Fibre Channel (NVMe FC), InfiniBand (iWARP), and others that make it possible to treat fast, non-volatile memory as an extension of DRAM. In view of the fact that DRAM is often byte-addressable while fast, non-volatile memory such as NAND flash is block-addressable, a controller software/hardware stack may be needed to convert the block data to the bytes that are stored in the media. Examples of media and software that may be used as SCM can include, for example, 3D XPoint, Intel Memory Drive Technology, Samsung's Z-SSD, and others.
The storage resources308depicted inFIG.3Bmay also include racetrack memory (also referred to as domain-wall memory). Such racetrack memory may be embodied as a form of non-volatile, solid-state memory that relies on the intrinsic strength and orientation of the magnetic field created by an electron as it spins in addition to its electronic charge, in solid-state devices. Through the use of spin-coherent electric current to move magnetic domains along a nanoscopic permalloy wire, the domains may pass by magnetic read/write heads positioned near the wire as current is passed through the wire, which alter the domains to record patterns of bits. In order to create a racetrack memory device, many such wires and read/write elements may be packaged together. The example storage system306depicted inFIG.3Bmay implement a variety of storage architectures. For example, storage systems in accordance with some embodiments of the present disclosure may utilize block storage where data is stored in blocks, and each block essentially acts as an individual hard drive. Storage systems in accordance with some embodiments of the present disclosure may utilize object storage, where data is managed as objects. Each object may include the data itself, a variable amount of metadata, and a globally unique identifier, where object storage can be implemented at multiple levels (e.g., device level, system level, interface level). Storage systems in accordance with some embodiments of the present disclosure may utilize file storage in which data is stored in a hierarchical structure. Such data may be saved in files and folders, and presented to both the system storing it and the system retrieving it in the same format. The example storage system306depicted inFIG.3Bmay be embodied as a storage system in which additional storage resources can be added through the use of a scale-up model, additional storage resources can be added through the use of a scale-out model, or through some combination thereof. In a scale-up model, additional storage may be added by adding additional storage devices. In a scale-out model, however, additional storage nodes may be added to a cluster of storage nodes, where such storage nodes can include additional processing resources, additional networking resources, and so on. The example storage system306depicted inFIG.3Bmay leverage the storage resources described above in a variety of different ways. For example, some portion of the storage resources may be utilized to serve as a write cache, storage resources within the storage system may be utilized as a read cache, or tiering may be achieved within the storage systems by placing data within the storage system in accordance with one or more tiering policies. The storage system306depicted inFIG.3Balso includes communications resources310that may be useful in facilitating data communications between components within the storage system306, as well as data communications between the storage system306and computing devices that are outside of the storage system306, including embodiments where those resources are separated by a relatively vast expanse. The communications resources310may be configured to utilize a variety of different protocols and data communication fabrics to facilitate data communications between components within the storage systems as well as computing devices that are outside of the storage system.
For example, the communications resources310can include fibre channel (‘FC’) technologies such as FC fabrics and FC protocols that can transport SCSI commands over FC networks, FC over Ethernet (‘FCoE’) technologies through which FC frames are encapsulated and transmitted over Ethernet networks, InfiniBand (‘IB’) technologies in which a switched fabric topology is utilized to facilitate transmissions between channel adapters, NVM Express (‘NVMe’) technologies and NVMe over fabrics (‘NVMeoF’) technologies through which non-volatile storage media attached via a PCI express (‘PCIe’) bus may be accessed, and others. In fact, the storage systems described above may, directly or indirectly, make use of neutrino communication technologies and devices through which information (including binary information) is transmitted using a beam of neutrinos. The communications resources310can also include mechanisms for accessing storage resources308within the storage system306utilizing serial attached SCSI (‘SAS’), serial ATA (‘SATA’) bus interfaces for connecting storage resources308within the storage system306to host bus adapters within the storage system306, internet small computer systems interface (‘iSCSI’) technologies to provide block-level access to storage resources308within the storage system306, and other communications resources that may be useful in facilitating data communications between components within the storage system306, as well as data communications between the storage system306and computing devices that are outside of the storage system306. The storage system306depicted inFIG.3Balso includes processing resources312that may be useful in executing computer program instructions and performing other computational tasks within the storage system306. The processing resources312may include one or more ASICs that are customized for some particular purpose as well as one or more CPUs. The processing resources312may also include one or more DSPs, one or more FPGAs, one or more systems on a chip (‘SoCs’), or other form of processing resources312. The storage system306may utilize the processing resources312to perform a variety of tasks including, but not limited to, supporting the execution of software resources314that will be described in greater detail below. The storage system306depicted inFIG.3Balso includes software resources314that, when executed by processing resources312within the storage system306, may perform a vast array of tasks. The software resources314may include, for example, one or more modules of computer program instructions that when executed by processing resources312within the storage system306are useful in carrying out various data protection techniques. Such data protection techniques may be carried out, for example, by system software executing on computer hardware within the storage system, by a cloud services provider, or in other ways. Such data protection techniques can include data archiving, data backup, data replication, data snapshotting, data and database cloning, and other data protection techniques. The software resources314may also include software that is useful in implementing software-defined storage (‘SDS’). In such an example, the software resources314may include one or more modules of computer program instructions that, when executed, are useful in policy-based provisioning and management of data storage that is independent of the underlying hardware.
Such software resources314may be useful in implementing storage virtualization to separate the storage hardware from the software that manages the storage hardware. The software resources314may also include software that is useful in facilitating and optimizing I/O operations that are directed to the storage system306. For example, the software resources314may include software modules that perform various data reduction techniques such as, for example, data compression, data deduplication, and others. The software resources314may include software modules that intelligently group together I/O operations to facilitate better usage of the underlying storage resources308, software modules that perform data migration operations to migrate data within a storage system, as well as software modules that perform other functions. Such software resources314may be embodied as one or more software containers or in many other ways. For further explanation,FIG.3Csets forth an example of a cloud-based storage system318in accordance with some embodiments of the present disclosure. In the example depicted inFIG.3C, the cloud-based storage system318is created entirely in a cloud computing environment316such as, for example, Amazon Web Services (‘AWS’)™, Microsoft Azure™, Google Cloud Platform™, IBM Cloud™, Oracle Cloud™, and others. The cloud-based storage system318may be used to provide services similar to the services that may be provided by the storage systems described above. The cloud-based storage system318depicted inFIG.3Cincludes two cloud computing instances320,322that each are used to support the execution of a storage controller application324,326. The cloud computing instances320,322may be embodied, for example, as instances of cloud computing resources (e.g., virtual machines) that may be provided by the cloud computing environment316to support the execution of software applications such as the storage controller application324,326. For example, each of the cloud computing instances320,322may execute on an Azure VM, where each Azure VM may include high speed temporary storage that may be leveraged as a cache (e.g., as a read cache). In one embodiment, the cloud computing instances320,322may be embodied as Amazon Elastic Compute Cloud (‘EC2’) instances. In such an example, an Amazon Machine Image (‘AMI’) that includes the storage controller application324,326may be booted to create and configure a virtual machine that may execute the storage controller application324,326. In the example method depicted inFIG.3C, the storage controller application324,326may be embodied as a module of computer program instructions that, when executed, carries out various storage tasks. For example, the storage controller application324,326may be embodied as a module of computer program instructions that, when executed, carries out the same tasks as the controllers110A,110B inFIG.1Adescribed above such as writing data to the cloud-based storage system318, erasing data from the cloud-based storage system318, retrieving data from the cloud-based storage system318, monitoring and reporting of storage device utilization and performance, performing redundancy operations, such as RAID or RAID-like data redundancy operations, compressing data, encrypting data, deduplicating data, and so forth.
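As a minimal illustration of the data deduplication named above, the following Python sketch stores each unique block once under its content hash and hands callers a reference; real deduplication engines are far more involved, and all names here are assumptions for the example.

    # Sketch: content-addressed deduplication of fixed-size blocks.
    import hashlib

    class DedupStore:
        def __init__(self):
            self.blocks = {}           # content hash -> block bytes

        def write(self, block):
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # keep only the first copy
            return digest              # callers retain the reference

    store = DedupStore()
    ref1 = store.write(b"same data")
    ref2 = store.write(b"same data")
    assert ref1 == ref2 and len(store.blocks) == 1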
Readers will appreciate that because there are two cloud computing instances320,322that each include the storage controller application324,326, in some embodiments one cloud computing instance320may operate as the primary controller as described above while the other cloud computing instance322may operate as the secondary controller as described above. Readers will appreciate that the storage controller application324,326depicted inFIG.3Cmay include identical source code that is executed within different cloud computing instances320,322such as distinct EC2 instances. Readers will appreciate that other embodiments that do not include a primary and secondary controller are within the scope of the present disclosure. For example, each cloud computing instance320,322may operate as a primary controller for some portion of the address space supported by the cloud-based storage system318, each cloud computing instance320,322may operate as a primary controller where the servicing of I/O operations directed to the cloud-based storage system318are divided in some other way, and so on. In fact, in other embodiments where cost savings may be prioritized over performance demands, only a single cloud computing instance may exist that contains the storage controller application. The cloud-based storage system318depicted inFIG.3Cincludes cloud computing instances340a,340b,340nwith local storage330,334,338. The cloud computing instances340a,340b,340nmay be embodied, for example, as instances of cloud computing resources that may be provided by the cloud computing environment316to support the execution of software applications. The cloud computing instances340a,340b,340nofFIG.3Cmay differ from the cloud computing instances320,322described above as the cloud computing instances340a,340b,340nofFIG.3Chave local storage330,334,338resources whereas the cloud computing instances320,322that support the execution of the storage controller application324,326need not have local storage resources. The cloud computing instances340a,340b,340nwith local storage330,334,338may be embodied, for example, as EC2 M5 instances that include one or more SSDs, as EC2 R5 instances that include one or more SSDs, as EC2 I3 instances that include one or more SSDs, and so on. In some embodiments, the local storage330,334,338must be embodied as solid-state storage (e.g., SSDs) rather than storage that makes use of hard disk drives. In the example depicted inFIG.3C, each of the cloud computing instances340a,340b,340nwith local storage330,334,338can include a software daemon328,332,336that, when executed by a cloud computing instance340a,340b,340n, can present itself to the storage controller applications324,326as if the cloud computing instance340a,340b,340nwere a physical storage device (e.g., one or more SSDs). In such an example, the software daemon328,332,336may include computer program instructions similar to those that would normally be contained on a storage device such that the storage controller applications324,326can send and receive the same commands that a storage controller would send to storage devices. In such a way, the storage controller applications324,326may include code that is identical to (or substantially identical to) the code that would be executed by the controllers in the storage systems described above.
In these and similar embodiments, communications between the storage controller applications324,326and the cloud computing instances340a,340b,340nwith local storage330,334,338may utilize iSCSI, NVMe over TCP, messaging, a custom protocol, or some other mechanism. In the example depicted inFIG.3C, each of the cloud computing instances340a,340b,340nwith local storage330,334,338may also be coupled to block storage342,344,346that is offered by the cloud computing environment316such as, for example, as Amazon Elastic Block Store (‘EBS’) volumes. In such an example, the block storage342,344,346that is offered by the cloud computing environment316may be utilized in a manner that is similar to how the NVRAM devices described above are utilized, as the software daemon328,332,336(or some other module) that is executing within a particular cloud computing instance340a,340b,340nmay, upon receiving a request to write data, initiate a write of the data to its attached EBS volume as well as a write of the data to its local storage330,334,338resources. In some alternative embodiments, data may only be written to the local storage330,334,338resources within a particular cloud computing instance340a,340b,340n. In an alternative embodiment, rather than using the block storage342,344,346that is offered by the cloud computing environment316as NVRAM, actual RAM on each of the cloud computing instances340a,340b,340nwith local storage330,334,338may be used as NVRAM, thereby decreasing network utilization costs that would be associated with using an EBS volume as the NVRAM. In yet another embodiment, high performance block storage resources such as one or more Azure Ultra Disks may be utilized as the NVRAM. When a request to write data is received by a particular cloud computing instance340a,340b,340nwith local storage330,334,338, the software daemon328,332,336may be configured to not only write the data to its own local storage330,334,338resources and any appropriate block storage342,344,346resources, but the software daemon328,332,336may also be configured to write the data to cloud-based object storage348that is attached to the particular cloud computing instance340a,340b,340n. The cloud-based object storage348that is attached to the particular cloud computing instance340a,340b,340nmay be embodied, for example, as Amazon Simple Storage Service (‘S3’). In other embodiments, the cloud computing instances320,322that each include the storage controller application324,326may initiate the storage of the data in the local storage330,334,338of the cloud computing instances340a,340b,340nand the cloud-based object storage348. In other embodiments, rather than using both the cloud computing instances340a,340b,340nwith local storage330,334,338(also referred to herein as ‘virtual drives’) and the cloud-based object storage348to store data, a persistent storage layer may be implemented in other ways. For example, one or more Azure Ultra Disks may be used to persistently store data (e.g., after the data has been written to the NVRAM layer). In an embodiment where one or more Azure Ultra Disks may be used to persistently store data, the usage of a cloud-based object storage348may be eliminated such that data is only stored persistently in the Azure Ultra Disks without also writing the data to an object storage layer.
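As an illustration of the write path just described, the following Python sketch shows a software daemon persisting an incoming write to its attached block volume (playing the NVRAM role), to its local storage, and to cloud-based object storage; all three stores are simulated dictionaries and every name is a hypothetical stand-in.

    # Sketch: virtual-drive daemon write path with three landing zones.
    class VirtualDriveDaemon:
        def __init__(self, local_store, block_volume, object_store):
            self.local_store = local_store    # instance SSDs
            self.block_volume = block_volume  # attached block volume (NVRAM role)
            self.object_store = object_store  # cloud object storage

        def handle_write(self, key, data):
            self.block_volume[key] = data     # durable NVRAM-style landing zone
            self.local_store[key] = data      # fast local copy for reads
            self.object_store[key] = data     # long-term object storage copy

    daemon = VirtualDriveDaemon({}, {}, {})
    daemon.handle_write("volume1/block/0", b"client data")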
While the local storage330,334,338resources and the block storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340nmay support block-level access, the cloud-based object storage348that is attached to the particular cloud computing instance340a,340b,340nsupports only object-based access. The software daemon328,332,336may therefore be configured to take blocks of data, package those blocks into objects, and write the objects to the cloud-based object storage348that is attached to the particular cloud computing instance340a,340b,340n. In some embodiments, all data that is stored by the cloud-based storage system318may be stored in both: 1) the cloud-based object storage348, and 2) at least one of the local storage330,334,338resources or block storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n. In such embodiments, the local storage330,334,338resources and block storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340nmay effectively operate as a cache that generally includes all data that is also stored in S3, such that all reads of data may be serviced by the cloud computing instances340a,340b,340nwithout requiring the cloud computing instances340a,340b,340nto access the cloud-based object storage348. Readers will appreciate that in other embodiments, however, all data that is stored by the cloud-based storage system318may be stored in the cloud-based object storage348, but less than all data that is stored by the cloud-based storage system318may be stored in at least one of the local storage330,334,338resources or block storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n. In such an example, various policies may be utilized to determine which subset of the data that is stored by the cloud-based storage system318should reside in both: 1) the cloud-based object storage348, and 2) at least one of the local storage330,334,338resources or block storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n. One or more modules of computer program instructions that are executing within the cloud-based storage system318(e.g., a monitoring module that is executing on its own EC2 instance) may be designed to handle the failure of one or more of the cloud computing instances340a,340b,340nwith local storage330,334,338. In such an example, the monitoring module may handle the failure of one or more of the cloud computing instances340a,340b,340nwith local storage330,334,338by creating one or more new cloud computing instances with local storage, retrieving data that was stored on the failed cloud computing instances340a,340b,340nfrom the cloud-based object storage348, and storing the data retrieved from the cloud-based object storage348in local storage on the newly created cloud computing instances. Readers will appreciate that many variants of this process may be implemented. Readers will appreciate that various performance aspects of the cloud-based storage system318may be monitored (e.g., by a monitoring module that is executing in an EC2 instance) such that the cloud-based storage system318can be scaled-up or scaled-out as needed.
For example, if the cloud computing instances 320, 322 that are used to support the execution of a storage controller application 324, 326 are undersized and not sufficiently servicing the I/O requests that are issued by users of the cloud-based storage system 318, a monitoring module may create a new, more powerful cloud computing instance (e.g., a cloud computing instance of a type that includes more processing power, more memory, etc.) that includes the storage controller application such that the new, more powerful cloud computing instance can begin operating as the primary controller. Likewise, if the monitoring module determines that the cloud computing instances 320, 322 that are used to support the execution of a storage controller application 324, 326 are oversized and that cost savings could be gained by switching to a smaller, less powerful cloud computing instance, the monitoring module may create a new, less powerful (and less expensive) cloud computing instance that includes the storage controller application such that the new, less powerful cloud computing instance can begin operating as the primary controller. The storage systems described above may carry out intelligent data backup techniques through which data stored in the storage system may be copied and stored in a distinct location to avoid data loss in the event of equipment failure or some other form of catastrophe. For example, the storage systems described above may be configured to examine each backup to avoid restoring the storage system to an undesirable state. Consider an example in which malware infects the storage system. In such an example, the storage system may include software resources 314 that can scan each backup to identify backups that were captured before the malware infected the storage system and those backups that were captured after the malware infected the storage system. In such an example, the storage system may restore itself from a backup that does not include the malware, or at least may not restore the portions of a backup that contained the malware. In such an example, the storage system may include software resources 314 that can scan each backup to identify the presence of malware (or a virus, or some other undesirable element), for example, by identifying write operations that were serviced by the storage system and originated from a network subnet that is suspected to have delivered the malware, by identifying write operations that were serviced by the storage system and originated from a user that is suspected to have delivered the malware, by identifying write operations that were serviced by the storage system and examining the content of the write operation against fingerprints of the malware, and in many other ways. Readers will further appreciate that the backups (often in the form of one or more snapshots) may also be utilized to perform rapid recovery of the storage system. Consider an example in which the storage system is infected with ransomware that locks users out of the storage system. In such an example, software resources 314 within the storage system may be configured to detect the presence of ransomware and may be further configured to restore the storage system to a point-in-time, using the retained backups, prior to the point-in-time at which the ransomware infected the storage system.
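The recovery flow just described, restoring to a point-in-time prior to the infection, reduces to choosing the newest retained snapshot that predates the infection. The sketch below shows one way that selection might look; the shape of the snapshot catalog (timestamp, identifier pairs) is an illustrative assumption.

```python
def restore_point(snapshots, infected_at):
    """Pick the most recent retained snapshot taken before the infection.

    `snapshots` is an iterable of (timestamp, snapshot_id) pairs; both that
    shape and the selection policy are assumptions for illustration.
    """
    candidates = [(ts, sid) for ts, sid in snapshots if ts < infected_at]
    # max() compares timestamps first, yielding the newest pre-infection
    # snapshot; None signals that no clean restore point was retained.
    return max(candidates)[1] if candidates else None
```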
In such an example, the presence of ransomware may be explicitly detected through the use of software tools utilized by the system, through the use of a key (e.g., a USB drive) that is inserted into the storage system, or in a similar way. Likewise, the presence of ransomware may be inferred in response to system activity meeting a predetermined fingerprint such as, for example, no reads or writes coming into the system for a predetermined period of time. Readers will appreciate that the various components described above may be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways. Readers will appreciate that the storage systems described in this disclosure may be useful for supporting various types of software applications. In fact, the storage systems may be 'application aware' in the sense that the storage systems may obtain, maintain, or otherwise have access to information describing connected applications (e.g., applications that utilize the storage systems) to optimize the operation of the storage system based on intelligence about the applications and their utilization patterns. For example, the storage system may optimize data layouts, optimize caching behaviors, optimize 'QoS' levels, or perform some other optimization that is designed to improve the storage performance that is experienced by the application. As an example of one type of application that may be supported by the storage systems described herein, the storage system 306 may be useful in supporting artificial intelligence ('AI') applications, database applications, XOps projects (e.g., DevOps projects, DataOps projects, MLOps projects, ModelOps projects, PlatformOps projects), electronic design automation tools, event-driven software applications, high performance computing applications, simulation applications, high-speed data capture and analysis applications, machine learning applications, media production applications, media serving applications, picture archiving and communication systems ('PACS') applications, software development applications, virtual reality applications, augmented reality applications, and many other types of applications by providing storage resources to such applications. In view of the fact that the storage systems include compute resources, storage resources, and a wide variety of other resources, the storage systems may be well suited to support applications that are resource intensive such as, for example, AI applications. AI applications may be deployed in a variety of fields, including: predictive maintenance in manufacturing and related fields, healthcare applications such as patient data & risk analytics, retail and marketing deployments (e.g., search advertising, social media advertising), supply chain solutions, fintech solutions such as business analytics & reporting tools, operational deployments such as real-time analytics tools, application performance management tools, IT infrastructure management tools, and many others.
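Referring back to the inferred detection fingerprint described at the start of this passage (no reads or writes arriving for a predetermined period), a minimal sketch of that check might look as follows; the one-hour threshold is an assumed placeholder for the predetermined period, not a value taken from the disclosure.

```python
import datetime

IDLE_THRESHOLD = datetime.timedelta(hours=1)  # assumed 'predetermined period'

def ransomware_suspected(last_read_at, last_write_at, now):
    # Inferred detection per the fingerprint described above: neither reads
    # nor writes have arrived at the system for the predetermined period.
    idle_reads = (now - last_read_at) > IDLE_THRESHOLD
    idle_writes = (now - last_write_at) > IDLE_THRESHOLD
    return idle_reads and idle_writes
```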
Such AI applications may enable devices to perceive their environment and take actions that maximize their chance of success at some goal. Examples of such AI applications can include IBM Watson™, Microsoft Oxford™, Google DeepMind™, Baidu Minwa™, and others. The storage systems described above may also be well suited to support other types of applications that are resource intensive such as, for example, machine learning applications. Machine learning applications may perform various types of data analysis to automate analytical model building. Using algorithms that iteratively learn from data, machine learning applications can enable computers to learn without being explicitly programmed. One particular area of machine learning is referred to as reinforcement learning, which involves taking suitable actions to maximize reward in a particular situation. In addition to the resources already described, the storage systems described above may also include graphics processing units ('GPUs'), occasionally referred to as visual processing units ('VPUs'). Such GPUs may be embodied as specialized electronic circuits that rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. Such GPUs may be included within any of the computing devices that are part of the storage systems described above, including as one of many individually scalable components of a storage system, where other examples of individually scalable components of such a storage system can include storage components, memory components, compute components (e.g., CPUs, FPGAs, ASICs), networking components, software components, and others. In addition to GPUs, the storage systems described above may also include neural network processors ('NNPs') for use in various aspects of neural network processing. Such NNPs may be used in place of (or in addition to) GPUs and may also be independently scalable. As described above, the storage systems described herein may be configured to support artificial intelligence applications, machine learning applications, big data analytics applications, and many other types of applications. The rapid growth in these sorts of applications is being driven by three technologies: deep learning (DL), GPU processors, and Big Data. Deep learning is a computing model that makes use of massively parallel neural networks inspired by the human brain. Instead of experts handcrafting software, a deep learning model writes its own software by learning from lots of examples. Such GPUs may include thousands of cores that are well-suited to run algorithms that loosely represent the parallel nature of the human brain. Advances in deep neural networks, including the development of multi-layer neural networks, have ignited a new wave of algorithms and tools for data scientists to tap into their data with artificial intelligence (AI). With improved algorithms, larger data sets, and various frameworks (including open-source software libraries for machine learning across a range of tasks), data scientists are tackling new use cases like autonomous driving vehicles, natural language processing and understanding, computer vision, machine reasoning, strong AI, and many others.
Applications of AI techniques have materialized in a wide array of products including, for example, Amazon Echo's speech recognition technology that allows users to talk to their machines, Google Translate™, which allows for machine-based language translation, Spotify's Discover Weekly that provides recommendations on new songs and artists that a user may like based on the user's usage and traffic analysis, Quill's text generation offering that takes structured data and turns it into narrative stories, Chatbots that provide real-time, contextually specific answers to questions in a dialog format, and many others. Data is the heart of modern AI and deep learning algorithms. Before training can begin, one problem that must be addressed revolves around collecting the labeled data that is crucial for training an accurate AI model. A full-scale AI deployment may be required to continuously collect, clean, transform, label, and store large amounts of data. Adding additional high quality data points directly translates to more accurate models and better insights. Data samples may undergo a series of processing steps including, but not limited to: 1) ingesting the data from an external source into the training system and storing the data in raw form, 2) cleaning and transforming the data in a format convenient for training, including linking data samples to the appropriate label, 3) exploring parameters and models, quickly testing with a smaller dataset, and iterating to converge on the most promising models to push into the production cluster, 4) executing training phases to select random batches of input data, including both new and older samples, and feeding those into production GPU servers for computation to update model parameters, and 5) evaluating, including using a holdback portion of the data not used in training in order to evaluate model accuracy on the holdout data. This lifecycle may apply for any type of parallelized machine learning, not just neural networks or deep learning. For example, standard machine learning frameworks may rely on CPUs instead of GPUs but the data ingest and training workflows may be the same. Readers will appreciate that a single shared storage data hub creates a coordination point throughout the lifecycle without the need for extra data copies among the ingest, preprocessing, and training stages. Rarely is the ingested data used for only one purpose, and shared storage gives the flexibility to train multiple different models or apply traditional analytics to the data. Readers will appreciate that each stage in the AI data pipeline may have varying requirements from the data hub (e.g., the storage system or collection of storage systems). Scale-out storage systems must deliver uncompromising performance for all manner of access types and patterns: from small, metadata-heavy files to large files, from random to sequential access patterns, and from low to high concurrency. The storage systems described above may serve as an ideal AI data hub as the systems may service unstructured workloads. In the first stage, data is ideally ingested and stored onto the same data hub that following stages will use, in order to avoid excess data copying. The next two steps can be done on a standard compute server that optionally includes a GPU, and then in the fourth and last stage, full training production jobs are run on powerful GPU-accelerated servers. Often, there is a production pipeline alongside an experimental pipeline operating on the same dataset.
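As a concrete illustration of step 5 above, the following minimal Python sketch reserves a holdback portion of a sample list before training so that model accuracy is evaluated on data never used for training; the holdout fraction, the shuffling policy, and the fixed seed are illustrative assumptions rather than anything prescribed by this disclosure.

```python
import random

def split_holdout(samples, holdout_fraction=0.1, seed=42):
    """Reserve a holdback portion of `samples` (a list) for evaluation.

    Returns (training_set, holdout_set). Shuffling with a fixed seed keeps
    the split reproducible across runs; both choices are illustrative.
    """
    rng = random.Random(seed)
    shuffled = samples[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - holdout_fraction))
    return shuffled[:cut], shuffled[cut:]
```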
Further, the GPU-accelerated servers can be used independently for different models or joined together to train on one larger model, even spanning multiple systems for distributed training. If the shared storage tier is slow, then data must be copied to local storage for each phase, resulting in wasted time staging data onto different servers. The ideal data hub for the AI training pipeline delivers performance similar to data stored locally on the server node while also having the simplicity and performance to enable all pipeline stages to operate concurrently. In order for the storage systems described above to serve as a data hub or as part of an AI deployment, in some embodiments the storage systems may be configured to provide DMA between storage devices that are included in the storage systems and one or more GPUs that are used in an AI or big data analytics pipeline. The one or more GPUs may be coupled to the storage system, for example, via NVMe-over-Fabrics ('NVMe-oF') such that bottlenecks such as the host CPU can be bypassed and the storage system (or one of the components contained therein) can directly access GPU memory. In such an example, the storage systems may leverage API hooks to the GPUs to transfer data directly to the GPUs. For example, the GPUs may be embodied as Nvidia™ GPUs and the storage systems may support GPUDirect Storage ('GDS') software, or similar proprietary software, that enables the storage system to transfer data to the GPUs via RDMA or a similar mechanism. Although the preceding paragraphs discuss deep learning applications, readers will appreciate that the storage systems described herein may also be part of a distributed deep learning ('DDL') platform to support the execution of DDL algorithms. The storage systems described above may also be paired with other technologies such as TensorFlow, an open-source software library for dataflow programming across a range of tasks that may be used for machine learning applications such as neural networks, to facilitate the development of such machine learning models, applications, and so on. The storage systems described above may also be used in a neuromorphic computing environment. Neuromorphic computing is a form of computing that mimics brain cells. To support neuromorphic computing, an architecture of interconnected 'neurons' replaces traditional computing models with low-powered signals that go directly between neurons for more efficient computation. Neuromorphic computing may make use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system, as well as analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems for perception, motor control, or multisensory integration. Readers will appreciate that the storage systems described above may be configured to support the storage or use of (among other types of data) blockchains and derivative items such as, for example, open source blockchains and related tools that are part of the IBM™ Hyperledger project, permissioned blockchains in which a certain number of trusted parties are allowed to access the blockchain, blockchain products that enable developers to build their own distributed ledger projects, and others. Blockchains and the storage systems described herein may be leveraged to support on-chain storage of data as well as off-chain storage of data.
Off-chain storage of data can be implemented in a variety of ways and can occur when the data itself is not stored within the blockchain. For example, in one embodiment, a hash function may be utilized and the data itself may be fed into the hash function to generate a hash value. In such an example, the hashes of large pieces of data may be embedded within transactions, instead of the data itself. Readers will appreciate that, in other embodiments, alternatives to blockchains may be used to facilitate the decentralized storage of information. For example, one alternative to a blockchain that may be used is a blockweave. While conventional blockchains store every transaction to achieve validation, a blockweave permits secure decentralization without the usage of the entire chain, thereby enabling low cost on-chain storage of data. Such blockweaves may utilize a consensus mechanism that is based on proof of access (PoA) and proof of work (PoW). The storage systems described above may, either alone or in combination with other computing devices, be used to support in-memory computing applications. In-memory computing involves the storage of information in RAM that is distributed across a cluster of computers. Readers will appreciate that the storage systems described above, especially those that are configurable with customizable amounts of processing resources, storage resources, and memory resources (e.g., those systems in which blades contain configurable amounts of each type of resource), may be configured in a way so as to provide an infrastructure that can support in-memory computing. Likewise, the storage systems described above may include component parts (e.g., NVDIMMs, 3D crosspoint storage that provide fast random access memory that is persistent) that can actually provide for an improved in-memory computing environment as compared to in-memory computing environments that rely on RAM distributed across dedicated servers. In some embodiments, the storage systems described above may be configured to operate as a hybrid in-memory computing environment that includes a universal interface to all storage media (e.g., RAM, flash storage, 3D crosspoint storage). In such embodiments, users may have no knowledge regarding the details of where their data is stored but they can still use the same full, unified API to address data. In such embodiments, the storage system may (in the background) move data to the fastest layer available, including intelligently placing the data in dependence upon various characteristics of the data or in dependence upon some other heuristic. In such an example, the storage systems may even make use of existing products such as Apache Ignite and GridGain to move data between the various storage layers, or the storage systems may make use of custom software to move data between the various storage layers. The storage systems described herein may implement various optimizations to improve the performance of in-memory computing such as, for example, having computations occur as close to the data as possible. Readers will further appreciate that in some embodiments, the storage systems described above may be paired with other resources to support the applications described above.
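Returning to the off-chain storage scheme described at the start of this passage, the hashing step is small enough to show directly: only a fixed-size digest of the data is embedded in a transaction, and the digest lets anyone holding the off-chain data verify it later. SHA-256 is one reasonable choice of hash function here, not one mandated by the text.

```python
import hashlib

def off_chain_anchor(data: bytes) -> str:
    # Feed the data itself into a hash function; only this fixed-size
    # digest is embedded in the on-chain transaction, not the data.
    return hashlib.sha256(data).hexdigest()

def verify_off_chain(data: bytes, recorded_digest: str) -> bool:
    # Anyone holding the off-chain data can recompute the hash and compare
    # it with the digest recorded in the blockchain transaction.
    return off_chain_anchor(data) == recorded_digest
```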
For example, one infrastructure could include primary compute in the form of servers and workstations which specialize in using General-purpose computing on graphics processing units ('GPGPU') to accelerate deep learning applications that are interconnected into a computation engine to train parameters for deep neural networks. Each system may have Ethernet external connectivity, InfiniBand external connectivity, some other form of external connectivity, or some combination thereof. In such an example, the GPUs can be grouped for a single large training or used independently to train multiple models. The infrastructure could also include a storage system such as those described above to provide, for example, a scale-out all-flash file or object store through which data can be accessed via high-performance protocols such as NFS, S3, and so on. The infrastructure can also include, for example, redundant top-of-rack Ethernet switches connected to storage and compute via ports in MLAG port channels for redundancy. The infrastructure could also include additional compute in the form of whitebox servers, optionally with GPUs, for data ingestion, pre-processing, and model debugging. Readers will appreciate that additional infrastructures are also possible. Readers will appreciate that the storage systems described above, either alone or in coordination with other computing machinery, may be configured to support other AI related tools. For example, the storage systems may make use of tools like ONNX or other open neural network exchange formats that make it easier to transfer models written in different AI frameworks. Likewise, the storage systems may be configured to support tools like Amazon's Gluon that allow developers to prototype, build, and train deep learning models. In fact, the storage systems described above may be part of a larger platform, such as IBM™ Cloud Private for Data, that includes integrated data science, data engineering and application building services. Readers will further appreciate that the storage systems described above may also be deployed as an edge solution. Such an edge solution may be in place to optimize cloud computing systems by performing data processing at the edge of the network, near the source of the data. Edge computing can push applications, data and computing power (i.e., services) away from centralized points to the logical extremes of a network. Through the use of edge solutions such as the storage systems described above, computational tasks may be performed using the compute resources provided by such storage systems, data may be stored using the storage resources of the storage system, and cloud-based services may be accessed through the use of various resources of the storage system (including networking resources). By performing computational tasks on the edge solution, storing data on the edge solution, and generally making use of the edge solution, the consumption of expensive cloud-based resources may be avoided and, in fact, performance improvements may be experienced relative to a heavier reliance on cloud-based resources. While many tasks may benefit from the utilization of an edge solution, some particular uses may be especially suited for deployment in such an environment. For example, devices like drones, autonomous cars, robots, and others may require extremely rapid processing, so fast, in fact, that sending data up to a cloud environment and back to receive data processing support may simply be too slow.
As an additional example, some IoT devices such as connected video cameras may not be well-suited for the utilization of cloud-based resources as it may be impractical (not only from a privacy, security, or financial perspective) to send the data to the cloud simply because of the sheer volume of data that is involved. As such, many tasks that rely on data processing, storage, or communications may be better suited by platforms that include edge solutions such as the storage systems described above. The storage systems described above may, alone or in combination with other computing resources, serve as a network edge platform that combines compute resources, storage resources, networking resources, cloud technologies and network virtualization technologies, and so on. As part of the network, the edge may take on characteristics similar to other network facilities, from the customer premise and backhaul aggregation facilities to Points of Presence (PoPs) and regional data centers. Readers will appreciate that network workloads, such as Virtual Network Functions (VNFs) and others, will reside on the network edge platform. Enabled by a combination of containers and virtual machines, the network edge platform may rely on controllers and schedulers that are no longer geographically co-located with the data processing resources. The functions, as microservices, may split into control planes, user and data planes, or even state machines, allowing for independent optimization and scaling techniques to be applied. Such user and data planes may be enabled through increased accelerators, both those residing in server platforms, such as FPGAs and Smart NICs, and through SDN-enabled merchant silicon and programmable ASICs. The storage systems described above may also be optimized for use in big data analytics, including being leveraged as part of a composable data analytics pipeline where containerized analytics architectures, for example, make analytics capabilities more composable. Big data analytics may be generally described as the process of examining large and varied data sets to uncover hidden patterns, unknown correlations, market trends, customer preferences and other useful information that can help organizations make more-informed business decisions. As part of that process, semi-structured and unstructured data such as, for example, internet clickstream data, web server logs, social media content, text from customer emails and survey responses, mobile-phone call-detail records, IoT sensor data, and other data may be converted to a structured form. The storage systems described above may also support (including implementing as a system interface) applications that perform tasks in response to human speech. For example, the storage systems may support the execution of intelligent personal assistant applications such as, for example, Amazon's Alexa™, Apple Siri™, Google Voice™, Samsung Bixby™, Microsoft Cortana™, and others. While the examples described in the previous sentence make use of voice as input, the storage systems described above may also support chatbots, talkbots, chatterbots, or artificial conversational entities or other applications that are configured to conduct a conversation via auditory or textual methods. Likewise, the storage system may actually execute such an application to enable a user such as a system administrator to interact with the storage system via speech.
Such applications are generally capable of voice interaction, music playback, making to-do lists, setting alarms, streaming podcasts, playing audiobooks, and providing weather, traffic, and other real time information, such as news, although in embodiments in accordance with the present disclosure, such applications may be utilized as interfaces to various system management operations. The storage systems described above may also implement AI platforms for delivering on the vision of self-driving storage. Such AI platforms may be configured to deliver global predictive intelligence by collecting and analyzing large amounts of storage system telemetry data points to enable effortless management, analytics and support. In fact, such storage systems may be capable of predicting both capacity and performance, as well as generating intelligent advice on workload deployment, interaction and optimization. Such AI platforms may be configured to scan all incoming storage system telemetry data against a library of issue fingerprints to predict and resolve incidents in real-time, before they impact customer environments, and to capture hundreds of variables related to performance that are used to forecast performance load. The storage systems described above may support the serialized or simultaneous execution of artificial intelligence applications, machine learning applications, data analytics applications, data transformations, and other tasks that collectively may form an AI ladder. Such an AI ladder may effectively be formed by combining such elements to form a complete data science pipeline, where dependencies exist between elements of the AI ladder. For example, AI may require that some form of machine learning has taken place, machine learning may require that some form of analytics has taken place, analytics may require that some form of data and information architecting has taken place, and so on. As such, each element may be viewed as a rung in an AI ladder that collectively can form a complete and sophisticated AI solution. The storage systems described above may also, either alone or in combination with other computing environments, be used to deliver an AI everywhere experience where AI permeates wide and expansive aspects of business and life. For example, AI may play an important role in the delivery of deep learning solutions, deep reinforcement learning solutions, artificial general intelligence solutions, autonomous vehicles, cognitive computing solutions, commercial UAVs or drones, conversational user interfaces, enterprise taxonomies, ontology management solutions, machine learning solutions, smart dust, smart robots, smart workplaces, and many others. The storage systems described above may also, either alone or in combination with other computing environments, be used to deliver a wide range of transparently immersive experiences (including those that use digital twins of various "things" such as people, places, processes, systems, and so on) where technology can introduce transparency between people, businesses, and things. Such transparently immersive experiences may be delivered as augmented reality technologies, connected homes, virtual reality technologies, brain-computer interfaces, human augmentation technologies, nanotube electronics, volumetric displays, 4D printing technologies, or others. The storage systems described above may also, either alone or in combination with other computing environments, be used to support a wide variety of digital platforms.
Such digital platforms can include, for example, 5G wireless systems and platforms, digital twin platforms, edge computing platforms, IoT platforms, quantum computing platforms, serverless PaaS, software-defined security, neuromorphic computing platforms, and so on. The storage systems described above may also be part of a multi-cloud environment in which multiple cloud computing and storage services are deployed in a single heterogeneous architecture. In order to facilitate the operation of such a multi-cloud environment, DevOps tools may be deployed to enable orchestration across clouds. Likewise, continuous development and continuous integration tools may be deployed to standardize processes around continuous integration and delivery, new feature rollout and provisioning cloud workloads. By standardizing these processes, a multi-cloud strategy may be implemented that enables the utilization of the best provider for each workload. The storage systems described above may be used as a part of a platform to enable the use of crypto-anchors that may be used to authenticate a product's origins and contents to ensure that it matches a blockchain record associated with the product. Similarly, as part of a suite of tools to secure data stored on the storage system, the storage systems described above may implement various encryption technologies and schemes, including lattice cryptography. Lattice cryptography can involve constructions of cryptographic primitives that involve lattices, either in the construction itself or in the security proof. Unlike public-key schemes such as the RSA, Diffie-Hellman or Elliptic-Curve cryptosystems, which are easily attacked by a quantum computer, some lattice-based constructions appear to be resistant to attack by both classical and quantum computers. A quantum computer is a device that performs quantum computing. Quantum computing is computing using quantum-mechanical phenomena, such as superposition and entanglement. Quantum computers differ from traditional computers that are based on transistors, as such traditional computers require that data be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1). In contrast to traditional computers, quantum computers use quantum bits, which can be in superpositions of states. A quantum computer maintains a sequence of qubits, where a single qubit can represent a one, a zero, or any quantum superposition of those two qubit states. A pair of qubits can be in any quantum superposition of 4 states, and three qubits in any superposition of 8 states. A quantum computer with n qubits can generally be in an arbitrary superposition of up to 2^n different states simultaneously, whereas a traditional computer can only be in one of these states at any one time. A quantum Turing machine is a theoretical model of such a computer. The storage systems described above may also be paired with FPGA-accelerated servers as part of a larger AI or ML infrastructure. Such FPGA-accelerated servers may reside near the storage systems described above (e.g., in the same data center) or even be incorporated into an appliance that includes one or more storage systems, one or more FPGA-accelerated servers, networking infrastructure that supports communications between the one or more storage systems and the one or more FPGA-accelerated servers, as well as other hardware and software components.
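Returning to the qubit state counting above, the n-qubit claim can be written compactly; this is a standard textbook identity rather than anything specific to the disclosed systems. An n-qubit register holds a normalized superposition over all 2^n computational basis states:

```latex
\[
\lvert \psi \rangle = \sum_{x=0}^{2^{n}-1} \alpha_{x} \lvert x \rangle,
\qquad
\sum_{x=0}^{2^{n}-1} \lvert \alpha_{x} \rvert^{2} = 1
\]
```

Setting n = 2 or n = 3 recovers the 4-state and 8-state counts quoted above.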
Alternatively, FPGA-accelerated servers may reside within a cloud computing environment that may be used to perform compute-related tasks for AI and ML jobs. Any of the embodiments described above may be used to collectively serve as an FPGA-based AI or ML platform. Readers will appreciate that, in some embodiments of the FPGA-based AI or ML platform, the FPGAs that are contained within the FPGA-accelerated servers may be reconfigured for different types of ML models (e.g., LSTMs, CNNs, GRUs). The ability to reconfigure the FPGAs that are contained within the FPGA-accelerated servers may enable the acceleration of a ML or AI application based on the optimal numerical precision and memory model being used. Readers will appreciate that by treating the collection of FPGA-accelerated servers as a pool of FPGAs, any CPU in the data center may utilize the pool of FPGAs as a shared hardware microservice, rather than limiting a server to dedicated accelerators plugged into it. The FPGA-accelerated servers and the GPU-accelerated servers described above may implement a model of computing where, rather than keeping a small amount of data in a CPU and running a long stream of instructions over it as occurs in more traditional computing models, the machine learning model and parameters are pinned into the high-bandwidth on-chip memory with lots of data streaming through the high-bandwidth on-chip memory. FPGAs may even be more efficient than GPUs for this computing model, as the FPGAs can be programmed with only the instructions needed to run this kind of computing model. The storage systems described above may be configured to provide parallel storage, for example, through the use of a parallel file system such as BeeGFS. Such parallel file systems may include a distributed metadata architecture. For example, the parallel file system may include a plurality of metadata servers across which metadata is distributed, as well as components that include services for clients and storage servers. The systems described above can support the execution of a wide array of software applications. Such software applications can be deployed in a variety of ways, including container-based deployment models. Containerized applications may be managed using a variety of tools. For example, containerized applications may be managed using Docker Swarm, Kubernetes, and others. Containerized applications may be used to facilitate a serverless, cloud native computing deployment and management model for software applications. In support of a serverless, cloud native computing deployment and management model for software applications, containers may be used as part of an event handling mechanism (e.g., AWS Lambdas) such that various events cause a containerized application to be spun up to operate as an event handler. The systems described above may be deployed in a variety of ways, including being deployed in ways that support fifth generation ('5G') networks. 5G networks may support substantially faster data communications than previous generations of mobile communications networks and, as a consequence, may lead to the disaggregation of data and computing resources as modern massive data centers may become less prominent and may be replaced, for example, by more-local, micro data centers that are close to the mobile-network towers. The systems described above may be included in such local, micro data centers and may be part of or paired to multi-access edge computing ('MEC') systems.
Such MEC systems may enable cloud computing capabilities and an IT service environment at the edge of the cellular network. By running applications and performing related processing tasks closer to the cellular customer, network congestion may be reduced and applications may perform better. The storage systems described above may also be configured to implement NVMe Zoned Namespaces. Through the use of NVMe Zoned Namespaces, the logical address space of a namespace is divided into zones. Each zone provides a logical block address range that must be written sequentially and explicitly reset before rewriting, thereby enabling the creation of namespaces that expose the natural boundaries of the device and offload management of internal mapping tables to the host. In order to implement NVMe Zoned Namespaces ('ZNS'), ZNS SSDs or some other form of zoned block devices may be utilized that expose a namespace logical address space using zones. With the zones aligned to the internal physical properties of the device, several inefficiencies in the placement of data can be eliminated. In such embodiments, each zone may be mapped, for example, to a separate application such that functions like wear levelling and garbage collection could be performed on a per-zone or per-application basis rather than across the entire device. In order to support ZNS, the storage controllers described herein may be configured to interact with zoned block devices through the usage of, for example, the Linux™ kernel zoned block device interface or other tools. The storage systems described above may also be configured to implement zoned storage in other ways such as, for example, through the usage of shingled magnetic recording (SMR) storage devices. In examples where zoned storage is used, device-managed embodiments may be deployed where the storage devices hide this complexity by managing it in the firmware, presenting an interface like any other storage device. Alternatively, zoned storage may be implemented via a host-managed embodiment that depends on the operating system to know how to handle the drive, and only write sequentially to certain regions of the drive. Zoned storage may similarly be implemented using a host-aware embodiment in which a combination of a drive managed and host managed implementation is deployed. The storage systems described herein may be used to form a data lake. A data lake may operate as the first place that an organization's data flows to, where such data may be in a raw format. Metadata tagging may be implemented to facilitate searches of data elements in the data lake, especially in embodiments where the data lake contains multiple stores of data, in formats not easily accessible or readable (e.g., unstructured data, semi-structured data, structured data). From the data lake, data may go downstream to a data warehouse where data may be stored in a more processed, packaged, and consumable format. The storage systems described above may also be used to implement such a data warehouse. In addition, a data mart or data hub may allow for data that is even more easily consumed, where the storage systems described above may also be used to provide the underlying storage resources necessary for a data mart or data hub. In embodiments, queries of the data lake may require a schema-on-read approach, where data is applied to a plan or schema as it is pulled out of a stored location, rather than as it goes into the stored location.
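Returning to the zoned-namespace behavior described above, the following Python sketch models the host-visible contract of a zone: writes must land exactly at the zone's write pointer (i.e., be sequential), and a zone must be explicitly reset before it can be rewritten. The class is an illustrative model only, not a real ZNS driver or the Linux zoned block device interface.

```python
class Zone:
    """Host-side model of one zone in a zoned namespace."""

    def __init__(self, start_lba: int, capacity_blocks: int) -> None:
        self.start = start_lba
        self.capacity = capacity_blocks
        self.write_pointer = start_lba   # next LBA that may be written

    def write(self, lba: int, num_blocks: int) -> None:
        if lba != self.write_pointer:
            raise ValueError("zones must be written sequentially")
        if self.write_pointer + num_blocks > self.start + self.capacity:
            raise ValueError("write exceeds zone capacity")
        self.write_pointer += num_blocks  # advance past the written blocks

    def reset(self) -> None:
        # An explicit reset makes the zone writable from its start again,
        # mirroring the reset-before-rewrite rule described above.
        self.write_pointer = self.start
```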
The storage systems described herein may also be configured to implement a recovery point objective ('RPO'), which may be established by a user, established by an administrator, established as a system default, established as part of a storage class or service that the storage system is participating in the delivery of, or in some other way. A "recovery point objective" is a goal for the maximum time difference between the last update to a source dataset and the last replicated dataset update that would be correctly recoverable, given a reason to do so, from a continuously or frequently updated copy of the source dataset. An update is correctly recoverable if it properly takes into account all updates that were processed on the source dataset prior to the last recoverable replicated dataset update. In synchronous replication, the RPO would be zero, meaning that under normal operation, all completed updates on the source dataset should be present and correctly recoverable on the copy dataset. In best effort nearly synchronous replication, the RPO can be as low as a few seconds. In snapshot-based replication, the RPO can be roughly calculated as the interval between snapshots plus the time to transfer the modifications between a previous already transferred snapshot and the most recent to-be-replicated snapshot. If updates accumulate faster than they are replicated, then an RPO can be missed. For snapshot-based replication, if more data to be replicated accumulates between two snapshots than can be replicated between taking the snapshot and replicating that snapshot's cumulative updates to the copy, then the RPO can be missed. If, again in snapshot-based replication, data to be replicated accumulates at a faster rate than could be transferred in the time between subsequent snapshots, then replication can start to fall further behind, which can extend the miss between the expected recovery point objective and the actual recovery point that is represented by the last correctly replicated update. The storage systems described above may also be part of a shared nothing storage cluster. In a shared nothing storage cluster, each node of the cluster has local storage and communicates with other nodes in the cluster through networks, where the storage used by the cluster is (in general) provided only by the storage connected to each individual node. A collection of nodes that are synchronously replicating a dataset may be one example of a shared nothing storage cluster, as each storage system has local storage and communicates to other storage systems through a network, where those storage systems do not (in general) use storage from somewhere else that they share access to through some kind of interconnect. In contrast, some of the storage systems described above are themselves built as a shared-storage cluster, since there are drive shelves that are shared by the paired controllers. Other storage systems described above, however, are built as a shared nothing storage cluster, as all storage is local to a particular node (e.g., a blade) and all communication is through networks that link the compute nodes together. In other embodiments, other forms of a shared nothing storage cluster can include embodiments where any node in the cluster has a local copy of all the storage it needs, and where data is mirrored through a synchronous style of replication to other nodes in the cluster either to ensure that the data isn't lost or because other nodes are also using that storage.
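As a worked restatement of the snapshot-based RPO arithmetic described above, here is a minimal Python sketch; the numbers in the usage example are illustrative assumptions, not measurements from any particular system.

```python
def snapshot_rpo_estimate(snapshot_interval_s: float,
                          dirty_bytes: float,
                          transfer_bytes_per_s: float) -> float:
    """Rough RPO for snapshot-based replication, per the text above: the
    interval between snapshots plus the time to transfer the modifications
    accumulated since the previously transferred snapshot."""
    return snapshot_interval_s + dirty_bytes / transfer_bytes_per_s

# Illustrative numbers (assumptions): 5-minute snapshots, 30 GiB modified
# per interval, 100 MiB/s of replication bandwidth.
rpo = snapshot_rpo_estimate(300, 30 * 2**30, 100 * 2**20)
# -> 300 s + ~307 s, i.e. roughly ten minutes of worst-case data exposure.
# If modifications accumulate faster than they can be transferred, each
# cycle falls further behind and the configured RPO is missed.
```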
In such an embodiment, if a new cluster node needs some data, that data can be copied to the new node from other nodes that have copies of the data. In some embodiments, mirror-copy-based shared storage clusters may store multiple copies of all the cluster's stored data, with each subset of data replicated to a particular set of nodes, and different subsets of data replicated to different sets of nodes. In some variations, embodiments may store all of the cluster's stored data in all nodes, whereas in other variations nodes may be divided up such that a first set of nodes will all store the same set of data and a second, different set of nodes will all store a different set of data. Readers will appreciate that RAFT-based databases (e.g., etcd) may operate like shared-nothing storage clusters where all RAFT nodes store all data. The amount of data stored in a RAFT cluster, however, may be limited so that extra copies don't consume too much storage. A container server cluster might also be able to replicate all data to all cluster nodes, presuming the containers don't tend to be too large and their bulk data (the data manipulated by the applications that run in the containers) is stored elsewhere such as in an S3 cluster or an external file server. In such an example, the container storage may be provided by the cluster directly through its shared-nothing storage model, with those containers providing the images that form the execution environment for parts of an application or service. For further explanation, FIG. 3D illustrates an exemplary computing device 350 that may be specifically configured to perform one or more of the processes described herein. As shown in FIG. 3D, computing device 350 may include a communication interface 352, a processor 354, a storage device 356, and an input/output ("I/O") module 358 communicatively connected one to another via a communication infrastructure 360. While an exemplary computing device 350 is shown in FIG. 3D, the components illustrated in FIG. 3D are not intended to be limiting. Additional or alternative components may be used in other embodiments. Components of computing device 350 shown in FIG. 3D will now be described in additional detail. Communication interface 352 may be configured to communicate with one or more computing devices. Examples of communication interface 352 include, without limitation, a wired network interface (such as a network interface card), a wireless network interface (such as a wireless network interface card), a modem, an audio/video connection, and any other suitable interface. Processor 354 generally represents any type or form of processing unit capable of processing data and/or interpreting, executing, and/or directing execution of one or more of the instructions, processes, and/or operations described herein. Processor 354 may perform operations by executing computer-executable instructions 362 (e.g., an application, software, code, and/or other executable data instance) stored in storage device 356. Storage device 356 may include one or more data storage media, devices, or configurations and may employ any type, form, and combination of data storage media and/or devices. For example, storage device 356 may include, but is not limited to, any combination of the non-volatile media and/or volatile media described herein. Electronic data, including data described herein, may be temporarily and/or permanently stored in storage device 356.
For example, data representative of computer-executable instructions 362 configured to direct processor 354 to perform any of the operations described herein may be stored within storage device 356. In some examples, data may be arranged in one or more databases residing within storage device 356. I/O module 358 may include one or more I/O modules configured to receive user input and provide user output. I/O module 358 may include any hardware, firmware, software, or combination thereof supportive of input and output capabilities. For example, I/O module 358 may include hardware and/or software for capturing user input, including, but not limited to, a keyboard or keypad, a touchscreen component (e.g., touchscreen display), a receiver (e.g., an RF or infrared receiver), motion sensors, and/or one or more input buttons. I/O module 358 may include one or more devices for presenting output to a user, including, but not limited to, a graphics engine, a display (e.g., a display screen), one or more output drivers (e.g., display drivers), one or more audio speakers, and one or more audio drivers. In certain embodiments, I/O module 358 is configured to provide graphical data to a display for presentation to a user. The graphical data may be representative of one or more graphical user interfaces and/or any other graphical content as may serve a particular implementation. In some examples, any of the systems, computing devices, and/or other components described herein may be implemented by computing device 350. For further explanation, FIG. 3E illustrates an example of a fleet of storage systems 376 for providing storage services (also referred to herein as 'data services'). The fleet of storage systems 376 depicted in FIG. 3E includes a plurality of storage systems 374a, 374b, 374c, 374d, 374n that may each be similar to the storage systems described herein. The storage systems 374a, 374b, 374c, 374d, 374n in the fleet of storage systems 376 may be embodied as identical storage systems or as different types of storage systems. For example, two of the storage systems 374a, 374n depicted in FIG. 3E are depicted as being cloud-based storage systems, as the resources that collectively form each of the storage systems 374a, 374n are provided by distinct cloud services providers 370, 372. For example, the first cloud services provider 370 may be Amazon AWS™ whereas the second cloud services provider 372 is Microsoft Azure™, although in other embodiments one or more public clouds, private clouds, or combinations thereof may be used to provide the underlying resources that are used to form a particular storage system in the fleet of storage systems 376. The example depicted in FIG. 3E includes an edge management service 366 for delivering storage services in accordance with some embodiments of the present disclosure. The storage services (also referred to herein as 'data services') that are delivered may include, for example, services to provide a certain amount of storage to a consumer, services to provide storage to a consumer in accordance with a predetermined service level agreement, services to provide storage to a consumer in accordance with predetermined regulatory requirements, and many others. The edge management service 366 depicted in FIG. 3E may be embodied, for example, as one or more modules of computer program instructions executing on computer hardware such as one or more computer processors.
Alternatively, the edge management service 366 may be embodied as one or more modules of computer program instructions executing on a virtualized execution environment such as one or more virtual machines, in one or more containers, or in some other way. In other embodiments, the edge management service 366 may be embodied as a combination of the embodiments described above, including embodiments where the one or more modules of computer program instructions that are included in the edge management service 366 are distributed across multiple physical or virtual execution environments. The edge management service 366 may operate as a gateway for providing storage services to storage consumers, where the storage services leverage storage offered by one or more storage systems 374a, 374b, 374c, 374d, 374n. For example, the edge management service 366 may be configured to provide storage services to host devices 378a, 378b, 378c, 378d, 378n that are executing one or more applications that consume the storage services. In such an example, the edge management service 366 may operate as a gateway between the host devices 378a, 378b, 378c, 378d, 378n and the storage systems 374a, 374b, 374c, 374d, 374n, rather than requiring that the host devices 378a, 378b, 378c, 378d, 378n directly access the storage systems 374a, 374b, 374c, 374d, 374n. The edge management service 366 of FIG. 3E exposes a storage services module 364 to the host devices 378a, 378b, 378c, 378d, 378n of FIG. 3E, although in other embodiments the edge management service 366 may expose the storage services module 364 to other consumers of the various storage services. The various storage services may be presented to consumers via one or more user interfaces, via one or more APIs, or through some other mechanism provided by the storage services module 364. As such, the storage services module 364 depicted in FIG. 3E may be embodied as one or more modules of computer program instructions executing on physical hardware, on a virtualized execution environment, or combinations thereof, where executing such modules enables a consumer of storage services to be offered, to select, and to access the various storage services. The edge management service 366 of FIG. 3E also includes a system management services module 368. The system management services module 368 of FIG. 3E includes one or more modules of computer program instructions that, when executed, perform various operations in coordination with the storage systems 374a, 374b, 374c, 374d, 374n to provide storage services to the host devices 378a, 378b, 378c, 378d, 378n. The system management services module 368 may be configured, for example, to perform tasks such as provisioning storage resources from the storage systems 374a, 374b, 374c, 374d, 374n via one or more APIs exposed by the storage systems 374a, 374b, 374c, 374d, 374n, migrating datasets or workloads amongst the storage systems 374a, 374b, 374c, 374d, 374n via one or more APIs exposed by the storage systems 374a, 374b, 374c, 374d, 374n, setting one or more tunable parameters (i.e., one or more configurable settings) on the storage systems 374a, 374b, 374c, 374d, 374n via one or more APIs exposed by the storage systems 374a, 374b, 374c, 374d, 374n, and so on. For example, many of the services described below relate to embodiments where the storage systems 374a, 374b, 374c, 374d, 374n are configured to operate in some way.
In such examples, the system management services module 368 may be responsible for using APIs (or some other mechanism) provided by the storage systems 374a, 374b, 374c, 374d, 374n to configure the storage systems 374a, 374b, 374c, 374d, 374n to operate in the ways described below. In addition to configuring the storage systems 374a, 374b, 374c, 374d, 374n, the edge management service 366 itself may be configured to perform various tasks required to provide the various storage services. Consider an example in which the storage service includes a service that, when selected and applied, causes personally identifiable information ('PII') contained in a dataset to be obfuscated when the dataset is accessed. In such an example, the storage systems 374a, 374b, 374c, 374d, 374n may be configured to obfuscate PII when servicing read requests directed to the dataset. Alternatively, the storage systems 374a, 374b, 374c, 374d, 374n may service reads by returning data that includes the PII, but the edge management service 366 itself may obfuscate the PII as the data is passed through the edge management service 366 on its way from the storage systems 374a, 374b, 374c, 374d, 374n to the host devices 378a, 378b, 378c, 378d, 378n. The storage systems 374a, 374b, 374c, 374d, 374n depicted in FIG. 3E may be embodied as one or more of the storage systems described above with reference to FIGS. 1A-3D, including variations thereof. In fact, the storage systems 374a, 374b, 374c, 374d, 374n may serve as a pool of storage resources where the individual components in that pool have different performance characteristics, different storage characteristics, and so on. For example, one of the storage systems 374a may be a cloud-based storage system, another storage system 374b may be a storage system that provides block storage, another storage system 374c may be a storage system that provides file storage, another storage system 374d may be a relatively high-performance storage system while another storage system 374n may be a relatively low-performance storage system, and so on. In alternative embodiments, only a single storage system may be present. The storage systems 374a, 374b, 374c, 374d, 374n depicted in FIG. 3E may also be organized into different failure domains so that the failure of one storage system 374a should be totally unrelated to the failure of another storage system 374b. For example, each of the storage systems may receive power from independent power systems, each of the storage systems may be coupled for data communications over independent data communications networks, and so on. Furthermore, the storage systems in a first failure domain may be accessed via a first gateway whereas storage systems in a second failure domain may be accessed via a second gateway. For example, the first gateway may be a first instance of the edge management service 366 and the second gateway may be a second instance of the edge management service 366, including embodiments where each instance is distinct, or each instance is part of a distributed edge management service 366. As an illustrative example of available storage services, storage services may be presented to a user that are associated with different levels of data protection. For example, storage services may be presented to the user that, when selected and enforced, guarantee the user that data associated with that user will be protected such that various recovery point objectives ('RPO') can be guaranteed.
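Returning to the PII obfuscation example above, here is a minimal sketch of the gateway-side path, in which reads are returned with PII intact by the storage tier and masked by the edge management service in transit. The single regex (US-style Social Security numbers) stands in for a real PII classifier, and `fetch_from_storage` is a hypothetical helper; both are assumptions for illustration.

```python
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # placeholder PII rule

def obfuscate_pii(payload: str) -> str:
    # Mask anything matching the PII rule before it leaves the gateway.
    return SSN_PATTERN.sub("***-**-****", payload)

def service_read(request, fetch_from_storage):
    data = fetch_from_storage(request)  # storage system returns raw data
    return obfuscate_pii(data)          # PII masked before reaching hosts
```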
As an illustrative example of available storage services, storage services that are associated with different levels of data protection may be presented to a user. For example, storage services may be presented to the user that, when selected and enforced, guarantee the user that data associated with that user will be protected such that various recovery point objectives (‘RPO’) can be met. A first available storage service may ensure, for example, that some dataset associated with the user will be protected such that any data that is more than 5 seconds old can be recovered in the event of a failure of the primary data store whereas a second available storage service may ensure that the dataset that is associated with the user will be protected such that any data that is more than 5 minutes old can be recovered in the event of a failure of the primary data store. An additional example of storage services that may be presented to a user, selected by a user, and ultimately applied to a dataset associated with the user can include one or more data compliance services. Such data compliance services may be embodied, for example, as services that may be provided to consumers (i.e., a user of the data compliance services) to ensure that the user's datasets are managed in a way that adheres to various regulatory requirements. For example, one or more data compliance services may be offered to a user to ensure that the user's datasets are managed in a way so as to adhere to the General Data Protection Regulation (‘GDPR’), one or more data compliance services may be offered to a user to ensure that the user's datasets are managed in a way so as to adhere to the Sarbanes-Oxley Act of 2002 (‘SOX’), or one or more data compliance services may be offered to a user to ensure that the user's datasets are managed in a way so as to adhere to some other regulatory act. In addition, the one or more data compliance services may be offered to a user to ensure that the user's datasets are managed in a way so as to adhere to some non-governmental guidance (e.g., to adhere to best practices for auditing purposes), the one or more data compliance services may be offered to a user to ensure that the user's datasets are managed in a way so as to adhere to a particular client's or organization's requirements, and so on. In order to provide a particular data compliance service, the data compliance service may be presented to a user (e.g., via a GUI) and selected by the user. In response to receiving the selection of the particular data compliance service, one or more storage services policies may be applied to a dataset associated with the user to carry out the particular data compliance service. For example, a storage services policy may be applied requiring that the dataset be encrypted prior to being stored in a storage system, prior to being stored in a cloud environment, or prior to being stored elsewhere. In order to enforce this policy, a requirement may be put in place not only that the dataset be encrypted when stored, but also that the dataset be encrypted prior to transmitting the dataset (e.g., sending the dataset to another party). In such an example, a storage services policy may also be put in place requiring that any encryption keys used to encrypt the dataset are not stored on the same system that stores the dataset itself. Readers will appreciate that many other forms of data compliance services may be offered and implemented in accordance with embodiments of the present disclosure.
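One such policy, described above, requires that a dataset be encrypted before it is stored or transmitted and that the encryption keys be kept off the system that stores the dataset. A minimal sketch, assuming the third-party Python 'cryptography' package and using two in-memory dictionaries as stand-ins for a separate key store and data store:

    from cryptography.fernet import Fernet

    # A minimal sketch of the encryption policy described above: the dataset
    # is encrypted before it is stored, and the key is held in a separate key
    # store rather than on the system holding the data.

    key_store = {}    # stand-in for a separate key management system
    data_store = {}   # stand-in for the storage system holding the dataset

    def store_dataset(dataset_id, plaintext):
        key = Fernet.generate_key()
        key_store[dataset_id] = key          # key never lands in data_store
        data_store[dataset_id] = Fernet(key).encrypt(plaintext)

    def read_dataset(dataset_id):
        key = key_store[dataset_id]
        return Fernet(key).decrypt(data_store[dataset_id])

    store_dataset("customer-records", b"name,ssn\njane,123-45-6789\n")
    assert read_dataset("customer-records").startswith(b"name,ssn")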
The storage systems374a,374b,374c,374d,374nin the fleet of storage systems376may be managed collectively, for example, by one or more fleet management modules. The fleet management modules may be part of or separate from the system management services module368depicted inFIG.3E. The fleet management modules may perform tasks such as monitoring the health of each storage system in the fleet, initiating updates or upgrades on one or more storage systems in the fleet, migrating workloads for load balancing or other performance purposes, and many other tasks. As such, and for many other reasons, the storage systems374a,374b,374c,374d,374nmay be coupled to each other via one or more data communications links in order to exchange data between the storage systems374a,374b,374c,374d,374n. In some embodiments, one or more storage systems or one or more elements of storage systems (e.g., features, services, operations, components, etc. of storage systems), such as any of the illustrative storage systems or storage system elements described herein may be implemented in one or more container systems. A container system may include any system that supports execution of one or more containerized applications or services. Such a service may be software deployed as infrastructure for building applications, for operating a run-time environment, and/or as infrastructure for other services. In the discussion that follows, descriptions of containerized applications generally apply to containerized services as well. A container may combine one or more elements of a containerized software application together with a runtime environment for operating those elements of the software application bundled into a single image. For example, each such container of a containerized application may include executable code of the software application and various dependencies, libraries, and/or other components, together with network configurations and configured access to additional resources, used by the elements of the software application within the particular container in order to enable operation of those elements. A containerized application can be represented as a collection of such containers that together represent all the elements of the application combined with the various run-time environments needed for all those elements to run. As a result, the containerized application may be abstracted away from host operating systems as a combined collection of lightweight and portable packages and configurations, where the containerized application may be uniformly deployed and consistently executed in different computing environments that use different container-compatible operating systems or different infrastructures. In some embodiments, a containerized application shares a kernel with a host computer system and executes as an isolated environment (an isolated collection of files and directories, processes, system and network resources, and configured access to additional resources and capabilities) that is isolated by an operating system of a host system in conjunction with a container management framework. When executed, a containerized application may provide one or more containerized workloads and/or services. The container system may include and/or utilize a cluster of nodes. For example, the container system may be configured to manage deployment and execution of containerized applications on one or more nodes in a cluster. The containerized applications may utilize resources of the nodes, such as memory, processing and/or storage resources provided and/or accessed by the nodes.
The storage resources may include any of the illustrative storage resources described herein and may include on-node resources such as a local tree of files and directories, off-node resources such as external networked file systems, databases or object stores, or both on-node and off-node resources. Access to additional resources and capabilities that could be configured for containers of a containerized application could include specialized computation capabilities such as GPUs and AI/ML engines, or specialized hardware such as sensors and cameras. In some embodiments, the container system may include a container orchestration system (which may also be referred to as a container orchestrator, a container orchestration platform, etc.) designed to make it reasonably simple, and in many use cases automated, to deploy, scale, and manage containerized applications. In some embodiments, the container system may include a storage management system configured to provision and manage storage resources (e.g., virtual volumes) for private or shared use by cluster nodes and/or containers of containerized applications. FIG.3Fillustrates an example container system380. In this example, the container system380includes a container storage system381that may be configured to perform one or more storage management operations to organize, provision, and manage storage resources for use by one or more containerized applications382-1through382-L of container system380. In particular, the container storage system381may organize storage resources into one or more storage pools383of storage resources for use by containerized applications382-1through382-L. The container storage system may itself be implemented as a containerized service. The container system380may include or be implemented by one or more container orchestration systems, including Kubernetes™, Mesos™, Docker Swarm™, among others. The container orchestration system may manage the container system380running on a cluster384through services implemented by a control node, depicted as385, and may further manage the container storage system or the relationship between individual containers and their storage, memory and CPU limits, networking, and their access to additional resources or services. A control plane of the container system380may implement services that include: deploying applications via a controller386, monitoring applications via the controller386, providing an interface via an API server387, and scheduling deployments via scheduler388. In this example, controller386, scheduler388, API server387, and container storage system381are implemented on a single node, node385. In other examples, for resiliency, the control plane may be implemented by multiple, redundant nodes, where if a node that is providing management services for the container system380fails, then another, redundant node may provide management services for the cluster384. A data plane of the container system380may include a set of nodes that provides container runtimes for executing containerized applications. An individual node within the cluster384may execute a container runtime, such as Docker™, and execute a container manager, or node agent, such as a kubelet in Kubernetes (not depicted) that communicates with the control plane via a local network-connected agent (sometimes called a proxy), such as an agent389. The agent389may route network traffic to and from containers using, for example, Internet Protocol (IP) port numbers.
For example, a containerized application may request a storage class from the control plane, where the request is handled by the container manager, and the container manager communicates the request to the control plane using the agent389. Cluster384may include a set of nodes that run containers for managed containerized applications. A node may be a virtual or physical machine. A node may be a host system. The container storage system381may orchestrate storage resources to provide storage to the container system380. For example, the container storage system381may provide persistent storage to containerized applications382-1-382-L using the storage pool383. The container storage system381may itself be deployed as a containerized application by a container orchestration system. For example, the container storage system381application may be deployed within cluster384and perform management functions for providing storage to the containerized applications382. Management functions may include determining one or more storage pools from available storage resources, provisioning virtual volumes on one or more nodes, replicating data, responding to and recovering from host and network faults, or handling storage operations. The storage pool383may include storage resources from one or more local or remote sources, where the storage resources may be different types of storage, including, as examples, block storage, file storage, and object storage. The container storage system381may also be deployed on a set of nodes for which persistent storage may be provided by the container orchestration system. In some examples, the container storage system381may be deployed on all nodes in a cluster384using, for example, a Kubernetes DaemonSet. In this example, nodes390-1through390-N provide a container runtime where container storage system381executes. In other examples, some, but not all nodes in a cluster may execute the container storage system381. The container storage system381may handle storage on a node and communicate with the control plane of container system380, to provide dynamic volumes, including persistent volumes. A persistent volume may be mounted on a node as a virtual volume, such as virtual volumes391-1and391-P. After a virtual volume391is mounted, containerized applications may request and use, or be otherwise configured to use, storage provided by the virtual volume391. In this example, the container storage system381may install a driver on a kernel of a node, where the driver handles storage operations directed to the virtual volume. In this example, the driver may receive a storage operation directed to a virtual volume, and in response, the driver may perform the storage operation on one or more storage resources within the storage pool383, possibly under direction from or using additional logic within containers that implement the container storage system381as a containerized service. The container storage system381may, in response to being deployed as a containerized service, determine available storage resources. For example, storage resources392-1through392-M may include local storage, remote storage (storage on a separate node in a cluster), or both local and remote storage. Storage resources may also include storage from external sources such as various combinations of block storage systems, file storage systems, and object storage systems. 
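A minimal sketch of how such heterogeneous resources might be grouped into storage pools by type, with aggregate capacity tracked per pool; the resource list here is illustrative rather than the discovery output of any particular container storage system:

    from collections import defaultdict

    # A minimal sketch of aggregating storage resources of the same type into
    # per-type storage pools. A real container storage system would discover
    # these resources from a configuration file and from the APIs of the
    # underlying systems.

    resources = [
        ("node-390-1-nvme", "block", 2048),   # (name, type, capacity in GB)
        ("node-390-2-nvme", "block", 2048),
        ("cloud-object-348", "object", 16384),
        ("file-array-374c", "file", 8192),
    ]

    pools = defaultdict(lambda: {"members": [], "capacity_gb": 0})
    for name, storage_type, capacity_gb in resources:
        pools[storage_type]["members"].append(name)
        pools[storage_type]["capacity_gb"] += capacity_gb

    for storage_type, pool in pools.items():
        print(storage_type, pool["capacity_gb"], pool["members"])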
The storage resources392-1through392-M may include any type(s) and/or configuration(s) of storage resources (e.g., any of the illustrative storage resources described above), and the container storage system381may be configured to determine the available storage resources in any suitable way, including based on a configuration file. For example, a configuration file may specify account and authentication information for cloud-based object storage348or for a cloud-based storage system318. The container storage system381may also determine availability of one or more storage devices356or one or more storage systems. An aggregate amount of storage from one or more of storage device(s)356, storage system(s), cloud-based storage system(s)318, edge management services366, cloud-based object storage348, or any other storage resources, or any combination or sub-combination of such storage resources may be used to provide the storage pool383. The storage pool383is used to provision storage for the one or more virtual volumes mounted on one or more of the nodes390within cluster384. In some implementations, the container storage system381may create multiple storage pools. For example, the container storage system381may aggregate storage resources of a same type into an individual storage pool. In this example, a storage type may be one of: a storage device356, a storage array102, a cloud-based storage system318, storage via an edge management service366, or a cloud-based object storage348. Alternatively, a storage type may be storage configured with a certain level or type of redundancy or distribution, such as a particular combination of striping, mirroring, or erasure coding. The container storage system381may execute within the cluster384as a containerized container storage system service, where instances of containers that implement elements of the containerized container storage system service may operate on different nodes within the cluster384. In this example, the containerized container storage system service may operate in conjunction with the container orchestration system of the container system380to handle storage operations, mount virtual volumes to provide storage to a node, aggregate available storage into a storage pool383, provision storage for a virtual volume from a storage pool383, generate backup data, and replicate data between nodes, clusters, or environments, among other storage system operations. In some examples, the containerized container storage system service may provide storage services across multiple clusters operating in distinct computing environments. For example, other storage system operations may include storage system operations described herein. Persistent storage provided by the containerized container storage system service may be used to implement stateful and/or resilient containerized applications. The container storage system381may be configured to perform any suitable storage operations of a storage system. For example, the container storage system381may be configured to perform one or more of the illustrative storage management operations described herein to manage storage resources used by the container system. In some embodiments, one or more storage operations, including one or more of the illustrative storage management operations described herein, may be containerized. For example, one or more storage operations may be implemented as one or more containerized applications configured to be executed to perform the storage operation(s).
Such containerized storage operations may be executed in any suitable runtime environment to manage any storage system(s), including any of the illustrative storage systems described herein. The storage systems described herein may support various forms of data replication. For example, two or more of the storage systems may synchronously replicate a dataset between each other. In synchronous replication, distinct copies of a particular dataset may be maintained by multiple storage systems, but all accesses (e.g., a read) of the dataset should yield consistent results regardless of which storage system the access was directed to. For example, a read directed to any of the storage systems that are synchronously replicating the dataset should return identical results. As such, while updates to the version of the dataset need not occur at exactly the same time, precautions must be taken to ensure consistent accesses to the dataset. For example, if an update (e.g., a write) that is directed to the dataset is received by a first storage system, the update may only be acknowledged as being completed if all storage systems that are synchronously replicating the dataset have applied the update to their copies of the dataset. In such an example, synchronous replication may be carried out through the use of I/O forwarding (e.g., a write received at a first storage system is forwarded to a second storage system), communications between the storage systems (e.g., each storage system indicating that it has completed the update), or in other ways. In other embodiments, a dataset may be replicated through the use of checkpoints. In checkpoint-based replication (also referred to as ‘nearly synchronous replication’), a set of updates to a dataset (e.g., one or more write operations directed to the dataset) may occur between different checkpoints, such that a dataset has been updated to a specific checkpoint only if all updates to the dataset prior to the specific checkpoint have been completed. Consider an example in which a first storage system stores a live copy of a dataset that is being accessed by users of the dataset. In this example, assume that the dataset is being replicated from the first storage system to a second storage system using checkpoint-based replication. For example, the first storage system may send a first checkpoint (at time t=0) to the second storage system, followed by a first set of updates to the dataset, followed by a second checkpoint (at time t=1), followed by a second set of updates to the dataset, followed by a third checkpoint (at time t=2). In such an example, if the second storage system has performed all updates in the first set of updates but has not yet performed all updates in the second set of updates, the copy of the dataset that is stored on the second storage system may be up-to-date until the second checkpoint. Alternatively, if the second storage system has performed all updates in both the first set of updates and the second set of updates, the copy of the dataset that is stored on the second storage system may be up-to-date until the third checkpoint. Readers will appreciate that various types of checkpoints may be used (e.g., metadata only checkpoints), checkpoints may be spread out based on a variety of factors (e.g., time, number of operations, an RPO setting), and so on. In other embodiments, a dataset may be replicated through snapshot-based replication (also referred to as ‘asynchronous replication’). 
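Before turning to snapshot-based replication, the checkpoint mechanism just described can be sketched as follows: the target applies streamed updates, and only advances its consistent point when a checkpoint arrives, since a checkpoint implies that every earlier update has been applied. The stream contents below are illustrative.

    # A minimal sketch of checkpoint-based ('nearly synchronous')
    # replication: the target's copy is consistent as of the most recent
    # checkpoint for which all preceding updates have been applied.

    class ReplicationTarget:
        def __init__(self):
            self.dataset = {}
            self.last_checkpoint = None

        def apply_update(self, key, value):
            self.dataset[key] = value

        def apply_checkpoint(self, checkpoint_id):
            # All updates sent before this checkpoint have been applied, so
            # the copy is now up-to-date through this checkpoint.
            self.last_checkpoint = checkpoint_id

    target = ReplicationTarget()
    stream = [
        ("checkpoint", "t0"),
        ("update", ("blk1", "A")), ("update", ("blk2", "B")),
        ("checkpoint", "t1"),
        ("update", ("blk2", "B'")),
        ("checkpoint", "t2"),
    ]
    for kind, payload in stream:
        if kind == "update":
            target.apply_update(*payload)
        else:
            target.apply_checkpoint(payload)

    print(target.last_checkpoint, target.dataset)  # t2 {'blk1': 'A', 'blk2': "B'"}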
In snapshot-based replication, snapshots of a dataset may be sent from a replication source such as a first storage system to a replication target such as a second storage system. In such an embodiment, each snapshot may include the entire dataset or a subset of the dataset such as, for example, only the portions of the dataset that have changed since the last snapshot was sent from the replication source to the replication target. Readers will appreciate that snapshots may be sent on-demand, based on a policy that takes a variety of factors into consideration (e.g., time, number of operations, an RPO setting), or in some other way. The storage systems described above may, either alone or in combination, be configured to serve as a continuous data protection store. A continuous data protection store is a feature of a storage system that records updates to a dataset in such a way that consistent images of prior contents of the dataset can be accessed with a low time granularity (often on the order of seconds, or even less), and stretching back for a reasonable period of time (often hours or days). These allow access to very recent consistent points in time for the dataset, and also allow access to points in time for a dataset that might have just preceded some event that, for example, caused parts of the dataset to be corrupted or otherwise lost, while retaining close to the maximum number of updates that preceded that event. Conceptually, they are like a sequence of snapshots of a dataset taken very frequently and kept for a long period of time, though continuous data protection stores are often implemented quite differently from snapshots. A storage system implementing a continuous data protection store may further provide a means of accessing these points in time, accessing one or more of these points in time as snapshots or as cloned copies, or reverting the dataset back to one of those recorded points in time. Over time, to reduce overhead, some points in time held in a continuous data protection store can be merged with other nearby points in time, essentially deleting some of these points in time from the store. This can reduce the capacity needed to store updates. It may also be possible to convert a limited number of these points in time into longer duration snapshots. For example, such a store might keep a low granularity sequence of points in time stretching back a few hours from the present, with some points in time merged or deleted to reduce overhead for up to an additional day. Stretching back in the past further than that, some of these points in time could be converted to snapshots representing consistent point-in-time images from only every few hours. Although some embodiments are described largely in the context of a storage system, readers of skill in the art will recognize that embodiments of the present disclosure may also take the form of a computer program product disposed upon computer readable storage media for use with any suitable processing system. Such computer readable storage media may be any storage medium for machine-readable information, including magnetic media, optical media, solid-state media, or other suitable media. Examples of such media include magnetic disks in hard drives or diskettes, compact disks for optical drives, magnetic tape, and others as will occur to those of skill in the art.
Persons skilled in the art will immediately recognize that any computer system having suitable programming means will be capable of executing the steps described herein as embodied in a computer program product. Persons skilled in the art will recognize also that, although some of the embodiments described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative embodiments implemented as firmware or as hardware are well within the scope of the present disclosure. In some examples, a non-transitory computer-readable medium storing computer-readable instructions may be provided in accordance with the principles described herein. The instructions, when executed by a processor of a computing device, may direct the processor and/or computing device to perform one or more operations, including one or more of the operations described herein. Such instructions may be stored and/or transmitted using any of a variety of known computer-readable media. A non-transitory computer-readable medium as referred to herein may include any non-transitory storage medium that participates in providing data (e.g., instructions) that may be read and/or executed by a computing device (e.g., by a processor of a computing device). For example, a non-transitory computer-readable medium may include, but is not limited to, any combination of non-volatile storage media and/or volatile storage media. Exemplary non-volatile storage media include, but are not limited to, read-only memory, flash memory, a solid-state drive, a magnetic storage device (e.g., a hard disk, a floppy disk, magnetic tape, etc.), ferroelectric random-access memory (“RAM”), and an optical disc (e.g., a compact disc, a digital video disc, a Blu-ray disc, etc.). Exemplary volatile storage media include, but are not limited to, RAM (e.g., dynamic RAM). For further explanation,FIG.4sets forth a diagram illustrating a family of volumes and snapshots, as well as how the family of volumes and snapshots changes over time. At a time depicted as time1(414), a volume labelled as volume1(410) is illustrated as including four blocks of data, block A (402), block B (404), block C (406), and block D (408). In such an example, each block (402,404,406,408) in the volume may correlate to some unit of storage (i.e., a block) within storage devices that are included within a storage system. Consider an example in which the storage system includes a plurality of SSDs. In such an example, data may be written to the SSDs in sizes that correlate to the page size (e.g., 4 KB) of the SSD, where a certain number of pages (e.g., 128, 256) form a single block within the SSD. In this example, data may be erased from the SSDs at a block-level granularity. In the example depicted inFIG.4, a snapshot of the volume may be taken periodically. Each snapshot may represent a point-in-time copy of the contents of the volume at the time that the snapshot was taken. As such, in the example depicted inFIG.4, the snapshot labeled as snapshot1of volume1(412) includes the same four blocks as volume1(410), indicating that the contents of volume1(410) have not changed since the point in time that snapshot1of volume1(412) was taken. Readers will appreciate that because a storage system that stores the data that is contained in volume1(410) and snapshot1of volume1(412) may implement techniques such as data deduplication, the contents of each block (402,404,406,408) may only be stored once within the storage system.
As such, although the family of volumes and snapshots depicted inFIG.4at time1(414) includes a total of eight blocks, the storage system may only be required to store one copy of each block, where snapshot1of volume1(412) only includes pointers to the stored copy of each block without requiring that additional copies of each block actually be stored within the storage system. As such, the number of unique blocks that are actually stored within the storage system as the result of supporting the family of volumes and snapshots depicted inFIG.4at time1(414) would be four blocks. In the example depicted inFIG.4, a user overwrites block B (404) of volume1(410), such that volume1(410) now includes block B′ (418). A user may overwrite a particular block within volume1(410), for example, by issuing a request to write data to a logical address that corresponds to the portion of volume1(410) that includes block B (404). Readers will appreciate that although the underlying contents of the physical memory locations that store the data contained in block B (404) may not be altered, data may be written to another physical memory location which may be subsequently mapped to the logical address that corresponds to the portion of volume1(410) that includes block B (404). In the example method depicted inFIG.4, an additional snapshot labelled as snapshot2of volume1(420) is taken after volume1(410) has been updated to include the contents of block B′ (418). As such, snapshot2of volume1(420) represents a point-in-time copy of volume1(410), where that point-in-time is after volume1(410) has been updated to include the contents of block B′ (418). Readers will appreciate that at time2(422), although the family of volumes and snapshots depicted inFIG.4at time2(422) includes a total of twelve blocks, the storage system may only be required to store one copy of each unique block. As such, the number of unique blocks that are actually stored within the storage system as the result of supporting the family of volumes and snapshots depicted inFIG.4at time2(422) would be five blocks, as only a single copy of block A (402), block B (404), block B′ (418), block C (406), and block D (408) would need to be stored in the storage system. In the example depicted inFIG.4, a user subsequently overwrites block C (406) of volume1(410), such that volume1(410) now includes block C′ (426). A user may overwrite a particular block within volume1(410), for example, by issuing a request to write data to a logical address that corresponds to the portion of volume1(410) that includes block C (406). Readers will appreciate that although the underlying contents of the physical memory locations that store the data contained in block C (406) may not be altered, data may be written to another physical memory location which may be subsequently mapped to the logical address that corresponds to the portion of volume1(410) that includes block C (406). In the example method depicted inFIG.4, an additional snapshot labelled as snapshot3of volume1(428) is taken after volume1(410) has been updated to include the contents of block C′ (426). As such, snapshot3of volume1(428) represents a point-in-time copy of volume1(410), where that point-in-time is after volume1(410) has been updated to include the contents of block C′ (426).
Readers will appreciate that at time3(430), although the family of volumes and snapshots depicted inFIG.4at time3(430) includes a total of sixteen blocks, the storage system may only be required to store one copy of each unique block. As such, the number of unique blocks that are actually stored within the storage system as the result of supporting the family of volumes and snapshots depicted inFIG.4at time3(430) would be six blocks, as only a single copy of block A (402), block B (404), block B′ (418), block C (406), block C′ (426), and block D (408) would need to be stored in the storage system. For further explanation,FIG.5sets forth a flow chart illustrating an example method of determining effective space utilization in a storage system that includes a plurality of storage devices (510,512) in accordance with some embodiments of the present disclosure. Although depicted in less detail, the storage system (502) depicted inFIG.5may be similar to the storage systems described above with reference toFIGS.1A-1D,2A-2G, and3A-3B, as well as storage systems that include any combination of the components described in the preceding Figures. The example method depicted inFIG.5includes identifying (504) an amount of data stored within the storage system (502) that is associated with a user-visible entity. In the example method depicted inFIG.5, the user-visible entity may be embodied, for example, as a volume, logical drive, or other representation of a single accessible storage area. The amount of data stored within the storage system (502) that is associated with a user-visible entity may be identified by taking into account the results of various data reduction techniques such as, for example, data compression and data deduplication. As such, the amount of data stored within the storage system (502) that is associated with a user-visible entity represents the amount of physical storage within the storage system that is consumed as a result of supporting the user-visible entity (as opposed to the amount of data that users have attempted to write to the user-visible entity). Consider an example in which a user issues a series of write operations that are directed to the user-visible entity, where the cumulative amount of data that the user has included in such write operations is 10 MB. In such an example, assume that through data compression techniques and data reduction techniques, the storage system only has to store 3 MB of data to complete the write operations. In such an example, the amount of data stored within the storage system (502) that is associated with a user-visible entity would be equal to 3 MB, in spite of the fact that the user believes that they have requested that the storage system store 10 MB of data. In the example method depicted inFIG.5, identifying (504) an amount of data stored within the storage system (502) that is associated with a user-visible entity may be carried out, for example, by identifying each block of data that is stored within the storage system and associated with the user-visible entity, determining the size of each block of data that is stored within the storage system and associated with the user-visible entity, and summing the size of each block of data that is stored within the storage system and associated with the user-visible entity. 
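This kind of block-level accounting can be sketched using the FIG.4 family at time3(430): sixteen blocks are referenced in total across the volume and its three snapshots, but with deduplication only the six distinct blocks are actually stored.

    # A minimal sketch of why the FIG. 4 family consumes only six unique
    # blocks at time 3: with deduplication, each distinct block is stored
    # once, however many volumes or snapshots reference it.

    volume_1   = ["A", "B'", "C'", "D"]
    snapshot_1 = ["A", "B",  "C",  "D"]
    snapshot_2 = ["A", "B'", "C",  "D"]
    snapshot_3 = ["A", "B'", "C'", "D"]

    family = [volume_1, snapshot_1, snapshot_2, snapshot_3]
    referenced_blocks = sum(len(entity) for entity in family)
    unique_blocks = set().union(*family)

    print(referenced_blocks)      # 16 blocks referenced in total
    print(sorted(unique_blocks))  # ['A', 'B', "B'", 'C', "C'", 'D'] -> 6 stored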
As a further example, if the storage system identifies that the storage system has stored 50 blocks of data that are associated with a particular user-visible entity, and each block is 1 MB in size, then the amount of data stored within the storage system (502) that is associated with a user-visible entity would be equal to 50 MB. Readers will appreciate that the organization of data and supporting metadata structures may be useful in identifying (504) an amount of data stored within the storage system (502) that is associated with a user-visible entity, as will be described in greater detail below. The example method depicted inFIG.5also includes identifying (506) an amount of data stored within the storage system (502) that is associated with all snapshots of the user-visible entity. As described above, a snapshot of the user-visible entity may be taken periodically and each snapshot may represent a point-in-time copy of the contents of the user-visible entity at the time that the snapshot was taken. The amount of data stored within the storage system (502) that is associated with all snapshots of the user-visible entity may be identified by taking into account the results of various data reduction techniques such as, for example, data compression and data deduplication. As such, the amount of data stored within the storage system (502) that is associated with all snapshots of the user-visible entity represents the amount of physical storage within the storage system that is consumed as a result of taking snapshots of the user-visible entity (as opposed to the amount of data that is associated with the user-visible entity at the time that the snapshot was taken). Consider an example in which the user-visible entity is a volume that includes 10 MB of data. In such an example, assume that through data compression techniques and data reduction techniques, the storage system only has to store 3 MB of unique data to retain each snapshot of the volume, as the other 7 MB of the volume have not changed since each snapshot was taken and, as such, the storage system does not need to retain an additional copy of the 7 MB of the volume that has not changed since each snapshot was taken given that the storage system already has a copy of this 7 MB chunk of data by virtue of supporting the volume. In such an example, the amount of data stored within the storage system (502) that is associated with all snapshots of the user-visible entity would be equal to 3 MB. Readers will appreciate that, because the storage system (502) depicted inFIG.5is capable of deduplicating data, only those blocks that are unique to the snapshot of the user-visible entity will contribute to the amount of data stored within the storage system (502) that is associated with all snapshots of the user-visible entity. Consider the example described above with reference toFIG.4, where a particular volume (410) originally included four blocks: block A (402), block B (404), block C (406), and block D (408). Furthermore, assume that a first snapshot (412) of the volume (410) was taken, block B (404) was subsequently overwritten with block B′ (418), and a second snapshot (420) was taken of volume1(410). In such an example, the amount of data stored within the storage system (502) that is associated with the user-visible entity (i.e., volume (410)) would be equal to the cumulative size of block A (402), block B′ (418), block C (406), and block D (408), as these blocks represent the contents of the volume.
The amount of data stored within the storage system (502) that is associated with all snapshots of the user-visible entity, however, would be equal to the size of block B (404) as block B (404) is only retained within the storage system (502) in order to retain a complete copy of the first snapshot (412). Readers will appreciate that a complete copy of the first snapshot (412) can be constructed given that block A (402), block C (406), and block D (408) are retained in the storage system to support the volume (410). Continuing with the example described above with reference toFIG.4and the example described in the preceding paragraph, assume that block C (406) was subsequently overwritten with block C′ (426). In such an example, the amount of data stored within the storage system (502) that is associated with the user-visible entity (i.e., volume (410)) would be equal to the cumulative size of block A (402), block B′ (418), block C′ (426), and block D (408), as these blocks represent the contents of the volume. The amount of data stored within the storage system (502) that is associated with all snapshots of the user-visible entity, however, would be equal to the cumulative size of block B (404) and block C (406), as block B (404) is only retained within the storage system (502) in order to retain a complete copy of the first snapshot (412) and block C (406) is only retained within the storage system (502) in order to retain a complete copy of the second snapshot (420). Readers will appreciate that a complete copy of the second snapshot (420) can be constructed given that block A (402), block B′ (418), and block D (408) are retained in the storage system to support the volume (410). The example method depicted inFIG.5also includes reporting (508), in dependence upon the amount of data stored within the storage system (502) that is associated with the user-visible entity and the amount of data stored within the storage system (502) that is associated with all snapshots of the user-visible entity, a total capacity utilization associated with the user-visible entity. The total capacity utilization associated with the user-visible entity may be determined, for example, by summing the amount of data stored within the storage system (502) that is associated with the user-visible entity and the amount of data stored within the storage system (502) that is associated with all snapshots of the user-visible entity. In such an example, reporting (508) the total capacity utilization associated with the user-visible entity may be carried out, for example, by presenting the total capacity utilization associated with the user-visible entity to a system administrator, by sending the total capacity utilization associated with the user-visible entity to a billing module such as a billing module that charges users of the storage system (502) for resources as resources are consumed, and so on.
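A minimal sketch of this attribution rule, applied to the FIG.4 family at time3(430) with 1 MB blocks: blocks present in the live volume count toward the volume, blocks retained only to preserve snapshots count toward the snapshots, and the reported total capacity utilization is the sum of the two.

    # Blocks in the live volume are attributed to the volume; blocks retained
    # only to preserve snapshots are attributed to the snapshots.

    BLOCK_MB = 1

    volume = {"A", "B'", "C'", "D"}
    snapshots = [{"A", "B", "C", "D"},      # snapshot 1
                 {"A", "B'", "C", "D"},     # snapshot 2
                 {"A", "B'", "C'", "D"}]    # snapshot 3

    volume_mb = len(volume) * BLOCK_MB
    snapshot_only = set().union(*snapshots) - volume
    snapshot_mb = len(snapshot_only) * BLOCK_MB

    print(volume_mb)                # 4 MB attributed to the volume
    print(snapshot_mb)              # 2 MB (blocks B and C) attributed to snapshots
    print(volume_mb + snapshot_mb)  # 6 MB total capacity utilization reported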
For further explanation,FIG.6sets forth a flow chart illustrating an additional example method of determining effective space utilization in a storage system in accordance with some embodiments of the present disclosure. The example method depicted inFIG.6is similar to the example method depicted inFIG.5, as the example method depicted inFIG.6also includes identifying (504) an amount of data stored within the storage system (502) that is associated with a user-visible entity, identifying (506) an amount of data stored within the storage system (502) that is associated with all snapshots of the user-visible entity, and reporting (508), in dependence upon the amount of data stored within the storage system (502) that is associated with the user-visible entity and the amount of data stored within the storage system (502) that is associated with all snapshots of the user-visible entity, a total capacity utilization associated with the user-visible entity. In the example method depicted inFIG.6, identifying (504) the amount of data stored within the storage system (502) that is associated with the user-visible entity can include identifying (600) all blocks of data stored within the storage system (502) that are unique to the user-visible entity as well as identifying (602) all blocks of data stored within the storage system (502) that are included in both the user-visible entity and one or more snapshots of the user-visible entity. In the example method depicted inFIG.6, a particular block of data may be associated with the user-visible entity if it is unique to the user-visible entity and not included in any snapshots of the user-visible entity. For example, if a particular block of data is written to a volume and no snapshot of the volume has been taken after the particular block of data was written to the volume, the storage system (502) has still consumed storage resources in order to support the volume. In addition, a particular block of data may be associated with the user-visible entity if it is included in both the user-visible entity and one or more snapshots of the user-visible entity. For example, if a particular block of data is written to a volume and a snapshot of the volume is subsequently taken after the particular block of data was written to the volume, the storage system (502) has still consumed storage resources in order to support the volume. In this instance, however, the snapshot may simply include a reference to the block of data that was written to the volume, as described in greater detail above and as the result of data reduction techniques such as data deduplication. In the example method depicted inFIG.6, identifying (504) the amount of data stored within the storage system (502) that is associated with the user-visible entity can alternatively include identifying (604) blocks of data stored within the storage system (502) that are associated with the user-visible entity as well as one or more other user-visible entities. In the example method depicted inFIG.6, a particular block of data that is associated with the user-visible entity as well as one or more other user-visible entities may receive different treatments depending on the implementation. For example, a particular block of data that is associated with the user-visible entity as well as one or more other user-visible entities may not be counted towards the amount of data stored within the storage system (502) that is associated with the user-visible entity.
Alternatively, a particular block of data that is associated with the user-visible entity as well as one or more other user-visible entities may only have a fractional portion counted towards the amount of data stored within the storage system (502) that is associated with the user-visible entity. Likewise, a particular block of data that is associated with the user-visible entity as well as one or more other user-visible entities may be fully counted towards the amount of data stored within the storage system (502) that is associated with the user-visible entity. For example, if a particular 1 MB block of data that is stored within the storage system is included in volume1and volume2, the particular block of data may not be counted towards the amount of data stored within the storage system (502) that is associated with volume1given that the particular block of data would be stored within the storage system regardless of volume1attempting to store the block of data, the block of data may have a fractional portion (e.g., 0.5 MB) of its size counted towards the amount of data stored within the storage system (502) that is associated with volume1, or the size of the particular block of data (1 MB) may be fully counted towards the amount of data stored within the storage system (502) that is associated with volume1, depending on the implementation. Given that the treatment of such blocks may be different in different implementations, however, it may still be valuable to identify (604) blocks of data stored within the storage system (502) that are associated with the user-visible entity as well as one or more other user-visible entities. In the example method depicted inFIG.6, identifying (506) an amount of data stored within the storage system (502) that is associated with all snapshots of the user-visible entity can include identifying (606) all blocks of data stored within the storage system (502) that are unique to the snapshots of the user-visible entity. In the example method depicted inFIG.6, a particular block of data may be unique to the snapshots of the user-visible entity if the block is included in one or more snapshots of the user-visible entity but the block of data is no longer part of the active dataset represented by the user-visible entity. For example, if a particular block of data is written to a volume, and a snapshot of the volume is taken, and the particular block of data is subsequently overwritten with a new block of data, the particular block of data may be unique to the snapshot as it is no longer part of the dataset contained in the volume. Referring back toFIG.4, block B (404) would be an example of a block of data that is unique to snapshot1of volume1(412) from time2(422) and beyond. In the example method depicted inFIG.6, identifying (506) an amount of data stored within the storage system (502) that is associated with all snapshots of the user-visible entity can alternatively include identifying (608) all blocks of data stored within the storage system (502) that are associated with the snapshots of the user-visible entity as well as one or more other user-visible entities or one or more snapshots of another user-visible entity. Such blocks may receive different treatments depending on the implementation.
For example, a particular block of data that is associated with the snapshots of the user-visible entity as well as one or more other user-visible entities or one or more snapshots of another user-visible entity may not be counted towards the amount of data stored within the storage system (502) that is associated with the snapshots of the user-visible entity. Alternatively, a particular block of data that is associated with the snapshots of the user-visible entity as well as one or more other user-visible entities or one or more snapshots of another user-visible entity may only have a fractional portion counted towards the amount of data stored within the storage system (502) that is associated with the snapshots of the user-visible entity. Likewise, a particular block of data that is associated with the snapshots of the user-visible entity as well as one or more other user-visible entities or one or more snapshots of another user-visible entity may be fully counted towards the amount of data stored within the storage system (502) that is associated with the snapshots of the user-visible entity. For example, if a particular 1 MB block of data that is stored within the storage system is included in snapshot1of volume1and the block of data is also part of volume2, the particular block of data may not be counted towards the amount of data stored within the storage system (502) that is associated with the snapshots of volume1given that the particular block of data would be stored within the storage system regardless of the presence of snapshot1. Alternatively, the block of data may have a fractional portion (e.g., 0.5 MB) of its size counted towards the amount of data stored within the storage system (502) that is associated with the snapshots of volume1, or the size of the particular block of data (1 MB) may be fully counted towards the amount of data stored within the storage system (502) that is associated with the snapshots of volume1, depending on the implementation. Given that the treatment of such blocks may be different in different implementations, however, it may still be valuable to identify (608) all blocks of data stored within the storage system (502) that are associated with the snapshots of the user-visible entity as well as one or more other user-visible entities or one or more snapshots of another user-visible entity. For further explanation,FIG.7sets forth a flow chart illustrating an additional example method of determining effective space utilization in a storage system in accordance with some embodiments of the present disclosure. The example method depicted inFIG.7is similar to the example method depicted inFIG.5, as the example method depicted inFIG.7also includes identifying (504) an amount of data stored within the storage system (502) that is associated with a user-visible entity, identifying (506) an amount of data stored within the storage system (502) that is associated with all snapshots of the user-visible entity, and reporting (508), in dependence upon the amount of data stored within the storage system (502) that is associated with the user-visible entity and the amount of data stored within the storage system (502) that is associated with all snapshots of the user-visible entity, a total capacity utilization associated with the user-visible entity.
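Before turning to the details of the method depicted inFIG.7, the three treatments described above for shared blocks (not counted, fractionally counted, or fully counted) can be summarized in a short sketch; the policy names are illustrative.

    # A minimal sketch of the three attribution treatments for a block shared
    # by several entities: not counted, fractionally counted, or fully counted
    # toward one entity.

    def attributed_size(block_mb, sharers, policy):
        if policy == "none":
            return 0.0 if sharers > 1 else float(block_mb)
        if policy == "fractional":
            return block_mb / sharers
        if policy == "full":
            return float(block_mb)
        raise ValueError(policy)

    # A 1 MB block shared by volume 1 and volume 2 (two sharers):
    for policy in ("none", "fractional", "full"):
        print(policy, attributed_size(1, 2, policy))  # 0.0, 0.5, 1.0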
In the example method depicted inFIG.7, identifying (506) an amount of data stored within the storage system (502) that is associated with all snapshots of the user-visible entity can include determining (704), at a plurality of logical offsets, whether a block of data at the logical offset within a particular snapshot is identical to a block of data at the logical offset within the user-visible entity. Readers will appreciate that when a block of data at a particular logical offset within a particular snapshot is identical to a block of data at the logical offset within the user-visible entity, the contents of the particular snapshot and the user-visible entity at the logical offset are identical. As such, and as described in greater detail below, in storage systems that implement data reduction techniques such as data deduplication, the presence of an identical block of data in a user-visible entity and in a snapshot means that only one copy of the data block must actually be retained in the system. As such, taking a snapshot of the volume does not require that additional storage resources be consumed as no additional data is actually stored in the storage system. Readers will appreciate that when a block of data at a particular logical offset within a particular snapshot is not identical to a block of data at the logical offset within the user-visible entity, however, the storage system must store two blocks of data (a first block that represents the content of the volume at a particular logical offset and a second block that represents the contents of the snapshot at a particular logical offset). As such, in situations where a block of data at a particular logical offset within a particular snapshot is not identical to a block of data at the logical offset within the user-visible entity, the block of data within the snapshot that does not match the block of data in the volume must be attributed to the snapshot for the purposes of identifying (506) an amount of data stored within the storage system (502) that is associated with all snapshots of the user-visible entity. Consider the example depicted inFIG.4, assuming for the purposes of this example that each block of data depicted inFIG.4is 1 MB in size and that each volume and snapshot is being examined at time3(430). In such an example, an examination of the blocks of data stored within volume1(410), snapshot3of volume1(428), snapshot2of volume1(420), and snapshot1of volume1(412) at an offset of zero would reveal that the contents of volume1(410), snapshot3of volume1(428), snapshot2of volume1(420), and snapshot1of volume1(412) are identical, as each entity includes block A (402) at an offset of zero. At an offset of 1 MB, however, an examination of the blocks of data stored within volume1(410), snapshot3of volume1(428), snapshot2of volume1(420), and snapshot1of volume1(412) would reveal that the contents of volume1(410), snapshot3of volume1(428), and snapshot2of volume1(420) are not identical to the contents of snapshot1of volume1(412), as snapshot1of volume1(412) includes block B (404) at an offset of 1 MB and volume1(410), snapshot3of volume1(428), and snapshot2of volume1(420) include block B′ (418) at an offset of 1 MB. As such, the total amount of data stored within the storage system (502) at an offset of 1 MB for this family of snapshots and volumes would be equal to 2 MB, as there are two distinct 1 MB blocks stored at an offset of 1 MB by the members of this family. 
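A minimal sketch of this per-offset comparison over the FIG.4 family at time3(430); at an offset of 1 MB both block B and block B′ must be retained, so 2 MB is stored at that offset.

    # At each logical offset, count the distinct blocks stored across the
    # family; identical blocks at an offset are stored only once.

    BLOCK_MB = 1

    # block held at each logical offset (in block units) by each family member
    family = {
        "volume1":   {0: "A", 1: "B'", 2: "C'", 3: "D"},
        "snapshot1": {0: "A", 1: "B",  2: "C",  3: "D"},
        "snapshot2": {0: "A", 1: "B'", 2: "C",  3: "D"},
        "snapshot3": {0: "A", 1: "B'", 2: "C'", 3: "D"},
    }

    for offset in range(4):
        distinct = {member[offset] for member in family.values()}
        print(f"offset {offset} MB: {len(distinct) * BLOCK_MB} MB stored {sorted(distinct)}")
    # offset 1 MB prints 2 MB, since both B and B' must be retained.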
Readers will appreciate that although the preceding paragraphs generally relate to embodiments where data stored within the storage system is attributed to snapshots only when the data is unique to snapshots and data stored within the storage system is attributed to volumes (or other user-visible entities) only when the data is included within the volume (even if it is not unique to the volume), other embodiments are contemplated so long as the total amount of data attributed to a particular family of volumes and snapshots is correctly calculated. For example, in some embodiments, data stored within the storage system may be attributed to a volume only when the data is unique to the volume and data stored within the storage system may be attributed to snapshots only when the data is included within any of the snapshots (even if it is not unique to the snapshots). In such embodiments, identifying (504) an amount of data stored within the storage system (502) that is associated with a user-visible entity can include determining (702), at a plurality of logical offsets, whether a block of data at the logical offset within a particular snapshot is identical to a block of data at the logical offset within the user-visible entity. Readers will further appreciate that although the preceding paragraphs relate to embodiments where a determination is made as to whether, at a plurality of logical offsets, a block of data at the logical offset within a particular snapshot is identical to a block of data at the logical offset within the user-visible entity, other embodiments are possible. For example, in other embodiments, identifying (504) an amount of data stored within the storage system (502) that is associated with a user-visible entity and identifying (506) an amount of data stored within the storage system (502) that is associated with all snapshots of the user-visible entity may take the logical offsets into account as part of a process of determining what data is unchanged within a user-visible entity, as the data that is unchanged will not be separately stored as part of taking snapshots of the user-visible entity. Likewise, identifying (504) an amount of data stored within the storage system (502) that is associated with a user-visible entity and identifying (506) an amount of data stored within the storage system (502) that is associated with all snapshots of the user-visible entity can occur in a way where the logical offsets are not taken into account when determining what data is shared between user-visible entities, shared between a user-visible entity and one or more snapshots, and so on. For further explanation,FIG.8sets forth a flow chart illustrating an example method of sizing resources for a replication target in accordance with some embodiments of the present disclosure. Although depicted in less detail, the storage system800depicted inFIG.8may be similar to the storage systems described above, as well as storage systems that include any combination of the components described in the preceding Figures. Additionally, although the following description describes a storage system as performing the method, the reader will understand that the method may be performed by a system other than the storage system on which the replication source is stored. For example, an external analytics application may perform the method and report the resources required to replicate the replication source. Resources, as used in the following description, are system attributes for hosting a storage object.
A storage object, as used in the following description, is an item of storage such as a storage volume, a dataset, a file system, a data object, or other item of storage. The system attributes may be physical, such as the physical hardware for hosting the storage object, or they may be virtual, such as a cloud computing instance or other virtual system. In some examples, the system attributes may include storage space, compute resources, network resources, and/or other system attributes. The following description will generally use a size of data storage when describing a storage resource, but the reader will recognize that other resources may be dependent on the storage space. For example, after determining a size of a replication target, resources for supporting the target may be determined based on the size of the replication target. Additionally, other resources may be sized based on a recovery point objective and an amount of data that is expected to be transmitted between the replication source and the replication target when updating the replication target.

Replication, as used in the following description, is the process of making a copy of a replication source at a replication target. The replication source is a storage object that is to be replicated. A replication target is a destination for storing the copy of the replication source. The replication source may be stored at a source storage system, and the replication target may be located at the source storage system or at a target storage system remote from the source storage system. The source storage system may include additional data related to the replication source such as snapshot data, transactional data, and/or other information. This additional data contributes to the footprint of the replication source and consumes resources of the source storage system but may not be required for the replication target. Therefore, the resources required by the footprint of the replication source may be different from the resources required to replicate the replication source to the replication target. For example, sizing the resources at the replication target to match the footprint of the replication source may result in unutilized resources at the replication target, since the additional data may not be needed at the replication target. On the other hand, if the target resources are sized according to the replication source and not the footprint of the replication source, the target resources may be too low to accommodate ongoing replication operations of the replication source. The example method depicted in FIG. 8 provides a way of sizing the target resources that may avoid undersizing and/or oversizing the target resources for ongoing replication of the replication source.

The example method depicted in FIG. 8 includes determining 802 an initial resource requirement for a replication source. Determining 802 an initial resource requirement for a replication source may be carried out by determining a size of the replication source. For example, a storage system 800 may maintain a record of the current size (e.g., the amount of storage capacity that is used to hold user data, metadata, and so on) of the replication source. The storage system 800 may then access the record to determine the current size of the replication source. In other examples, the storage system 800 may periodically report the size of the replication source to a data collector.
For example, the storage system 800 may communicate the size of the replication source to an analytics application as part of the normal operation and management of the storage system 800. The analytics application may then be queried to find the current size of the replication source. Or, in some examples, the analytics application may use the reported size of the replication source to determine a resource requirement for the replication source.

The example method depicted in FIG. 8 also includes determining 804 a retention resource requirement for the replication source. A retention resource requirement, as used in the following description, is an amount of resources required to accommodate a data retention policy of the replication target. For example, the replication target may be required to store not only a copy of the current replication source, but also copies of the replication source as the replication source existed at specified time intervals (e.g., snapshots) for a length of time described in the data retention policy. The source storage system 800 may have a replication schedule that replicates the replication source to the replication target at intervals according to a recovery point objective (RPO). For example, the RPO policy may specify that the storage system 800 should replicate the replication source every four hours and the data retention policy may specify that the replication target maintains seven days' worth of data. In such an example, the replication target would be required to store the past seven days of replicated data with six versions per day, or forty-two different versions of the replicated data. The resources required to store the forty-two different versions of the replication source may be much less than storing forty-two actual copies of the replication source, since the replication source may not change significantly between adjacent time instances and unchanged data does not need to be duplicated.

The amount of data that changes between time instances may be estimated by examining the historical throughput for writing to the replication source. For example, the historical write throughput may be logged by the storage system 800 or reported to an analytics application. The historical write throughput may then be aggregated to determine the total amount of data written to the replication source for the length of time specified in the data retention policy. However, the total amount of data written to the replication source may overestimate the retention resource requirement since the data may be written to the same location in the replication source between time instances. To compensate for the possibility of writing to the same locations between time periods, the total amount of data written to the replication source may be discounted. For example, the total amount of data written to the replication source may be discounted by sixty percent to determine the retention resource requirement. Sixty percent is used only as an example; other percentages may be used. A minimal sketch of this retention calculation appears below.
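The following C++ sketch illustrates the retention sizing just described. It is an illustration under stated assumptions rather than a prescribed implementation: the function and parameter names are hypothetical, and the sixty percent discount is, as noted above, only an example value:

    #include <cstdint>

    // Retained versions = retention days * versions per day (e.g., a
    // four-hour RPO over seven days yields 7 * 6 = 42 versions), but
    // because unchanged data is not duplicated, the space needed is
    // estimated from the write volume over the retention window,
    // discounted for locations that were overwritten.
    std::uint64_t retentionRequirement(double bytesWrittenPerDay,
                                       double retentionDays,
                                       double overwriteDiscount = 0.60) {
        double totalWritten = bytesWrittenPerDay * retentionDays;
        return static_cast<std::uint64_t>(totalWritten
                                          * (1.0 - overwriteDiscount));
    }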
The example method depicted in FIG. 8 also includes reporting 806, in dependence on the initial resource requirement and the retention resource requirement, a total resource requirement for replicating the replication source. The total resource requirement, as used in the following description, is the estimated total amount of resources needed to host the replication target. The total resource requirement may be a function of the initial resource requirement and the retention resource requirement. In some examples, the total resource requirement may be the sum of the initial resource requirement and the retention resource requirement. For example, if the initial resource requirement was determined to be eight terabytes of storage and the retention resource requirement was determined to be two terabytes of storage, the total resource requirement would be ten terabytes of storage.

Reporting 806 the total resource requirement for replicating the replication source may be carried out by communicating the total resource requirement to a user or system. For example, the storage system 800 may send a message to a user interface causing the user interface to display a message communicating the total resource requirement. Or, in another example, the storage system 800 may send a message to another computing system which may then present the total resource requirement to a user or system. In yet another example, the storage system 800 may transmit a message including information describing the total resource requirement to an application, which may use the information to automate the creation of the replication target. In other instances, an application separate from the storage system 800 may report the total resource requirement. For example, an analytics application may present the total resource requirement to a user of the analytics application.

For further explanation, FIG. 9 sets forth a flow chart illustrating another example method of sizing resources for a replication target according to embodiments of the present disclosure. The method of FIG. 9 is similar to the method of FIG. 8 in that the method of FIG. 9 also includes: determining 802 an initial resource requirement for a replication source, determining 804 a retention resource requirement for the replication source, and reporting 806, in dependence on the initial resource requirement and the retention resource requirement, a total resource requirement for replicating the replication source.

The method of FIG. 9 differs from the method of FIG. 8 in that, in the method of FIG. 9, determining the initial resource requirement for the replication source includes determining 902 a size of a footprint of the replication source. Determining 902 the size of the footprint of the replication source may be carried out by finding the total storage space occupied by data associated with the replication source. The total storage space occupied by the replication source may include information other than the replication source itself, such as snapshots, transaction data, and/or other information. In some examples, a storage system 800 may track the size of the footprint of a replication source and periodically report the size of the footprint to an analytics application.

The method of FIG. 9 further differs from the method of FIG. 8 in that, in the method of FIG. 9, determining 802 the initial resource requirement for the replication source further includes adjusting 904 the size of the footprint to compensate for existing snapshots. Adjusting the size of the footprint to compensate for existing snapshots may be carried out by subtracting the size of existing snapshots from the size of the footprint. For example, a storage system 800 may track the size of snapshot data for a storage system 800. Or, in some examples, the storage system 800 may report the size of snapshot data to an analytics application. The storage system 800, or analytics application, may adjust the size of the footprint by subtracting the reported size of the snapshot data from the footprint of the replication source, as in the sketch below.
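Combining the steps above, the footprint adjustment and the total requirement can be sketched as follows. Again, this is a minimal, hypothetical illustration; the names are not taken from the disclosure:

    #include <cstdint>

    // Snapshots contribute to the source footprint but need not be
    // replicated, so they are subtracted when sizing the initial copy.
    std::uint64_t initialRequirement(std::uint64_t footprintBytes,
                                     std::uint64_t snapshotBytes) {
        return footprintBytes - snapshotBytes;
    }

    // Total requirement as the sum described above,
    // e.g., 8 TB initial + 2 TB retention = 10 TB total.
    std::uint64_t totalRequirement(std::uint64_t initialBytes,
                                   std::uint64_t retentionBytes) {
        return initialBytes + retentionBytes;
    }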
It may be beneficial to determine the initial resource requirement for a replication source using the adjusted footprint, compared to finding the initial resource requirement of the replication source as reported by the storage system 800, because the size of the replication source reported by the storage system 800 may not accurately reflect the actual amount of data required to store the replication source. For example, data reduction techniques such as compression and deduplication may result in the space required to store a replication source being smaller than the reported size of the replication source. Determining the footprint of the replication source and adjusting the footprint to compensate for existing snapshot data may provide a better estimate of the amount of space required to host the replication source.

For further explanation, FIG. 10 sets forth a flow chart illustrating another example method of sizing resources for a replication target according to embodiments of the present disclosure. The method of FIG. 10 is similar to the method of FIG. 8 in that the method of FIG. 10 also includes: determining 802 an initial resource requirement for a replication source, determining 804 a retention resource requirement for the replication source, and reporting 806, in dependence on the initial resource requirement and the retention resource requirement, a total resource requirement for replicating the replication source.

The method of FIG. 10 differs from the method of FIG. 8 in that, in the method of FIG. 10, determining 804 a retention resource requirement for the replication source includes aggregating 1002 historical write throughput over a time interval for the replication source, wherein the time interval corresponds to a retention period for the replication target. Write throughput of the storage system 800 may be monitored by the storage system 800 and information describing the write throughput may be saved in a log. In other examples, the write throughput of the storage system 800 may be reported to an analytics application, which may store data describing the write throughput. The time interval is a period of time that corresponds to a retention period for the replication target. For example, if a storage policy for the replication target indicates that seven days of data should be maintained, the time interval would be seven days.

In some examples, the historical write throughput may be estimated based on characteristics of the storage system 800 hosting the replication source. For example, if data segments are a known size and the storage system 800 or analytics application tracks when segments are activated, the amount of write throughput may be estimated by multiplying the size of the data segment by the rate at which the data segments are activated. Using segment activation data may be more accurate than using raw write throughput to the storage system, since segment activation may account for compression and deduplication at the replication source.

Aggregating 1002 historical write throughput over a time interval for the replication source may be carried out by analyzing the logged write throughput to determine a typical throughput for any given time and multiplying the typical throughput by the time interval. For example, the storage system 800 or analytics application may access the write throughput log for the past seven days and find the average write throughput. Or, in another example, the storage system 800 or analytics application may find a percentile, such as a 90th percentile, of the throughput.
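A percentile-based aggregation of a throughput log can be sketched in C++ as follows. This is an illustrative sketch, not the disclosed implementation; it assumes a nonempty log of per-day write volumes and uses a simple nearest-rank percentile:

    #include <algorithm>
    #include <vector>

    // Aggregate per-day write volumes over the retention window using a
    // percentile (e.g., 0.90 for the 90th percentile) rather than a mean,
    // so that sizing is not skewed by occasional quiet days.
    double aggregateThroughput(std::vector<double> bytesPerDay,
                               double percentile,
                               double retentionDays) {
        std::sort(bytesPerDay.begin(), bytesPerDay.end());
        std::size_t idx = static_cast<std::size_t>(
            percentile * (bytesPerDay.size() - 1));
        return bytesPerDay[idx] * retentionDays; // bytes over the window
    }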
The aggregated historical write throughput may be used to estimate an amount of data that will need to be retained at the replication target to meet the requirements of the storage policy. The aggregated historical write throughput may be adjusted to compensate for other factors that may change the amount of data stored at the replication target. For example, the aggregated historical write throughput may be reduced by a set percentage to determine a retention resource requirement.

For further explanation, FIG. 11 sets forth a flow chart illustrating another example method of sizing resources for a replication target according to embodiments of the present disclosure. The method of FIG. 11 is similar to the method of FIG. 10 in that the method of FIG. 11 also includes: determining 802 an initial resource requirement for a replication source, determining 804 a retention resource requirement for the replication source, and reporting 806, in dependence on the initial resource requirement and the retention resource requirement, a total resource requirement for replicating the replication source, wherein determining 804 a retention resource requirement for the replication source further comprises aggregating 1002 historical write throughput over a time interval for the replication source, wherein the time interval corresponds to a retention period for the replication target.

The method of FIG. 11 differs from the method of FIG. 10 in that, in the method of FIG. 11, the time interval comprises a plurality of time periods and aggregating 1002 historical write throughput over a time interval for the replication source comprises compensating 1102 for overwrites during each time period of a plurality of time periods. A time period is a period of time corresponding to a replication schedule for replicating the replication source. The replication schedule may correspond to a recovery point objective. The total write throughput for each time period may overestimate the total amount of data that needs to be replicated to the replication target. If a write operation writes to the same location multiple times during a time period, not every write operation will result in data being replicated to the replication target. For example, if a portion of data were rewritten five times during a time period, only the last write operation would need to be replicated to the replication target since the portion of data resulting from the first four write operations would not be current at the time of the replication.

Compensating for overwrites during each time period of a plurality of time periods may be carried out by adjusting the historical write throughput for that time period to account for overwrites. In some examples, the adjustment may be done by multiplying the throughput for that period by a set percentage or by a variable percentage depending on the amount of data that was written. For example, if it were determined that high write throughput generally corresponds to new data as opposed to overwrites, the percentage may be increased. On the other hand, if it were determined that high write throughput generally corresponded to increased overwrites, then the percentage may be decreased. A minimal sketch of this per-period compensation follows.
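The per-period compensation might be sketched as follows; the keepFraction callback, which returns the estimated fraction of a period's writes that survive to the end of the period, is a hypothetical device for expressing the variable percentage described above:

    #include <functional>
    #include <vector>

    // Sum per-period write volumes, keeping only the estimated fraction
    // of each period's writes that is still current when the period ends
    // (i.e., was not overwritten within the period).
    double compensatedVolume(const std::vector<double>& bytesPerPeriod,
                             const std::function<double(double)>& keepFraction) {
        double total = 0.0;
        for (double written : bytesPerPeriod)
            total += written * keepFraction(written);
        return total;
    }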
For further explanation, FIG. 12 sets forth a flow chart illustrating another example method of sizing resources for a replication target according to embodiments of the present disclosure. The method of FIG. 12 is similar to the method of FIG. 11 in that the method of FIG. 12 also includes: determining 802 an initial resource requirement for a replication source, determining 804 a retention resource requirement for the replication source, and reporting 806, in dependence on the initial resource requirement and the retention resource requirement, a total resource requirement for replicating the replication source, wherein determining 804 a retention resource requirement for the replication source further comprises aggregating 1002 historical write throughput over a time interval for the replication source, wherein the time interval corresponds to a retention period for the replication target, and wherein aggregating 1002 historical write throughput over a time interval for the replication source comprises compensating 1102 for overwrites during each time period of a plurality of time periods.

The method of FIG. 12 differs from the method of FIG. 11 in that, in the method of FIG. 12, compensating 1102 for overwrites includes estimating 1202 an amount of overwrites based on historical data. For example, a storage system 800 may track which storage operations overwrite existing data and which storage operations write new data. The storage system 800 may store the information and/or transmit the information to an analytics application. The stored information may then be used to determine how much data is historically overwritten. The stored information may be fine-grained to provide a highly accurate estimate of the amount of data overwritten for a time period, or in some examples the stored information may be coarse to provide a rough estimate of the amount of data overwritten. Estimating an amount of overwrites based on historical data may be carried out by the storage system 800 or analytics application examining historical data for a time period and determining what percentage of the total amount of writes for the time period was overwritten. The write throughput for that time period may then be adjusted based on the historical data. In some examples in which overwrite data is nonexistent, the storage system 800 or analytics application may estimate the amount of data that is overwritten based on the historical overwrites of similar replication sources.

For further explanation, FIG. 13 sets forth a flow chart illustrating another example method of sizing resources for a replication target according to embodiments of the present disclosure. The method of FIG. 13 is similar to the method of FIG. 8 in that the method of FIG. 13 also includes: determining 802 an initial resource requirement for a replication source, determining 804 a retention resource requirement for the replication source, and reporting 806, in dependence on the initial resource requirement and the retention resource requirement, a total resource requirement for replicating the replication source.

The method of FIG. 13 differs from the method of FIG. 8 in that the method of FIG. 13 further includes determining 1302 an amount of potential deduplication between the replication source and at least one other data source. Determining 1302 an amount of potential deduplication between the replication source and at least one other data source may be carried out by examining the replication source and at least one other data source to identify opportunities for deduplication.
For example, hashes of data blocks of the replication source may be compared with hashes of other data stored at a storage system 800 hosting the replication target to determine if there are opportunities for deduplication. In some examples, potential opportunities for deduplication may only be considered if the replication source and the replication target belong to the same entity. In another example, no actual comparison of the replication source and the at least one other data source may be performed. Instead, the amount of deduplication may be estimated based on historical information considering characteristics of the data such as whether similar data is stored at the storage system hosting the replication target, the confidentiality of information, the type of information, and other characteristics.

For further explanation, FIG. 14 sets forth a flow chart illustrating another example method of sizing resources for a replication target according to embodiments of the present disclosure. The method of FIG. 14 is similar to the method of FIG. 8 in that the method of FIG. 14 also includes: determining 802 an initial resource requirement for a replication source, determining 804 a retention resource requirement for the replication source, and reporting 806, in dependence on the initial resource requirement and the retention resource requirement, a total resource requirement for replicating the replication source.

The method of FIG. 14 differs from the method of FIG. 8 in that, in the method of FIG. 14, reporting 806 a total resource requirement for replicating the replication source is further in dependence on an expected growth rate of the replication source 1402. An expected growth rate may be estimated based on a past growth rate of the replication source or may be manually selected based on an expectation of a user. For example, a storage system 800 or analytics application may analyze the past growth of the replication source and determine an expected growth rate. Or a consumer of the replication source may input a growth rate based on business needs. Sizing the replication target in dependence on an estimated growth rate may allow the replication source to be replicated to the replication target in the future, after the replication source has grown. A minimal sketch of growth-adjusted sizing follows.
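For illustration, a compound growth projection of the total requirement might look like the following sketch; the use of compound growth here is an assumption, as the disclosure does not prescribe a particular growth model:

    #include <cmath>
    #include <cstdint>

    // Project the total requirement forward so the target remains large
    // enough as the replication source grows.
    std::uint64_t growthAdjusted(std::uint64_t totalBytes,
                                 double annualGrowthRate, // e.g., 0.20 = 20%
                                 double years) {
        return static_cast<std::uint64_t>(
            totalBytes * std::pow(1.0 + annualGrowthRate, years));
    }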
One or more embodiments may be described herein with the aid of method steps illustrating the performance of specified functions and relationships thereof. The boundaries and sequence of these functional building blocks and method steps have been arbitrarily defined herein for convenience of description. Alternate boundaries and sequences can be defined so long as the specified functions and relationships are appropriately performed. Any such alternate boundaries or sequences are thus within the scope and spirit of the claims. Further, the boundaries of these functional building blocks have been arbitrarily defined for convenience of description. Alternate boundaries could be defined as long as the certain significant functions are appropriately performed. Similarly, flow diagram blocks may also have been arbitrarily defined herein to illustrate certain significant functionality. To the extent used, the flow diagram block boundaries and sequence could have been defined otherwise and still perform the certain significant functionality. Such alternate definitions of both functional building blocks and flow diagram blocks and sequences are thus within the scope and spirit of the claims.

One of average skill in the art will also recognize that the functional building blocks, and other illustrative blocks, modules and components herein, can be implemented as illustrated or by discrete components, application specific integrated circuits, processors executing appropriate software and the like or any combination thereof. While particular combinations of various functions and features of the one or more embodiments are expressly described herein, other combinations of these features and functions are likewise possible. The present disclosure is not limited by the particular examples disclosed herein and expressly incorporates these other combinations.
11861171 | DETAILED DESCRIPTION

This disclosure relates to integrated circuits (ICs) and, more particularly, to a regular expression, or "regex", processing system for an IC. In accordance with the inventive arrangements described within this disclosure, a circuit-based regular expression processing system is described. The regular expression processing system is capable of providing improved performance compared to other regular expression processing solutions that rely on central processing units (CPUs) executing program code. Further, the example implementations described herein leverage improved compiler functionality to support a larger number of regular expressions than are supported by existing regular expression processing circuits. In one or more examples, the regular expression processing system provides an improved data path that achieves greater throughput for determining matches in a data stream for a given regular expression compared to other regular expression processing circuits. In one or more examples, the complexity of a dynamic scheduler is avoided. That is, a dynamic scheduler is not required to dispatch work in parallel. The example implementations described herein utilize pipelining to consume multiple streams per processing element (e.g., engine), thereby improving timing and throughput of the regular expression processing system.

In one aspect, the regular expression processing system is implemented as a type of non-deterministic finite automata (NFA) machine. An NFA machine can be mapped one-to-one with a unique finite automaton. An NFA machine, unlike other technologies that use backtracking, matches input strings in a beginning to ending fashion. Once an NFA machine is built from a regular expression, any new character of an input string transitions the NFA machine from a current set of active states to a next set of active states. When the NFA machine encounters, or hits, one of the final states, the NFA machine indicates that the regular expression is matched.

In accordance with the inventive arrangements described herein, the regular expression processing system utilizes a table of state transition instructions to detect matches in received data for a given regular expression. The regular expression processing system is capable of processing a data stream (e.g., an input string) by transitioning through the state transition instructions of the table, which are stored in a multi-port memory. The table is specific to a particular regular expression. The multi-ported nature of the memory is leveraged to provide improved, e.g., faster, processing of the streaming input strings. The regular expression processing system is runtime configurable in that different tables of state transition instructions corresponding to different regular expressions may be loaded into the multi-port memory over time and during runtime (e.g., in real time) of the IC to begin applying such other regular expressions to received input data to detect matches in the input data.

In one or more other example implementations, the regular expression processing system is capable of tracking active paths of the regular expression while processing the data stream. Different paths may be created and stored in the regular expression engines along with priority data for the paths. This allows the regular expression processing system to implement and follow, in hardware, path preferences that are included in the regular expression language.
As such, the regular expression processing system is capable of indicating the particular path taken in cases where matches are determined from a data stream for a given regular expression. In one or more example implementations, the regular expression processing system is capable of performing matching operations as described herein and capture operations. In addition to detecting whether a particular portion of a data stream (e.g., a string) matches a specified regular expression, the regular expression processing system is capable of capturing sub-strings of the data stream that match capture sub-expressions of the regular expression. The inventive arrangements described within this disclosure provide a hardware implementation of a regular expression processing system capable of performing capture that efficiently utilizes memory and other circuit resources of the IC in which the hardware is implemented.

FIG. 1 illustrates an example compilation flow performed by a regular expression compiler 100. The regular expression compiler 100 may be implemented as computer-executable program code that may be executed by a data processing system. An example of a data processing system is described herein in connection with FIG. 15 (e.g., data processing system 1500). In the example of FIG. 1, regular expression compiler 100 includes a lexical analyzer 104, a parser 108, an NFA builder 112, a hardware deterministic finite automata (HFA) builder 116, and an NFA Rules Register (NRR) generator 120.

In the example, a regular expression 102 is provided to lexical analyzer 104. Lexical analyzer 104 operates on the regular expression 102 to generate a token stream 106, e.g., a stream of lexical tokens. Parser 108 consumes and operates on the token stream 106 to generate a plurality of syntax nodes 110. A "syntax node" is an abstraction of an element of the regular expression language. For example, in accordance with the inventive arrangements, for regular expressions, a "GenericChar" syntax node is created that represents a generic character. The generic character may be either "." or a bracket-enclosed expression such as "[a-z]". Other examples of syntax nodes include operators such as "*" and "?". A syntax node may be represented in a high-level programming language (e.g., C++) as a class object and has members specific to the type of syntax node. For example, a generic character syntax node would have a list of the characters that are included. An operator syntax node has a field indicating whether the operator is greedy or lazy. A minimal sketch of such syntax node objects appears below.

In one example, the parser 108 is implemented as a recursive descent parser with a single production encompassing all expressions and a second production to collect character class (e.g., "[a-zA-Z0-9]") tokens into a single syntax node. A recursive descent parser is a type of top-down parser that uses a parsing strategy that first looks at the highest level grammar rule and works down through the non-terminals of the grammar rules. In an example, the parser 108 uses a simple grammar rule that accepts a sequence of general regular expression tokens from the token stream 106, and a second grammar rule that accepts regular expression tokens that make up a character class, also from the token stream, to output a sequence of infix syntax nodes. The parser 108 then uses the Shunting-yard Algorithm to convert that sequence of infix syntax nodes into a Reverse Polish Notation (RPN) vector of syntax nodes (e.g., syntax nodes 110).
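As noted above, syntax nodes may be represented as C++ class objects. The following sketch is one hypothetical way to express them; the disclosure does not specify these type names or fields, so all of them are illustrative assumptions:

    #include <vector>

    enum class NodeType { Literal, GenericChar, Concat, Or, Question, Star, Plus };

    // Base class for all syntax nodes; each node carries its type.
    struct SyntaxNode {
        NodeType type;
        virtual ~SyntaxNode() = default;
    protected:
        explicit SyntaxNode(NodeType t) : type(t) {}
    };

    // A generic character or character class: the characters it matches.
    struct GenericCharNode : SyntaxNode {
        std::vector<char> chars; // e.g., 'a' through 'z' for "[a-z]"
        GenericCharNode() : SyntaxNode(NodeType::GenericChar) {}
    };

    // An operator node such as "*" or "?", with its greedy/lazy flag.
    struct OperatorNode : SyntaxNode {
        bool greedy = true;
        explicit OperatorNode(NodeType t) : SyntaxNode(t) {}
    };

The parser's output is then an RPN vector of such nodes (e.g., a std::vector of SyntaxNode pointers), which the NFA builder consumes as described next.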
In general, the Shunting-yard Algorithm is a method of parsing a mathematical expression specified in infix notation that is capable of producing either a postfix notation string, also known as RPN, or an abstract syntax tree (AST). In general, the lexical analyzer 104 and the parser 108 of the regular expression compiler 100 operate according to standard computer science practices.

NFA builder 112 operates on the syntax nodes 110 to build an NFA graph 114. That is, syntax nodes are the inputs to the fragment-building process performed by NFA builder 112. When the NFA builder 112 sees a GenericChar type of syntax node, for example, NFA builder 112 creates a corresponding fragment, an example of which is illustrated in FIG. 3. A "fragment" is a portion of a graph having states and edges. A fragment also has a start state and a list of the end edges. NFA builder 112 is capable of generating an NFA graph 114 from the fragments created from the syntax nodes 110 using a modified version of a technique described in Cox, "Regular Expression Matching Can Be Simple And Fast," 2007, which is incorporated herein by reference. In general, NFA builder 112 is capable of incrementally accreting fragments together into larger fragments until all of the syntax nodes 110 have been consumed and just one fragment remains that represents the entire NFA graph 114. During fragment building, as performed by NFA builder 112, a fragment stack holds constructed fragments.

In one aspect, the NFA graph 114 that is built is one that is better suited to a hardware or a circuit-based implementation (e.g., as opposed to software executed by a processor). Rather than building each state of the NFA graph 114 to have at most one outbound edge for a character, in the instant case, the NFA builder 112 builds states to have multiple outbound edges. That is, the NFA builder 112 is capable of building states having multiple outbound edges, e.g., one outbound edge for each character. The edge is labeled with the character. Additionally, states of NFA graph 114 may include self-edges. A self-edge is an edge having a destination that is the same state as the start state. In addition, the use of empty, or epsilon, edges is minimized. Epsilon edges are typically expensive in terms of performance. The structural differences to the NFA graph 114 described herein to support a hardware implementation lead to further differences in how the NFA graph 114 is processed. An example implementation of the process used by NFA builder 112 to generate NFA graph 114 is illustrated below as pseudo code in Example 1.

Example 1

    for each syntaxNode in syntaxNodeVector
        switch syntaxNode.type
            case Literal:    // patch operation
                s = new State()
                e = new Edge(from=s, to=null, char=syntaxNode.char)
                f = new Fragment(startState=s, endEdges={e})
                fragStack.push(f)
            case GenericChar:
                s = new State()
                edgeVec = {}
                foreach c in syntaxNode.chars
                    edgeVec.append(new Edge(from=s, to=null, char=c))
                f = new Fragment(startState=s, endEdges=edgeVec)
                fragStack.push(f)
            case Concat:    // concatenation operation
                arg2 = fragStack.pop()
                arg1 = fragStack.pop()
                patch(edges=arg1.endEdges, targetState=arg2.startState)
                f = new Fragment(startState=arg1.startState, endEdges=arg2.endEdges)
                fragStack.push(f)
            case Or:    // OR operation
                arg2 = fragStack.pop()
                arg1 = fragStack.pop()
                if arg1.startState.hasInboundEdges()
                    arg1.split()
                if arg2.startState.hasInboundEdges()
                    arg2.split()
                e = arg1.addPlaceholderEdge()
                patch(edges={e}, targetState=arg2.startState)
                arg1.endEdges += arg2.endEdges
                fragStack.push(arg1)
            case Question:    // ? operation
                arg = fragStack.pop()
                if arg.startState.hasInboundEdges()
                    arg.split()
                arg.addPlaceholderEdge()
                fragStack.push(arg)
            case Star:    // * operation
                arg = fragStack.pop()
                f = new Fragment(startState=arg.startState, endEdges={})
                arg.addPlaceholderEdge()
                patch(edges=arg.endEdges, targetState=f.startState)
                fragStack.push(f)
            case Plus:    // + operation
                arg = fragStack.pop()
                f = new Fragment(startState=arg.startState, endEdges={})
                s = arg.startState
                arg.split()
                e = s.addPlaceholderEdge()
                f.endEdges += e
                patch(edges=arg.endEdges, targetState=s)
                fragStack.push(f)

The process illustrated in Example 1 causes NFA builder 112 to loop over the syntax nodes of the RPN vector. Each syntax node 110 is handled according to its type. There are two different types, which include operand syntax nodes and operator syntax nodes. Operand syntax nodes include literals (e.g., "a" in a regular expression), generic characters (e.g., "."), and character classes (e.g., "[a-z]"). Operator syntax nodes combine operands. Examples of operator syntax nodes include "*", "+", and concatenation (adjacency of two operands, as in "a[a-z]"). The process of Example 1 is capable of translating each operand into an equivalent fragment. The fragment is pushed onto the top of the fragment stack. For a given operator, the NFA builder 112 takes one or two operands (e.g., the "arguments" to the operator) from the stack, combines the operands according to the type of operator, and pushes the resulting combined fragment onto the stack. When the end of the syntax node vector has been reached, there will be only one remaining fragment on the stack. To that final NFA fragment, the NFA builder 112 concatenates a "match state" to produce the complete NFA graph 114. A match state is a state with a flag (e.g., a "match flag") that is set to indicate that reaching that state amounts to matching the entire regular expression.

FIGS. 2-8 illustrate the handling of various fragments as generated by the NFA builder 112. In the figures, each circle represents a state, while each arrow appended to a circle represents an edge. Literals may be annotated on edges. For purposes of illustration, start states of fragments are shown with dashed lines. Edges that are considered part of an end edge set are shown as bolded or thicker lines. Edges of an end edge set of a fragment point away from a state of the fragment and are not attached to a destination state.

FIG. 2 illustrates an example of fragment processing corresponding to a literal character. In the example of FIG. 2, for a literal character "x", the fragment produced by NFA builder 112 includes a single state having a single outbound edge labeled with the character "x". The state becomes the start state of the fragment and the edge becomes the end edge set of the fragment.

FIG. 3 illustrates an example of fragment processing corresponding to generic characters and/or a character class. In general, NFA builder 112 handles generic characters as a set of literal characters. The fragment of FIG. 3 generated by the NFA builder 112 contains one state with multiple outbound edges. More particularly, there is one outbound edge per character. Referring to the Perl Compatible Regular Expression (PCRE) Standard, the dot (".") generic character means "any possible character from \x00 through \xff" and has edges for each of those possibilities. For a character class there is an outbound edge for each of the characters in the character class. For example, for the character class "[a-z]", there is an outbound edge for each character from "a" through "z". As with the literal character in the example of FIG. 2, the resulting start state of the fragment is the newly created state. The end edge set of the fragment is the set of all created outbound edges.
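The pseudo code of Example 1 manipulates three small data structures: states, edges, and fragments. One hypothetical C++ rendering of those structures, with names inferred from the pseudo code above rather than taken from the disclosure, is:

    #include <vector>

    struct State; // forward declaration

    // An edge is labeled with a character; a null 'to' marks a dangling
    // end edge awaiting a patch, and 'placeholder' marks a meta-edge.
    struct Edge {
        State* from = nullptr;
        State* to = nullptr;
        char ch = 0;
        bool placeholder = false;
    };

    // A state owns its outbound edges; 'isMatch' marks a match state.
    struct State {
        std::vector<Edge*> edges;
        bool isMatch = false;
    };

    // A fragment is a partial NFA: a start state plus the set of dangling
    // end edges that later operations patch to a target state.
    struct Fragment {
        State* startState = nullptr;
        std::vector<Edge*> endEdges;
    };

(Ownership and cleanup are omitted for brevity.)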
FIG. 4 illustrates an example of fragment processing for a concatenation operator. This example demonstrates the concatenation of a fragment representing the regular expression "ab" and a fragment representing "c". A concatenation operator takes two operands and chains the two operands together. The end edges of the first operand become connected to the start state of the second operand using a "patch" operation referred to in Example 1 and described hereinbelow. The new or resulting fragment formed by NFA builder 112 has a start state which is the start state of the first operand and has an end edge set that is the end edge set of the second operand. The edge with literal "b" is no longer considered part of the end edge set of the resulting fragment.

FIG. 5 illustrates an example of fragment processing for an "OR" operator. This example demonstrates the "OR" operator applied to a fragment representing the regular expression "ab" and a fragment representing "c". An "OR" operator combines two operands in such a way that the start state of the resulting fragment combines edges from both operands. The NFA builder 112 further adds a "placeholder edge," which is illustrated with a dashed line, to the start state of the first operand. A placeholder edge is a meta-edge which does not represent a character. The placeholder edge serves to hold a place for a later "merge" operation. The "merge" operation is described as part of the patch operation described below. Next, the NFA builder 112 patches the placeholder to the second operand. As illustrated, the literal "c" is added or patched to the placeholder. Because the edge being patched is a placeholder, the patch process understands that a merge operation is to be performed instead of a normal patch operation. During a merge operation, all of the outbound edges of the state to merge (in this example the "c" edge) are copied to the placeholder edge's state. Once merged, the placeholder edge is deleted. The result is that the start state of the first operand now contains the original edges (e.g., "a") plus the edges of the start state of the second operand (e.g., "c"). In addition to the operation described for this example, the NFA builder 112 is capable of checking the start state of each operand to determine whether the start states have any inbound edges. If so, the state is "split" according to a "split" operation described hereinbelow in greater detail. The NFA builder 112 performs splits to prevent false paths when merging edges from the start states of the two operands into one state.

FIG. 6 illustrates an example of fragment processing for a question (?) operator. This example demonstrates the question operator applied to a fragment representing the regular expression "ab", resulting in a fragment representing "(?:ab)?". Here, the "(?:)" operator simply groups "ab" into a single expression. The question operator indicates that there should be a choice of paths that include the original path(s) through the operand fragment or a "bypass" path around the whole fragment. For example, the regular expression "c?d" means either match "c" followed by "d" or just bypass "c" and match only "d". That is, the regular expression matches input strings "cd" and "d". In the example of FIG. 6, the start state does not have an inbound edge.
Next, the NFA builder 112 adds a placeholder edge to the start state. The resulting fragment has the same start state and end edge set as the operand.

FIG. 7 illustrates an example of fragment processing for a star (*) operator. This example demonstrates the star operator applied to a fragment representing the regular expression "ab". The star operator repeats its argument 0 or more times. To produce this behavior, the NFA builder adds a placeholder edge to the start state as an escape from the loop. The end edge(s) of the fragment are patched back to the start state to form the loop.

FIG. 8A illustrates an example of fragment processing for a plus (+) operator. This example demonstrates the plus operator applied to a fragment representing the regular expression "ab". The plus operator repeats its argument 1 or more times. For purposes of illustration, consider a fragment represented symbolically as "X". The NFA builder 112 generates the fragment by taking the operand fragment and producing the equivalent of "XX*". The NFA builder 112 is capable of first performing a split of the start state. This operation effectively duplicates the start state. One copy of the state will serve as the fragment start state and implement "X", while the other copy of the start state will implement the "X*". The transformation described for the star operator is performed on the second copy, leaving it with a placeholder edge and a loop edge.

Referring to the examples of FIGS. 5, 6, 7, and 8A-8E, the placeholder edges are included in the edge set and, as such, are shown bolded.

An example implementation of a patch operation used by NFA builder 112 to generate NFA graph 114 is illustrated below as pseudo code in Example 2. The patch operation is used by NFA builder 112 to combine two fragments into one fragment. In general, the patch operation combines pairs of fragments according to a two-pass processing technique wherein non-placeholder edges (e.g., regular edges) are processed during a first pass through the plurality of fragments and placeholder edges are processed during a second pass through the plurality of fragments.

Example 2

    patch(edges, targetState)
        for each e in edges
            if !e.isPlaceholder()    // non-placeholder edge processing
                e.to = targetState
        for each e in edges
            if e.isPlaceholder()    // placeholder edge processing
                e.from.edges += targetState.edges.clone()
                edges.remove(e)
                if (targetState.isMatch)
                    e.from.isMatch = true

The NFA builder 112, per the pseudo code of Example 2, makes two passes through the list of edges. The first pass processes "normal" or "non-placeholder" edges. The second pass processes placeholder edges. To patch a normal edge, NFA builder 112 sets the destination state of the edge to the given target state. To patch a placeholder edge, the NFA builder 112 copies the edges emanating from the target state into the source state of the edge. This operation effectively merges the target state into the source state of the edge. The NFA builder 112 may then remove the placeholder since the purpose of the placeholder edge has been achieved. As part of the processing performed, if a target state is a match state, the source state of the edge also becomes a match state.

An example implementation of a split operation used by NFA builder 112 to generate NFA graph 114 is illustrated below as pseudo code in Example 3.
Example 3

    Fragment::split()
        s = new State()
        s.edges += this.startState.edges.clone()
        this.startState = s

The NFA builder 112, per the pseudo code of Example 3, performs the split operation by creating a copy of the start state of the fragment as a new state. The NFA builder 112 adds copies of the outbound edges and self-edges of the original start state to the newly created state. When an edge is copied, the destination end of the copy is set to the same state as the original edge, and the source end of the copy is set to the new state. As a result, any self-edges of the original start state are copied as outbound edges from the newly created state to the original start state. Additionally, when an edge is copied, if the original edge was in the fragment's end edge set, the copy is added to the fragment's end edge set. The split operation prevents false paths in the presence of loop edges such as in the example illustrated in FIG. 8B below.

FIGS. 8B, 8C, 8D, and 8E, taken collectively, illustrate an example of a split operation as performed in the context of an OR operation. For purposes of illustration, FIGS. 8B-8E illustrate processing of the regular expression "a*|b". In the example of FIG. 8B, the fragment 802 represents "a*". The fragment 804 represents "b". Fragment 802 has a self-edge that is determined to be an inbound edge by NFA builder 112. NFA builder 112, in response to determining that either fragment 802 or fragment 804 has an inbound edge to a start state, splits that state. As shown, fragment 802 has an inbound edge which causes NFA builder 112 to initiate a split operation.

The example of FIG. 8C illustrates the result of NFA builder 112 performing a split operation. In the example, NFA builder 112 creates a copy 806 of fragment 802. The start state of copy 806 becomes the start state of the resulting fragment 808 formed of fragments 802 and 806. NFA builder 112, in copying or cloning a state, clones all outbound edges. If an original edge is a fragment end, the cloned end is also a fragment end. In the example of FIG. 8C, the NFA builder 112 does not create a self-edge for the copy 806. Rather, the NFA builder 112 sets or creates the edge "a" of the copy 806 to have a destination end that is set to what was the start state of fragment 802 and a source end of the edge "a" of the copy 806 set to the start state of the copy 806, in accordance with the edge-copying rules discussed above in connection with Example 3. The split operation distinguishes between visiting a node for the first time and revisiting the node: copy 806 provides the behavior for visiting the first time, while state 802 provides the behavior for revisiting. Further, by NFA builder 112 using the split operation, use of epsilon states may be avoided. Epsilon states can result in less efficient hardware implementations.

In the example of FIG. 8D, NFA builder 112 continues the OR operation as previously described. FIG. 8D illustrates that a new placeholder edge 810 is added as a fragment edge to the start state of fragment 808. As noted, placeholder edge 810 does not represent a character.

In the example of FIG. 8E, the merge operation is illustrated where fragment 804 is merged with fragment 808. As shown, the character "b" is added or patched to the placeholder edge. Because the edge being patched is a placeholder, the patch process understands that a merge operation is to be performed instead of a normal patch operation. During a merge operation, all of the outbound edges of the state to merge (in this example the "b" edge) are copied to the placeholder edge's state.
In the example of FIG. 8E, edges from the start state of fragment 804 are cloned at the position of the placeholder edge 810. Once merged, the placeholder edge 810 is deleted. The result is that the start state now contains the original edges (e.g., "a") plus the edges of the start state of the second operand (e.g., the end edge "b").

Referring again to the example of FIG. 1, the HFA builder 116 is capable of operating on the NFA graph 114 to generate an HFA graph 118. The HFA builder 116 effectively transforms the NFA graph 114 into an HFA graph 118, which is a format that complies with certain constraints to be observed to implement the regular expression processing system 130 in hardware. The HFA graph 118 facilitates generation of a compact instruction table 122, thereby conserving memory resources in hardware, while also supporting parallelism. An NFA graph may be in multiple states at one time. This aspect of an NFA graph may require too much in terms of hardware resources to express all possible states that may exist at the same time. The constraints observed in generating the HFA graph 118 provide for parallelism while imposing limitations on the number of possible concurrent states that may exist.

To illustrate the differences between the HFA graph 118 and the NFA graph 114: in the NFA graph 114, each state may have any number of outbound edges of a particular character. That is, for a given state and character such as "a", the state of the NFA graph 114 may have one or more such outbound edges. Accordingly, the state may have 1, 2, 3, or more outbound edges each labeled "a". By comparison, each state of the HFA graph 118 has at most one outbound edge for a given character and at most one self-edge for that same character. Thus, for a given state and character such as "a", the state of the HFA graph 118 may have at most one outbound edge labeled "a" and at most one self-edge labeled "a". A self-edge refers to an edge having the same state as the start state and end state. FIG. 9 illustrates an example of a node of an HFA graph having at most one outbound edge and at most one self-edge for a same character.

In one aspect, HFA builder 116 is capable of using a variation of the known "power set construction" algorithm to convert NFA graph 114 to the HFA graph 118. In the theory of computation and automata theory, the powerset construction or subset construction is a standard method for converting an NFA into a deterministic finite automaton (DFA). Whereas an NFA graph may be in multiple states at one time, a DFA graph may be in only one state at a time. This aspect of DFAs, however, does not permit the parallelism that is desired from a hardware implementation. Accordingly, by modifying aspects of the power set construction algorithm, an HFA graph may be generated from the NFA graph (as opposed to generating a DFA graph). An example of the processing performed by HFA builder 116 is illustrated below as pseudo code in Example 4. The variations to the power set construction algorithm allow HFA builder 116 to fold multiple outbound edges for a given character into a single outbound edge. Still, the variations allow HFA builder 116 to take advantage of hardware support and provide separate self-edges.
Example 4

    buildXfa(nfaStartState)
        xfaStates = {new XfaState({nfaStartState})}
        unprocessedStates = xfaStates
        while unprocessedStates != {}
            xfa = unprocessedStates.pop_front()
            for each edgeSet in xfa.getEdgeSets()
                outboundStates = {}
                loopStates = {}
                foreach edge in edgeSet.edges
                    if edge.to in xfa.nfaStates
                        loopStates += edge.to
                    else
                        outboundStates += edge.to
                if loopStates == xfa.nfaStates
                    xfa.edges += new Edge(from=xfa, to=xfa, char=edgeSet.char)
                else
                    outboundStates += loopStates
                if outboundStates != {}
                    destXfa = get from xfaStates an XfaState x where
                              x.nfaStates == outboundStates
                    if destXfa == null
                        destXfa = new XfaState(outboundStates)
                        unprocessedStates += destXfa
                    xfa.edges += new Edge(from=xfa, to=destXfa, char=edgeSet.char)
        return xfaStates[0]

For purposes of describing operation of the HFA builder 116, an "HFA state" is a unique set of one or more NFA states. The HFA builder 116, per the pseudo code of Example 4, may begin by initializing a list of HFA states to a new HFA state consisting of just the start NFA state. Newly created HFA states are assigned a "state number," which is a unique integer identifier (ID) that may start from 0 and increase sequentially. Each HFA state in the list that has not yet been processed is removed from the list and processed until there are no more HFA states left to process.

An "edge set" or "edgeSet" in Example 4 is a set of all the NFA graph edges originating from all the NFA states of an HFA state for a specific character. Each HFA state has a set of edge sets, one element of the outer set per character present among the edges of the NFA states of the HFA state. To process an unprocessed HFA state, the HFA builder 116 is capable of processing each edge set of that HFA state in turn. The destination state (e.g., NFA state) for each edge of the edge set is considered in turn and placed into a "loop state" (self-edge) set if the state is one of the NFA states of the HFA state. Otherwise, the destination state is placed into an "outbound state" set. After all edges have been sorted into the two sets, the HFA builder 116 checks the loop state set to see whether the loop state set matches the NFA state set of the HFA state. In response to determining that the loop state matches, HFA builder 116 forms a new edge on the HFA graph 118 from the HFA state to itself. In response to determining that the loop state does not match, the "loop state" set of NFA states are added to the "outbound state" set.

Next, in processing the outbound state set, the HFA builder 116 searches the set of HFA states to see if one HFA state with exactly the set of NFA states in the outbound state set exists. In response to determining that one such state does exist, the HFA builder 116 uses the pre-existing HFA state as the destination of a new edge in the HFA graph 118 that originates from the current HFA state. Otherwise, the HFA builder 116 creates a new HFA state consisting of the NFA states in the outbound state set. The HFA builder 116 uses the new HFA state as the destination of the new edge. If a new HFA state is created, that new HFA state is put on the list of HFA states to process. Once all of the HFA states have been processed, the first HFA state in the HFA state list serves as the start state of the HFA graph 118.

Referring again to FIG. 1, the NRR generator 120 operates on the HFA graph 118 and creates the instruction table 122. The instruction table 122 may be implemented as a vector having an index formed as a {character, state} pair and having element values that are a {state, diff} pair.
The “cliff” field, also “DIFF” herein, is described in greater detail below. Example 5 generateNrr(xfaState) if xfaState.visitedreturnxfaState.visited=truefor each edge in xfaStateif edge.from ==edge.toNRR[edge.char, edge.from].diff=falseelseNRR[edge.char, edge.from].state=edge.togenerateNrr(edge.to) The NRR generator120, per the pseudo code of Example 5, is passed the start state of the HFA graph118. NRR generator120is capable of generating instruction table122from the HFA graph118assuming a table pre-initialized with {FailState, true} values. FailState is a reserved state number (0xFF) that indicates to the regular expression processing system130that the match failed. MatchState is a reserved state number (0xFE) that indicates to the regular expression processing system130that the match succeeded. Because the HFA graph118may include one or more loops, NRR generator120may utilize a “visited” flag. The NRR generator120is capable of adding the visited flag to those HFA states that have already been visited (e.g., processed). When the NRR generator120is passed an HFA state with the visited flag set, the NRR generator120may exit since the state has already been visited. Otherwise, the NRR generator120is capable of marking the state as visited and processing the edges of the state. Per the pseudo code of Example 5, for each edge, the NRR generator120is capable of checking the source and destination states of the edge to determine whether the edge is a self-edge (e.g., a self-edge has same source and end states). In response to determining that the edge is a self-edge, the NRR generator120clears the DIFF flag to indicate that the edge is a self-edge leaving the state field intact. In response to determining that the edge is not a self-edge, the NRR generator120sets the state field to the destination state of the edge leaving the DIFF flag intact. This two-phased approach ensures that for a state with both an outbound edge and a self-edge on the same character, the entry in the instruction table122being generated is set up properly over the course of two assignments. In the example ofFIG.1, it should be appreciated that each of the elements such as the token stream106, syntax nodes110, NFA graph114, HFA graph118, instruction table122, and/or configuration data124may be specified as a data structure as defined within this disclosure hereinbelow. FIG.10illustrates an example implementation of the instruction table122. The example ofFIG.10illustrates an instruction table122for the regular expression “{circumflex over ( )}.*ba$”. The term “$chars” denotes all possible values of the input characters IN and the term “$term” denotes a special character indicating string termination that matches “$” in the regular expression. It should be appreciated that regular expression compiler100can process any of a variety of regular expressions of varying complexity and that the particular regular expression provided herein is for purposes of illustration only. For example, regular expression compiler100may process regular expressions including any one or more of the operations described in connection withFIGS.2-8. For purposes of discussion and with reference toFIG.10, the next input character to be processed in a stream of input data is denoted as “IN” (e.g., the first column moving left to right). The current state is denoted as “CS” (e.g., second column), while the next state is denoted as “NS” (fourth column). 
For purposes of discussion and with reference toFIG.10, the next input character to be processed in a stream of input data is denoted as “IN” (e.g., the first column moving left to right). The current state is denoted as “CS” (e.g., second column), while the next state is denoted as “NS” (fourth column). The set of states that are active at a given moment is called the set of active states and is denoted as “AS.” In addition, a flag called “DIFF” (e.g., third column) is defined that, when set to 0, indicates that an edge is a self-edge and that a given CS should remain in the set of AS after the current transition is completed. Within instruction table122, each partial row formed of the data from columns DIFF and NS corresponds to a state transition instruction. The portion of each row formed by the IN column and the CS column specifies an address at which each respective state transition instruction is stored in a memory. For example, referring to the first row, the state transition instruction {0, S0} is stored at address {b, SI} within a memory. FIG.10is provided for purposes of illustration. In an actual implementation, the various rows (e.g., the second row) would be expanded with additional entries corresponding to all the possible characters that can be received for that row. The compute flow performed by regular expression processing system130using an instruction table122may start when regular expression processing system130receives a new IN. Initially, the set of active states consists of only the starting state, “state initial,” which may be denoted as SI. The SI becomes the current state CS for the first transition. The pair {IN, CS} is used as an input address to the instruction table122to look up the data that is output from instruction table122, e.g., the particular DIFF and NS specified by the address {IN, CS}. After each lookup, the set of active states may be updated. In processing a received data stream against a particular regular expression using the inventive arrangements described herein, a subset of active states may exist at any given moment. When input data is received, each active state in the set of active states may be transitioned to a next active state. The regular expression processing system130is capable of processing each state in the set of active states by performing a lookup using the instruction table122. For each state in the set of active states, the CS is concatenated with the current input data (e.g., character) received to form an address. The address is used to look up a state transition instruction in the instruction table122. From each lookup, a given output is generated. In the output, if the DIFF flag is set (e.g., is equal to 1), the current state CS used to perform the lookup is removed from the set of active states. Next, regardless of the value of the DIFF flag, the next state NS that was determined by the state transition instruction is added to the set of active states. The regular expression processing circuit performs this processing for each of the current states present in the set of active states. When all states of the set of active states have been processed for the received input data to generate a new set of active states, one transition for the received input has been performed. This processing may be performed until the input data is exhausted. Upon exhaustion or termination of the input data, a determination may be made as to whether the regular expression has been matched.
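The compute flow just described may be modeled, for purposes of illustration, by the following Python sketch. The dictionary-based table, the treatment of missing entries as {FailState, DIFF=1}, and the use of a single terminator character standing in for “$term” are assumptions of this model, not requirements of the hardware.

FAIL_STATE, MATCH_STATE = 0xFF, 0xFE   # reserved state numbers, as above

def run_match(nrr, start_state, data, term="\0"):
    # nrr maps (char, current_state) -> (next_state, diff); the terminator
    # character is an assumption of this software model.
    active = {start_state}
    for ch in data + term:
        nxt = set()
        for cs in active:
            ns, diff = nrr.get((ch, cs), (FAIL_STATE, True))
            if not diff:
                nxt.add(cs)              # DIFF == 0: CS remains active (self-edge)
            if ns != FAIL_STATE:
                nxt.add(ns)              # NS joins the active set regardless of DIFF
        active = nxt                     # one transition per input character
    return MATCH_STATE in active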
Referring again to the example ofFIG.1, the regular expression processing system130may be implemented in an integrated circuit (IC)126. In one aspect, the IC126may be implemented as a programmable IC. A programmable IC refers to an IC that includes programmable circuitry (e.g., programmable logic). A field programmable gate array (FPGA) is an example of a programmable IC. In the case of a programmable IC implementation, the programmable IC may be initially configured to implement the regular expression processing system130by loading configuration data124into IC126. Loading the configuration data124into IC126may implement regular expression processing system130in IC126, e.g., by configuring programmable logic or other circuitry included therein. Further, instruction table122may be loaded into a memory of the regular expression processing system130. It should be appreciated that once regular expression processing system130is implemented in IC126, different ones of instruction table122may be loaded over time, where each instruction table122may correspond to a different regular expression to be applied to received data streams. The different instruction tables may be loaded at runtime (e.g., in real time) to process data according to a different regular expression without reconfiguring the programmable circuitry of the IC126, i.e., without loading different configuration data (e.g., a different configuration bitstream). In another aspect, the regular expression processing system130may be implemented as hardened circuitry. For example, the regular expression processing system130can be implemented in a System-on-Chip (SoC), an Application Specific Integrated Circuit (ASIC), or other IC. In another example, the regular expression processing system130may be implemented as a combination of programmable circuitry and hardened circuitry. In any case, regardless of whether the regular expression processing system130is implemented using programmable logic, hardened circuitry, or a combination thereof, different instruction tables122may be loaded over time (e.g., during runtime) and in real time to match different patterns from data streams as specified by different regular expressions. FIG.11illustrates an example implementation of regular expression processing system130. In the example, regular expression processing system130includes an NRR memory1102, a regular expression engine1104, and a controller1130. In one or more example implementations described hereinbelow, the regular expression processing system130may include a plurality of regular expression engines1104that are coupled to a single NRR memory1102as described herein in connection withFIG.14. In the example, NRR memory1102may be implemented as a multi-ported memory. The memory may be a random-access memory (RAM). For example, NRR memory1102may be implemented as a dual-port memory such as a block random-access memory (BRAM). The multi-port architecture of NRR memory1102allows two or more memory accesses to be performed concurrently, e.g., on the same clock cycle. In the case of a dual-port memory, for example, NRR memory1102is capable of performing up to two read operations each clock cycle. Results from the read operations are output or available on the next clock cycle. NRR memory1102may be loaded with instruction table122to apply a given regular expression to a received data stream shown as input data1118. In an example, input data1118may be ASCII encoded data. In another example, regular expression processing system130may be language agnostic in that any of a variety of different types of input data may be processed. For example, the input data1118may be UNICODE encoded data.
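For purposes of illustration, loading the instruction table122into such a memory may be modeled as packing the table into a linear memory image addressed by a {character, state} concatenation. The bit widths and word layout in the following Python sketch are assumptions chosen for the example and do not reflect a required encoding.

STATE_BITS = 8                        # assumed state-number width (0x00-0xFF)

def pack_entry(next_state, diff):
    # One memory word per {char, state} index holding {state, diff}; this
    # bit layout is an assumption of the sketch, not the patent's encoding.
    return (next_state & 0xFF) | (int(diff) << 8)

def build_memory_image(nrr):
    # Address = {character, state}: the character occupies the high bits and
    # the state the low bits, matching the {IN, CS} lookups described above.
    image = [pack_entry(0xFF, True)] * (256 << STATE_BITS)   # FailState default
    for (ch, state), (ns, diff) in nrr.items():
        addr = (ord(ch) << STATE_BITS) | state
        image[addr] = pack_entry(ns, diff)
    return image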
Regular expression engine1104includes a decoder circuit1106, a register1108, a register1110, switching circuitry1120, active states memories1114, and a register1112. As illustrated, each active states memory1114includes a register1116. The architecture of regular expression engine1104implements a pipelined data path that alleviates timing criticality by reducing logic path lengths. The pipelining allows the circuit architecture to utilize multiple cycles to perform next NFA state computations as discussed in greater detail below. In the example, the NRR data path is cyclic in nature in that the output of the NRR memory1102is used to produce address(es) for the next or subsequent lookup(s) into NRR memory1102. The architecture of regular expression engine1104leverages the dual-port architecture of NRR memory1102by using two separate and distinct data paths. The pipelining effectively subdivides the dual data paths into four different stages generally indicated as 1, 2, 3, and 4 in the example ofFIG.11. In stage 1, a set of up to two {character, state} pairs are looked up from instruction table122stored in NRR memory1102with the indicated state transition instructions being output to decoder circuit1106. Up to two lookups may be performed in the same clock cycle with both results being output on the next clock cycle. The two {character, state} pairs are determined using the addr1 (e.g., address 1) and addr0 (e.g., address 0) formed by concatenating the output from active states memories1114and a character from input data1118. Operations, e.g., reads, performed by NRR memory1102incur a 1 clock cycle delay. Accordingly, for an address provided to NRR memory1102on clock cycle 1, the output is available at the output ports of NRR memory1102on the next clock cycle, i.e., clock cycle 2. Within the example ofFIG.11, the particular port from which a next state is read is indicated as “i”. Thus, next state 0 is the state read from port 0 and next state 1 is the state read from port 1. Likewise, the particular active states memory1114from which a previous state is read is indicated as “i”. Thus, previous state 0 is the state read from active states memory1114-0, and previous state 1 is the state read from active states memory1114-1. As can be seen in the example, the states output from output ports 0 and 1 of NRR memory1102, as provided to decoder circuit1106, include, or are used to generate, next state 0 and next state 1. Decoder circuit1106also receives two states output from register1112that include, or are used to generate, previous state 0 and previous state 1. These values are the states output from active states memories1114that were used to generate addr1 and addr0 and that were used to look up next state 0 and next state 1. Also in stage 1, decoder circuit1106decodes the output from NRR memory1102and from register1112. The {state, diff} pair outputs are used to set valid bits for the four possible output states. For purposes of illustration, the output from output port i of NRR memory1102is denoted as {state_i, diff_i}. Decoder circuit1106is capable of determining whether any of the states received as inputs is/are valid and, as such, are to be written back into the set of active states stored in the active states memories1114.
Decoder circuit1106is configured to determine validity of each of the received {state, diff} pairs according to the following rules as implemented in logic and/or circuitry:

next_state_0_valid = (state_0 ≠ FailState) & (diff_0 | (state_0 ≠ prev_state_0))
next_state_1_valid = (state_1 ≠ FailState) & (diff_1 | (state_1 ≠ prev_state_1))
prev_state_0_valid = ˜diff_0
prev_state_1_valid = ˜diff_1

Per the above rules, the next_state_0 is valid if (the state_0 is not equal to a failed state) AND (the diff_0 flag is 1 (e.g., set) OR the state_0 is not equal to the prev_state_0). The next_state_1 is valid if (the state_1 is not equal to a failed state) AND (the diff_1 flag is 1 (e.g., set) OR the state_1 is not equal to the prev_state_1). The prev_state_0 is valid if the diff_0 flag is 0 (e.g., not set). The prev_state_1 is valid if the diff_1 flag is 0 (e.g., not set). In stage 2, the first half (e.g., two) of the four states output from decoder circuit1106may be written to active states memories1114. For example, the next_state_0 and/or the prev_state_0 may be written to active states memories1114via switching circuitry1120. In the example, switching circuitry1120may be implemented using switches1120-1,1120-2,1120-3,1120-4,1120-5, and1120-6. Switching circuitry1120may be implemented as multiplexers that are controlled by controller1130. That is, controller1130, or other logic included in regular expression engine1104, may generate select signals (not shown) to switching circuitry1120. Controller1130, for example, may be coupled to decoder circuit1106to receive the validity information determined for the states. In response to receiving validity information from decoder circuit1106, controller1130is capable of generating select signals to switches1120-1,1120-2,1120-3,1120-4,1120-5, and1120-6to pass the correct state(s). Operation of switching circuitry1120is described in greater detail hereinbelow. As described in greater detail below, controller1130, or other logic included in regular expression engine1104, may be coupled to active states memories1114to determine status information, to read enable, and/or to write enable such memories. In stage 3, the second half (e.g., two) of the four states output from decoder circuit1106may be written to active states memories1114via switching circuitry1120. For example, the next_state_1 and/or the prev_state_1 may be written to active states memories1114via switching circuitry1120. In the example, in terms of physical implementation in IC126, stage 2 and stage 3 may have substantially the same path lengths. In stage 4, each of active states memories1114is capable of outputting an active state. Each active states memory1114includes a registered output indicated by register1116. In the example, the active state output from each respective active states memory1114is paired with a value/character from the input data1118and used to form addr0 and addr1, respectively, that may be provided to NRR memory1102to perform lookup operations in instruction table122. As illustrated, the output of each active states memory1114is also provided to register1112and routed to decoder circuit1106. The inclusion of register1112allows the outputs from active states memories1114(prev_state_0 and prev_state_1) to be provided to decoder circuit1106in the same clock cycle as the next_state_0 and next_state_1 as output from NRR memory1102. Thus, decoder circuit1106is capable of receiving, each clock cycle, both the two states used to look up the next states and the next states themselves.
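For purposes of illustration, the four validity rules above may be transcribed directly into the following Python sketch; the function signature is an illustrative assumption, while the rules themselves are taken from the expressions above.

def decode(state_0, diff_0, state_1, diff_1,
           prev_state_0, prev_state_1, FAIL_STATE=0xFF):
    # Direct transcription of the validity rules for the two {state, diff}
    # pairs read from the NRR memory ports.
    next_state_0_valid = (state_0 != FAIL_STATE) and (diff_0 or state_0 != prev_state_0)
    next_state_1_valid = (state_1 != FAIL_STATE) and (diff_1 or state_1 != prev_state_1)
    prev_state_0_valid = not diff_0    # DIFF clear: the source state stays active
    prev_state_1_valid = not diff_1
    return (next_state_0_valid, next_state_1_valid,
            prev_state_0_valid, prev_state_1_valid)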
By including two active states memories1114-0and1114-1, throughput may be enhanced as each of the active states memories1114may write one value therein as received from switching circuitry1120each clock cycle. Thus, two states may be stored each clock cycle. The two active states memories1114may behave as a single larger first-in-first-out (FIFO) memory. In another aspect, controller1130may include check match circuitry that is configured to determine whether a received character in input data1118is a termination character. In response to determining that the input character is a termination character, the check match circuitry is capable of determining whether either of active states memories1114includes a final state (e.g., SF1) indicating that the regular expression has been matched. In response to determining that the SF1 state (e.g., a final state) is contained in one or both active states memories1114, the check match circuitry is capable of outputting an indication (e.g., a match signal) indicating that the regular expression has been matched. In an example implementation, active states memories1114may operate according to a load balancing technique implemented by controller1130controlling operation of switching circuitry1120by providing control signals (e.g., select signals) thereto. For each of the ports of the NRR memory1102, the addresses provided (e.g., addr0 and addr1) may be generated by concatenating an input character IN of the input data1118with the particular state output from each respective active states memory1114. In the example ofFIG.11, the same input character is used to generate both addr0 and addr1. The same input character is used until all active states are transitioned to their next states. The DIFF flag value and the next state determined from each lookup may be output to decoder circuit1106. In the example, since NRR memory1102is dual-ported, NRR memory1102is capable of performing both the lookups, e.g., corresponding to addr0 and addr1, concurrently. Thus, the resulting states are output concurrently on the next clock cycle to decoder circuit1106. Initially, e.g., at the start of processing a string specified by input data1118, active states memory1114-1is empty and active states memory1114-0stores the start state. As regular expression engine1104starts consuming input data1118and the top values are read from each of active states memories1114(e.g., from registers1116therein), the top values (e.g., active states) are concatenated with the input stream to form addr0 and addr1. To match the one cycle latency of the NRR memory1102in providing the output of active states memories1114to decoder circuit1106, register1112is added. As processing continues, decoder circuit1106is capable of outputting four valid states as data is received from NRR memory1102and register1112. The extra cycle delay in the data path between decoder circuit1106and active states memories1114incurred due to register1110over the data path including only register1108allows up to four states to be stored in active states memories1114every two clock cycles. Regular expression engine1104is capable of performing two lookups from NRR memory1102once every four clock cycles. This provides a throughput of at most ¼ byte per clock cycle, i.e., one input character consumed every four clock cycles.
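For purposes of illustration, the check match circuitry described above may be modeled in software as follows; the terminator value and the set-based representation of the active states memories1114are assumptions of this sketch.

TERMINATOR = "\0"    # assumed termination character

def check_match(input_char, active_mem_0, active_mem_1, final_states):
    # Software model of the check match circuitry: when the terminator
    # arrives, a match is reported if either active states memory holds a
    # final state (e.g., SF1). Names are illustrative.
    if input_char != TERMINATOR:
        return False
    return any(s in final_states for s in active_mem_0) or \
           any(s in final_states for s in active_mem_1)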
FIG.12illustrates a more detailed example of switching circuitry1120in which the control signals driven from controller1130are shown with dashed lines. In the example, the wr_state signals to switches1120-5and1120-6and the c00, c01, c10, and c11control signals provided to switches1120-1,1120-2,1120-3, and1120-4, respectively, implement the load balancing for the active states memories1114. Controller1130further is coupled to active states memories1114-0and1114-1to obtain status information from each of active states memories1114-0and1114-1, for example, as to the number of entries stored in each. In general, the load balancing dictates which active states are passed through switching circuitry1120and written to each of active states memories1114-1and1114-0. The controller1130is capable of implementing the following logic to perform the load balancing, as also shown in the sketch following the list below.
If both active states memories1114-0and1114-1have the same number of entries and both the next_state_i and the prev_state_i are valid, then the next_state_i is written to active states memory1114-0and the prev_state_i is written to the active states memory1114-1.
If both active states memories1114-0and1114-1have the same number of entries and only the next_state_i or only the prev_state_i is valid, then active states memory1114-0has a higher priority than active states memory1114-1and the valid state is written to active states memory1114-0.
If active states memory1114-0has one more entry than active states memory1114-1and next_state_i and prev_state_i are both valid, then next_state_i is written to active states memory1114-1and prev_state_i is written to active states memory1114-0.
If active states memory1114-0has one more entry than active states memory1114-1and only next_state_i or only prev_state_i is valid, then the valid state is written to active states memory1114-1.
The load balancing technique described above ensures that the number of entries in active states memory1114-0will be either the same as, or at most one more than, the number of entries in active states memory1114-1.
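The following Python sketch transcribes the four load balancing rules for purposes of illustration; the function signature and the representation of the memories by their entry counts are illustrative assumptions.

def balance(next_valid, prev_valid, next_state, prev_state, len_0, len_1):
    # Returns a list of (memory index, state) writes. Assumes len_0 - len_1
    # is 0 or 1, which the rules themselves maintain.
    writes = []
    if len_0 == len_1:
        if next_valid and prev_valid:
            writes = [(0, next_state), (1, prev_state)]
        elif next_valid:
            writes = [(0, next_state)]     # memory 1114-0 has higher priority
        elif prev_valid:
            writes = [(0, prev_state)]
    else:                                   # len_0 == len_1 + 1
        if next_valid and prev_valid:
            writes = [(1, next_state), (0, prev_state)]
        elif next_valid:
            writes = [(1, next_state)]
        elif prev_valid:
            writes = [(1, prev_state)]
    return writes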
FIG.13illustrates additional features of the active states memories1114. The example ofFIG.13illustrates an example implementation of each active states memory1114-0and1114-1. In the example ofFIG.13, each active states memory1114includes a FIFO memory1302, a switch1304(e.g., a multiplexer), and a register1306. Controller1130is capable of providing control signals to switch1304(e.g., a select signal) and to FIFO memory1302. Each active states memory1114includes a single cycle registered output. In one or more example implementations, each active states memory1114is capable of implementing a “first word fall through” feature. The first word fall through feature uses an internal bypass signal1308that writes the received input directly to the register1306in response to determining that FIFO memory1302is empty. For example, in response to controller1130reading status registers of FIFO memory1302and determining that FIFO memory1302is empty, controller1130causes switch1304to pass the value from bypass signal1308instead of the value read from FIFO memory1302to register1306. The value passed by switch1304is stored in register1306. Controller1130, for example, may write enable register1306. Further, controller1130does not enable FIFO memory1302to store the value, thereby preventing the value stored in register1306from also being stored in FIFO memory1302. In response to controller1130determining that FIFO memory1302is not empty, controller1130write enables FIFO memory1302so that the value received at the input is stored in FIFO memory1302. Further, controller1130controls switch1304so that the value passed to register1306is the value read from the top of the FIFO memory1302and not the value on the bypass signal1308. Controller1130may write enable register1306to store the value from FIFO memory1302. In the example architecture illustrated inFIG.13, the value written to register1306will not disappear from the output (e.g., from the register) until a new read signal is received, e.g., from controller1130. FIG.13illustrates an example where the output of FIFO memory1302is not registered. Rather, a register is added following switch1304. Within this disclosure, certain operative features are attributed to the controller1130. In one or more other example implementations, dedicated logic may be included in various components of the regular expression engine1104itself or components thereof, e.g., the active states memories1114, that are capable of performing the monitoring functions and/or control signal generation described. For example, referring to the active states memories1114, such logic may control operation of switch1304. FIG.14illustrates another example implementation of regular expression processing system130that includes multiple regular expression engines1104. In the example ofFIG.14, the regular expression processing system130includes four regular expression engines1104each coupled to the same, e.g., a single, NRR memory1102. In the example ofFIG.14, four concurrent data streams are illustrated as input data1118-1,1118-2,1118-3, and1118-4, each being provided to a respective regular expression engine1104-1,1104-2,1104-3, and1104-4. The physical active states memories are replicated for each corresponding input stream. In the example, each of the input data streams1118-1,1118-2,1118-3, and1118-4represents a segment of a single, larger data stream that has been split into the respective segments shown. Each segment may represent a contiguous portion of the larger data stream to be processed by a particular one of the regular expression engines1104shown. The example ofFIG.14also illustrates the clock cycle timing of each regular expression engine1104. For example, regular expression engine1104-1submits addr0 and addr1 on clock cycle 1 and receives results from the output ports of NRR memory1102(data0 and data1) on clock cycle 2. Regular expression engine1104-2submits addr0 and addr1 on clock cycle 2 and receives results from the output ports of NRR memory1102on clock cycle 3. Regular expression engine1104-3submits addr0 and addr1 on clock cycle 3 and receives results from the output ports of NRR memory1102on clock cycle 4. Regular expression engine1104-4submits addr0 and addr1 on clock cycle 4 and receives results from the output ports of NRR memory1102on clock cycle 5. As each regular expression engine1104is capable of processing results every four clock cycles, the process may repeat. The outputs specifying addr0 and addr1 from each of regular expression engines1104are provided to multiplexers1402and1404. For example, the addr0 from each of regular expression engines1104is provided to multiplexer1402. The addr1 from each of regular expression engines1104is provided to multiplexer1404. Based on the particular clock cycle, multiplexers1402,1404pass the address from a different one of regular expression engines1104. For example, during clock cycle 1, addr0 and addr1 from regular expression engine1104-1are passed. During clock cycle 2, addr0 and addr1 from regular expression engine1104-2are passed, and so on.
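For purposes of illustration, the round-robin sharing of NRR memory1102just described may be modeled by the following Python sketch, in which each engine is represented only by its current address pair; the class and names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Engine:
    # Placeholder for a regular expression engine's current address outputs.
    addr0: int
    addr1: int

def select_addresses(cycle, engines):
    # Round-robin multiplexing: on each clock cycle, the (addr0, addr1) pair
    # of exactly one engine is passed to the dual-port NRR memory; with N
    # engines, each engine is serviced once every N cycles.
    e = engines[cycle % len(engines)]
    return e.addr0, e.addr1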
In the example implementations described herein, a single controller1130is illustrated that may be used to control operation of each regular expression engine1104and/or multiplexers1402,1404. In one or more other example implementations, each regular expression engine1104may include its own dedicated controller1130, wherein additional logic is used to control operation of multiplexers1402,1404. The inventive arrangements described herein are not intended to be so limited. It should be appreciated that the regular expression processing system130may include fewer or more regular expression engines1104than shown so long as the operation of such regular expression engines1104is coordinated with operation of the NRR memory1102. FIG.15illustrates an example computing environment including a data processing system1500and an accelerator1550. As defined herein, “data processing system” means one or more hardware systems configured to process data, each hardware system including at least one processor programmed to initiate operations and memory. In the example, data processing system1500is also an example of a “host computer” in that data processing system1500is communicatively linked to accelerator1550. The components of data processing system1500can include, but are not limited to, a processor1502, a memory1504, and a bus1506that couples various system components including memory1504to processor1502. Processor1502may be implemented as one or more processors. In an example, processor1502is implemented as a central processing unit (CPU). As defined herein, the term “processor” means at least one circuit capable of carrying out instructions contained in program code. The circuit may be an integrated circuit or embedded in an integrated circuit. Processor1502may be implemented using a complex instruction set computer architecture (CISC), a reduced instruction set computer architecture (RISC), a vector processing architecture, or other known architecture. Example processors include, but are not limited to, processors having an x86 type of architecture (IA-32, IA-64, etc.), Power Architecture, ARM processors, and the like. Bus1506represents one or more of any of a variety of communication bus structures. By way of example, and not limitation, bus1506may be implemented as a Peripheral Component Interconnect Express (PCIe) bus. Data processing system1500typically includes a variety of computer system readable media. Such media may include computer-readable volatile and non-volatile media and computer-readable removable and non-removable media. Memory1504can include computer-readable media in the form of volatile memory, such as random-access memory (RAM)1508and/or cache memory1510. Data processing system1500also can include other removable/non-removable, volatile/non-volatile computer storage media. By way of example, storage system1512can be provided for reading from and writing to a non-removable, non-volatile magnetic and/or solid-state media (not shown and typically called a “hard drive”). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to bus1506by one or more data media interfaces. Memory1504is an example of at least one computer program product. 
Program/utility1514, having a set (at least one) of program modules1516, may be stored in memory1504. Program/utility1514is executable by processor1502. By way of example, program modules1516may represent an operating system, one or more application programs, other program modules, and program data. Program modules1516, upon execution, cause data processing system1500, e.g., processor1502, to carry out the functions and/or methodologies of the example implementations described within this disclosure. Program/utility1514and any data items used, generated, and/or operated upon by data processing system1500are functional data structures that impart functionality when employed by data processing system1500. As defined within this disclosure, the term “data structure” means a physical implementation of a data model's organization of data within a physical memory. As such, a data structure is formed of specific electrical or magnetic structural elements in a memory. A data structure imposes physical organization on the data stored in the memory as used by an application program executed using a processor. In one or more examples, one or more program modules1516may implement regular expression compiler100ofFIG.1. In cases where data processing system1500executes regular expression compiler100, an accelerator1550need not be included in order to perform the compilation operations described in connection withFIG.1. In one or more other examples, one or more of program modules1516may be runtime software intended to interact with accelerator1550and regular expression processing system130(e.g., one or more of such systems) as may be implemented in IC126. One or more program modules1516may include software and/or drivers for communicating with peripheral devices including accelerator1550, or the like, to offload processing jobs (e.g., provide data streams) and receive results from the pattern matching operations performed by the regular expression processing system(s)130implemented in accelerator1550. In another aspect, program modules1516also may include software that is capable of performing an implementation flow on a circuit design or portion thereof. In this regard, data processing system1500serves as an example of one or more Electronic Design Automation tools or a system that is capable of processing circuit designs through a design flow (e.g., including synthesis, placement, routing, and/or bitstream generation). Data processing system1500may include one or more Input/Output (I/O) interfaces1518communicatively linked to bus1506. I/O interface(s)1518allow data processing system1500to communicate with one or more external devices such as accelerator1550and/or communicate over one or more networks such as a local area network (LAN), a wide area network (WAN), and/or a public network (e.g., the Internet). Examples of I/O interfaces1518may include, but are not limited to, network cards, modems, network adapters, hardware controllers, etc. Examples of external devices also may include devices that allow a user to interact with data processing system1500(e.g., a display, a keyboard, and/or a pointing device) and/or other devices. Data processing system1500is only one example implementation. 
Data processing system1500can be practiced as a standalone device (e.g., as a user computing device or a server, such as a bare metal server), in a cluster (e.g., two or more interconnected computers), or in a distributed cloud computing environment (e.g., as a cloud computing node) where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices. As used herein, the term “cloud computing” refers to a computing model that facilitates convenient, on-demand network access to a shared pool of configurable computing resources such as networks, servers, storage, applications, ICs (e.g., programmable ICs) and/or services. These computing resources may be rapidly provisioned and released with minimal management effort or service provider interaction. Cloud computing promotes availability and may be characterized by on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. The example ofFIG.15is not intended to suggest any limitation as to the scope of use or functionality of example implementations described herein. Data processing system1500is an example of computer hardware that is capable of performing the various operations described within this disclosure. In this regard, data processing system1500may include fewer components than shown or additional components not illustrated inFIG.15depending upon the particular type of device and/or system that is implemented. The particular operating system and/or application(s) included may vary according to device and/or system type as may the types of I/O devices included. Further, one or more of the illustrative components may be incorporated into, or otherwise form a portion of, another component. For example, a processor may include at least some memory. Data processing system1500may be operational with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of computing systems, environments, and/or configurations that may be suitable for use with data processing system1500include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like. Some computing environments, e.g., cloud computing environments and/or edge computing environments using data processing system1500or other suitable data processing system, generally support the FPGA-as-a-Service (FaaS) model. In the FaaS model, user functions are hardware accelerated as circuit designs implemented within programmable ICs operating under control of the (host) data processing system. Other examples of cloud computing models are described in publications of the National Institute of Standards and Technology (NIST) and, more particularly, of the Information Technology Laboratory of NIST. In an example implementation, the I/O interface1518through which data processing system1500communicates with accelerator1550is a PCIe adapter facilitating communication by way of a PCIe communication channel.
Accelerator1550may be implemented as a circuit board that couples to data processing system1500. Accelerator1550may, for example, be inserted into a card slot, e.g., an available bus and/or PCIe slot, of data processing system1500. Accelerator1550may include IC126coupled to a volatile memory1554and a non-volatile memory1556. IC126may be implemented as previously described herein and implement one or more regular expression processing systems130. Volatile memory1554may be implemented as a RAM. In the example ofFIG.15, volatile memory1554is external to IC126, but is still considered a “local memory” of IC126, whereas memory1504, being within data processing system1500, is not considered local to IC126. Non-volatile memory1556may be implemented as flash memory. Non-volatile memory1556is also external to IC126and may be considered local to IC126. FIG.16illustrates an example of a software fallback feature that may be implemented using the computing environment described in connection withFIG.15or another similar computing environment. As described herein, the regular expression processing system130utilizes FIFO memories1302disposed inside of the active states memories1114to store active states of the NFA during operation. Each active states memory1114has a fixed amount of FIFO memory space available that may become full during operation. While increased sizes of FIFO memories1302may be implemented to avoid memory overflow conditions, such increases may consume significant resources of the IC126. This is particularly true since each regular expression engine1104includes two active states memories1114and each regular expression processing system130includes a plurality of regular expression engines1104. Moreover, a given IC may include multiple instances of the entire regular expression processing system130. Accordingly, in an example implementation, the size of each FIFO memory1302may be set to a size that is capable of storing a predetermined maximum number of states possible or desired given the processing task. In cases where a FIFO memory1302becomes full, the output string may be marked with a special or predetermined value (e.g., a marker or flag) indicative of an error (e.g., an overflow) condition. Referring to the example ofFIG.16, data processing system1500may execute an application1602. Application1602may send data, shown as string1604, to regular expression processing system130for processing. In another aspect, IC126may receive the string1604from another system, e.g., via a network (e.g., Ethernet or the like) connection from a device other than data processing system1500. For purposes of illustration, string1604may be an 8 MB block of data. During the course of operating on string1604, one or more of the FIFO memories1302of the active states memories1114of the regular expression processing system130may become full and experience an overflow condition. The overflow condition may be detected by controller1130by reading status registers of the active states memories1114. In response to detecting the overflow condition, controller1130is capable of logging the error condition by storing the predetermined indicator in the output (e.g., result of the processing of string1604) of regular expression processing system130that is made available to data processing system1500and application1602. For example, controller1130is capable of marking the output generated by regular expression processing system130. 
The example ofFIG.16illustrates a marked result1606being provided from regular expression processing system130to application1602. Marked result1606is the result from processing string1604. In response to detecting that the result is marked, application1602is capable of invoking regular expression processor application1608and providing string1604thereto as an input for processing. The regular expression processor application1608, having access to the computer-based resources of data processing system1500, is capable of processing string1604. Accordingly, in those cases where regular expression processing system130is unable to complete processing of a given string without error, data processing system1500may be notified and process the string. This allows the size of the FIFO memories1302to be tuned to reduce memory usage and conserve resources of IC126, with software-based regular expression processing serving as a fallback. In one aspect, the FIFO memories1302may be implemented using lookup-tables (LUTs) implemented in IC126. The fallback processing described facilitates a significant reduction in the number of LUTs required to implement each regular expression processing system130. Because the software is invoked only in cases where an error occurs in the hardware, the software processing load placed on data processing system1500remains small in most cases. FIG.17illustrates an example method1700of operation for the regular expression compiler100described in connection withFIG.1. Method1700may be performed by a data processing system (e.g., “system”) such as the example data processing system1500ofFIG.15. In block1702, the system is capable of generating an NFA graph from a regular expression. In block1704, the system is capable of transforming the NFA graph into an HFA graph. Each node of the HFA graph, for any particular character, has at most one self-edge and at most one outbound edge. Further, the HFA graph has one or more nodes that have a self-edge and an outbound edge. In block1706, the system is capable of generating, from the HFA graph, an instruction table including state transition instructions. The state transition instructions are decoded by a regular expression engine implemented in hardware to apply the regular expression to a data stream received by the hardware. The foregoing and other implementations can each optionally include one or more of the following features, alone or in combination. Some example implementations include all the following features in combination. In one aspect, the method can include loading the instruction table into a multi-port memory coupled to one or more of the regular expression engines, wherein each regular expression engine is configured to process the data stream through execution of the state transition instructions. In another aspect, the generating the NFA graph includes processing the regular expression using a lexical analyzer to generate a plurality of lexical tokens, parsing the plurality of lexical tokens into a plurality of character syntax nodes (e.g., literal and generic) and a plurality of operator syntax nodes, transforming the plurality of character syntax nodes into a plurality of fragments, and joining the plurality of fragments based on the plurality of operator syntax nodes.
In another aspect, the generating the NFA graph includes combining the plurality of fragments by, for at least one selected fragment of the plurality of fragments, creating a placeholder edge for a start node, wherein the placeholder edge is unaffiliated with a character. In another aspect, the generating the NFA graph includes combining pairs of fragments according to a two-pass processing technique wherein non-placeholder edges are processed during a first pass through the plurality of fragments and placeholder edges are processed during a second pass through the plurality of fragments. In another aspect, the generating the NFA graph includes performing a split operation. Performing a split operation includes creating a copy state of a selected state and, for each outbound edge and each self-edge of the selected state, creating a corresponding and equivalent edge for the copy state. For each equivalent edge, a source end of the equivalent edge connects to the copy state and a destination end of the equivalent edge connects to a same state as a destination end of the corresponding edge of the selected state. FIG.18illustrates an example method1800of operation of the example computing environment described in connection withFIG.15. In block1802, the regular expression processing system130is capable of receiving a string. The regular expression processing system130is implemented in hardware within IC126. The regular expression processing system130may be programmed with an instruction table122to detect a pattern defined by a regular expression within the string. In block1804, the regular expression processing system130is capable of detecting an error condition occurring in the regular expression processing system130during processing of the string. In block1806, the regular expression processing system130is capable of notifying data processing system1500, which is communicatively linked to the IC126, that the error condition occurred during processing of the string. In block1808, in response to the notifying, the data processing system1500is capable of invoking a software-based regular expression processor to process the string. The foregoing and other implementations can each optionally include one or more of the following features, alone or in combination. Some example implementations include all the following features in combination. In one aspect, the error condition includes one or more active states memories of the regular expression processing system being full during processing of the string. In another aspect, the string is initially provided from the data processing system to the integrated circuit for processing. In another aspect, the notifying the data processing system of the error condition includes setting a predetermined marker indicating that the error occurred for the string. In another aspect, the method includes processing the string using the software-based regular expression processor as executed by the data processing system. In one or more example implementations, a system includes a multi-port random-access memory (RAM) (e.g., NRR memory1102) configured to store an instruction table122, wherein the instruction table122specifies an NFA that applies a regular expression to a data stream (e.g., input data1118). The system can include a regular expression engine1104configured to process the data stream based on the instruction table122. 
The regular expression engine1104can include a decoder circuit1106configured to determine validity of active states output from the multi-port RAM. The regular expression engine1104can include a plurality of active states memories1114operating concurrently. Each active states memory1114may be configured to initiate a read from a different port of the multi-port RAM using an address formed of an active state output from the active states memory1114and a portion of the data stream. The regular expression engine1104can include switching circuitry1120configured to selectively route the active states to the plurality of active states memories1114according, at least in part, to a load balancing technique and validity of the active states. The foregoing and other implementations can each optionally include one or more of the following features, alone or in combination. Some example implementations include all the following features in combination. In one aspect, the multi-port RAM is a dual-port RAM. In another aspect, the system can include a plurality of regular expression engines1104each configured to receive a data stream and operate in parallel. In another aspect, the plurality of regular expression engines1104can include N regular expression engines1104, wherein each of the N regular expression engines is configured to output, to the address ports of the multi-port RAM, a plurality of addresses for concurrently looking up a plurality of state transition instructions to process a plurality of active states in a single clock cycle. Each of the N regular expression engines1104is capable of outputting the plurality of addresses every N clock cycles. In another aspect, the regular expression includes at least one of a concatenation regular expression operator or an alternation regular expression operator. In another aspect, the regular expression includes at least one of a question regular expression operator, a star regular expression operator, or a plus regular expression operator. In another aspect, the load balancing maintains a difference between a number of active states stored in a first active states memory1114-0of the plurality of active states memories1114and a number of active states stored in a second active states memory1114-1of the plurality of active states memories1114to be less than two. In another aspect, the regular expression engine1104is configured to generate a flag in response to detecting an overflow condition in at least one of the plurality of active states memories1114while processing a string of the data stream. A host computer system1500in communication with the system, in response to reading the flag, is configured to initiate processing of the string using a software-based regular expression processor. In another aspect, the regular expression engine is pipelined such that each active states memory generates the address every N clock cycles. In another aspect, N is equal to four. As discussed, N may be equal to a value that is smaller or greater than four. While matching an input string to a regular expression, more than one path may be taken through the regular expression to determine a match. For example, given a choice of two such paths, the rules of the regular expression language specify which path should be preferred. In accordance with the inventive arrangements described herein, a regular expression processing system is provided that is capable of tracking these multiple paths and their respective priorities. 
A regular expression processing system so adapted is capable of indicating the particular path taken in cases where matches are determined from a data stream for a given regular expression. There are a variety of different regular expression constructs that utilize the notion of path priority. Examples of these constructs that require a preference of one path over another include alternation and quantifiers. Alternation supports matching a single regular expression out of several possible regular expressions. For purposes of illustration, consider a regular expression such as “A|B|C”, where A, B, and C are sub-patterns. This regular expression requires that the alternative choices be preferred in order from left to right. That is, A should be preferred over B, which should be preferred over C. In another example, consider the regular expression “abc|ab” which is semantically equivalent to “abc?”. When given the input “abc”, after matching the initial “ab”, the regular expression engine1104needs to choose between matching the “c” to follow the left alternative (“abc”) or consider the match done to follow the right alternative (“ab”). The path priority rule for alternation states that the left path should be preferred. Accordingly, the regular expression matches the full input string, “abc”. If the regular expression is reversed, “ab|abc”, then the regular expression would match the “ab” part of the input string. If, for example, the alternative choices are mutually exclusive, as is the case with the regular expression “a|b”, the path priority rules are irrelevant because given a particular input, there is only one path that can be taken. That is, when given the input “a”, only the “a” alternative qualifies; there is no other path. In general, quantifiers specify how many instances of a character, group, or character class must be present in the input for a match to be found. The regular expression quantifier operators “?”, “*”, “+”, and “{n,}”, for example, require a choice between the path through the operand and the path that bypasses the operand. For example, for the regular expression “ab?” given the input “ab”, after input “a” is matched, there is a choice between matching “b” or bypassing the “b?” sub-pattern and calling the match complete having matched just “a”. As another example, for the regular expression “ab*” given the input “abb”, after input “a” is matched, there is a choice between matching the first “b” or bypassing the “b*” sub-pattern and calling the match complete after input “a”. If the path matching the “b” is taken, then there is another path choice between matching the second “b” or bypassing the “b*” sub-pattern at that point, calling the match complete after input “ab”. Quantifiers are defined to be “greedy” by default. This means that the path through the operand should be preferred over bypassing the operand. In the case of “ab?”, an input of “ab” should match the full input instead of just “a”. In the case of “ab*”, an input of “abb” should match the full input instead of just “a” or “ab”. The quantifiers can be made “lazy” by appending a “?” to the quantifier operator, such as in “ab*?”. If an operator is lazy, the path that bypasses the operand should be preferred over the path through the operand. In contrast to the previous examples, the regular expression “ab??” given input “ab” should match only “a”. Similarly, the regular expression “ab*?” given input “abb” should also match only “a”. The examples in the previous section represent a special case. 
In the foregoing, the regular expression ends with a choice either to continue accepting characters or match the full regular expression thereby terminating the processing of the input for that instance of the regular expression. Within this disclosure, this scenario is referred to as “match-or-continue”. An example of a path priority case that is not the special case would be “ab?b” for input “abb”, where there would still be a choice between matching the first “b” input to the “b?” sub-pattern or bypassing that quantifier sub-pattern and matching instead with the “b” sub-pattern at the end of the regular expression. The input may match as either “abb” or “ab”. This scenario, however, is not considered a match-or-continue case since after taking the decision to match or bypass “b?”, unlike the prior examples, the regular expression has not finished. In accordance with the inventive arrangements described herein, the HFA graph described herein is capable of supporting a hardware implementation that explores multiple paths through the HFA graph simultaneously. The HFA graph supports these multiple simultaneous paths only for “self-edges” and “epsilon splits”. Self-edges have been described herein, where an example of a self-edge is illustrated inFIG.9. FIG.19illustrates another example of a self-edge as an HFA graph. In the example ofFIG.19, the state 0 has both an outbound edge that leads away from the state to another state and a “self-edge” that loops from the state back to itself. Both edges are labeled with the character “a”, which means that when the regular expression engine1104is on state 0 and receives an input of “a”, both edges must be taken, leaving the hardware in both state 0 and state 1. The example ofFIG.19corresponds to a regular expression of “a*a”. In general, regular expression engine1104supports states that, for a particular character, have up to one outbound edge and up to one self-edge. A state can support any number of characters with these single-character configurations. FIG.20illustrates another example of an HFA graph. In the example ofFIG.20, there is an outbound edge and a self-edge for “a”, only a self-edge for “b”, and only an outbound edge for “c”. Regular expressions producing multiple paths via self-edges are readily created using single-character loops with the star operator. The example ofFIG.20corresponds to a regular expression of “[ab]*[ac]”. The regular expression compiler100is capable of supporting states that have one or more epsilon edges and no other kind of edge. Such a state is said to be an “epsilon state” (or “eps state”). In the context of an NFA graph, when a state with epsilon edges is encountered, all epsilon edges are traversed immediately without consuming another input character. In the context of an HFA graph, if a state has “n” epsilon edges, then “n” paths must be explored leaving the hardware in “n” different states. FIG.21illustrates an example of an epsilon split. In the example ofFIG.21, when input of “a” is seen, all epsilon edges at state 1 are taken, leaving the hardware in states 2, 3, and 4 simultaneously. The regular expression for the HFA graph ofFIG.21is “a(b|c|d)”. If the automaton for regular expression “a*a” encounters the input “a”, two paths are taken. As previously noted, the self-edge has the higher priority. If the regular expression were “a*?a”, then the outbound edge would have the higher priority. Similarly, for regular expression “a(b|c|d)”, after input “a” has been matched, three paths are taken. 
The top path to state 2, referring toFIG.21, has the highest priority because that path is the leftmost alternative. Path priority for these various cases may be handled and represented using an updated version of the instruction table as described in greater detail below. FIG.22illustrates another example of an instruction table2200that is capable of supporting the tracking of multiple paths and path priorities. In one aspect, the instruction table2200is updated to include epsilon support. A flag referred to as “EPS” (e.g., for epsilon) is added, making the output of the instruction table2200adhere to the format of {state, DIFF, EPS}. That is, given a received character and state, the character and state may be used to look up a next state, DIFF, and EPS that may be output to the decoder circuit1106. In using the instruction table2200within the NRR memory1102to process an input character, the regular expression engines1104use instruction table2200to determine how to move through the HFA graph. For example, if the regular expression engine1104is currently at state 5 and sees an “a” input character, then the element value at index {“a”, 5} is fetched, resulting in, for example, {6, DIFF=true, EPS=false}, meaning that the state to which the regular expression engine1104should move is state 6. The “true” value for the DIFF flag indicates that the regular expression engine1104is moving away from state 5 via an outbound edge. A value of “false” for the DIFF flag indicates that the regular expression engine1104is moving away from state 5 via an outbound edge and is also traversing a self-edge back to state 5. The EPS field is described in greater detail below. When a state has a (non-epsilon) edge to an epsilon state, the source state may be defined as a “pre-eps state.” The EPS flag of that state transition instruction (also referred to as an entry) in the instruction table2200, which is normally set to false, is set to true to indicate that the next state transition instructions of the instruction table2200to be inspected collectively form an “epsilon sub-table” within the instruction table2200. If that edge is traversed, the hardware, e.g., the regular expression engine1104, is placed in “epsilon operating mode,” for which additional state transition instructions are read until the end of the epsilon sub-table is reached. While in the epsilon operating mode, the regular expression engine1104does not consume any further input characters. Upon reaching the end of the epsilon sub-table, the hardware resumes “normal mode.” In the example ofFIG.22, the epsilon sub-table is shaded. For the input “ab,” the regular expression engine1104begins at state 0. The regular expression engine1104then looks up the entry for index {“a”, 0} to find a value of {1, DIFF=true, EPS=true}, which indicates that the regular expression engine1104should switch to epsilon operating mode. In epsilon operating mode, the epsilon sub-table to be read consists of all entries whose index is {n, 1}, where n is 0 through a maximum of 255 (in this example) as opposed to a character. In other words, the character field of the index is repurposed for the index of the epsilon sub-table. The epsilon sub-table ends on the entry of instruction table2200having an EPS flag set to “false.” The example ofFIG.22illustrates a single epsilon sub-table. It should be appreciated that a given instruction table2200may include a plurality of different epsilon sub-tables.
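For purposes of illustration, a lookup against an instruction table with EPS support may be sketched in Python as follows. The dictionary model, the names, and the simplifications noted in the comments (DIFF bookkeeping and nested epsilon states are omitted) are assumptions of the sketch.

def lookup_with_epsilon(nrr, char, state):
    # nrr maps (char_or_index, state) -> (state, diff, eps). EPS is treated
    # as a pre-eps indicator only when DIFF is true, since EPS is repurposed
    # for greedy/lazy marking on self-edges (DIFF false), as described below.
    ns, diff, eps = nrr[(char, state)]
    if not (eps and diff):
        return [ns]                       # ordinary (non-epsilon) transition
    # Epsilon operating mode: ns is an epsilon state. Read sub-table entries
    # at indices {0, ns}, {1, ns}, ... without consuming input, until an
    # entry whose EPS flag is false ends the sub-table.
    states, i = [], 0
    while True:
        dest, _, more = nrr[(i, ns)]      # char field repurposed as an index
        states.append(dest)
        if not more:
            return states
        i += 1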
Referring to the examples of FIGS. 21 and 22, to process the epsilon sub-table, the regular expression engine 1104 begins by reading the value at index {0, 1}, returning {2, DIFF=true, EPS=true}. The value indicates that the edge to state 2 should be taken, and EPS being set to true indicates that the next entry in the epsilon sub-table should be read. Next, the hardware reads the value at {1, 1}, returning {3, DIFF=true, EPS=true}. This value indicates that the edge to state 3 should be taken concurrently with the previous edge to state 2, and again the EPS flag indicates that the next entry of the epsilon sub-table should be read. Finally, the value at index {2, 1} is read, returning {4, DIFF=true, EPS=false}, which indicates that the edge to state 4 should be taken concurrently with the other two states. With the EPS flag set to false in the entry {4, DIFF=true, EPS=false}, the regular expression engine 1104 is done reading the epsilon sub-table. Accordingly, the regular expression engine 1104 switches back to the normal (e.g., non-epsilon) mode of operation.

In the example of FIGS. 21 and 22, the epsilon sub-table is started after taking an edge from state 0 to state 1. In another example, it is also possible that state 0 itself could be an epsilon state, in which case the regular expression compiler 100 produces an extra flag, outside the instruction table 2200, indicating whether state 0 is an epsilon state. If that flag is set, the regular expression engine 1104 is put into epsilon operating mode reading from state 0 immediately upon initialization.

To implement path priority in the instruction table 2200 as applied to greedy and lazy quantifiers, the EPS flag in the instruction table 2200 may be repurposed when the DIFF flag is false (indicating a self-edge). When the EPS flag is false, the self-edge on the state is taken to be a greedy self-edge, whereas when the EPS flag is true, the self-edge is lazy. Because the EPS flag is repurposed for a state with a self-edge, a state cannot both have a self-edge and be a pre-eps state. Whenever such a case arises, the regular expression compiler 100 gives preference to the pre-eps status of the state by eliminating the self-edge, either by re-writing the edge as an outbound edge to the same state or (in the case of the state having both an outbound edge and a self-edge for a character) by "merging" the two edges into one, e.g., DFA-style, using the powerset construction algorithm previously described.

In order to support path priority, the regular expression compiler 100 may be adapted with various changes to the operations performed and data structures used. For example, within the regular expression compiler 100, the data structure for a state contains, among other data, an unordered collection of edge objects. To implement path priority, the unordered collection may be replaced with an ordered collection with the ability to add an edge to either end of the collection efficiently. A double-ended queue, referred to as a "deque", may be used for the path priority-enhanced processing techniques implemented by the regular expression compiler 100. In one aspect, path priority among edges may be represented by the relative order of the edges in the collection. That is, for two given edges, whichever edge is nearer the head end of the collection has higher priority than the other edge.
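The deque-based ordering can be illustrated with a few lines of Python (a sketch only; the edge labels are placeholders):

from collections import deque

def add_edge(edges, edge, at_front):
    # Lazy quantifiers place the bypass edge at the head (highest
    # priority); greedy quantifiers place it at the tail (lowest).
    if at_front:
        edges.appendleft(edge)
    else:
        edges.append(edge)

is_lazy = True
state_edges = deque()
add_edge(state_edges, "sub-pattern edge", at_front=False)
add_edge(state_edges, "bypass edge", at_front=is_lazy)
print(list(state_edges))   # ['bypass edge', 'sub-pattern edge']

Because the bypass edge is nearer the head end, bypassing the quantifier sub-pattern has the higher priority, which is the behavior of a lazy quantifier.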
Since quantifiers can be greedy or lazy, resulting in higher or lower priority, respectively, the regular expression compiler 100 may build the NFA fragments for a quantifier with such caveats in mind to implement the correct priority. For instance, when adding the loop and bypass edges of a quantifier, the regular expression compiler 100 uses their placement within the state's edge collection to determine whether the quantifier is greedy or lazy. As an example, the pseudo code in Example 6 illustrates an algorithm for the star operator with the necessary positioning of the bypass edge achieved by the code "atFront=isLazy".

Example 6

arg = fragStack.pop()
f = new Fragment(startState=arg.startState, endEdges={})
arg.addPlaceholderEdge(atFront=isLazy)
patch(edges=arg.endEdges, targetState=f.startState)
fragStack.push(f)

In one aspect, the function for adding a placeholder edge to the fragment's start state is modified to take an argument indicating whether the new edge should be added at the front of the state's edge collection or the back. If the quantifier is lazy, the placeholder edge is added to the front of the list of edges, prioritizing bypassing the quantifier sub-pattern over the path running through that sub-pattern. For example, referring to FIG. 23, which depicts an NFA graph for regular expression "(ab)*?", the bypass path (dashed edge) comes before the quantifier sub-pattern path corresponding to character "a" in the start state's edge collection, so the bypass path has higher priority. In the example, the start of the state's edge collection may be located starting in the north direction with priorities decreasing going clockwise around the state. For a greedy quantifier, the placeholder edge is added to the end of the list of edges, prioritizing the edge into the sub-pattern over the bypass edge.

In accordance with the inventive arrangements, the patch operation may be modified to accommodate the position of the placeholder edge. The pseudo code of Example 7 illustrates a modified version of the patch operation that accounts for the position of the placeholder edge.

Example 7

patch(edges, targetState)
    for each e in edges
        if !e.isPlaceholder()
            e.to = targetState
    for each e in edges
        if e.isPlaceholder()
            e.from.edges.insert(edges=targetState.edges.clone(), at=e)
            edges.remove(e)
            if targetState.isMatch
                e.from.isMatch = true

Referring to Example 7, instead of adding the cloned edges at an arbitrary location in the destination state's edge list, the cloned edges are added at the location of the placeholder edge. Alternation paths are prioritized in a similar fashion to that of quantifiers. The regular expression compiler 100 is capable of building a fragment for an alternation by merging the constituent pieces into a single state. For example, in the regular expression "ab|cd|ef", the regular expression compiler 100 merges the NFA fragments for "ab" and "cd" into the "ab" fragment's start state. The regular expression compiler 100 may then merge the fragment for "ef" into the combined fragment's start state as illustrated in the example of FIG. 24. The placement of the placeholder edges (dashed lines) determines the priority order of the alternatives; by ensuring that the placeholder edge is always added at the tail end of the edge list, the alternatives end up prioritized in the regular expression's left-to-right order. The alternation fragment-building operation illustrated in the pseudo code of Example 8 is modified to ensure that the placeholder edge is added to the tail of the edge list.
Example 8

arg2 = fragStack.pop()
arg1 = fragStack.pop()
if arg1.startState.hasInboundEdges()
    arg1.split()
if arg2.startState.hasInboundEdges()
    arg2.split()
e = arg1.addPlaceholderEdge(atFront=false)
patch(edges={e}, targetState=arg2.startState)
arg1.endEdges += arg2.endEdges
fragStack.push(arg1)

FIG. 25 illustrates an example NFA graph for the regular expression "abc?" illustrating the match-or-continue scenario. The match-or-continue scenario, as previously described, applies to cases where the path choice is between concluding the match or taking a path to continue the regular expression. Referring to the example of FIG. 25, state 2 has a double ring to indicate that the match could complete at that point, e.g., after receiving input "ab", but also has an outbound edge to state 3. For an input of "abc", the choice is between stopping at state 2, thereby matching the substring "ab", or continuing to state 3 to match the whole string. In this case, because the question mark (?) operator is greedy, the edge to state 3 should be prioritized over stopping at state 2.

FIG. 26 illustrates an example of an HFA graph as generated by the regular expression compiler 100. Because the regular expression engines 1104 do not support match states with outbound edges, as the regular expression compiler 100 transforms the NFA graph of FIG. 25 into the HFA graph of FIG. 26, any HFA state that has outbound edges and is a match state is constructed using epsilon edges, as shown. The example of FIG. 26 shows that NFA state 2 from FIG. 25 has turned into a complex of 3 states including an epsilon "master" state (state 2), a normal "sub-state" (state 3) containing all the outbound edges from the NFA state, and a "pure" match state (state 5) with no outbound edges. By ordering the entries of the epsilon sub-table for the master state so that the edge to the sub-state comes before the edge to the match state, the edge to the sub-state is made to be a higher priority than the edge to the match state.

The regular expression compiler 100 may be adapted to transform the NFA graph into the HFA graph in order to handle greedy versus lazy self-edges and the match-or-continue scenario. The pseudo-code of Example 9 illustrates the example operations performed by the regular expression compiler 100 in transforming an NFA graph into an HFA graph.
Example 9

buildXfa(nfaStartState, enableSelfEdges)
    xfaStates = {new XfaState({nfaStartState})}
    unprocessedStates = xfaStates
    while unprocessedStates != {}
        xfa = unprocessedStates.pop_front()
        for each edgeSet in xfa.getEdgeSets()
            outboundStates = {}
            greedyStates = {}
            lazyStates = {}
            isGreedyRangeValid = true
            for each edge in edgeSet.edges
                if edge.to in xfa.nfaStates
                    if isGreedyRangeValid
                        greedyStates += edge.to
                    else
                        lazyStates += edge.to
                else // outbound edge
                    outboundStates += edge.to
                    isGreedyRangeValid = false
                    if lazyStates != {}
                        error(unsupported)
                if edge.to.isMatch
                    break
            isPreEps = isPreEps(nfaStates)
            hadGreedySelfEdge = false
            if greedyStates != {}
                if !isPreEps AND greedyStates == xfa.nfaStates AND enableSelfEdges
                    xfa.edges += new Edge(from=xfa, to=xfa, char=edgeSet.char)
                    hadGreedySelfEdge = true
                else
                    outboundStates.insertFront(greedyStates)
            hasLazySelfEdge = false
            if lazyStates != {}
                if !hadGreedySelfEdge AND !isPreEps AND lazyStates == xfa.nfaStates AND enableSelfEdges
                    hasLazySelfEdge = true
                else
                    outboundStates.insertBack(lazyStates)
            if outboundStates != {}
                destXfa = get from xfaStates an XfaState x where x.nfaStates == outboundStates
                if destXfa == null
                    destXfa = new XfaState(outboundStates)
                    unprocessedStates += destXfa
                xfa.edges += new Edge(from=xfa, to=destXfa, char=edgeSet.char)
            if hasLazySelfEdge
                xfa.edges += new Edge(from=xfa, to=xfa, char=edgeSet.char)
    return xfaStates[0]

In Example 9, new variables are introduced to track greedy and lazy edges. The "loopStates" set is split into two: one for greedy edges and one for lazy edges. The "isGreedyRangeValid" variable keeps track of where in the ordered list of edges the greedy self-edges end. The limitations of the regular expression processing system 130 impose restrictions on the contents of a state's edge list in the HFA. The state's edge list must contain, in priority order, all greedy self-edges first, then outbound edges, and then lazy self-edges. Any deviation from these requirements is handled by converting hardware-supported self-edges into "outbound" edges that loop back to the same state or, failing that possibility to convert, by erroring out as an unsupported case. If the edge list has both greedy and lazy self-edges, since only one form is supported, the lazy edges are converted into outbound edges. It should be noted that if all self-edges are converted to outbound edges by the regular expression compiler 100, then the regular expression compiler 100 is performing powerset construction without modification and the resulting graph is a pure DFA graph.

In one or more example implementations, the regular expression compiler 100 supplies an option to disable self-edges by setting the property "enableSelfEdges" to false, which causes the regular expression compiler 100 to process all self-edges as outbound edges. For some regular expressions, this option can increase the length of input strings that the hardware can process by reducing the number of simultaneous paths being explored to fit within the hardware's limit on the number of paths. In other cases, this option can make a large regular expression fit within the existing hardware limitations (e.g., 254 states in this example) since the differing set of edges changes the outcome of the powerset construction operation.

Continuing with the pseudo code of Example 9, for a self-edge to be greedy, the self-edge must by definition be in the edge list before any outbound edges. For a self-edge to be lazy, it must be in the edge list after all outbound edges.
When looping in priority order over the edges for a character, if the edge is a self-edge (that is, if edge.to is in xfa.nfaStates) and the regular expression compiler 100 is still in the greedy range of edges (not having seen an outbound edge yet), the regular expression compiler 100 adds the edge to the set of greedy edges. If instead the regular expression compiler 100 has already seen an outbound edge, the regular expression compiler 100 adds this self-edge to the set of lazy edges with the expectation that the only edges remaining for this character are all lazy self-edges. If instead the current edge is an outbound edge, the regular expression compiler 100 adds the edge to the set of outbound edges. Because the regular expression compiler 100 has seen an outbound edge, there can no longer be any greedy edges. The regular expression compiler 100 therefore sets "isGreedyRangeValid" to false. Also, if the regular expression compiler 100 has already seen a lazy self-edge, the regular expression compiler 100 determines that the construct being operated on is unsupported since the current outbound edge is of lower priority than the lazy self-edge. While looping through the edges of the current character, if the regular expression compiler 100 encounters an edge whose destination is a match state, the regular expression compiler 100 breaks from the loop to ignore the remaining edges. Any edges of lower priority than an edge to a match state need not be explored, as the edge to the match state is guaranteed to match for the current character and will always be chosen over any lower priority possibility for that character.

Next, after all edges of the current character have been categorized (e.g., as a lazy self-edge, an outbound edge, or a greedy self-edge), the former logic of the regular expression compiler 100 for creating HFA states is replaced with a four-part operation. The four-part operation first processes greedy self-edges, then checks the validity of lazy self-edges, then processes outbound edges, and then processes lazy self-edges. In the first operation, if there are any edges collected as greedy self-edges and those edges truly constitute an HFA self-edge (e.g., the state is not a pre-eps state, the edges cover all the NFA states of the XFA state, and the creation of self-edges is enabled), then the regular expression compiler 100 creates the greedy self-edge and logs that a greedy self-edge has been created. Otherwise, the regular expression compiler 100 moves all the edges to the front of the outbound edge collection to make the moved edges higher priority than the outbound edges in order to process such edges as "outbound" edges. In the second operation, in a manner similar to checking the validity of the greedy self-edges, the regular expression compiler 100 validates the lazy self-edges, with the additional requirement that, for a lazy self-edge to exist, a greedy self-edge must not exist. If the lazy self-edges fail validation, the regular expression compiler 100 moves the lazy self-edges to the back of the outbound edge set, as those self-edges are of lower priority than the outbound edges. In the third operation, the regular expression compiler 100 processes the outbound edges as described in connection with Example 4 by looking up or creating an HFA node for the destination of the HFA outbound edge. In the fourth operation, if the lazy self-edges had passed validation, the regular expression compiler 100 creates an HFA lazy self-edge.
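The categorization step of this four-part operation can be expressed compactly. The Python sketch below mirrors the inner loop of Example 9 (the State and Edge types are illustrative stand-ins for the compiler's data structures):

from collections import namedtuple

State = namedtuple("State", ["name", "is_match"])
Edge = namedtuple("Edge", ["to"])

def categorize(edges, xfa_nfa_states):
    # Split a character's priority-ordered edges into greedy self-edges,
    # outbound edges, and lazy self-edges, per the rules of Example 9.
    greedy, outbound, lazy = [], [], []
    greedy_range_valid = True
    for edge in edges:
        if edge.to in xfa_nfa_states:      # self-edge
            if greedy_range_valid:
                greedy.append(edge.to)     # seen before any outbound edge
            else:
                lazy.append(edge.to)       # seen after an outbound edge
        else:                              # outbound edge
            outbound.append(edge.to)
            greedy_range_valid = False
            if lazy:
                raise ValueError("unsupported: outbound edge below a lazy self-edge")
        if edge.to.is_match:
            break                          # lower-priority edges can never win
    return greedy, outbound, lazy

s0, s1 = State("s0", False), State("s1", False)
g, o, l = categorize([Edge(s0), Edge(s1)], {s0})
print([s.name for s in g], [s.name for s in o], [s.name for s in l])
# ['s0'] ['s1'] [] -- a greedy self-edge followed by an outbound edge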
Example 9 and the accompanying description relating to sorting edges illustrate an example of maintaining priority among edges to indicate properties such as left-to-right alternation in the regular expression and/or lazy-greedy edges. The pseudo code of Example 10 illustrates an example technique used by the regular expression compiler 100 for determining whether the current HFA state is a pre-eps state.

Example 10

isPreEps(nfaStates)
    hasOutboundEdge = false
    hasMatchState = false
    for each nfaState in nfaStates
        if nfaState.edges != {}
            hasOutboundEdge = true
        if nfaState.isMatch
            hasMatchState = true
    return hasOutboundEdge AND hasMatchState

Example 10 illustrates that the regular expression compiler 100 is capable of determining that an HFA state is a pre-eps state if at least one of its NFA states has outbound edges and at least one of the NFA states is a match state. The regular expression compiler 100 is capable of operating by looping through each of the NFA states looking for outbound edges and match flags. In response to detecting both conditions among the NFA states, the regular expression compiler 100 determines that the HFA state is a pre-eps state.

FIG. 27 illustrates certain operative features relating to path priority processing as performed by the regular expression processing system 130. In the example of FIG. 27, the regular expression that is implemented in the instruction table 2200 is ".*?(?:(abcd)|(ab)|(cef))". The regular expression begins with a lazy quantifier matching any character, followed by an alternation of three sub-patterns "abcd", "ab", and "cef". The HFA graph for the regular expression is shown. In the HFA graph, region 5 illustrates the match state for sub-pattern "abcd". Region 4 illustrates the match state for sub-pattern "ab". Region 3 illustrates the match state for sub-pattern "cef". In the HFA graph of FIG. 27, where path choices are available, the encircled "+" indicates a higher priority path while the encircled "-" indicates a lower priority path. For example, because the quantifier corresponding to region 1 is lazy, an input of "a" or "c" should prefer the outbound edges leading to states 1 or 5 over the self-edge leading back to state 0. The HFA graph of FIG. 27 also illustrates the match-or-continue scenario that arises for this regular expression: the path ending in region 4 ends the regular expression, and its sub-pattern "ab" is also a prefix of the regular expression sub-pattern "abcd". In the HFA graph, the "ab" common part of both sub-patterns "abcd" and "ab" has been merged into a single path corresponding to region 2 due to powerset construction. Where the paths diverge at state 2, region 5 represents the continuation for the "cd" sub-pattern, while region 4 ends the regular expression. Because state 2 needs both an outbound edge and a match flag, the regular expression compiler 100 has split state 2 with epsilon edges as previously described. Region 5 has a higher priority than region 4 since the portion of the HFA graph represented by region 5 is the leftmost operand of the alternation. As such, the epsilon edge leading to region 5 has a higher priority than the epsilon edge leading to region 4. The table illustrated in FIG. 27 shows the changes that occur in the regular expression engine 1104 on a state-by-state basis as each of the input characters x, a, b, c, e, and f is received and processed.
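Before walking through the table, the list semantics can be previewed with a small Python model (purely illustrative; the hardware realizes this behavior with the priority FIFO memories described below). Paths are kept in decreasing priority, a completed match discards every lower-priority row, and a fresh path at the starting state is appended at the bottom:

# Paths in decreasing priority (index 0 = highest), as in column "a":
paths = [1, 0]                 # state 1 above state 0 after input "a"

# Input "b": the top path advances to state 2 and, via the epsilon
# split, becomes state 3 (region 5) above a match state (region 4).
paths = [3, "MATCH"] + paths[1:]

# A match state was reached: discard every row below it and start a
# new path at state 0 at the bottom of the list.
i = paths.index("MATCH")
paths = paths[:i + 1] + [0]
print(paths)                   # [3, 'MATCH', 0] -- column "b" of FIG. 27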
In the table, each column represents a snapshot in time of the ordered list of graph paths that the hardware is actively exploring, which are represented or stored in the priority FIFO memories described hereinbelow in connection with FIG. 28. The topmost row is the highest priority path. The bottommost row is the lowest priority path. The input characters are shown across the top of the table as received at different points in time. Operation begins with the list initialized with a single path at state 0, the starting state. As the regular expression engine 1104 receives the "x", the only possible edge is the self-edge back to state 0, after which the list contains just state 0. In response to receiving the "a", there are two available paths, which include the self-edge back to state 0 and the outbound edge to state 1. Both states are added to the table. Because the outbound edge has higher priority, the resulting list has state 1 above state 0. For example, state 1 is shown in the first row, while state 0 is shown in the second row.

In response to processing the "b", the regular expression engine 1104 encounters the match-or-continue scenario. In receiving the "b", the outbound edge to state 2 is taken. As previously discussed, both of the epsilon edges are taken immediately so that state 3 is reached in region 5 and the match state (shown as a double ring) is reached in region 4. As shown in the table in the column "b", in the first row the state advances from 2 to 3, while the match state is shown in row 2. The epsilon edges are traversed without the hardware consuming another input character. At this point, the regular expression engine 1104 has received the characters "xab", which may be a complete match via the sub-pattern corresponding to region 4. Alternatively, the regular expression engine 1104 may be partially done matching the sub-pattern continuing in region 5. In this example, since the sub-pattern corresponding to region 5 is higher priority than that of region 4, the regular expression engine 1104 continues processing further input characters to determine whether the sub-pattern corresponding to region 5 is matched. Only if the path corresponding to region 5 fails is the path corresponding to region 4 accepted. As shown, the match state reached in region 4 is added to the list beneath state 3 (e.g., in column "b"). Having reached the match state, the remainder of the list maintained in the hardware is discarded since entries of a lower priority than a matched state will never be accepted. For purposes of illustration, the shaded block in column "a" in the second row, containing state 0, is discarded. Additionally, to accommodate the next search in the input, should "xab" be accepted as a match, the regular expression engine 1104 starts a new path (e.g., list) in row 3, where the starting state 0 is added in column "b". The example of FIG. 27 illustrates that whenever the regular expression engine 1104 determines that a path completes, the list(s) below that path are discarded. Further, the path's match state is added to the list and a new path at the starting state is placed at the bottom of the list.

Continuing with the processing performed by the regular expression engine 1104, the input "c" may be received. Accordingly, in row 1, state 4 is reached, corresponding to region 5. The match state corresponding to region 4 remains in the list maintained in the second row. The list in row 3 has two edges for the character "c" that can be taken.
One is the outbound edge to state 5 in region 3 and the other is the self-edge back to state 0. Accordingly, the higher priority outbound edge to state 5 is placed in row 3, while a new list is started in row 4 corresponding to the self-edge back to state 0. Thus, under column "c", row 1 includes state 4, row 2 includes the match state, row 3 includes state 5, and row 4 includes state 0. In response to receiving the next input character "e", the regular expression engine 1104 determines that the input character is not valid for region 5 as there is no edge corresponding to "e". Accordingly, the path is terminated as indicated by the "Fail" in row 1, column "e". Once the path in row 1 is terminated, the match in row 2 corresponding to region 4 is the highest priority path and is accepted as a match result indicating that the input "xab" was a match. Meanwhile, the remaining paths, which now represent a second potential match starting with input "ce", continue to advance, where state 5 in row 3 advances to state 6 in region 3 and state 0 in row 4 takes the self-edge back to state 0. In response to receiving the input character "f", the path specified in row 3 in region 3 matches. As previously discussed, the list corresponding to row 4 may be discarded and a new path at state 0 is started. In the latter portion of the example of FIG. 27, because the path corresponding to region 3 was the highest priority path in the table, the match determined in response to receiving the input character "f" is immediately accepted and output, indicating that the input "cef" was a match.

The path prioritization described herein and implemented by the regular expression compiler 100 may be supported by the regular expression processing system 130 disclosed herein through incorporation of a modified version of the active states memories 1114 previously described. In one aspect, each of the active states memories 1114 of the regular expression engines 1104 may be implemented using a priority FIFO memory that is capable of storing the paths of the HFA graph that are currently active. The priority FIFO memories are capable of storing all active paths while traversing the HFA graph. The priority FIFO memories are capable of maintaining, or storing, active paths in correct priority order as illustrated in the example table of FIG. 27. Further, the priority FIFO memories are capable of clearing the entries, e.g., lists or paths, that constitute lower priority paths in response to detecting the various conditions described herein. Regarding match-or-continue operation, the hardware is capable of storing one or more match states in one or both of the priority FIFO memories until each higher priority path and/or active state has failed. The match states can be stored in one or both of the priority FIFO memories as a mechanism for dealing with the uncertainty of whether a given match completes (e.g., a shorter path is accepted as a match) or will be extended (e.g., a longer path is accepted as a match). The match state is not accepted until each higher priority path and/or active state fails. This capability alleviates the need for the hardware to iterate back over, or re-read, characters of the input stream when handling match-or-continue scenarios.

FIG. 28 is a block diagram illustrating an example implementation of a priority FIFO memory 2800. In general, the priority FIFO memory 2800 is capable of operating as a FIFO memory as generally understood by one skilled in the art.
The priority FIFO memory 2800 does include additional features, as described in greater detail below, that facilitate the storage and tracking of priority paths. In the example of FIG. 28, the priority FIFO memory 2800 includes a FIFO memory 2802, a switch 2804 (e.g., a multiplexer), and a register 2806. The priority FIFO memory 2800 also may include logic 2808 (e.g., shown as 2808-1 and 2808-2) that is capable of either passing certain signals into and out from the priority FIFO memory 2800 unchanged, modifying certain signals provided to or from the priority FIFO memory 2800, and/or generating new signals based on a combination of two or more signals provided to or from the priority FIFO memory 2800. In the example, the particular control signals illustrated may be coupled to a controller 2810. Controller 2810 may be implemented similar to controller 1130 to receive and/or provide the particular signals illustrated. In one aspect, controller 2810 may be implemented in logic/circuitry that is distributed over a plurality of different circuit blocks in the examples described herein.

In terms of operation of the priority FIFO memory 2800, in one aspect, the order in which entries, or active states, are stored in the priority FIFO memory 2800 represents the priority of the entries. For example, at the start of processing a new character from a received data stream, for the set of active paths stored in the priority FIFO memory 2800, the entries are stored in decreasing order of priority according to the path priorities described herein for regular expression processing. In another aspect, the priority FIFO memory 2800 may be partially cleared to support discarding of lower priority entries in certain conditions. The partial clearing supports discarding of a contiguous subset of the entries stored in the priority FIFO memory 2800. The signal partial_discard may be used to trigger the partial clearance operation, while the signal discard_count may be used to set or determine the number of entries to discard starting from the top of the priority FIFO memory 2800. The partial discard may be performed as part of a configuration operating state described in greater detail below. In an example implementation, the partial discard operation may be performed by updating a head pointer (address) of the FIFO memory 2802 that points to a top of the priority FIFO memory 2800 as head=head+discard_count. This functionality is used to discard all lower priority paths from the priority FIFO memory 2800. In response to determining that a match state is reached for an active path in the priority FIFO memory 2800 for a particular input character, the remaining active states for that character may be discarded from the priority FIFO memory 2800.

As illustrated, the data_out signal from the priority FIFO memory 2800 is registered by register 2806. The latency of the priority FIFO memory 2800 is guaranteed to be 1 clock cycle. That is, if the priority FIFO memory 2800 is not empty, a read request received in a clock cycle will be served in the next clock cycle. This means that any pending writes and partial discards will be handled appropriately, as discussed in greater detail below. In the example of FIG. 28, the priority FIFO memory 2800 uses the signal add_is_flag to indicate that a new active path should be started by adding the starting state (referred to here as the initial state "IS") to the priority FIFO memory 2800 as illustrated in the example of FIG. 27. The writing of the IS may be performed in addition to a regular write that is performed by asserting the signal wr.
Accordingly, if both signals wr and add_is_flag are asserted at the same time, two entries are written to the priority FIFO memory 2800. This condition only occurs if a partial discard is initiated according to the path priority process techniques previously described. This means that the latency of completing a partial discard operation is 2 clock cycles. In the example, the FIFO memory 2802 implements the partial discard and initial state write functionality. The priority FIFO memory 2800 implements a bypass register using switch 2804 and register 2806 to achieve a 1 clock cycle read latency with registered output (e.g., the "first word fall through" feature). The 1 clock cycle read latency achieved using the switch 2804 and register 2806 may be implemented substantially as described in connection with FIGS. 11, 12, and 13.

FIG. 29 illustrates an example implementation of a state machine 2900. In one aspect, state machine 2900 may be implemented as part of controller 2810. The example state machine 2900 is capable of controlling how many entries currently exist in the priority FIFO memory 2800. In the example, state machine 2900 includes 4 states. All states except the reconfiguration operating state (shown as RECONFIG) represent the number of entries in the priority FIFO memory 2800. Referring to FIG. 29, RECONFIG is entered when a partial discard is triggered from any state. RECONFIG performs the partial discard and brings the priority FIFO memory 2800 back into the correct state based on the number of remaining entries after the partial discard operation. As discussed, the partial discard operation of the RECONFIG state takes 2 clock cycles to complete. While the priority FIFO memory 2800 is in the RECONFIG state, the signal data_out is not valid (e.g., is invalid). The transitions from the RECONFIG state to each of the other three states are described in greater detail in connection with FIG. 30 below.

The following discussion describes the signals of FIGS. 28 and 29 with respect to the priority FIFO memory 2800 and internal signals for the FIFO memory 2802. Within the following described signal relationships, the term "PFIFO" refers to the priority FIFO memory 2800, while FIFO refers to the FIFO memory 2802. As generally understood, "˜" means negation.

PFIFO depth = FIFO depth + 1. The depth of the priority FIFO memory 2800 is equal to the depth of the FIFO memory 2802 plus 1. This is due to the inclusion of register 2806 providing storage for one additional entry at the top of FIFO memory 2802.

wr_tx = ˜full & wr

rd_tx = ˜empty & rd

data_out_reg_valid = (discard_count==0) & (entries != EMPTY). "Entries" is a state machine register denoting which state the priority FIFO memory 2800 is in. data_out_reg_valid indicates whether data_out_reg (i.e., register 2806) has valid data.

fifo_partial_discard = (entries != EMPTY) & partial_discard. The partial discard operation need only discard entries from the FIFO memory 2802 when there may be entries stored in the FIFO memory 2802. Otherwise only the data_out_reg (i.e., register 2806) needs to be managed or cleared in this case.

fifo_discard_count = (discard_count==0) ? 0 : discard_count-1. The partial discard operation removes entries from the top of the priority FIFO memory 2800, i.e., the entry in data_out_reg (register 2806) will always be removed if one exists. Accordingly, the number of entries to be removed from the FIFO memory 2802 is 1 less (e.g., decremented by 1).
The FIFO internal head address pointing to the top of the FIFO is updated as head=head+fifo_discard_count.

fifo_almost_empty=fifo_empty+1. The signal fifo_almost_empty is asserted when there is exactly one entry in the FIFO memory 2802.

bypass is asserted whenever either data_out_reg (register 2806) is already empty or will become empty in the next cycle due to the signal rd_tx in the current cycle.

fifo_add_is_flag=add_is_flag. The initial state is added to the FIFO memory 2802 if the initial state cannot be written to data_out_reg (register 2806).

Using the relationships described above, operation of the state machine of FIG. 29 may be further described as follows. If partial_discard is asserted, each of the EMPTY, ONE_ENTRY, and TWO_OR_MORE operating states transitions to the RECONFIG state regardless of any other transition criteria being met.

The EMPTY operating state means that the priority FIFO memory 2800 is empty. More particularly, both the FIFO memory 2802 and the register 2806 are empty. In the EMPTY operating state, the following conditions are observed.

If the signal empty=1, data cannot be read and the signal rd has no effect.

If the signal wr is asserted, data is written to data_out_reg (register 2806) via the bypass or "fall-through" functionality, where the input to the FIFO memory 2800 is written directly to register 2806.

The ONE_ENTRY operating state means that the FIFO memory 2802 is empty and that the data_out_reg (register 2806) is not empty. In the ONE_ENTRY operating state, the following conditions are observed.

The signal empty=0.

If both the signals wr and rd are asserted, data is read and written into data_out_reg (register 2806) via the bypass functionality.

If only signal wr is asserted, then set signal fifo_wr=wr and write data to FIFO memory 2802.

If only the signal rd is asserted, then set fifo_rd=rd, read data from data_out_reg (register 2806) and the FIFO memory 2802, and the FIFO memory 2802 output is written to the data_out_reg (register 2806).

The TWO_OR_MORE operating state means that both the FIFO memory 2802 and the register 2806 are not empty (e.g., both have data stored therein). In the TWO_OR_MORE operating state, the following conditions are observed.

bypass=0 so that if signal wr is asserted, then set fifo_wr=wr and data is always written into the FIFO memory 2802.

If signal rd is asserted, then set fifo_rd=rd, data is read from data_out_reg (register 2806) and FIFO memory 2802, and the FIFO memory 2802 output is written to the data_out_reg (register 2806).

In the RECONFIG operating state, the following conditions are observed.

empty=1 and the data_out is not valid.

Read is not allowed.

Depending upon the signals wr and add_is_flag, the priority FIFO memory 2800 updates internal signals and moves back to one of the other 3 operating states as discussed in connection with FIG. 30.

FIG. 30 illustrates a more detailed view of the RECONFIG operating state of FIG. 29. In the example of FIG. 30, the reconfiguration operating state includes a plurality of substates. FIG. 30 illustrates the conditions that cause the state machine to transition from RECONFIG to each of the other three operating states of the state machine of FIG. 29. In general, when in the RECONFIG operating state, the priority FIFO memory 2800 enters an internal reconfiguration state. In the RECONFIG operating state, a partial discard is performed. Based on how many entries exist in the priority FIFO memory 2800, the data_out_reg (register 2806) is cleared and the number of entries to be removed from the FIFO memory 2802 is determined.
For example, the FIFO head address (e.g., the head address of FIFO memory 2802) is determined as head=head+fifo_discard_count. If both signals wr and add_is_flag are asserted, then two writes are enqueued at the end of the priority FIFO memory 2800. Depending upon how many entries remain in the priority FIFO memory 2800 after the partial discard operation, one or both of the data_in entry and the initial state (IS) entry will be written in the FIFO memory 2802.

Referring to FIG. 30, scenarios 1-8 are shown that lead from various substates of the RECONFIG operating state to other ones of the EMPTY, ONE_ENTRY, and TWO_OR_MORE operating states. The particular scenario that is followed depends on which of the four substates arises based on the signals wr_tx and add_is_flag. The different cases, or substates, of the RECONFIG operating state are as follows.

NO_NEW_ENTRY: wr_tx==0 & add_is_flag==0. In this case, no new entries are to be written. There is no new input data and no IS state is to be written.

WR_NEW_ENTRY: wr_tx==1 & add_is_flag==0. In this case, one new entry is to be written. New input data is to be written, but no new IS state is to be written.

ADD_IS_ENTRY: wr_tx==0 & add_is_flag==1. In this case, one new entry is to be written. No new input data is to be written, but a new IS state is to be written.

TWO_NEW_ENTRIES: wr_tx==1 & add_is_flag==1. In this case, two new entries are to be written. New input data and a new IS state are to be written.

In one aspect, the controller determines whether a new state and an IS state are to be written, considering the two active states FIFOs as a single memory. Load balancing, as described herein, determines which particular active states FIFO memory receives the new state and which active states FIFO memory receives the IS state. The discussion below elaborates on each of the scenarios 1-8 and how the RECONFIG state transitions to either the EMPTY, ONE_ENTRY, or TWO_OR_MORE operating states.

Scenario 1: NO_NEW_ENTRY→EMPTY. In this scenario, the following conditions are observed.

˜data_out_reg_valid & fifo_empty. Accordingly, the data_out_reg (register 2806) does not have valid data. One or more entries were discarded and the FIFO memory 2802 became empty after that operation. The whole priority FIFO memory 2800 is now empty. No new entries are being written, so the priority FIFO memory 2800 will be empty at the end and goes to the EMPTY operating state.

Scenario 2: NO_NEW_ENTRY→ONE_ENTRY. In this scenario, the following conditions are observed.

˜data_out_reg_valid & fifo_almost_empty. One or more entries were discarded and the FIFO memory 2802 has 1 entry. This 1 entry will be moved to the data_out_reg (register 2806) and the FIFO memory 2802 will become empty. Accordingly, the priority FIFO memory 2800 has 1 entry remaining. Since no new entries are being written, the priority FIFO memory 2800 moves to the ONE_ENTRY operating state.

data_out_reg_valid & fifo_empty. No entries were discarded and the data_out_reg (register 2806) still has valid data. The FIFO memory 2802 is empty. Accordingly, the priority FIFO memory 2800 has 1 entry. Since no new entries are being written, the priority FIFO memory 2800 moves to the ONE_ENTRY operating state.

Scenario 3: NO_NEW_ENTRY→TWO_OR_MORE. In this scenario, the following conditions are observed.

˜data_out_reg_valid & ˜fifo_almost_empty.
The FIFO memory 2802 has more than 1 entry after the discard operation, and since no more entries are being written, the priority FIFO memory 2800 moves to the TWO_OR_MORE operating state.

data_out_reg_valid & ˜fifo_empty. Both the data_out_reg (register 2806) and the FIFO memory 2802 have entries, so the priority FIFO memory 2800 moves to the TWO_OR_MORE operating state.

Scenario 4: WR_NEW_ENTRY→ONE_ENTRY. In this scenario, the following conditions are observed.

˜data_out_reg_valid & fifo_empty. The FIFO memory 2802 became empty and the data_out_reg (register 2806) has no data. Since data_in is being written as wr is asserted, the priority FIFO memory 2800 will have 1 entry that will be written to the data_out_reg (register 2806) directly via the bypass functionality. Accordingly, the priority FIFO memory 2800 moves to the ONE_ENTRY operating state.

Scenario 5: WR_NEW_ENTRY→TWO_OR_MORE. In this scenario, the following conditions are observed.

˜data_out_reg_valid & ˜fifo_empty. No data is stored in the data_out_reg (register 2806), the FIFO memory 2802 has at least 1 entry, and a new wr will add another entry. The top entry of the FIFO memory 2802 is moved to the data_out_reg (register 2806), as the data_out_reg is empty, and data_in will be written to the FIFO memory 2802. The priority FIFO memory 2800 will have at least two entries and moves to the TWO_OR_MORE operating state.

data_out_reg_valid. The data_out_reg (register 2806) has valid data and a new data_in entry will be written to the FIFO memory 2802, resulting in at least two entries in the priority FIFO memory 2800. Accordingly, the priority FIFO memory 2800 moves to the TWO_OR_MORE operating state.

Scenario 6: ADD_IS_ENTRY→ONE_ENTRY. In this scenario, the following conditions are observed.

˜data_out_reg_valid & fifo_empty. This is the same as the WR_NEW_ENTRY case. The priority FIFO memory 2800 is currently empty and a new default initial state (IS) will be written to the data_out_reg (register 2806). Accordingly, the priority FIFO memory 2800 ends up with 1 entry and moves to the ONE_ENTRY operating state.

Scenario 7: ADD_IS_ENTRY→TWO_OR_MORE. In this scenario, the following conditions are observed.

˜data_out_reg_valid & ˜fifo_empty. The data_out_reg (register 2806) is empty but the FIFO memory 2802 is not. An entry from the FIFO memory 2802 is moved to the data_out_reg (register 2806) and an initial state (IS) entry is written to the FIFO memory 2802 by asserting the signal fifo_add_is_flag. Accordingly, the priority FIFO memory 2800 moves to the TWO_OR_MORE operating state.

data_out_reg_valid. The data_out_reg (register 2806) still has valid data and the initial state (IS) is added to the FIFO memory 2802 via assertion of signal fifo_add_is_flag. Since the priority FIFO memory 2800 has at least two entries, the priority FIFO memory 2800 moves to the TWO_OR_MORE operating state.

Scenario 8: TWO_NEW_ENTRIES→TWO_OR_MORE. In this scenario, the following conditions are observed.

Regardless of the current status of the priority FIFO memory 2800, a new wr entry and the initial state (IS) entry are written to the priority FIFO memory 2800. Accordingly, the priority FIFO memory 2800 will always move to the TWO_OR_MORE operating state. The data_in entry is always written first and then the initial state (IS) is written second. The data_in either goes directly to the data_out_reg (register 2806) via the bypass functionality or into the FIFO memory 2802 if the priority FIFO memory 2800 has at least one entry. The initial state (IS) always goes into the FIFO memory 2802.
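The head-pointer arithmetic that underlies these scenarios can be summarized with a small behavioral model in Python. The sketch below abstracts away the data_out register and the two-cycle RECONFIG timing and simply models the priority FIFO as a circular buffer whose head pointer advances on a partial discard (the class and method names are illustrative assumptions):

class PriorityFifoModel:
    # Entries are stored in decreasing priority; discarding advances head.
    def __init__(self, depth=16):
        self.buf = [None] * depth
        self.head = 0
        self.tail = 0

    def write(self, entry):
        self.buf[self.tail % len(self.buf)] = entry
        self.tail += 1

    def partial_discard(self, discard_count, new_entry=None, add_is=False, initial_state=0):
        self.head += discard_count      # head = head + discard_count
        if new_entry is not None:       # corresponds to wr being asserted
            self.write(new_entry)
        if add_is:                      # corresponds to add_is_flag;
            self.write(initial_state)   # the IS entry is always written second

    def entries(self):
        return [self.buf[i % len(self.buf)] for i in range(self.head, self.tail)]

pf = PriorityFifoModel()
pf.write("entry 2")
pf.write("entry 3")
pf.write("entry 4 (match)")             # match state written back
pf.partial_discard(discard_count=2)     # clears entries 2 and 3
print(pf.entries())                     # ['entry 4 (match)']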
FIG. 31 is an example illustration of the discard operation as performed by the priority FIFO memory 2800. In the example, the priority FIFO memory 2800 initially stores 3 entries pertaining to a particular character. The head of the FIFO memory 2802 is shown pointing to entry 1. Entry 1 is read out of the priority FIFO memory 2800 and processed through circuitry of the regular expression engine 1104. For example, the entry 1 is used to perform a lookup in the NRR memory 1102, with the output of the NRR memory 1102, e.g., entry 4, flowing through the decoder circuit 1106 and through the switching circuitry 1120. The entry 4 is written to the priority FIFO memory 2800 as shown. For purposes of illustration, entry 4 is a match state. As discussed, the entries are stored in the priority FIFO memory 2800 in decreasing priority. Accordingly, entry 1 is the highest priority, followed by entry 2, and then entry 3. As the match state (entry 4) is written back to the priority FIFO memory 2800, that state has a higher priority than entries 2 and 3 remaining in the priority FIFO memory 2800. The priority FIFO memory 2800 may be cleared by incrementing the head to point to the match state (entry 4), which effectively clears the lower priority entries 2 and 3 from the priority FIFO memory 2800. The updating of the head for the priority FIFO memory 2800 was previously described.

FIG. 32 illustrates an example method 3200 of implementing a regular expression processing system such as the regular expression processing system 130 of FIG. 1 as adapted for tracking paths and path priorities. The method 3200 may be implemented by a data processing system (system) as described herein in connection with FIG. 15 (e.g., data processing system 1500). In block 3202, the system generates an NFA graph 114 from a regular expression. In block 3204, the system transforms the NFA graph 114 into an HFA graph 118. The HFA graph 118 has nodes with edges. The HFA graph 118, for any particular character, has at most one self-edge and at most one outbound edge. In block 3206, the system generates, from the HFA graph 118, an instruction table 2200 including state transition instructions. The instruction table 2200 includes an epsilon sub-table configured to specify epsilon edges of the HFA graph 118. In block 3208, the system searches a data stream for matches specified by the regular expression using a regular expression processing system 130 implemented in hardware by, at least in part, decoding the state transition instructions of the instruction table 2200 and selectively decoding the state transition instructions of the epsilon sub-table.

The foregoing and other implementations can each optionally include one or more of the following features, alone or in combination. Some example implementations include all the following features in combination. In one aspect, the state transition instructions specify a next state and a flag indicating that an outbound edge of a state is being processed or that both an outbound edge of the state and a self-edge of the state are being processed. In another aspect, the state transition instructions specify a flag, wherein the flag specifies whether the epsilon sub-table is used for decoding. In another aspect, the instruction table includes an address portion formed of a received character and a state. In another aspect, the edges of the nodes of the HFA graph are ordered to indicate path priority. In another aspect, each state transition instruction of the epsilon sub-table has a same state associated therewith.
Each state transition instruction of the epsilon sub-table also may be ordered according to path priority. In another aspect, the method includes sorting edges of the HFA graph into categories including lazy self-edges, outbound edges, and greedy self-edges.

FIG. 33 illustrates an example method 3300 of certain operative features of a regular expression processing system (system) such as the regular expression processing system 130 of FIG. 1 adapted for tracking paths and path priority. In block 3302, the system receives a data stream. The system may be implemented in an IC 126. The system may be programmed with an instruction table 2200 including state transition instructions and an epsilon sub-table configured to specify epsilon edges. In block 3304, the system searches the data stream for matches specified by the regular expression using the regular expression processing system 130, at least in part, by decoding the state transition instructions of the instruction table 2200 and selectively decoding the state transition instructions of the epsilon sub-table.

The foregoing and other implementations can each optionally include one or more of the following features, alone or in combination. Some example implementations include all the following features in combination. In one aspect, the method includes tracking a plurality of active paths for the regular expression and a priority for each active path while searching the data stream for the matches by, at least in part, ordering entries within one or more priority FIFO memories 2800 of the regular expression processing system 130 in decreasing order of priority. In another aspect, the method includes, in response to detecting a match state for a selected active path of the plurality of active paths, discarding a selected number of entries of lower priority than the priority of the match state from at least one of the priority FIFO memories 2800. In another aspect, the selected number of entries are discarded by, at least in part, incrementing a head pointer of the priority FIFO memory 2800 by the selected number of entries.

In one or more example implementations, a system includes a multi-port RAM, e.g., NRR memory 1102, configured to store an instruction table 2200, wherein the instruction table 2200 specifies a regular expression for application to a data stream. The system includes a regular expression engine (e.g., regular expression engine 1104 and/or regular expression engine 4150 described herein below) configured to process the data stream by tracking active paths for the regular expression and a priority of each active path while processing the data stream by, at least in part, storing entries corresponding to active states in a plurality of priority FIFO memories 2800 in decreasing priority order.

The foregoing and other implementations can each optionally include one or more of the following features, alone or in combination. Some example implementations include all the following features in combination. In one aspect, the regular expression engine 1104, 4150 includes a decoder circuit configured to determine validity of active states output from the multi-port RAM. The plurality of priority FIFO memories operate concurrently, wherein each priority FIFO memory is configured to initiate a read from a different port of the multi-port RAM using an address formed of an active state output from the priority FIFO memory and a portion of the data stream.
The regular expression engine 1104, 4150 includes switching circuitry configured to selectively route the active states from the decoder circuit to the plurality of priority FIFO memories according to the priority order. In another aspect, in response to detecting a match state for a selected active path, the at least one of the plurality of priority FIFO memories continues storing the match state therein until each higher priority path has failed. In another aspect, in response to detecting a match state for a selected active path, at least one of the priority FIFO memories 2800 is configured to discard each entry having a lower priority than the priority of the match state. In another aspect, the entries are discarded by incrementing a head pointer of the priority FIFO memory 2800 by a selected number of entries. In another aspect, the entries discarded from the at least one of the plurality of priority FIFO memories 2800 are contiguous entries and are discarded from a top of the at least one of the plurality of priority FIFO memories 2800. In another aspect, the at least one of the plurality of priority FIFO memories 2800, in response to detecting the match state, enters a configuration operating state in which output of the at least one of the plurality of priority FIFO memories 2800 is invalid for a plurality of clock cycles. In another aspect, in response to detecting the match state, the at least one of the plurality of priority FIFO memories 2800 is configured to write at least one of a new entry corresponding to an initial state or a new entry corresponding to a new active state. In another aspect, in response to detecting a match state for a selected active path, at least one of the priority FIFO memories is configured to discard a selected number of entries having a lower priority than the priority of the match state. Each priority FIFO memory 2800 includes a FIFO memory 2802 having a data input port coupled to a data input signal, a switch 2804 coupled to the data input signal and an output of the FIFO memory 2802, and a register 2806 coupled to an output of the switch 2804, wherein an output of the register 2806 is an output of the priority FIFO memory 2800. In another aspect, the priority FIFO memory 2800 is configured to discard the selected number of entries by clearing contents of the register 2806, decrementing the selected number of entries by one, and incrementing a head pointer of the FIFO memory 2802 by the decremented number of entries. In another aspect, each priority FIFO memory 2800 operates according to a state machine (e.g., FIGS. 29, 30) including an empty state, a one entry state, a two or more entries state, and a reconfiguration state, wherein the reconfiguration state includes a plurality of substates indicating a number of entries to be made during the reconfiguration state.

The regular expression language supports "capture groups" using the round bracket operators "(" and ")". A capture output is a portion of an input string being processed that matches a portion of the regular expression, referred to as the "capture group", that is contained in the round bracket operator(s). Any non-overlapping portion of the input string that matches an expression inside the round brackets (e.g., the capture group) qualifies as a capture output for the capture group. For purposes of illustration, consider the regular expression "a(.*)b" given an input string of "sdwefkafsdkwebewefjaefjafejb".
In this example, the capture group is ".*" and the capture output is "fsdkwebewefjaefjafej", which corresponds to the characters received between the first occurrence of "a" and the second (i.e., last) occurrence of "b". The capture output, in terms of the received data stream or string, may be referred to by way of the offset 7-27, where the first character of the capture output, "f", has a starting position of 7 when starting from the first character having a starting position of 0. In specifying the last character of the capture output, "j", the end position is specified as the location of the character+1, which is 27 in this example. The capture output is assigned a capture group identifier (ID), referred to as a "group identifier" or a "group ID", of 0 since there is only one capture group in the regular expression.

In another example, consider the regular expression "a(.*?)b" given the same input string "sdwefkafsdkwebewefjaefjafejb". In this example, the capture group is ".*?" and the capture output is "fsdkwe" corresponding to group ID 0, offset 7-13 and "efjafej" corresponding to group ID 0 and offset 20-27. Again, though there are two capture outputs, e.g., two instances of the capture group found in the input string, there is a single group ID deriving from the single capture group in the regular expression. As may be observed, the number of instances of each capture group in a given input string is not known ahead of time.

In another example, consider the regular expression "(abcd)|(ab)|(cef)", which includes the "OR" operator. In this example, since 3 different capture groups are specified, there are group IDs of 0 corresponding to the capture group "abcd", 1 corresponding to the capture group "ab", and 2 corresponding to the capture group "cef". Given an input string of "ejabcefheabcder", the capture output should be "ab" (group 1, offset 2-4), "cef" (group 2, offset 4-7), and "abcd" (group 0, offset 9-13).

In many CPU-based regular expression processing systems, generating correct output for certain regular expressions, e.g., those including the "OR" operator, may require backtracking on input data or multiple passes over the input data. Referring to the "(abcd)|(ab)|(cef)" example, the first capture output "ab" cannot be resolved in response to receiving the "b" character since the "c" character following the "b" character may be part of group 0 or the start of group 2. In this example, only when group 0 fails can it be determined that the "c" is the start of group 2. Within CPU-based regular expression processing systems that perform capture, this type of processing relies on either backtracking or multiple passes on input data. In a non-CPU hardware implementation, both backtracking and multiple passes require significant hardware resources since data needs to be maintained as valid in buffers for longer periods of time and must be read multiple times.

In accordance with the inventive arrangements described within this disclosure, one or more example implementations are provided that are capable of concisely expressing capture rules on an NFA graph. The NFA graphs, with the capture rules annotated thereto, may be compiled and implemented in hardware, e.g., an IC. The hardware implementation provides parallel processing while consuming fewer resources than other conventional hardware-based regular expression processing systems capable of performing capture operations. A regular expression may be converted into an NFA graph as previously described herein.
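The offset convention above (a start position, and an end position one past the last captured character) matches the span convention of common software regular expression engines, so the examples can be cross-checked with Python's re module (for illustration only; the hardware does not execute re):

import re

s = "sdwefkafsdkwebewefjaefjafejb"

m = re.search(r"a(.*)b", s)            # greedy capture group
print(m.group(1), m.span(1))           # fsdkwebewefjaefjafej (7, 27)

for m in re.finditer(r"a(.*?)b", s):   # lazy capture group
    print(m.group(1), m.span(1))       # fsdkwe (7, 13), then efjafej (20, 27)

# Alternation example: group IDs follow the left-to-right capture groups.
for m in re.finditer(r"(abcd)|(ab)|(cef)", "ejabcefheabcder"):
    group_id = m.lastindex - 1         # 0-based group ID as used above
    print(group_id, m.group(m.lastindex), m.span(m.lastindex))
# 1 ab (2, 4)
# 2 cef (4, 7)
# 0 abcd (9, 13)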
The regular expression compiler100is capable of implementing capture functionality in the resulting hardware by augmenting, or annotating, the NFA graph114used for determining matches with additional information that may be used by the hardware to implement capture groups. This information may be carried forward by the regular expression compiler100to the HFA graph118. For example, states of the HFA graph118may be annotated with “capture commands”. As the capture-enabled hardware described herein processes, or enters, a marked state of the HFA graph118, as implemented by an instruction table2200and a corresponding capture table to be described herein, the hardware is capable of decoding, or executing, the capture command for that state. The hardware is capable of maintaining one capture register per active path. If, for example, a capture group is encountered on that path, then, in response to determining a match completed successfully, the register contains position information for the capture output. The position information may include a start position, an end position, and a capture ID. In cases where the hardware encounters no capture commands for a given active path, the capture register values remain in a default state indicating a lack of, or no, capture group for the active state. Table 1 below illustrates example capture commands that may be added to states of the NFA graph114and the HFA graph118. The “effect when executed” in column 2 specifies the actions taken by a decoder circuit implemented in a capture engine portion of the hardware to be described herein in greater detail below. The decoder circuit may include a capture register that may be loaded with an offset entry. The decoder circuit acts on the contents of the capture register (e.g., an offset entry) by executing the capture commands. Each offset entry, to be described in greater detail hereinbelow, specifies a start position, an end position, and a group ID.

TABLE 1

Capture Command    Effect when Executed
Reset (R)          SP = EP = Current, ID = N/A
Add (+)            EP = Current, Set ID
Shift (-->)        SP = EP, EP = Current, Set ID

In Table 1, “SP” stands for the “start position”, “EP” stands for the “end position”, and “ID” represents the group ID of the capture register. “Current” is the position of the current input character being processed within the input string. Though not specified in Table 1, a null or “blank” capture command may be specified that results in the contents of the capture register being left intact or unmodified. Referring to Table 1, the reset command is used at the beginning of a capture group or for multiple capture groups started simultaneously. The reset command sets each of the start position and the end position to that of the current input character. The ID at this point is not determined. By setting the end position to equal the current input character, the add command extends the capture output, or range, to include the current input character without disturbing or changing the start position. The add command may be encountered one or more times on a given path. The ID is set to that of the capture group being completed and/or extended. The shift command moves the start position to the end position and then moves the end position to the current input character. The shift command is used only in cases of restarting capture groups. The shift command also sets the ID to that of the capture group being completed.
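For purposes of illustration only, the semantics of Table 1 may be modeled in software as shown in the Python sketch below. The sketch is an explanatory aid; the class and method names are arbitrary and do not correspond to elements of the hardware described herein.

class CaptureRegister:
    """Software model of one capture register maintained per active path."""

    def __init__(self):
        self.sp = 0           # start position
        self.ep = 0           # end position
        self.group_id = None  # None models the undefined ("NoID") state

    def reset(self, current):
        # Reset (R): SP = EP = Current; the ID remains undetermined
        self.sp = self.ep = current
        self.group_id = None

    def add(self, current, group_id):
        # Add (+): EP = Current, set ID; the start position is not disturbed
        self.ep = current
        self.group_id = group_id

    def shift(self, current, group_id):
        # Shift (-->): SP = EP, EP = Current, set ID; used only when a
        # capture group is restarted
        self.sp = self.ep
        self.ep = current
        self.group_id = group_id

Applying reset(1) followed by add(3, 0) to such a register, for example, reproduces the walkthrough ofFIG.34described below (SP=1, EP=3, ID=0).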
FIGS.34-40illustrate examples of HFA graphs annotated with capture commands as generated by the regular expression compiler100described herein. FIG.34illustrates an example of an HFA graph for the regular expression “a(bc)d” having one capture group. In the example, the HFA graph is annotated with capture commands. As shown, state 1 is annotated with a reset command and state 3 is annotated with an add command. For purposes of illustration, consider the input string “abcd” as applied to the HFA graph ofFIG.34. When state 1 is reached after receiving an “a”, the current input position is 1 having matched “a” at position 0. At state 1, the hardware executes the reset command, which sets the start position SP and the end position EP both to 1. After matching characters “b” and “c”, the hardware is on state 3 at input position 3. The add command is executed to set the EP to 3 and the ID to 0, leaving the SP at 1. FIG.35illustrates an example of an HFA graph for the regular expression “a((?:bc)*)d” having one capture group. The HFA graph is annotated with capture commands. The example ofFIG.35demonstrates the repeated application of the add command. The “(?:)” operator, per the regular expression language, has no functionality beyond grouping and is not a capture group in this example. This regular expression matches an “a” followed by 0 or more “bc” followed by “d”. For purposes of illustration, the hardware may be provided with the input string “abcbcd”. At state 1, by execution of the reset command, SP and EP are set to 1, which is the position of the first “b”. After processing input characters “b” and “c”, the hardware is at state 4, where through execution of the add command, the EP is set to 3 and the ID to 0. Another “bc” input sends the engine back through states 2 and 4 again. Arriving at state 4 again and through a second execution of the add command, EP is set to 5, which adds to the range of the capture output. With input “d”, the hardware reaches state 5, which is a match state. In response to reaching the match state, the hardware has determined position information specifying the capture output with reference to the received string as SP=1, EP=5, and ID=0. The hardware may parse the input string using the position information to provide the capture output. Referring again to the example ofFIG.35, if a different input string of “ad” is received, the hardware does not encounter the add command at state 4. Instead, at state 1, through execution of the reset command, SP and EP are set to 1 as before. The match completes, however, at state 3. In this example, the final state for the capture group is SP=1, EP=1, with the ID undefined. A host application in communication with the hardware is configured to interpret this state, e.g., where EP-SP=0, as an empty capture where no ID is used. FIG.36illustrates an example of an HFA graph for the regular expression “a(bc)*d” having one capture group. The HFA graph is annotated with capture commands. The example ofFIG.36illustrates the shift command. In the example ofFIG.36, the capture group is repeatedly restarted. For example, in the regular expression “a(bc)*d”, the “*” quantifier causes the capture group to repeat. The HFA graph ofFIG.36is the same as the HFA graph ofFIG.35, with the exception that state 4 has a shift command instead of an add command. For the input “abcbcd” provided to the hardware, as before, upon reaching state 1, SP and EP are set to 1 corresponding to the position of the first “b”.
After processing the additional characters “bc”, the hardware is at state 4 where the shift command sets SP to the current value of EP, which is 1, sets EP to 3, and sets the ID to 0. After the second “bc”, the hardware is again at state 4 where the shift command sets SP to the current value of EP, which is 3, sets EP to 5, and maintains the ID at 0. In executing the shift command the second time, the capture of the first “bc” has been replaced with the position information for the second “bc” in the input string. The final “d” input ends the match successfully at state 5. The position information determined for the capture group is SP=3, EP=5, and ID=0. FIG.37illustrates an example of an HFA graph for the regular expression “.*?(?:(a)b|(c)d|(e))f”. The regular expression “.*?(?:(a)b|(c)d|(e))f” includes 3 capture groups. The HFA graph is annotated with capture commands. In accordance with the inventive arrangements described herein, the ID for a capture group is not assigned until the end of the capture.FIG.37illustrates the practicality of this technique. For an input of “cd”, the matching path through the HFA graph is state 0 to state 2, to state 4. At state 0, through execution of the reset command, SP and EP are set to 0 and the ID remains unspecified. At state 2, through execution of the add command, EP is set to 1 and ID is set to 1 for the second capture group. Within the figures, the superscripted numbers following the commands (e.g., following the “+” in this example) indicate the ID to be set. In the example ofFIG.37, all three capture groups begin at state 0. Also, state 0 starts a loop for matching any character. This loop, or self-edge, at the start of the regular expression is typical for regular expressions that do “partial matching”. Partial matching refers to a matching process that skips over irrelevant input characters until the start of a desired pattern is found. Leaving the group ID ambiguous until the end of the capture allows the start state to be shared among the capture groups. Without this approach, the state would need to be split with epsilons. Because epsilon paths require more resources to implement in hardware, and because every input character will cause multiple epsilon paths to be executed, sharing the capture start state improves the efficiency of the search and resulting hardware implementation. The following description details adaptations to elements of the regular expression compiler100to support capture groups. The adaptations may be applied to the parser108, the NFA builder112, and the HFA builder116to generate an HFA graph118annotated with the capture commands as described in connection withFIGS.34-37. Further, the NFA graph114and HFA graph118, both being implemented as data structures, may be adapted so that states and/or edges may specify the capture information. With respect to states and edges of the NFA graph114and HFA graph118, a capture tuple is added. The capture tuple includes a capture command and a group ID. The capture command may be one of the 3 capture commands of Table 1 or left blank to indicate no capture command. The group ID may be “NoID” to indicate an unspecified ID. In one aspect, the parser108is adapted to generate a capture group operator. The capture group operator is used to indicate, to the NFA builder112, which syntax nodes110(e.g., NFA fragments) are to be included in a capture group.
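For purposes of illustration only, the capture tuple may be modeled in software as shown below. The Python sketch is an explanatory aid; the identifiers are arbitrary and do not correspond to the compiler implementation itself.

from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class CaptureCommand(Enum):
    BLANK = auto()  # no capture command
    RESET = auto()
    ADD = auto()
    SHIFT = auto()

@dataclass
class CaptureTuple:
    """Capture annotation carried by each state and edge of the graphs."""
    command: CaptureCommand = CaptureCommand.BLANK
    group_id: Optional[int] = None  # None models the "NoID" value

Each state and edge of the NFA and HFA graph data structures would then carry one such tuple, defaulting to a blank command with an unspecified ID.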
In one aspect, the standard shunting-yard algorithm can be modified to produce and process the capture group operator by inserting the capture group operator in the token stream after processing the close parenthesis indicating the end of a capture sub-expression. In an example implementation, the process used by the NFA builder112, which is illustrated in Example 1 above, may be adapted to include the pseudo code from Example 11 below.

Example 11

case CaptureGroup:
    arg = fragStack.pop()
    if arg.startState.hasInboundEdges()
        arg.split()
    arg.startState.capture = {Reset, NoID}
    id = getUnusedCaptureId()
    for each edge in arg.endEdges
        edge.capture = {Add, id}
        if edge.isPlaceholder() AND edge.from.capture.command == Blank
            edge.from.capture = edge.capture
    fragStack.push(arg)

Referring to Example 11, the NFA builder112is capable of operating on a fragment by setting the start state of the fragment to contain a reset command and all fragment edges to an add command with the ID set to a unique integer. The NFA builder112may begin by splitting, via a split operation, the fragment start state if the fragment start state has any inbound edges. Inbound edges indicate a loop of some sort as may be produced by a quantifier such as, for example, “*”, “+”, etc. The split operation is used because the capture group should be started only the first time the loop is entered (as the entire loop is enclosed in a capture group). A separate state is needed to start the capture group as being distinct from a loop return point, hence the split operation. The new start state serves as the entry point into the capture group with the reset command. The original state is the loop return point. Next, the regular expression compiler100marks each edge with the add command and the ID (e.g., group ID) for the capture group. The group ID is assigned the next available integer starting from 0. The capture information annotated on edges does not propagate to the hardware directly. Rather, the capture information on edges propagates to states during fragment building as performed by the NFA builder112. In terms of capture information propagation for edges, as a special case, the regular expression compiler100immediately propagates capture information on placeholder edges to the source state of the placeholder edge if the source state does not already have a capture command. The regular expression compiler100then pushes the capture-marked fragment back onto the fragment stack. The patch operation may be updated to support capture. With the ability to annotate the edges of a fragment with one or more capture group ends (e.g., add commands), during the patching operation, the NFA builder112may need to split the target state in multiple ways so that each copy can be assigned a separate group ID. FIG.38illustrates an example of a patch operation as performed by the NFA builder112supporting capture. In the example, the NFA builder112is building the expression “(?:(a)|(b)|c)d”, which includes 2 capture groups corresponding to “a” and “b”. The regular expression also has a non-capture path “c”. Each of the paths leads to the same target node “d”. In this example, to propagate the capture information, the target node “d” must be split three ways to accommodate all path endings separately. In the example, for hardware efficiency, the “c” path creates an empty capture. In another aspect, the regular expression compiler100may split the target state if the target state has an inbound edge.
An example where the target state includes an inbound edge is for the regular expression “(?:(a)|(b)|c)d*e”. The patch examples above, includingFIG.38, illustrate non-loop cases wherein the patch to the target state is in the “forward” direction from the fragment to a state not in that fragment. FIG.39illustrates an example of a loop case with the target state within the fragment. The example ofFIG.39illustrates the loop case in building the fragment corresponding to the regular expression “(?:(a)|(b)|c)*”. The original goal of the star operator was to add a placeholder edge to the fragment start state (dashed line state) and loop all the fragment edges shown in bold on the left to the start state. In the example ofFIG.39, since there are three different edge types (ID=0, ID=1, ID=none), the regular expression compiler100creates three copies of the target state 0. Each copy has duplicates of edges “a”, “b”, and “c”. All 9 edges are looped back to the now-duplicated start state in the graph on the right so that each edge connects to the state whose group ID matches the edge's group ID. The state 0″ is the fragment start and the placeholder edges, which have also been duplicated, are the exit from the loop. Example 12 provides pseudo code illustrating the patch operation supporting loop and non-loop target splitting that may be performed by the regular expression compiler100.

Example 12

patch(edges, targetState, isLoop)
    capEndDestStates = {}
    normDestState = targetState
    if isLoop
        foreach id in getIds(edges)
            if id == NoCaptureId
                continue
            capEndDestStates[id] = normDestState
            normDestState = normDestState.split()
    else
        skipNormalState = !targetState.hasInboundNonEndEdge() AND !hasNonCapEdge(edges)
        curDestState = targetState
        foreach id in getIds(edges)
            if id == NoCaptureId
                continue
            if skipNormalState
                capEndDestStates[id] = curDestState
                skipNormalState = false
            else
                capEndDestStates[id] = curDestState.split()
                curDestState = capEndDestStates[id]
    capEndDestStates[NoCaptureId] = normDestState
    foreach e in edges
        if !e.isPlaceholder()
            e.to = capEndDestStates[getId(e)]
            capEndDestStates[getId(e)].setCapture(e, isLoop)
    foreach e in edges
        if e.isPlaceholder()
            targetState = capEndDestStates[getId(e)]
            if targetState.hasInboundEdges()
                targetState = targetState.split()
            e.from.mergeCaptureInfo(targetState)
            e.from.edges += targetState.edges.clone()
            capEndDestStates[getId(e)].setCapture(e, isLoop)
            edges.remove(e)
            if (targetState.isMatch)
                e.from.isMatch = true

In Example 12, an “isLoop” argument is added. The argument is set to true when called from a loop operator (e.g., star, plus) and set to false for straight-line patches (e.g., concatenation, OR). The argument determines how the target state is duplicated into multiple target states and stored in capEndDestStates, a map from group ID to target state copy. For loop patches, the regular expression compiler100uses the newest state of the multiple splits to serve as the target for “normal” (non-capture end) edges. The split operation of Example 3 produced two states with identical outbound edges, but was not symmetric in that for a state with inbound edges, the new state created by the split operation had no inbound edges while the original (old) state retained the inbound edges. As such, the new state served as the first entry to the loop while the old state served as the return point for another round of the loop. Returning to Example 12, the regular expression compiler100is capable of iterating through each of the unique group IDs found in the “edges” set.
The normal edge case is skipped as the non-capture edges are handled outside the loop. Each time a new target state is needed for a unique group ID, the targetState is split, the old state is assigned to the group ID, and the new state is assigned to the normDestState pointer. Accordingly, the new state generated by a split operation is split repeatedly until no more states are needed, at which point normDestState points to the newest of all split states and is assigned to the “no capture ID” case. For non-loop patch operations, the oldest of the split states will become the “no capture ID” case, if there is one (i.e., if not skipped). A “no capture ID” targetState copy is needed if either (1) the targetState has any inbound edges without capture ends (e.g., the state would need to remain free of capture commands), or (2) there are “no capture ID” edges amongst the “edges” set. If neither of these conditions is true, the regular expression compiler100may skip creating a target state for the “no capture ID” case. In that case, the state that would have been created for that case can be assigned to a different group ID. The regular expression compiler100iterates over the set of capture IDs found among “edges” as in the loop patch case, splitting targetState as needed. The “no capture ID” case is skipped as that case is handled after the loop. In skipping the “no capture ID” case, the regular expression compiler100can assign the original targetState to the first capture ID. Otherwise, the regular expression compiler100splits the state, assigning the new state to the group ID and preparing to split that new state if another target state is needed the next time through the loop. After the set of target states is produced for each group ID, the regular expression compiler100connects the edges to the states of the set according to group ID. The regular expression compiler100makes the connections by first connecting the non-placeholder edges and then the placeholder edges. During edge connection, the regular expression compiler100transfers the capture information stored on the edge to the destination state as represented by the setCapture(edge, isLoop) function. Table 2 shows the result of applying each possibility of edge capture information to a state already containing each possibility of state capture information. The result replaces the state's capture information.

TABLE 2

                                     State
Edge     Blank                 Reset                        Add      Shift
Blank    Blank                 Reset                        Error    Error
Add      Add, with edge's IDs  If loop: Shift; Else: Error  Add      Shift

In the case of placeholder edges, as targetState is being merged together with the edge's source state (e.from), the regular expression compiler100blends the capture info of targetState and edge source state. Table 3 illustrates the result of such blending, which is applied to the edge's source state. In the example of Table 3, “original” refers to the state receiving the new capture information (the edge's source), while “incoming” refers to the state contributing new capture information (targetState).

TABLE 3

                         Original
Incoming    Blank    Reset     Add       Shift
Blank       Blank    Reset     Error     Error
Reset       Reset    Reset*    Reset*    Reset*
Add         Add      Add**     Add**     Add**
Shift       Shift    Shift**   Shift**   Shift**

In the example of Table 3, “*” indicates the capture command may override the previous group ID in the same path since the hardware supports only one capture per path. The “**” indicates that the result is performed only if the group ID is the same for both states, otherwise an error is generated.
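For purposes of illustration only, the combination rules of Table 2 may be expressed as a simple lookup, as in the Python sketch below. The sketch assumes the orientation shown above (keys ordered as the edge's command followed by the state's existing command); the identifiers are arbitrary and the sketch is explanatory only.

# Result of transferring an edge's capture command onto a destination
# state, mirroring Table 2. "LOOP_SHIFT" marks the cell that yields
# Shift for loop patches and an error otherwise.
TABLE_2 = {
    ("Blank", "Blank"): "Blank",
    ("Blank", "Reset"): "Reset",
    ("Blank", "Add"):   "Error",
    ("Blank", "Shift"): "Error",
    ("Add",   "Blank"): "Add",         # carries the edge's IDs
    ("Add",   "Reset"): "LOOP_SHIFT",  # If loop: Shift; Else: Error
    ("Add",   "Add"):   "Add",
    ("Add",   "Shift"): "Shift",
}

def set_capture(edge_command, state_command, is_loop):
    """Return the state's new capture command per Table 2."""
    result = TABLE_2[(edge_command, state_command)]
    if result == "LOOP_SHIFT":
        result = "Shift" if is_loop else "Error"
    return result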
The HFA builder116is capable of supporting capture groups and may be adjusted with respect to generation of new HFA states given a list of NFA states. Similar to the case with the match-or-continue scenario, an HFA state may need to be split into a complex involving a master state, one or more normal substates, and zero or one pure match states. For capture groups, each unique group ID (including the “no group ID” case) will need one HFA state. If any NFA state is a match state, a pure match state is needed. If the final count of all these states is greater than one, the HFA state will be created as a complex. FIG.40illustrates an example of an HFA generated by HFA builder116. The example ofFIG.40corresponds to the regular expression “a(?:(b)c|(b)d|be|b)”, which includes two capture groups, a “no group ID” case, and a match state. In the example, the regular expression contains four alternatives each having a common prefix “b”. Since each alternative has a different ending after the common portion, a 4-way state complex is needed for which each alternative has a dedicated complex sub-state (states 3 through 6). The first two alternatives, “(b)c” and “(b)d”, have capture groups which are assigned group IDs 0 and 1, respectively. Accordingly, the states 3 and 4 have capture add commands for their IDs. The third alternative “be” has no capture group and, as such, state 5 has no capture command. The final alternative “b” is a prefix of the others, meaning that alternative “b” is a match-or-continue scenario. That is, if the “b” alternative matches, the regular expression completes at that point. Otherwise, the regular expression continues to match additional characters for other alternatives. FIG.41illustrates another example architecture for a regular expression processing system130. In the example ofFIG.41, the regular expression processing system130implements two distinct data paths that operate in parallel and in coordination with one another. The first data path is implemented by one or more regular expression engines4150. Each regular expression engine4150may be implemented substantially as previously described in connection withFIGS.11and14. In the example ofFIG.41, however, the active states memories are replaced with the priority FIFO memories2800as described in connection withFIG.28. Other updates to the regular expression engines4150are described in connection withFIG.42. The second data path is implemented by one or more capture engines4104. The capture engines4104are described in greater detail in connection withFIG.43. In the example, the regular expression engine4150operates as a master to capture engine4104in that one or more control signals are generated by regular expression engine4150and provided to capture engine4104. The regular expression engine4150may operate as previously described albeit with the priority FIFO memories2800to support priority tracking. Capture engine4104provides support for implementing capture. In the example, controller1130is capable of providing control signals4106to regular expression engine4150and providing control signals4108to capture engine4104to control, at least in part, operation of each respective engine. Further, regular expression engine4150is capable of providing control signals4110to capture engine4104. In one aspect, control signals4110may be output from decoder1106of regular expression engine4150.
The control signals4110, for example, may be used to control operation of certain switching circuitry (e.g., multiplexers) implemented within capture engine4104so that certain aspects of operation of regular expression engine4150and capture engine4104are synchronized. As pictured, a capture rule register (CRR) memory4102is included. CRR memory4102is coupled to capture engine4104. CRR memory4102may store a capture table therein that is used to drive operation of capture engine4104. An example of a capture table is illustrated in Example 13 below. In one aspect, instructions may be looked up from CRR memory4102using one or more states determined by regular expression engine4150that are output to both CRR memory4102and NRR memory1102. While, in general, NRR memory1102may receive input characters along with states to be used as addresses, CRR memory4102need only receive states to be used as addresses to perform lookup (e.g., read) operations. In the example, output from regular expression engine4150may be provided to controller1130via signals4114. Controller1130, for example, is capable of detecting whether any received outputs from regular expression engine4150are match states and/or end of string conditions. Similarly, capture engine4104is capable of providing output to controller1130by way of signals4116. Whereas the outputs of regular expression engine4150are states, the output of capture engine4104is position information corresponding to the states output from regular expression engine4150. The position information may specify the location of capture output within the input string being processed by regular expression engine4150in the case of a match condition. As previously discussed, the position information may specify a start position, an end position, and a group ID for each of a plurality of different captures. Example 13 illustrates an example of a capture table that may be stored in CRR memory4102. In Example 13, the capture table may be generated with, or as part of (e.g., an extension of), the instruction table2200ofFIG.22by the NRR generator120ofFIG.1. The capture table illustrated in Example 13 is for the example HFA graph ofFIG.34.

Example 13 (Capture Table)

Address    Instruction
(State)    Group Identifier    Capture Command
0          0x3 (Ignored)       0 (No Command)
1          0x3 (Ignored)       0xA (Reset)
2          0x3 (Ignored)       0 (No Command)
3          0x00 (ID = 0)       0x9 (Add)
4          0x3 (Ignored)       0 (No Command)

The capture table of Example 13 includes a plurality of capture entries. Each capture entry includes an address portion and an instruction portion. The address portion, or index, is the state number alone, unlikeFIG.10, which uses {input character, state number}. The instruction portion is formed of two fields: a group ID and a capture command. In accordance with Example 13, the capture commands may be encoded as follows:

Blank: 0x0
Reset: 0xA
Add: 0x9
Shift: 0xB

It should be appreciated that the capture commands may be encoded using other techniques and the examples provided are for purposes of illustration only. The group ID may be specified as a 2-bit value. In the example, for any situation in which the capture ID is not needed, the NRR Generator120sets the group ID to the maximum value, which is 0x3 in Example 13. It should be appreciated that any value may be designated as an “ignore” value and the use of the maximum value is for purposes of illustration. The group ID is used for the Add capture command and the Shift capture command, but not for “Blank” or the Reset capture command.
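For purposes of illustration only, a lookup against the capture table of Example 13, together with the command semantics of Table 1, may be modeled in software as follows. The Python sketch assumes the encodings given above; the function and variable names are arbitrary and do not form part of the hardware.

# Command encodings of Example 13.
BLANK, ADD, RESET, SHIFT = 0x0, 0x9, 0xA, 0xB
IGNORED = 0x3  # group ID value designated as "ignore"

# Capture table of Example 13, keyed by state number:
# state -> (group identifier, capture command).
CAPTURE_TABLE = {
    0: (IGNORED, BLANK),
    1: (IGNORED, RESET),
    2: (IGNORED, BLANK),
    3: (0x0, ADD),
    4: (IGNORED, BLANK),
}

def apply_capture_instruction(entry, state, current):
    """Update an offset entry (sp, ep, group_id) for the given state and
    current input position, per the command semantics of Table 1."""
    sp, ep, gid = entry
    group_id, command = CAPTURE_TABLE[state]
    if command == RESET:
        return (current, current, None)  # ID left undetermined
    if command == ADD:
        return (sp, current, group_id)
    if command == SHIFT:
        return (ep, current, group_id)
    return entry  # Blank: entry left unmodified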
Referring to the example ofFIG.34, one can see that states 0, 2, and 4 do not have capture commands. Correspondingly, the rows in the table for states 0, 2, and 4 have a “Blank” command and an arbitrary value for the group ID, which is ignored. State 1 inFIG.34has a Reset command. Correspondingly, the row corresponding to state 1 in Example 13 has a Reset command and an arbitrary value for the group ID, which is ignored. State 3 inFIG.34has an Add command. Correspondingly, the row for state 3 in Example 13 has an Add command and the group ID is set to 0. FIG.42illustrates an example implementation of the regular expression engine4150ofFIG.41. In the example ofFIGS.41and42, the active states memories are replaced with the priority FIFO memories2800ofFIG.28, though the priority FIFO memories2800still store active states. The regular expression engine4150is also updated to include additional switching circuitry4202and4204. Further, a plurality of registers4206(e.g., three) are included that couple the output of decoder1106(e.g., the next state 0 and next state 1) to switching circuitry4204to implement the epsilon operating mode. In one aspect, the example circuit architecture ofFIG.42may be used to implement the regular expression processing system130that is capable of tracking path priorities as previously described herein. In cases where capture is not required, for example, regular expression engines4150may be used to replace regular expression engines1104in the examples ofFIGS.11and14to implement a regular expression processing system130capable of tracking paths and path priorities. In the example, as data is output from the instruction table2200ofFIG.22stored in NRR memory1102, the next states, DIFF, and EPS data are output to decoder1106. In the example, the EPS data is used as the control signal to switching circuitry4204. That is, in response to the epsilon flag being set in the instruction table2200, the signal provided to switching circuitry4204(e.g., multiplexers) causes each of switches4204-1and4204-2to pass the output taken directly from decoder1106rather than take output from the priority FIFO memories2800. This implements processing of the epsilon sub-table (e.g., epsilon processing) within instruction table2200. While processing the entries of the epsilon sub-table, regular expression engine4150does not accept a new input character for processing. Rather, the same input character is used along with the output of each of switches4204-1and4204-2to provide addresses addr0 and addr1 to NRR memory1102. In the example ofFIG.42, the eps_active control signals may be provided by decoder1106. For example, while performing epsilon processing, the circular path that is executed is from decoder1106, to registers4206, to switching circuitry4204, to performing a lookup in NRR memory1102, to decoder1106, and so forth. This cycle continues until decoder1106detects the end of the epsilon sub-table, which is when the eps_active flag is de-asserted and switching circuitry4204resumes reading states from priority FIFO memories2800. In the example, to compensate for the path delay when output from decoder1106is passed by switching circuitry4204, three registers may be added (shown as4206) that account for the registers1108,1110, and2806that were bypassed to maintain timing. Within this disclosure, the switching circuitry4204may be referred to as the epsilon (EPS) switching circuitry or multiplexers, while the switching circuitry4202may be referred to as the lazy switching circuitry or multiplexers.
The switching circuitry4202, formed of switches4202-1,4202-2,4202-3, and4202-4(e.g., multiplexers), is capable of implementing path priority processing. That is, the lazy flag used to control each of switches4202is dictated by the state of the DIFF output of instruction table2200from NRR memory1102. Depending on the state of the lazy flag, for example, the previous state or next state is permitted to flow into the priority FIFO memories2800first. In the example ofFIG.42, the lazy_flag control signals may be generated by decoder1106. While operation of the eps_active signal and the lazy_flag signal is generally described, it should be appreciated that each of the “_0” and “_1” versions of the signals operates in the same manner albeit independently of the other to support the concurrent and independent lookup operations supported by the dual-port CRR memory4102and the dual-port NRR memory1102. In processing priorities of active states, priority FIFO memory2800-1is considered of higher priority than priority FIFO memory2800-2. This means that for a given entry number, e.g., entry 1, entry 1 in priority FIFO memory2800-1is of higher priority than entry 1 in priority FIFO memory2800-2. In this regard, load balancing is varied somewhat from the scheme described previously. Still, the restriction that the difference in the number of entries between the two priority FIFO memories2800be 1 or 0 is maintained. Switches4202ensure that the higher priority active state of a set of two active states is provided or routed to priority FIFO memory2800-1, while the lower priority state of the pair is routed to priority FIFO memory2800-2. FIG.43illustrates an example implementation of the capture engine4104ofFIG.41. In the example ofFIG.43, the data path that is implemented is similar to the data path described in connection with regular expression engine4150. In this regard, certain components illustrated inFIG.43function similarly to corresponding components inFIG.42. For example, switching circuitry1120,4202, and4204of the regular expression engine4150corresponds to switching circuitry4320,4302, and4304of the capture engine4104, respectively, and operates similarly thereto. The registers4206correspond to registers4306(e.g., three serial registers). Registers4308,4310, and4314regulate the data path. Register4310, for example, delays a set of data so that the priority FIFO memories2800may each write one value on each clock cycle (e.g., 4 values every 2 clock cycles as previously described). Whereas the regular expression engine4150tracks active states, paths, and priorities to determine whether matches are determined, the capture engine4104tracks position information for the active states. Capture engine4104is further capable of operating in a synchronized manner with regular expression engine4150to perform capture group processing. In the example ofFIG.43, the various control signals such as the eps_active and lazy_flag control signals may be provided from decoder circuit1106of regular expression engine4150(e.g., control signals4110). The control signals provided to switches4320may be provided from controller1130and may implement the same routing as implemented in regular expression engine4150. That is, the position information tracked by capture engine4104for a given active state may be routed to the same priority FIFO memory (e.g., either the −1 or −2 instance) as the corresponding active state in the regular expression engine4150. The CRR memory4102may be implemented as a multi-port RAM as previously described.
In the example ofFIG.43, the CRR memory4102is implemented as a dual-port RAM as described in connection with NRR memory1102. CRR memory4102stores a capture table. In the example ofFIG.43, the switching circuitry4204outputs an active state that is also provided to the CRR memory4102as “state id 0” and “state id 1” specifying states or pointers that may be used to look up instructions. In response to receiving the state information from the regular expression engine4150, CRR memory4102outputs an instruction including a capture command (which may be blank) and a group ID to CRR decoder4306. CRR decoder4306is capable of creating and modifying offset entries. That is, in the example ofFIG.43, the priority FIFO memories2800are used to store offset entries specifying the position information as opposed to active states or state data. Accordingly, the priority FIFO memories2800used in capture engine4104may be referred to as “capture FIFO memories”. Each offset entry may correspond to a particular active state that is stored in the priority FIFO memories2800of regular expression engine4150. The offset entry specifies position information for an active state and, as such, for the capture output that corresponds to the active state. As noted, each offset entry, for example, specifies a start position, an end position, and a group ID. The CRR decoder4306receives offset entries from the priority FIFO memories2800and processes the received offset entries based on the instructions provided from CRR memory4102. For example, for a given offset entry received in the −1 data path, the CRR decoder4306processes the offset entry using the instruction received on the corresponding port of CRR memory4102. The instruction includes the capture command that is decoded for the offset entry. The CRR decoder4306updates the start position, the end position, and/or the group ID of the offset entry in accordance with the capture command. The group ID, for example, may be modified or kept the same (e.g., remain unchanged) based on the capture command from the capture table (e.g., as depicted in Example 13). The capture engine4104is capable of implementing an epsilon operating mode akin to the epsilon operating mode of the regular expression engine4150. For example, in response to the control signal eps_active being set, the epsilon operating mode is activated. The eps_active control signals may be set independently for each of switches4304-1and4304-2due to the dual-port nature of NRR memory1102. In response to the eps_active control signal being set, switch4304-1and/or4304-2outputs the offset entry or entries from register4308via registers4306directly to CRR decoder4306. In the epsilon operating mode in the regular expression engine4150, active states output from the decoder circuit1106are processed without pulling active states from the priority FIFO memories2800. Similarly, in the capture engine4104, offset entries from the CRR decoder4306are processed without having to pull offset entries from the priority FIFO memories2800. The lazy switching circuitry4302implements the priority processing for offset entries as described in connection with the switching circuitry4202of regular expression engine4150. Similarly, switching circuitry4320implements the same load balancing described in connection with the regular expression engine4150.
It should be appreciated that, for example, if a given active state is routed to a particular priority FIFO memory2800of the regular expression engine4150, the offset entry corresponding to, or paired with, the active state will be routed to the same or corresponding FIFO memory2800in capture engine4104. For example, if the active state is routed to the priority FIFO memory2800-1in the regular expression engine4150, the offset entry corresponding to the active state will be routed to the priority FIFO memory2800-1in the capture engine4104. In the epsilon operating mode, while no reads are occurring from the priority FIFO memories2800of either the regular expression engine4150or the capture engine4104, new active states are being generated (along with corresponding offset entries) that are stored in the priority FIFO memories2800of the regular expression engine4150and the capture engine4104. As noted, in the epsilon operating mode, the regular expression engine4150does not consume new characters from the input data stream. Rather, the regular expression engine4150performs sequential lookup operations without consuming an input character to move through the epsilon sub-table of the instruction table2200. In the example ofFIG.43, stages 1, 2, 3, and 4 are shifted relative to the corresponding stages of the regular expression engine4150. That is, similar portions of the data path of the regular expression engine4150are offset with respect to those of capture engine4104to improve overall timing of the regular expression processing system130. Thus, while the regular expression engine4150and the capture engine4104operate in a synchronized manner, the timing of the data path implemented by the capture engine4104may be shifted with respect to the timing of the data path implemented by the regular expression engine4150. One reason for the shift is that the data path for the capture engine4104is somewhat more complex than that of the regular expression engine4150. In the example ofFIG.43, there may be multiple captures in an input string where each is represented by a group ID. The CRR decoder4306is capable of reading capture commands from the CRR memory4102, determining if a new capture group was found, and setting the start and end positions in the offset entry depending on the particular capture command received. The NRR decoder1106indicates whether a previously started capture group was continued or died. As discussed, implementing capture requires a priority mechanism along with the epsilon operating mode that supports spontaneous transitions in the epsilon sub-table. This priority mechanism introduces additional controls in the data paths (e.g., switching circuitry4204and4304). Control signals (eps_active) are added for processing epsilon states, which are stored as a set of epsilon next states in chain fashion as discussed in connection withFIG.22. The epsilon operating mode may be implemented in hardware by continuously performing lookup from the NRR memory1102and from the CRR memory4102without consuming an input character. In the epsilon operating mode, both data paths skip the priority FIFO memories2800since no new states from the priority FIFO memories2800are processed until all of the epsilon states are looked up. The switching circuitry4202and4302is introduced to account for lazy vs. greedy qualifiers in the regular expression being implemented.
The lazy_flag_0 and lazy_flag_1 signals within the regular expression engine4150and the capture engine4104control whether the previous state information (and corresponding offset entry) is written into the priority FIFO memories2800first (e.g., greedy) or the new state is written to the priority FIFO memories2800first (e.g., lazy). Logic to generate the lazy_flag control signals was previously described to implement path priority. The control signals provided to switching circuitry1120and4320may be the same with the exception that the control signals provided to switching circuitry4320may be delayed by one clock cycle relative to the control signals provided to switching circuitry1120. FIG.44illustrates the independent and synchronous data paths for performing match and capture. In the example ofFIG.44, structural details of regular expression engine4150and capture engine4104have been removed to better illustrate certain timing features. The two independent data paths for match and capture facilitate optimization in the hardware implementation that results in improved performance and throughput. The regular expression engine4150generally uses data paths that are narrower than those of the capture engine4104. For example, the data path of regular expression engine4150may be 8 bits, while the data path of the capture engine4104may be 27 bits to store position data. The example ofFIG.44illustrates that the pipeline stages are coordinated between the two data paths allowing a continuous stream of input data so that the regular expression engine4150may serve as the master control. Both data paths are generally split into 4 different stages as previously described. The stages are generally illustrated in the example ofFIG.44. In stage 4, the priority FIFO memories of the regular expression engine4150are read to obtain the address to be used (at least in part) for the NRR memory1102and for the CRR memory4102. In stage 1, both of the NRR memory1102and the CRR memory4102output data after a latency of 1 clock cycle. The capture engine4104reads its priority FIFO memories2800to obtain position information of the active states that were read out of the regular expression engine4150priority FIFO memories2800in the previous clock cycle. The CRR decoder4306receives the instructions from the CRR memory4102and the offset entries to be modified in the same clock cycle. Stages 2 and 3 correspond to the priority FIFO memory2800write preparation. In stages 2 and 3, in the regular expression engine4150, the control signals are generated to select which active states are written to the respective priority FIFO memories2800in the two available clock cycles. The capture engine4104takes 3 clock cycles (e.g., corresponding to registers4310,4314and the register in the priority FIFO memories2800) to write to the priority FIFO memories therein. Since the contents are not needed until stage 1, this is permissible. That is, since the state ids provided to the CRR memory4102are provided from the priority FIFO memories of the regular expression engine4150and not from the priority FIFO memories of the capture engine4104, an extra clock cycle is available to process the data. There is an additional pipeline stage that can be inserted. As shown, the register4314is moved between the multiplexers4320. This facilitates partitioning of the capture engine4104to meet timing in view of the larger amount of circuitry required to support the larger bit widths of the signals.
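For purposes of illustration only, the effect of the lazy flag on the order in which a pair of states is written to the priority FIFO memories may be summarized by the short Python sketch below; the function name and arguments are arbitrary.

def fifo_write_order(previous_state, new_state, lazy_flag):
    """Return the pair of states in write order: for a lazy quantifier the
    new state is written first (higher priority); for a greedy quantifier
    the previous state is written first."""
    if lazy_flag:
        return (new_state, previous_state)
    return (previous_state, new_state)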
In accordance with the inventive arrangements described herein, the output generated by the priority FIFO memories2800of the regular expression engine4150may be monitored by the controller1130. The controller1130, in response to detecting an end condition, may store any matched states and corresponding position information as output from the CRR decoder4306. For example, in the case where the end of string character is seen in the input data1118, the controller1130is capable of pulling the unfinished active states off of the priority FIFO memories2800of the regular expression engine4150until a final state (e.g., SF1) is seen. If no final state is seen, a valid capture output is not found and no capture output is generated. If a final state is seen, the corresponding offset entry from the capture FIFO memories is output and may be stored in another memory. If the controller1130determines that the highest priority path finishes (e.g., a match state reaches the top of the priority FIFO memories2800of the regular expression engine4150) prior to reaching the end of string character in the input data1118, the controller1130determines that the matched state is the result along with the corresponding offset entry from the capture FIFO memories in capture engine4104. FIG.45illustrates another example implementation of a regular expression processing system130configured to perform match using priority and capture. The example ofFIG.45may operate substantially similarly to the example ofFIG.14, albeit using regular expression engines4150and capture engines4104. Each regular expression engine4150is capable of initiating two simultaneous lookups from NRR memory1102each clock cycle. Similarly, each capture engine4104is capable of receiving a pair (e.g., two) of instructions from CRR memory4102each clock cycle. FIG.46is an example method4600of implementing a regular expression processing system130that is capable of performing capture. In the example ofFIG.46, blocks4602and4604may be performed by a data processing system such as data processing system1500ofFIG.15. In block4602, the system is capable of generating an HFA graph118for a regular expression. The system is capable of annotating the HFA graph118with capture commands that, upon execution by hardware, update position information maintained for characters in a data stream that are matched to a capture sub-expression of the regular expression. In block4604, the system is capable of generating, from the HFA graph118, an instruction table2200including state transition instructions and a capture table (e.g., Example 13) including the capture commands. Referring toFIG.46, a regular expression engine circuit is configured, using the state transition table, to determine, from the data stream, one or more characters that match the capture sub-expression. A capture engine is configured, using the capture table, to determine position information for the one or more characters in the data stream. For example, blocks4606and4608may be performed using the regular expression processing system130described herein as adapted for performing capture (e.g.,FIGS.41,45). In block4606, using a regular expression engine (e.g., regular expression engine4150), one or more characters that match the capture sub-expression can be determined. The regular expression engine4150is capable of tracking active states of the regular expression by decoding state transition instructions of the instruction table2200.
In block4608, a capture engine4104is capable of determining position information for the one or more characters in the data stream by decoding the capture commands from the capture table in coordination with the active states tracked by the regular expression engine4150. The foregoing and other implementations can each optionally include one or more of the following features, alone or in combination. Some example implementations include all the following features in combination. In one aspect, each node of the HFA graph118, for any particular character, has at most one self-edge and at most one outbound edge, wherein the capture commands are applied to the HFA graph118. In another aspect, the generating of the HFA graph118includes generating an NFA graph114from the regular expression by combining fragments. One or more of the fragments are annotated with the capture commands. The capture commands may be propagated from edges of the fragments to states of the fragments during the combining. The NFA graph114can be transformed into the HFA graph118. In another aspect, the capture table includes a plurality of capture entries. Each capture entry includes an address portion including an active state identifier used as an address and an instruction portion including one of the capture commands and a group identifier for the capture sub-expression to which the capture command applies. In another aspect, the method can include, using the capture engine4104, processing offset entries corresponding to the active states, at least in part, by determining capture entries corresponding to the offset entries and, for selected ones of the offset entries, modifying at least one of a start position or an end position based on the capture commands of the corresponding capture entries and matching group identifiers of the offset entries to the group identifiers of the instruction portions of the respective capture entries. FIG.47is an example method4700of implementing a regular expression processing system130that is capable of performing capture. Method4700may be performed by such a system. In block4702, the system is capable of storing, within a first multi-port RAM (e.g., NRR memory1102), an instruction table2200specifying a regular expression for application to a string of characters. In block4704, the system is capable of storing, within a second multi-port RAM (e.g., CRR memory4102), a capture table (e.g., Example 13) specifying capture entries that are decodable for generating position information for a sequence of one or more characters of the string of characters matching a capture sub-expression of the regular expression. In block4706, the system is capable of processing, using one or more regular expression engines4150, the string to determine match states by tracking active states for the regular expression and priorities for the active states by, at least in part, storing the active states of the regular expression in a plurality of priority FIFO memories2800in decreasing priority order. In block4708, the system is capable of determining, using one or more capture engines4104each configured to operate in coordination with a selected regular expression engine4150, position information for the one or more characters of the string that match the capture sub-expression based on the active states being tracked by the regular expression engine4150and decoding instructions of the capture entries.
The foregoing and other implementations can each optionally include one or more of the following features, alone or in combination. Some example implementations include all the following features in combination. In one aspect, the method includes outputting the one or more characters of the string that match the capture sub-expression by parsing the string using the position information. In another aspect, determining the position information further includes processing offset entries corresponding to the active states, wherein each offset entry specifies a start position, an end position, and a group identifier for the one or more characters. In another aspect, the method includes updating at least one of the start position or the end position of selected offset entries based on decoding the instructions from the capture entries. In one or more example implementations, a system includes a first multi-port RAM (e.g., NRR memory1102) configured to store an instruction table2200. The instruction table2200specifies a regular expression for application to a data stream. The system includes a second multi-port RAM (e.g., CRR memory4102) configured to store a capture table (e.g., Example 13), wherein the capture table specifies capture entries that are decodable for tracking position information for a sequence of one or more characters of the data stream matching a capture sub-expression of the regular expression. The system includes one or more regular expression engines4150each configured to process the data stream to determine match states by tracking active states for the regular expression and priorities for the active states by, at least in part, storing the active states of the regular expression in a plurality of priority FIFO memories2800in decreasing priority order. The system includes one or more capture engine circuits4104each configured to operate in coordination with a selected regular expression engine4150to determine one or more characters of the data stream that match the capture sub-expression based on the active state being tracked by the regular expression engine4150and decoding the capture entries of the capture table. The foregoing and other implementations can each optionally include one or more of the following features, alone or in combination. Some example implementations include all the following features in combination. In one aspect, each capture engine4104includes a plurality of capture FIFO memories (e.g., priority FIFO memories) configured to store offset entries corresponding to the active states stored in the plurality of priority FIFO memories2800of the regular expression engine4150. Each offset entry specifies position information for at least a portion of the sequence of characters of the data stream matching the capture sub-expression. In another aspect, each offset entry includes a start position, an end position, and a capture identifier. In another aspect, each capture entry includes an instruction having a capture command and a group identifier. Each capture engine4104includes a decoder circuit (e.g., CRR decoder4306) configured to update selected offset entries based on decoding the instructions from the capture entries. In another aspect, the decoder circuit (e.g., CRR decoder4306) is configured to perform at least one of updating the start position or the end position of the selected offset entries based on the capture commands decoded from the instructions. 
In another aspect, the system includes a plurality of multiplexers (e.g., switching circuitry4304) that route offset entries as output from the plurality of capture FIFO memories to the decoder circuit for processing or route offset entries as output from the decoder circuit directly back to the decoder circuit for processing. The plurality of multiplexers perform the routing based on a control signal provided from a respective regular expression engine4150. In another aspect, the control signal indicates that a selected active state processed by the regular expression engine4150corresponds to an epsilon state of the instruction table2200. In another aspect, the respective regular expression engine4150only processes a new character from the data stream while the decoder of the capture engine circuit4104receives offset entries from the plurality of capture FIFO memories. In another aspect, each capture engine4104includes switching circuitry4302configured to selectively route the offset entries from the decoder circuit to the plurality of capture FIFO memories based, at least in part, on control signals specifying a prioritization of corresponding active states processed by respective regular expression engine circuits. In another aspect, the prioritization is determined based on whether each active state corresponds to a self-edge or an outbound edge. In another aspect, the switching circuitry4320is configured to selectively route the offset entries from the decoder circuit to the plurality of capture FIFO memories based, at least in part, on a load balancing technique. While the disclosure concludes with claims defining novel features, it is believed that the various features described within this disclosure will be better understood from a consideration of the description in conjunction with the drawings. The process(es), machine(s), manufacture(s) and any variations thereof described herein are provided for purposes of illustration. Specific structural and functional details described within this disclosure are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the features described in virtually any appropriately detailed structure. Further, the terms and phrases used within this disclosure are not intended to be limiting, but rather to provide an understandable description of the features described. For purposes of simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numbers are repeated among the figures to indicate corresponding, analogous, or like features. As defined herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. As defined herein, the term “approximately” means nearly correct or exact, close in value or amount but not precise. For example, the term “approximately” may mean that the recited characteristic, parameter, or value is within a predetermined amount of the exact characteristic, parameter, or value. As defined herein, the terms “at least one,” “one or more,” and “and/or,” are open-ended expressions that are both conjunctive and disjunctive in operation unless explicitly stated otherwise.
For example, each of the expressions “at least one of A, B, and C,” “at least one of A, B, or C,” “one or more of A, B, and C,” “one or more of A, B, or C,” and “A, B, and/or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together. As defined herein, the term “automatically” means without human intervention. As defined herein, the term “computer readable storage medium” means a storage medium that contains or stores program code for use by or in connection with an instruction execution system, apparatus, or device. As defined herein, a “computer readable storage medium” is not a transitory, propagating signal per se. A computer readable storage medium may be, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. The various forms of memory, as described herein, are examples of computer readable storage media. A non-exhaustive list of more specific examples of a computer readable storage medium may include: a portable computer diskette, a hard disk, a RAM, a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an electronically erasable programmable read-only memory (EEPROM), a static random-access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, or the like. As defined herein, the term “if” means “when” or “upon” or “in response to” or “responsive to,” depending upon the context. Thus, the phrase “if it is determined” or “if [a stated condition or event] is detected” may be construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event]” or “responsive to detecting [the stated condition or event]” depending on the context. As defined herein, the term “responsive to” and similar language as described above, e.g., “if,” “when,” or “upon,” means responding or reacting readily to an action or event. The response or reaction is performed automatically. Thus, if a second action is performed “responsive to” a first action, there is a causal relationship between an occurrence of the first action and an occurrence of the second action. The term “responsive to” indicates the causal relationship. As defined herein, the term “real time” means a level of processing responsiveness that a user or system senses as sufficiently immediate for a particular process or determination to be made, or that enables the processor to keep up with some external process. As defined herein, the term “substantially” means that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including for example, tolerances, measurement error, measurement accuracy limitations, and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic was intended to provide. The terms first, second, etc. may be used herein to describe various elements. These elements should not be limited by these terms, as these terms are only used to distinguish one element from another unless stated otherwise or the context clearly indicates otherwise. 
A computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the inventive arrangements described herein. Within this disclosure, the term “program code” is used interchangeably with the term “computer readable program instructions.” Computer readable program instructions described herein may be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a LAN, a WAN and/or a wireless network. The network may include copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge devices including edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations for the inventive arrangements described herein may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, or either source code or object code written in any combination of one or more programming languages, including an object-oriented programming language and/or procedural programming languages. Computer readable program instructions may include state-setting data. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a LAN or a WAN, or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some cases, electronic circuitry including, for example, programmable logic circuitry, an FPGA, or a PLA may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the inventive arrangements described herein. Certain aspects of the inventive arrangements are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, may be implemented by computer readable program instructions, e.g., program code. These computer readable program instructions may be provided to a processor of a computer, special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. 
These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the operations specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operations to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various aspects of the inventive arrangements. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified operations. In some alternative implementations, the operations noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. In other examples, blocks may be performed generally in increasing numeric order while in still other examples, one or more blocks may be performed in varying order with the results being stored and utilized in subsequent or other blocks that do not immediately follow. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, may be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
11861172

DETAILED DESCRIPTION

Systems and methods are described for performing single I/O writes. As noted above, existing file systems may make certain assumptions about the underlying platform hosting the file system. The file system may presuppose, for example, the existence of high-speed non-volatile random access memory (NVRAM) and relatively lower-speed disks consistent with being hosted by a high-end physical storage appliance. Such assumptions result in the file system handling write operations in batches as described below with reference to FIGS. 1A-D. When a file system is hosted in an environment (e.g., a cloud platform, a virtual platform, or a commodity hardware platform with no battery-backed NVRAM) in which the latency of the journal storage medium is similar (e.g., plus or minus 10%) to that of the block storage medium, the various mechanisms for performing write operations and associated journaling should be reengineered to achieve desired Input/Output operations per second (IOPS) and/or latency efficiencies. While some improvements are provided by the use of Single Instance Data Logging (SIDL), the write latency for SIDL is at least 2× the basic latency of the NVRAM/disk as explained below with reference to FIG. 2. Various embodiments described herein seek to mitigate various shortcomings of the aforementioned approaches by providing a single Input/Output (I/O) write feature that brings the write latency for a write operation closer to 1× the basic latency of the NVRAM/disk. As described further below with reference to FIGS. 3, 6, and 7, according to one embodiment, responsive to receipt of a write operation from a client by a file system layer of a node of a distributed storage system and a data payload of the operation having been determined to meet a compressibility threshold, an intermediate storage layer of the node logically interposed between the file system layer and a block storage media is caused to perform a single I/O write operation. The single I/O write operation involves writing a packed block header containing an operation header entry corresponding to the write operation, and the data payload in compressed form, to a data block associated with a particular block number within the block storage media. Responsive to completion of the single I/O write: (i) journaling of an operation header containing the particular block number is initiated by the file system; and (ii) without waiting for completion of the journaling, receipt of the write operation is acknowledged to the client by the file system. By allowing the write operation to avoid waiting for completion of the journaling, the 2× or more basic latency of the NVRAM/disk for a write operation by SIDL may be brought down to 1× the basic latency of the NVRAM/disk. In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, to one skilled in the art that embodiments of the present disclosure may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form.

Terminology

Brief definitions of terms used throughout this application are given below. A “computer” or “computer system” may be one or more physical computers, virtual computers, or computing devices.
As an example, a computer may be one or more server computers, cloud-based computers, cloud-based clusters of computers, virtual machine instances or virtual machine computing elements such as virtual processors, storage and memory, data centers, storage devices, desktop computers, laptop computers, mobile devices, or any other special-purpose computing devices. Any reference to “a computer” or “a computer system” herein may mean one or more computers, unless expressly stated otherwise. The terms “connected” or “coupled” and related terms are used in an operational sense and are not necessarily limited to a direct connection or coupling. Thus, for example, two devices may be coupled directly, or via one or more intermediary media or devices. As another example, devices may be coupled in such a way that information can be passed therebetween, while not sharing any physical connection with one another. Based on the disclosure provided herein, one of ordinary skill in the art will appreciate a variety of ways in which connection or coupling exists in accordance with the aforementioned definition. If the specification states a component or feature “may”, “can”, “could”, or “might” be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic. As used in the description herein and throughout the claims that follow, the meaning of “a,” “an,” and “the” includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise. The phrases “in an embodiment,” “according to one embodiment,” and the like generally mean the particular feature, structure, or characteristic following the phrase is included in at least one embodiment of the present disclosure and may be included in more than one embodiment of the present disclosure. Importantly, such phrases do not necessarily refer to the same embodiment. As used herein, a “cloud” or “cloud environment” broadly and generally refers to a platform through which cloud computing may be delivered via a public network (e.g., the Internet) and/or a private network. The National Institute of Standards and Technology (NIST) defines cloud computing as “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” P. Mell, T. Grance, The NIST Definition of Cloud Computing, National Institute of Standards and Technology, USA, 2011. The infrastructure of a cloud may be deployed in accordance with various deployment models, including private cloud, community cloud, public cloud, and hybrid cloud. In the private cloud deployment model, the cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units), may be owned, managed, and operated by the organization, a third party, or some combination of them, and may exist on or off premises.
In the community cloud deployment model, the cloud infrastructure is provisioned for exclusive use by a specific community of consumers from organizations that have shared concerns (e.g., mission, security requirements, policy, and compliance considerations), may be owned, managed, and operated by one or more of the organizations in the community, a third party, or some combination of them, and may exist on or off premises. In the public cloud deployment model, the cloud infrastructure is provisioned for open use by the general public, may be owned, managed, and operated by a cloud provider (e.g., a business, academic, or government organization, or some combination of them), and exists on the premises of the cloud provider. The cloud service provider may offer a cloud-based platform, infrastructure, application, or storage services as-a-service, in accordance with a number of service models, including Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and/or Infrastructure-as-a-Service (IaaS). In the hybrid cloud deployment model, the cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds). FIGS. 1A-D are high-level block diagrams conceptually illustrating handling of a write operation 110 by a storage system 120 in which the latency of the journal media (e.g., NVRAM 130) is less than the latency of the data media 150. Existing file systems may make certain assumptions about the underlying platform hosting the file system, for example, presupposing the existence of high-speed non-volatile random access memory (NVRAM) and relatively lower-speed disks consistent with being hosted by a high-end physical storage appliance. As such, in the context of a storage solution that handles large volumes of client requests, it may be impractical for the file system to persist data modifications to disk (e.g., block storage) every time a write operation is received from a client (e.g., client 115) as disk accesses tend to take a relatively long time compared to storage to other media (e.g., NVRAM 130). Therefore, in the context of the present example, storage system 120 may instead temporarily hold write requests (e.g., write operation 110) in memory (e.g., RAM 140), which may also be referred to as a buffer cache, and only periodically (e.g., every few seconds) save the modified data to the data media (e.g., mass storage devices). The event of saving the modified data to the mass storage devices may be referred to as a consistency point (CP). As discussed below with reference to FIG. 1C, at a CP, the storage system 120 saves any data that was modified by write requests to its local mass storage devices and, when operating in high-availability (HA) mode, triggers a process of updating the mirrored data stored at the destination storage node. In this approach, there is a small risk of a system failure occurring between CPs, causing the loss of data modified after the last CP. Consequently, in at least one approach, the storage system may maintain a log or journal of certain storage operations within NVRAM 130 that have been performed since the last CP. For example, this log may include a separate journal entry (e.g., including an operation header 112) for each storage request received from a client that results in a modification to the file system or data.
Such entries for a given file may include, for example, “Create File,” “Write File Data,” and the like. Each journal entry may also include the data to be written according to the corresponding request. The journal may be used in the event of a failure to recover data that would otherwise be lost. For example, in the event of a failure, it may be possible to replay the journal to reconstruct the current state of stored data just prior to the failure. FIG. 1A is a high-level block diagram conceptually illustrating a first stage of handling of a write operation 110 by a storage system 120. Responsive to receipt of the write operation 110, the data payload (e.g., data 111a and data 111b) is stored in a journal entry in NVRAM 130 and to RAM 140. Data 111a-b may represent data blocks having a block size of 4 kilobytes (KB). As noted above, the journal entry may also include an operation header 112 including an opcode of the write operation 110. FIG. 1B is a high-level block diagram conceptually illustrating continued handling of the write operation 110 by the storage system 120 in a subsequent stage following the first stage illustrated by FIG. 1A. In this subsequent stage, after the data payload has been stored to RAM 140 and the journal entry for the write operation 110 has been created within NVRAM 130, the storage system 120 acknowledges the write operation 110 to the client, for example, in the form of an acknowledgement 113. FIG. 1C is a high-level block diagram conceptually illustrating continued handling of the write operation 110 by the storage system 120 in a subsequent stage following the stage illustrated by FIG. 1B. This subsequent stage is performed responsive to a consistency point (e.g., CP 114), which may represent expiration of a timer. Responsive to the CP 114, the storage system 120 saves data in RAM 140 to the data media 150. FIG. 1D is a high-level block diagram conceptually illustrating continued handling of the write operation 110 by the storage system 120 in a subsequent stage following the stage illustrated by FIG. 1C. This subsequent stage is performed responsive to successful storage of the data temporarily held in RAM 140 to the data media 150. At this point, both the journal in NVRAM 130 and the data in RAM 140 may be cleared.

Single Instance Data Logging (SIDL)

FIG. 2 is a block diagram conceptually illustrating the use of Single Instance Data Logging (SIDL). In the context of the present example, an environment 200 hosting a file system 220 of a storage node is one (e.g., a cloud or virtual platform) in which disk storage media are used for both journal storage and data storage. Those skilled in the art will appreciate that when the journal media (e.g., SSD NVRAM 230) latency is similar to the data disk latency, there is no benefit to journaling the data payload (e.g., data 211a-b) of a write operation (e.g., 210) first and then storing the data payload to the data disk later. Additionally, when the file system 220 is hosted by cloud compute machines (e.g., virtual machines (VMs)), the VMs may have limitations on the number of disk Input/Output operations per second (IOPS) that may be performed, and the provider of the distributed storage system (e.g., a storage service provider) of which the storage node is a part may be charged by the cloud service provider on a per-IOPS basis. As such, it may be desirable for the storage service provider to implement mechanisms to reduce disk IOPS as will be explained below.
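Before walking through the SIDL steps of FIG. 2, the shape of a journal entry as described above (an operation header plus the data payload) can be sketched in C. This is a minimal illustration under assumed names and field widths; op_header_t, journal_entry_t, and the specific fields are not taken from the patent:

```c
#include <stdint.h>

/* One NVRAM journal entry for a client write, per the description above:
 * an operation header (opcode, etc.) followed by the data payload.
 * All names and sizes here are illustrative assumptions. */
#define JOURNAL_BLOCK_SIZE 4096u   /* 4 KB data blocks, per the example */

typedef struct {
    uint16_t opcode;       /* e.g., create-file, write-file-data */
    uint32_t inode;        /* file the operation applies to */
    uint64_t offset;       /* byte offset within the file */
    uint32_t length;       /* payload length in bytes */
} op_header_t;

typedef struct {
    op_header_t hdr;
    uint8_t     payload[]; /* data to be written; replayed after a crash */
} journal_entry_t;
```

On recovery, entries logged since the last CP would be replayed in order to reconstruct the pre-crash state of the file system.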
Responsive to receipt of the write operation 210 (at step 1), the file system 220 bypasses writing the data payload (e.g., data 211a-b) of the write operation 210 to the SSD NVRAM 230 and instead (at step 2) causes the data payload to be immediately written to disk (at step 3) via an intermediate storage layer 230 (e.g., one or both of a redundant array of independent disks (RAID) layer and a storage layer). For example, the file system 220 may issue a RAID I/O to write the data payload to one or more corresponding blocks (e.g., virtual volume block numbers (VVBNs) or physical volume block numbers (PVBNs)). At step 3, the intermediate storage layer 230 reads the checksum (e.g., within an advanced zoned checksum (AZCS) checksum block) for the block from disk and updates the checksum data for the block, for example, via a read-modify-write (RMW) operation. At step 4, the RAID I/O writes the data to disk and the intermediate storage layer 230 (at step 5) waits for the write to disk to complete. At this point (at step 6), the file system 220 may perform a journaling operation by storing the operation header 212 (which includes the block number(s) to which the data was stored on disk) to the SSD NVRAM 230. The file system 220 waits (at step 7) for the journaling operation to be completed and then sends an acknowledgement 223 back to the client 215. Notably, the storage of data to disk (at step 4) and of the operation header 212 to SSD NVRAM 230 (at step 6) cannot be done in parallel because of the potential for intervening system crash situations. For example, if storage of the operation header 212 to SSD NVRAM 230 were to complete and a system crash occurred before the data was written to disk, problems would arise during replay as the data is presumed to be correct on the disk. Therefore, SIDL serializes the storage of data to disk and the storage of the operation header 212 to SSD NVRAM 230 as shown in FIG. 2 and described above. In view of the foregoing, it will be appreciated that while SIDL is helpful in reducing disk IOPS (e.g., as a result of bypassing storage of the data payload to the SSD NVRAM 230), the write latency for SIDL is at least 2× (and possibly 3× if the AZCS checksum operation is included) the basic latency of the NVRAM/disk as a result of the waiting performed at step 5 and step 7.

Single Input/Output Write

FIG. 3 is a block diagram conceptually illustrating the use of a single Input/Output (I/O) write feature in accordance with an embodiment of the present disclosure. The single I/O write feature proposed herein seeks to bring the write latency for a write operation (e.g., write operation 310) closer to 1× the basic latency of the NVRAM/disk. In one embodiment, a new data bundle format (e.g., data bundle 313b) is used that includes a pack header 351, compressed data information 352, compressed data 353, NV-logged operation information 354, an NV-logged operation 355, and a checksum (e.g., AZCS checksum 356) for the data bundle.
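For illustration only, the bundle layout just described might look as follows in C. Field names and widths are assumptions (the patent fixes neither), and the two variable-length regions — the compressed data and the NV-logged operation header — are indicated by comments rather than concrete fields:

```c
#include <stdint.h>

/* Sketch of a single 4 KB data bundle, following the field list above.
 * All widths are illustrative assumptions. */
typedef struct {
    uint16_t num_objects;     /* pack header: object count in the bundle */
} pack_header_t;

typedef struct {
    uint8_t  algorithm;       /* which compression algorithm was used */
    uint16_t compressed_len;  /* length of the compressed payload */
} compressed_data_info_t;

typedef struct {
    uint64_t cp_count;        /* CP the logged operation belongs to */
} nv_logged_op_info_t;

typedef struct {
    pack_header_t          pack_hdr;      /* pack header 351 */
    compressed_data_info_t cdata_info;    /* compressed data info 352 */
    /* compressed data (353) follows here, variable length */
    nv_logged_op_info_t    nvlog_info;    /* NV-logged op info 354 */
    /* NV-logged operation header (355), incl. the block number */
    uint32_t               azcs_checksum; /* checksum 356 over the bundle */
} data_bundle_t;
```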
According to one embodiment, the pack header 351 includes an operation header entry identifying the number of objects contained within the data bundle, the compressed data information 352 includes information identifying the compression algorithm used to compress the data payload of the write operation, the compressed data 353 represents the data payload in compressed form, the NV-logged operation information 354 includes information identifying the CP with which the write operation is associated, and the NV-logged operation 355 includes an operation header (e.g., operation header 312) specifying the operation and the block to which the data payload was stored on disk. In the context of the present example, an environment 300 hosting a file system 320 of a storage node is similar to that of environment 200 in which disk storage media are used for both journal storage and data storage. Similar to SIDL, responsive to receipt of a write operation 310 (at step 1), the file system 320 bypasses writing the data payload (e.g., data 311a-b) of the write operation 310 to the SSD NVRAM 330 and instead (at step 2), assuming the data payload is compressible enough to allow inclusion of the desired metadata in addition to the compressed data payload within one or more data bundles (e.g., data bundles 313a-b), causes the data bundles to be immediately written to disk (at step 3) via an intermediate storage layer 330 (e.g., one or both of a RAID layer and a storage layer). For example, the file system 320 may issue a RAID I/O to write the data bundles to one or more corresponding blocks (e.g., VVBNs or PVBNs). At step 3, the RAID I/O writes the data to disk and the intermediate storage layer 330 (at step 4) waits for the write to disk to complete. At this point (at step 5), the file system 320 may send an acknowledgement to the client 315 and in parallel may issue a local copy operation to perform an asynchronous journaling operation. That is, without waiting for completion of the journaling operation that stores the operation header 312 (which includes the block number(s) to which the data bundle(s) was/were stored on disk) to the SSD NVRAM 330, the file system 320 acknowledges receipt of the write operation 310 to the client 315. As explained further below with reference to FIG. 6, this results in a single I/O to the disk before returning the acknowledgement 323 to the client 315. In this manner, the 2× or more basic latency of the NVRAM/disk for a write operation by SIDL is brought down to 1×. As those skilled in the art will appreciate, in this example, due to the asynchronous performance of journaling in parallel with the return of the acknowledgement 323, writes of the operation header 312 to the SSD NVRAM 330 for successive write operations will occur in order, but their corresponding acknowledgements to the client 315 may be returned out of order. As described further below with reference to FIGS. 6 and 7, by making use of a pool file that maintains a list of available blocks for single I/O write operations, intervening system crash situations do not result in data compromises.

Example Operating Environment

FIG. 4 is a block diagram illustrating an environment 400 in which various embodiments may be implemented. In the context of the present example, the environment 400 includes multiple data centers 430a-c, a computer system 410, and a user 412.
The data centers 430a-c and the computer system 410 are coupled in communication via a network 405, which, depending upon the particular implementation, may be a Local Area Network (LAN), a Wide Area Network (WAN), or the Internet. User 412 may represent an administrator responsible for monitoring and/or configuring a distributed storage system (e.g., cluster 435a) or a managed service provider responsible for multiple distributed storage systems (e.g., clusters 435a-c) of the same or multiple customers via a browser-based interface presented on computer system 410. Data center 430a may be considered exemplary of data centers 430b-c and may represent an enterprise data center (e.g., an on-premise customer data center) that is owned and operated by a company, managed by a third party (or a managed service provider) on behalf of the company, which may lease the equipment and infrastructure, or may represent a colocation data center in which the company rents space of a facility owned by others and located off the company premises. While in this simplified example, data center 430a is shown including a distributed storage system (e.g., cluster 435a), those of ordinary skill in the art will appreciate additional IT infrastructure may be included within the data center 430a. Turning now to the cluster 435a, which may be considered exemplary of clusters 435b-c, it includes multiple storage nodes 436a-n and an Application Programming Interface (API) 437. In the context of the present example, the multiple storage nodes 436a-n are organized as a cluster and provide a distributed storage architecture to service storage requests issued by one or more clients (not shown) of the cluster. The data served by the storage nodes 436a-n may be distributed across multiple storage units embodied as persistent storage devices, including but not limited to HDDs, SSDs, flash memory systems, or other storage devices. A non-limiting example of a storage node 436 is described in further detail below with reference to FIG. 7. The API 437 may provide an interface through which the cluster 435a is configured and/or queried by external actors (e.g., the computer system 410 and/or storage clients). Depending upon the particular implementation, the API 437 may represent a Representational State Transfer (REST)ful API that uses Hypertext Transfer Protocol (HTTP) methods (e.g., GET, POST, PATCH, DELETE, and OPTIONS) to indicate its actions. Depending upon the particular embodiment, the API 437 may provide access to various telemetry data (e.g., performance, configuration, storage efficiency metrics, and other system data) relating to the cluster 435a or components thereof. While for sake of illustration three data centers and three clusters are shown in the context of the present example, it is to be appreciated that more or fewer clusters owned by or leased by the same or different companies (data storage subscribers/customers) may be used in different operational environments and such clusters may reside in multiple data centers of different types (e.g., enterprise data centers, managed services data centers, or colocation data centers). FIG. 5 is a block diagram illustrating another environment 500 in which various embodiments may be implemented. In various examples described herein, a virtual storage system 510a, which may be considered exemplary of virtual storage systems 510b-c, may be run (e.g., on a VM or as a containerized instance, as the case may be) within a public cloud provided by a public cloud provider (e.g., hyperscaler 520).
In the context of the present example, the virtual storage system 510a makes use of cloud disks (e.g., hyperscale disks 525) provided by the hyperscaler. The virtual storage system 510a may present storage over a network to clients 505 using various protocols (e.g., small computer system interface (SCSI), Internet small computer system interface (iSCSI), fibre channel (FC), common Internet file system (CIFS), network file system (NFS), hypertext transfer protocol (HTTP), web-based distributed authoring and versioning (WebDAV), or a custom protocol). Clients 505 may request services of the virtual storage system 510a by issuing Input/Output requests 506 (e.g., file system protocol messages (in the form of packets) over the network). A representative client of clients 505 may comprise an application, such as a database application, executing on a computer that “connects” to the virtual storage system 510a over a computer network, such as a point-to-point link, a shared local area network (LAN), a wide area network (WAN), or a virtual private network (VPN) implemented over a public network, such as the Internet. In the context of the present example, the virtual storage system 510a is shown including a number of layers, including a file system layer 511 and one or more intermediate storage layers (e.g., a RAID layer 513 and a storage layer 515). These layers may represent components of data management software (not shown) of the virtual storage system 510a. The file system layer 511 generally defines the basic interfaces and data structures in support of file system operations (e.g., initialization, mounting, unmounting, creating files, creating directories, opening files, writing to files, and reading from files). A non-limiting example of the file system layer 511 is the Write Anywhere File Layout (WAFL) Copy-on-Write file system (which represents a component or layer of ONTAP software available from NetApp, Inc. of San Jose, CA). The RAID layer 513 may be responsible for encapsulating data storage virtualization technology for combining multiple disks into RAID groups, for example, for purposes of data redundancy, performance improvement, or both. The storage layer 515 may include storage drivers for interacting with the various types of hyperscale disks supported by the hyperscaler 520. Depending upon the particular implementation, the file system layer 511 may persist data to the hyperscale disks 525 using one or both of the RAID layer 513 and the storage layer 515. The various layers described herein, and the processing described below with reference to the flow diagrams of FIGS. 6 and 7, may be implemented in the form of executable instructions stored on a machine readable medium and executed by a processing resource (e.g., a microcontroller, a microprocessor, central processing unit core(s), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), and the like) and/or in the form of other types of electronic circuitry. For example, the processing may be performed by one or more virtual or physical computer systems of various forms (e.g., servers, blades, network storage systems or appliances, and storage arrays, such as the computer system described with reference to FIG. 8 below).

Example Single I/O Write Processing

FIG. 6 is a flow diagram illustrating operations for performing a single I/O write in accordance with an embodiment of the present disclosure.
In the context of the present example, it is assumed a pool of blocks available for single I/O write operations is maintained in a pool file (e.g., pool file 621). The pool file may represent a persistent (on-disk) data structure. In one embodiment, upon initialization, a storage node (e.g., one of storage nodes 436a-n or one of virtual storage systems 510a-c) may proactively identify a list of available (free) data blocks that may be used for single I/O write operations and store that list to the pool file. In this manner, the identification of free data blocks need not be performed during the single I/O write operation. At block 610, a write operation (e.g., write operation 310) is received by a storage node. More specifically, the write operation may be received by a file system (e.g., file system layer 511) of the storage node. The write operation may be issued by an application (e.g., one of clients 505). In one embodiment, the storage node may be part of a distributed storage system (e.g., one of clusters 435a-c). The write operation may include a data payload having one or more blocks of data of a particular block size (e.g., 4 KB, 8 KB, etc.). At decision block 620, a determination may be made by the file system regarding whether the data payload of the write operation meets a compressibility threshold. If so, processing continues with decision block 630; otherwise, processing branches to block 660. The compressibility threshold depends upon the size of the checksum and metadata to be included in the data bundle format (e.g., data bundle 313b). In one embodiment, the checksum is an AZCS checksum (e.g., AZCS checksum 356) and the metadata includes a pack header (e.g., pack header 351), information regarding the compressed data (e.g., compressed data information 352), and information regarding the NV-logged operation (e.g., NV-logged operation information 354). Assuming an embodiment in which the metadata size is approximately 300 bytes and the data block size is 4 KB, the compressibility threshold would be approximately 7%. That is, in such an embodiment, a given data block of the data payload of the write operation should be compressible by approximately 7% to allow for the compressed data payload (e.g., compressed data 353), the metadata, and the checksum to fit within a 4 KB data bundle. At decision block 630, it is determined by the file system whether block number(s) are available for use by single I/O. If so, processing continues with block 640; otherwise, processing branches to block 660. In the context of the present example, this determination is made with reference to the pool file. For example, the list of blocks in the pool file may be tagged or marked as they are consumed by a single I/O write. At block 660, a legacy write path (e.g., a slow write path or a SIDL write path) may be used to store the data payload of the write operation to disk as a reason has been identified that precludes the use of a single I/O write. For example, the data payload of the write operation may not be sufficiently compressible or there may be no VVBNs or PVBNs available for use by single I/O write. While only two reasons for avoiding the use of single I/O write are shown in this example, there may be other reasons for rejecting the use of single I/O for the write operation. At block 640, a data bundle is created, the pool file is updated, and the single I/O write operation is performed, as sketched below.
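A minimal software model of the pool file's role in decision block 630 and block 640 follows. The names (pool_slot_t, pool_claim) and the tagging scheme are illustrative assumptions; the patent does not specify the on-disk encoding:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* In-memory view of the persistent pool file: a list of free block
 * numbers reserved at initialization for single I/O writes. */
typedef struct {
    uint64_t block_no;  /* VVBN/PVBN available for a single I/O write */
    bool     consumed;  /* tagged once used, per the description above */
} pool_slot_t;

typedef struct {
    pool_slot_t *slots;
    size_t       count;
} pool_file_t;

/* Claim a free block for a single I/O write; returns false when none is
 * available, in which case the legacy write path is used instead. */
static bool pool_claim(pool_file_t *p, uint64_t *out_block)
{
    for (size_t i = 0; i < p->count; i++) {
        if (!p->slots[i].consumed) {
            p->slots[i].consumed = true;  /* mark as consumed */
            *out_block = p->slots[i].block_no;
            return true;
        }
    }
    return false; /* no block free: branch to the legacy path (block 660) */
}
```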
In one embodiment, this may involve the use of an intermediate storage layer (e.g., intermediate storage layer 330) interposed between the file system and the disks. In one embodiment, the file system may issue a RAID I/O to a RAID layer (e.g., RAID layer 513) to write the populated data bundle including the compressed data payload of the write operation, the metadata, and the AZCS checksum. At decision block 650, the intermediate storage layer waits until the storage of the data bundle has been completed. For example, in one embodiment, after the RAID I/O has finished, processing continues with blocks 660 and 670 in parallel. At block 660, the write operation is acknowledged to the client, for example, by the file system sending an acknowledgement (e.g., acknowledgement 323) to the client. At block 670, write operation journaling may be initiated. For example, the file system may perform a local copy of the operation header (e.g., operation header 312) to the journal media (e.g., SSD NVRAM 330). In one embodiment, information regarding the current CP count may be included within the packed block header or the operation header to facilitate crash recovery as described further below. In some embodiments, the storage node may be operating in a high-availability (HA) configuration. For example, the storage node may be part of a local distributed storage system (or local cluster) and may be paired with a partner storage node in a remote distributed storage system (or remote cluster). The local storage node may be designated as the primary node and may be responsible for serving all I/O operations (e.g., read and write operations) made by clients, and the HA partner node of the remote distributed storage system may be designated as the secondary node. When operating in the HA configuration, a data backup technique used by storage systems referred to as “mirroring,” involving backing up data stored at one node or storage system by storing a duplicate (a mirror image) of the data to another node or storage system, may be performed. Mirroring may be performed in one direction (e.g., from the primary to the secondary). Alternately, both the local and remote distributed storage systems may be operable to serve I/O, and both may be capable of operating in the role of a primary or secondary with respect to the other. In this configuration, the mirroring may be performed in either direction depending upon the node that is operating as the source storage node for a particular storage request. In one embodiment, when the storage node is configured to perform mirroring, completion of the local write operation journaling may further trigger performing a remote copy operation to transfer a copy of the journal to the HA partner node. In this manner, in the event of a system crash of one of the HA partner nodes, upon restart of the crashed node it may identify those of multiple single I/O write operations performed by the HA partner node prior to performance of the last CP that are to be reconstructed and replayed based on the pool file, information regarding the last CP, and operation headers contained in the journal, as described further below with reference to FIG. 7. In the manner described above, the write latency for a single I/O write is closer to 1× the basic latency of the NVRAM/disk as the only time spent waiting for a write to disk to complete is during decision block 650.
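Putting decision block 620 through block 670 together, the fast path can be sketched as below. The helper functions (write_bundle_and_wait, journal_async, ack_client) are hypothetical stand-ins for the file system and storage layers, and the threshold arithmetic follows the 300-byte-metadata / 4 KB-block example above (a reduction of roughly 300/4096, i.e., about 7%):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE    4096u
#define METADATA_SIZE 300u /* pack header + info fields + checksum (approx.) */

/* A payload block qualifies when its compressed form leaves room for the
 * bundle metadata -- roughly a 7% reduction for 4 KB blocks. */
static bool meets_compressibility_threshold(size_t compressed_len)
{
    return compressed_len + METADATA_SIZE <= BLOCK_SIZE;
}

/* Hypothetical helpers standing in for the file system / storage layers. */
extern bool write_bundle_and_wait(uint64_t block_no, const void *bundle);
extern void journal_async(const void *op_header); /* local NVRAM copy */
extern void ack_client(void);

/* Single I/O write: one awaited disk I/O, then acknowledge the client
 * in parallel with (not after) journaling of the operation header. */
static bool single_io_write(uint64_t block_no, const void *bundle,
                            const void *op_header)
{
    if (!write_bundle_and_wait(block_no, bundle)) /* decision block 650 */
        return false;
    ack_client();             /* block 660: do not wait for journaling */
    journal_async(op_header); /* block 670: async local copy to NVRAM */
    return true;
}
```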
While in the context of the present example, acknowledgement to the client (block 660) is described as being performed prior to initiation of the journaling (block 670) of the write operation, in other examples, journaling (block 670) may be initiated prior to acknowledgement (block 660); however, the acknowledgement should not wait for completion of the journaling. While in the context of the present example, a number of enumerated blocks are included, it is to be understood that examples may include additional blocks before, after, and/or in between the enumerated blocks. Similarly, in some examples, one or more of the enumerated blocks may be omitted and/or performed in a different order.

Example Crash Recovery Processing

FIG. 7 is a flow diagram illustrating operations for performing crash recovery in accordance with an embodiment of the present disclosure. In the context of the present example, it is assumed a storage node (e.g., one of storage nodes 436a-n or one of virtual storage systems 510a-c) of an HA pair has restarted after a system crash. For cases in which journaling of the operation header (e.g., operation header 312) was completed prior to the system crash, operation is normal and no reconstruction or replay of single I/O writes need be performed. As such, only cases in which the data bundle has been written to disk but the journaling of the operation header was not completed remain. In one embodiment, these cases are addressed using a data pool arrangement in which a pool of blocks available for single I/O write operations is maintained in the form of a persistent (on-disk) pool file (e.g., pool file 621). The list of available blocks in the pool file may be stored for each CP. According to one embodiment, the storage node first performs a legacy journal replay to recover data blocks either not associated with single I/O write operations or associated with single I/O write operations for which the journaling was committed to disk. After that replay is completed, those data blocks present in the pool file are walked through to determine which single I/O writes are to be reconstructed and replayed, starting at block 710. In the context of the present example, after a crash, a given data block present in the pool file may be classified in one of the following categories:

1. Valid single I/O data block that is present in the journal.
2. Valid single I/O data block that is not present in the journal.
3. Invalid operation.
4. Single I/O operation not replied to in the front end.

According to one embodiment, the operations described below seek to consider the various scenarios and reconstruct and replay only those single I/O writes in category #2 (above). At block 710, the file system of the storage node obtains the list of available blocks from the last stored CP. At decision block 720, this list is compared to blocks that the journal indicates have been used for single I/O write operations by determining whether such blocks are present in the journal. This removes blocks that were recovered using legacy journal replay operations from the list. For those blocks remaining in the list, processing continues with block 730; otherwise, no reconstruction or recovery is performed for the block at issue. The remaining blocks in the list are then read from disk starting at block 730, looking for any blocks that contain a data bundle packed block header (e.g., pack header 351) that contains one or more operation header entries. At decision block 740, it is determined if the data block at issue is a valid single I/O data bundle.
If so, processing continues with block 750; otherwise, no reconstruction or recovery is performed for the block at issue. If the data block at issue contains an operation header entry, then the data block is considered to be a valid single I/O data bundle as the data block was successfully stored using a single I/O write. The operations associated with valid single I/O data bundles are reconstructed and replayed by continuing with block 750. At block 750, the single I/O write operation may be reconstructed based on the metadata of the packed block header and the compressed data payload stored within the corresponding data block. At block 760, the reconstructed single I/O write operation is replayed by the file system of the storage node. While not described above in the context of FIG. 6, in one embodiment, a cap may be imposed on the number of data blocks to be scanned from the pool file by limiting the number of single I/O write operations that can be outstanding at any given time. In this manner, only that smaller number of blocks representing the maximum number of outstanding single I/O write operations need be scanned from the pool file. While in the context of the present example, a number of enumerated blocks are included, it is to be understood that examples may include additional blocks before, after, and/or in between the enumerated blocks. Similarly, in some examples, one or more of the enumerated blocks may be omitted and/or performed in a different order.

Example Computer System

Embodiments of the present disclosure include various steps, which have been described above. The steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a processing resource (e.g., a general-purpose or special-purpose processor) programmed with the instructions to perform the steps. Alternatively, depending upon the particular implementation, various steps may be performed by a combination of hardware, software, firmware, and/or by human operators. Embodiments of the present disclosure may be provided as a computer program product, which may include a non-transitory machine-readable storage medium embodying thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process. The machine-readable medium may include, but is not limited to, fixed (hard) drives, magnetic tape, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), and magneto-optical disks, semiconductor memories, such as ROMs, PROMs, random access memories (RAMs), programmable read-only memories (PROMs), erasable PROMs (EPROMs), electrically erasable PROMs (EEPROMs), flash memory, magnetic or optical cards, or other type of media/machine-readable medium suitable for storing electronic instructions (e.g., computer programming code, such as software or firmware). Various methods described herein may be practiced by combining one or more non-transitory machine-readable storage media containing the code according to embodiments of the present disclosure with appropriate special purpose or standard computer hardware to execute the code contained therein.
An apparatus for practicing various embodiments of the present disclosure may involve one or more computers (e.g., physical and/or virtual servers) (or one or more processors within a single computer) and storage systems containing or having network access to computer program(s) coded in accordance with various methods described herein, and the method steps associated with embodiments of the present disclosure may be accomplished by modules, routines, subroutines, or subparts of a computer program product. FIG. 8 is a block diagram that illustrates a computer system 800 in which or with which an embodiment of the present disclosure may be implemented. Computer system 800 may be representative of all or a portion of the computing resources associated with a node (e.g., one of storage nodes 436a-n or one of virtual storage systems 510a-c) of a distributed storage system. Notably, components of computer system 800 described herein are meant only to exemplify various possibilities. In no way should example computer system 800 limit the scope of the present disclosure. In the context of the present example, computer system 800 includes a bus 802 or other communication mechanism for communicating information, and a processing resource (e.g., a hardware processor 804) coupled with bus 802 for processing information. Hardware processor 804 may be, for example, a general-purpose microprocessor. Computer system 800 also includes a main memory 806, such as a random access memory (RAM) or other dynamic storage device, coupled to bus 802 for storing information and instructions to be executed by processor 804. Main memory 806 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 804. Such instructions, when stored in non-transitory storage media accessible to processor 804, render computer system 800 into a special-purpose machine that is customized to perform the operations specified in the instructions. Computer system 800 further includes a read only memory (ROM) 808 or other static storage device coupled to bus 802 for storing static information and instructions for processor 804. A storage device 810, e.g., a magnetic disk, optical disk, or flash disk (made of flash memory chips), is provided and coupled to bus 802 for storing information and instructions. Computer system 800 may be coupled via bus 802 to a display 812, e.g., a cathode ray tube (CRT), Liquid Crystal Display (LCD), Organic Light-Emitting Diode Display (OLED), Digital Light Processing Display (DLP), or the like, for displaying information to a computer user. An input device 814, including alphanumeric and other keys, is coupled to bus 802 for communicating information and command selections to processor 804. Another type of user input device is cursor control 816, such as a mouse, a trackball, a trackpad, or cursor direction keys for communicating direction information and command selections to processor 804 and for controlling cursor movement on display 812. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. Removable storage media 840 can be any kind of external storage media, including, but not limited to, hard-drives, floppy drives, IOMEGA® Zip Drives, Compact Disc-Read Only Memory (CD-ROM), Compact Disc-Re-Writable (CD-RW), Digital Video Disk-Read Only Memory (DVD-ROM), USB flash drives, and the like.
Computer system 800 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware, or program logic which in combination with the computer system causes or programs computer system 800 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 800 in response to processor 804 executing one or more sequences of one or more instructions contained in main memory 806. Such instructions may be read into main memory 806 from another storage medium, such as storage device 810. Execution of the sequences of instructions contained in main memory 806 causes processor 804 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. The term “storage media” as used herein refers to any non-transitory media that store data or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media or volatile media. Non-volatile media includes, for example, optical, magnetic, or flash disks, such as storage device 810. Volatile media includes dynamic memory, such as main memory 806. Common forms of storage media include, for example, a flexible disk, a hard disk, a solid state drive, a magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, or any other memory chip or cartridge. Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire, and fiber optics, including the wires that comprise bus 802. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor 804 for execution. For example, the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system 800 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 802. Bus 802 carries the data to main memory 806, from which processor 804 retrieves and executes the instructions. The instructions received by main memory 806 may optionally be stored on storage device 810 either before or after execution by processor 804. Computer system 800 also includes a communication interface 818 coupled to bus 802. Communication interface 818 provides a two-way data communication coupling to a network link 820 that is connected to a local network 822. For example, communication interface 818 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 818 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN.
Wireless links may also be implemented. In any such implementation, communication interface 818 sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information. Network link 820 typically provides data communication through one or more networks to other data devices. For example, network link 820 may provide a connection through local network 822 to a host computer 824 or to data equipment operated by an Internet Service Provider (ISP) 826. ISP 826 in turn provides data communication services through the worldwide packet data communication network now commonly referred to as the “Internet” 828. Local network 822 and Internet 828 both use electrical, electromagnetic, or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 820 and through communication interface 818, which carry the digital data to and from computer system 800, are example forms of transmission media. Computer system 800 can send messages and receive data, including program code, through the network(s), network link 820, and communication interface 818. In the Internet example, a server 830 might transmit a requested code for an application program through Internet 828, ISP 826, local network 822, and communication interface 818. The received code may be executed by processor 804 as it is received, or stored in storage device 810, or other non-volatile storage for later execution.
11861173 | DETAILED DESCRIPTION OF THE INVENTION

FIG. 1 is a block diagram of the internal structure of a hard disk drive of the present invention, including an interface unit 20 for each group, wherein a control unit 10 of each group is electrically coupled to the computer system through its corresponding interface. The interface unit 20 may be of the Parallel ATA series, Serial ATA series, SCSI series, Serial Attached SCSI (SAS) series, PCI-e series, SATA-Express series, or USB series. Although PCI-e is a board-to-board coupling, the present invention can use PCI-e for data transmission, since no specification forbids an HDD from using PCI-e as an interface. The hard disk assembly 80 has at least one group, with each group containing 1 to 12 hard disks. Each hard disk is equipped with any one of 2, 4, or 6 read/write heads. When a hard disk is equipped with 2 read/write heads, the first side of the hard disk is named the 1st Big Sector Block and the second side is named the 2nd Big Sector Block. When a hard disk is equipped with 4 read/write heads, each hard disk is divided into 4 Big Sector Blocks: the first-side (side-A) outer ring sectors, the side-A inner ring sectors, the side-B outer ring sectors, and the side-B inner ring sectors.

FIG. 2A is a schematic diagram of the region division of the first side of a hard disk in an embodiment, in which a drive arm 100 has a first read/write head 601 and a second read/write head 602, and the hard disk has an outer region 801, a landing region 861 of the first read/write head 601, an inner region 802, and a landing region 862 of the second read/write head 602. FIG. 2B is a schematic diagram of the region division on the second side of the hard disk. One of the drive arms 101 has a third read/write head 603 and a fourth read/write head 604; the hard disk has an outer region 803, a landing region 863 of the third read/write head 603, an inner region 804, and a landing region 864 of the fourth read/write head 604. Traditional hard disks have a data-free safe region where the read/write head rests or parks; this region is called the RW-Head Landing Zone, or simply the landing region. In the embodiment of the present invention, both the first side and the second side of the first hard disk are designed with two read/write-head landing regions. The landing region near the center of the hard disk is called the second read/write-head landing region (it is also the landing zone of a traditional hard disk), and the landing zone far away from the center of the hard disk is called the first read/write-head landing zone. The sectors outside the first landing zone are called the outer magnetic zone, and the sectors between the first landing zone and the second landing zone are called the inner magnetic zone. For example, each side of each hard disk has a total of 8,400 tracks, and each side has two read/write heads to access the data on that side. The outermost track 1 to track 4,000 form the outer magnetic zone; tracks 4,001 to 4,200 form the landing zone of the first read/write head, with no magnetic coating in this region; tracks 4,201 to 8,200 form the inner magnetic zone; and tracks 8,201 to 8,400 form the landing region of the second read/write head, likewise with no magnetic coating.
Therefore, assuming that the speed of the read/write head does not change, the time for each read/write head to move from above the first track (track 00) to above the highest-numbered track in its zone will be half that of a traditional hard disk drive. In other words, the average access time of the head is only half that of a traditional hard disk drive. The first track on the first side of the hard disk is track 00 of the first read/write head, track 4,000 is track 3,999 of the first read/write head, track 4,201 is track 00 of the second read/write head, and track 8,200 is track 3,999 of the second read/write head. Track 1 on the second side of the hard disk is track 00 of the third read/write head, and track 4,000 is track 3,999 of the third read/write head; track 4,201 is track 00 of the fourth read/write head, and track 8,200 is track 3,999 of the fourth read/write head.
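As an illustration of the zone numbering just described, the following sketch maps a physical track number to its zone and its head-relative track number. This is a minimal Python model of the example geometry in the text (8,400 tracks per side); the function and constant names are illustrative, not part of the patent:

```python
# A minimal sketch, assuming the example geometry from the text: an outer zone
# (tracks 1-4,000, served by one head), an inner zone (tracks 4,201-8,200,
# served by a second head), and two uncoated landing zones in between.

OUTER_START, OUTER_END = 1, 4000
INNER_START, INNER_END = 4201, 8200

def zone_and_local_track(physical_track: int):
    """Map a physical track number to (zone, head-relative track 0-3,999)."""
    if OUTER_START <= physical_track <= OUTER_END:
        return ("outer", physical_track - OUTER_START)   # head sees track 00-3,999
    if INNER_START <= physical_track <= INNER_END:
        return ("inner", physical_track - INNER_START)   # head sees track 00-3,999
    return ("landing", None)                             # no data stored here

assert zone_and_local_track(1) == ("outer", 0)        # track 1 -> outer track 00
assert zone_and_local_track(4000) == ("outer", 3999)  # track 4,000 -> outer track 3,999
assert zone_and_local_track(4201) == ("inner", 0)     # track 4,201 -> inner track 00
assert zone_and_local_track(8200) == ("inner", 3999)  # track 8,200 -> inner track 3,999
```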
The feature of the present invention in this example is that a single drive shaft drives 2, 4, 6, 8, or 12 read/write heads to synchronously access hard disk data. In this embodiment, a single shaft drives four read/write heads to synchronously access data: when the first read/write head is pushed above track 00 in the outer sector, the second read/write head is also positioned above track 00 in the inner sector. At the same time, the third read/write head is located on track 00 of the outer sector on the second side of the hard disk, while the fourth read/write head is located on track 00 of the inner sector on the second side. The four read/write heads synchronously access data with the same track number in their four respective regions. A control unit 10 has a central processing unit (CPU) with a firmware program. Write commands and the data to be written are received from the computer system through the interface unit 20; the control unit 10 selects the read/write head serial number, the track number (cylinder number), and the sector number to which the data will be written, and separates the data into any of 2, 4, 6, or 8 groups. In this embodiment the control unit separates the data to be written into 4 groups: the first write data, the second write data, the third write data, and the fourth write data. The first write data is output to the first write/read data processing unit 301, the second write data to the second write/read data processing unit 302, the third write data to the third write/read data processing unit 303, and the fourth write data to the fourth write/read data processing unit 304.

For data reading, the system receives the read command from the computer system, and the control unit obtains the first track number and first sector number stored in the file allocation table (FAT), moves the drive arm and read/write heads to that track number to read the data, and then receives the data from the first write/read data processing unit 301, the second write/read data processing unit 302, the third write/read data processing unit 303, and the fourth write/read data processing unit 304. The data read from the four data processing units is merged into one piece of data, which is sent back to the computer system through the interface unit 20. The write/read data processing unit group 30 comprises at least one group, each group comprising any combination of 2, 4, 6, 8, or 12 write/read data processing units: 2 write/read data processing units with 2 read/write heads give 2× speed; 4 units with 4 heads give 4× speed; 6 units with 6 heads give 6× speed; 8 units with 8 heads give 8× speed; and 12 units with 12 heads give 12× speed. One end of each write/read data processing unit is electrically coupled to the control unit, and the other end is electrically coupled to its corresponding read/write head. In one embodiment, the control unit further integrates the processing units to become a multi-core control unit or multi-core CPU unit. In this embodiment, a single drive shaft drives four read/write heads to synchronously access data at four times the speed, so there are four write/read data processing units: the first write/read data processing unit 301, the second write/read data processing unit 302, the third write/read data processing unit 303, and the fourth write/read data processing unit 304. The first write/read data processing unit 301 is electrically coupled to the first read/write head 601, the second write/read data processing unit 302 to the second read/write head 602, the third write/read data processing unit 303 to the third read/write head 603, and the fourth write/read data processing unit 304 to the fourth read/write head 604. During the write operation, the first write/read data processing unit 301 outputs the first write data sent from the control unit 10, together with the CRC code of the first write data, to the first read/write head 601; the first write data is written in the first-side outer ring sector 801 of the first hard disk 81 to complete the data writing operation. In the read operation, the first-side outer ring of the first hard disk 81 is read by the first read/write head 601, the CRC is used to check the data and correct errors, and the data is then sent back to the control unit 10.
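The four-way split on write and merge on read described above can be modeled as plain sector interleaving. A minimal sketch in Python, assuming 512-byte sectors and four heads; the function names are illustrative and not from the patent:

```python
# A minimal sketch of the control unit's striping: deal the data out sector by
# sector to four per-head stripes on write, and re-interleave them on read.

SECTOR = 512
HEADS = 4

def split_for_write(data: bytes):
    """Distribute data, one sector at a time, to the four processing units."""
    stripes = [bytearray() for _ in range(HEADS)]
    for i in range(0, len(data), SECTOR):
        stripes[(i // SECTOR) % HEADS] += data[i:i + SECTOR]
    return [bytes(s) for s in stripes]

def merge_after_read(stripes, total_len: int):
    """Rebuild the original byte order from the four read-back stripes."""
    out = bytearray()
    offsets = [0] * HEADS
    head = 0
    while len(out) < total_len:
        take = min(SECTOR, total_len - len(out))
        out += stripes[head][offsets[head]:offsets[head] + take]
        offsets[head] += take
        head = (head + 1) % HEADS
    return bytes(out)

data = bytes(range(256)) * 8 + b"tail"        # 2,052 bytes, not sector-aligned
assert merge_after_read(split_for_write(data), len(data)) == data
```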
During the write operation, the second write/read data processing unit 302 outputs the second write data sent from the control unit 10, together with the CRC code of the second write data, to the second read/write head 602; the second write data is written in the first-side inner ring sector 802 of the first hard disk 81 to complete the data writing operation. In the read operation, the first-side inner ring sector 802 is read by the second read/write head 602, the CRC is used to check the data and correct errors, and the data is then sent back to the control unit 10. During the write operation, the third write/read data processing unit 303 outputs the third write data sent from the control unit 10, together with the CRC code of the third write data, to the third read/write head 603; the third write data is written in the second-side outer sector 803 of the first hard disk 81 to complete the data writing operation. In the read operation, the second-side outer sector 803 is read by the third read/write head 603, the CRC is used to check the data and correct errors, and the data is then sent back to the control unit 10. During the write operation, the fourth write/read data processing unit 304 outputs the fourth write data sent from the control unit 10, together with the CRC code of the fourth write data, to the fourth read/write head 604; the fourth write data is written in the second-side inner sector 804 of the hard disk to complete the data writing operation. In the read operation, the second-side inner sector 804 of the first hard disk 81 is read by the fourth read/write head 604, the CRC is used to check the data and correct errors, and the data is then sent back to the control unit 10. FIG. 3 is a diagram of the internal structure of the write/read data processing units 301, 302, 303, and 304, each of which comprises a microcontroller and IC(s), or they can be integrated into the control unit 10. Each write/read data processing unit contains at least the following 5 main parts: (1) data buffer 342: includes a random-access memory (RAM) or other volatile memory with a capacity not less than the size of a hard disk drive sector; (2) CRC generator 341: when the file is written into the data buffer 342, a CRC code is generated according to the data content. CRC is a cyclic redundancy check, a hash function that generates short, fixed-length verification codes from data packets or computer files.
It is mainly used to detect or verify errors that may later occur in data transmission or storage, and generally uses 32-bit (CRC32) integers; (3) error data detection and correction unit 343: when a file is read, it automatically checks whether the data content is erroneous according to the CRC content and corrects erroneous data; (4) serial/parallel data converter 344: when writing files, it converts the parallel data in the buffer into serial data and writes it into the hard disk together with the CRC code; when a file is read, it converts the incoming serial data and CRC code from the read/write head into parallel data, which is placed in the buffer before being read by the control unit 10; (5) track number and sector number comparator 345: when a file is written or read, the control unit 10 sends the track number and sector number data to the drive arm and read/write head combination unit 60 and to the first write/read data processing unit 301, the second write/read data processing unit 302, the third write/read data processing unit 303, and the fourth write/read data processing unit 304. When the read/write heads have been moved to the target track, the heads send the track number and sector number at which they are positioned back to the four write/read data processing units 301-304, and the track number and sector number comparator 345 compares whether these match the track number and sector number sent by the control unit 10. If they are the same, the four write/read data processing units 301-304 start to perform the data writing or reading tasks; if they are not, the control unit 10 adjusts the position of the heads. Since the first, second, third, and fourth read/write heads access data synchronously, under normal circumstances the four read/write heads should read the same track number and sector number, matching those sent by the control unit 10; if they differ, the control unit 10 is notified to adjust the head position.
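For the CRC generator 341 and error-data detection unit 343 described above, the following is a minimal sketch using Python's standard zlib.crc32 as the 32-bit CRC. Note that a bare CRC32 detects errors but does not by itself correct them; the correction step mentioned in the text would need an additional code, which the patent does not specify, so this model only detects:

```python
# A minimal sketch: append a CRC32 when a sector leaves the buffer (write
# path), and re-check it on read-back (read path). Names are illustrative.

import struct
import zlib

def attach_crc(sector: bytes) -> bytes:
    """Write path: store data || CRC32 (little-endian, 4 bytes)."""
    return sector + struct.pack("<I", zlib.crc32(sector) & 0xFFFFFFFF)

def check_crc(stored: bytes) -> bytes:
    """Read path: strip and verify the trailing CRC32 before use."""
    sector, crc = stored[:-4], struct.unpack("<I", stored[-4:])[0]
    if (zlib.crc32(sector) & 0xFFFFFFFF) != crc:
        raise IOError("CRC mismatch: sector must be re-read or corrected")
    return sector

good = attach_crc(b"\xAA" * 512)
assert check_crc(good) == b"\xAA" * 512
bad = good[:10] + b"\x00" + good[11:]      # corrupt one stored byte
try:
    check_crc(bad)
except IOError:
    pass                                    # corruption detected, as intended
```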
Permanent magnet combination 50: at least one group, each group comprising two permanent magnets, with a voice coil motor (VCM) placed in the magnetic field generated by the two permanent magnets. When current passes through the coil, under the influence of the magnetic field, the drive arm and read/write head combination unit 60 is moved by the leverage of the actuator axis. Actuator arm and read/write head combination unit 60: at least one set, each set containing a voice coil motor (VCM), an actuator axis, read/write heads (RW-Heads), and related components; the control unit controls the voice coil motor to move the read/write heads to the target track position. The number of read/write heads is always 2 times or 4 times the number of hard disks. When the number of read/write heads is twice the number of hard disks, the first read/write head is coupled to the first write/read data processing unit to access data on the first side of the first hard disk, and the second read/write head is coupled to the second write/read data processing unit to access data on the second side of the first hard disk. When the number of read/write heads is 4 times the number of hard disks, the first read/write head is coupled to the first write/read data processing unit to access data in the outer sector 801 on the first side of the first hard disk, the second read/write head is coupled to the second write/read data processing unit to access data in the inner ring sector 802 on the first side, the third read/write head is coupled to the third write/read data processing unit to access data in the outer sector 803 on the second side, and the fourth read/write head is coupled to the fourth write/read data processing unit to access data in the inner sector 804 on the second side of the first hard disk. Spindle motor 70: at least one group, each group providing a stable rotation speed for the hard disk (platter), usually but not limited to any one of 5,400 rpm, 7,200 rpm, 10 Krpm, 15 Krpm, etc. First power supply circuit unit 90: the power supply comes from the +5 volts, or +5 volts and +12 volts, of the computer system, and the power supply circuit unit converts it into the different voltages that supply the internal components of the hard disk drive. Configuration setting switch 11: at least one set, each set being a group of switches used to determine the configuration of the hard disk drive. For example, in a hard disk drive with 2 hard disks and 8 read/write heads, one can select all 8 read/write heads to access data synchronously for 8-times-speed access, or divide them into 2 groups of 4 read/write heads each, with each group achieving 4-times-speed access. See Table 1, which shows the relationship between transmission rate, hard disks, and the number of read/write heads.
TABLE 1

Item | Data Transfer Speed | Total Disks | Total R/W Heads | Description
X1 | 12× | 2 | 12 | Two hard disks are equipped with 12 read/write heads; all 12 heads access simultaneously at 12 times the speed. (1) First disk, first side, outer region: first head. (2) First disk, first side, middle region: second head. (3) First disk, first side, inner region: third head. (4) First disk, second side, outer region: fourth head. (5) First disk, second side, middle region: fifth head. (6) First disk, second side, inner region: sixth head. (7) Second disk, first side, outer region: seventh head. (8) Second disk, first side, middle region: eighth head. (9) Second disk, first side, inner region: ninth head. (10) Second disk, second side, outer region: tenth head. (11) Second disk, second side, middle region: eleventh head. (12) Second disk, second side, inner region: twelfth head.
X2 | 8× | 2 | 8 | Two hard disks are equipped with 8 read/write heads; all 8 heads access simultaneously at 8 times the speed. (1) First disk, first side, outer region: first head. (2) First disk, first side, inner region: second head. (3) First disk, second side, outer region: third head. (4) First disk, second side, inner region: fourth head. (5) Second disk, first side, outer region: fifth head. (6) Second disk, first side, inner region: sixth head. (7) Second disk, second side, outer region: seventh head. (8) Second disk, second side, inner region: eighth head.
X3 | 4× | 2 | 8 | Two hard disks are equipped with 8 read/write heads, but only 4 heads access simultaneously, at 4 times the speed. There are two implementations: (1) only one hard disk is accessed at a time (its 4 heads simultaneously), i.e., heads 1-4 or heads 5-8 access synchronously; (2) the same side of both hard disks is accessed simultaneously (4 heads synchronously), i.e., heads 1, 3, 5, and 7, or heads 2, 4, 6, and 8, access synchronously.
X4 | 4× | 1 | 4 | In the implementation example of this case, one hard disk is equipped with 4 read/write heads; synchronous access by the 4 heads gives 4 times the speed. (1) First disk, first side, outer region: first head. (2) First disk, first side, inner region: second head. (3) First disk, second side, outer region: third head. (4) First disk, second side, inner region: fourth head.
X5 | 4× | 2 | 4 | Two hard disks are equipped with 4 read/write heads; simultaneous access by the 4 heads gives 4 times the speed. (1) First disk, first side: first head. (2) First disk, second side: second head. (3) Second disk, first side: third head. (4) Second disk, second side: fourth head.
X6 | 6× | 3 | 6 | Three hard disks are equipped with 6 read/write heads; simultaneous access by the 6 heads gives 6 times the speed. (1) First disk, first side: first head. (2) First disk, second side: second head. (3) Second disk, first side: third head. (4) Second disk, second side: fourth head. (5) Third disk, first side: fifth head. (6) Third disk, second side: sixth head.
X7 | 3× | 3 | 6 | Three hard disks are equipped with 6 read/write heads, but only 3 heads access synchronously at a time, giving 3 times the speed. There are two methods. Method 1: the three heads on the first sides of the three disks access simultaneously: (1) first disk, first side, first head; (2) second disk, first side, third head; (3) third disk, first side, fifth head. Method 2: the three heads on the second sides of the three disks access simultaneously: (1) first disk, second side, second head; (2) second disk, second side, fourth head; (3) third disk, second side, sixth head.
X8 | 2× | 2 | 4 | Two hard disks are equipped with 4 read/write heads, but only two heads access simultaneously, giving 2 times the speed.
There are two implementation methods for X8: (1) single-disk dual-head double speed: the first side (A side) of the first hard disk is accessed by the first read/write head, and the second side (B side) of the first hard disk is accessed by the second read/write head; (2) dual-disk single-side double speed: the first side (A side) of the first hard disk is accessed by the first read/write head, and the first side (A side) of the second hard disk is accessed by the third read/write head.

The present invention uses a single drive arm to synchronously access the hard disk drive with multiple read/write heads. Since the number of hard disks in the drive and the configuration of the read/write heads vary, the following combinations are included but not limiting: (1) 1P2S4H, single-disk 4× speed: the description of this case uses this combination as an example. There is only one hard disk in the hard disk drive. The hard disk has two sides, called the first side and the second side, and each side is configured with 2 read/write heads, for a total of 4. The heads on the first side are called the first read/write head 601 and the second read/write head 602; the first head accesses the outer sector of the first side, and the second head accesses the inner sector of the first side of the first hard disk (see FIG. 2A). The heads on the second side are called the third read/write head 603 and the fourth read/write head 604; the third head accesses the outer sector of the second side, and the fourth head accesses the inner sector of the second side of the first hard disk. This means one hard disk is equipped with a total of 4 read/write heads, and these 4 heads access at the same time, so 4 write/read data processing units (301-304 in FIG. 1) are required, each electrically coupled to its respective read/write head. Because the above four read/write heads access synchronously, the time for writing data to and reading data back from the hard disk for the same amount of data is reduced by 75% compared with an ordinary drive. The average access time is also reduced by 75%; in other words, performance is 4 times that of an ordinary hard drive. Under this architecture, because the two heads on the first side and the two heads on the second side, four heads in total, access data at the same cylinder number and sector number at the same time, it is necessary to design a cluster equal to 4 sectors or a multiple of 4 sectors. In addition, general hard disks have a data-free safe region that serves as a rest or parking place for the read/write head, called the RW-Head Landing Zone. When the hard disk drive of the present invention is of the 1P2S4H or 2P4S8H architecture, each side of each hard disk must reserve two read/write-head landing regions; in other words, each side is equipped with two read/write heads and two landing regions. This is another feature of the present invention; see FIG. 2A. (2) 2P4S8H, dual-disk 8× speed: there are 2 hard disks in the hard disk drive. Each hard disk has two sides, called the first side and the second side, and each side is equipped with 2 read/write heads. The first hard disk is equipped with 4 read/write heads.
As mentioned in (1) above, the second hard disk is also equipped with 4 read/write heads, for a total of 8, which means this hard drive contains 2 hard disks configured with a total of 8 read/write heads. During data writing and data reading, the 8 read/write heads access at the same time, so 8 write/read data processing units are required, electrically coupled to the 8 read/write heads. The time it takes to write data to the hard disk and read data back from it is therefore one-eighth that of a normal hard disk drive, and the average access time of the read/write heads is also reduced by 87.5%; in other words, the performance is 8 times that of a normal hard disk drive. Under this structure, since a total of 8 read/write heads on the first and second hard disks access data at the same track number and sector number at the same time, it is necessary to design a cluster equal to 8 sectors or a multiple of 8 sectors. (3) 2P4S12H, dual-disk 12× speed: there are 2 hard disks in the hard disk drive. Each hard disk has two sides, called the first side and the second side. Referring to FIG. 2C, each side is equipped with 3 read/write heads, and each side of the hard disk is divided into 3 sectors: the outer ring, the middle ring, and the inner ring, with each of the three read/write heads accessing its own magnetic zone. The first hard disk is thus equipped with 6 read/write heads and the second hard disk with another 6, for a total of 12, which means the hard disk drive contains 2 hard disks and a total of 12 read/write heads. During data writing and data reading, the 12 read/write heads access at the same time, so 12 write/read data processing units are required, electrically coupled to the 12 read/write heads. The time it takes to write data to the hard disk and read data back from it is one-twelfth that of a normal hard disk drive, and the average access time of the read/write heads is reduced by 91.7%; in other words, the performance is 12 times that of a normal hard disk drive. Referring to FIG. 2C, when the hard disk drive of the present invention is of the 2P4S12H architecture, each side of each hard disk must reserve three read/write-head landing regions; in other words, each side is equipped with three read/write heads and three landing regions. (4) 3P6S6H, three-disk 6× speed: there are three hard disks in the hard disk drive. Each hard disk has two sides, called the first side and the second side, and each side is equipped with one read/write head. The drive has 6 read/write heads accessing data at the same time, so 6 write/read data processing units are needed. Due to the simultaneous access of 6 read/write heads, the time it takes to write data to and read data back from the hard disk for the same amount of data is 83% less than an average hard disk drive, and the average access time of the read/write heads is also reduced by 83%; in other words, performance is 6 times that of ordinary hard drives. Generally, the access time of a hard disk drive is about 9-15 ms. One of the characteristics of the present invention is that a hard disk drive, according to the number of hard disks it contains, the number of read/write heads arranged on each side of each hard disk, and the number of write/read data processing units electrically coupled to the read/write heads, can achieve different data access performance.
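The cluster-sizing constraint that recurs in these architectures (a cluster of 4 sectors or a multiple of 4 for 1P2S4H, 8 or a multiple of 8 for 2P4S8H) reduces to the rule that a cluster must span a multiple of the number of synchronized heads. A minimal sketch; the helper name is illustrative:

```python
# A minimal sketch of the cluster-sizing rule: with N heads accessing the
# same track/sector number in lockstep, every allocation unit must span a
# multiple of N sectors so the heads stay aligned.

def valid_cluster_sizes(heads: int, max_sectors: int = 64):
    """Cluster sizes (in sectors) usable with `heads` synchronized heads."""
    return [n for n in range(1, max_sectors + 1) if n % heads == 0]

print(valid_cluster_sizes(4))   # [4, 8, 12, ..., 64] for the 1P2S4H example
print(valid_cluster_sizes(8))   # [8, 16, 24, ..., 64] for the 2P4S8H example
```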
Furthermore, the above four architectures can change their data transmission performance through the following configuration-setting changes on the production side of the hard disk drive, according to need. This technology is also unique to this patent; see Table 1. (1) 8× speed to 4× speed: assuming the hard disk drive has 2 hard disks and 8 read/write heads in simultaneous access (the patented 8× hard disk drive), it can be changed to a 4× hard disk drive by the configuration setting switch 11, and the 4× drive can be set to either single-hard-disk 4× or 2-hard-disk single-access 4×. (2) 4× speed to 2× speed: a hard disk drive with 2 hard disks and 4 read/write heads in simultaneous access (a 4× drive) can be changed to a 2× drive by the configuration setting switch 11, and the 2× drive can be set to either single-hard-disk 2× or 2-hard-disk single-side 2×. (3) 6× speed to 3× or 2× speed: a hard disk drive with 3 hard disks and 6 read/write heads in simultaneous access (a 6× drive) can be changed by the configuration setting switch 11 to (1) a 3× drive in which the 3 read/write heads on one side of the 3 hard disks (all first sides or all second sides) access synchronously, or (2) a 2× drive in which 2 read/write heads of a single disk access synchronously.
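The switch-selectable groupings just described can be modeled as a lookup from the drive's physical build (disks, heads) to the allowed synchronized-group sizes, where the group size equals the speed multiplier. A minimal sketch; the table contents follow Table 1 and the downgrade options above, everything else (names, the dict encoding) is illustrative:

```python
# A minimal sketch of configuration setting switch 11: the same physical
# drive can be grouped into different numbers of lockstep heads, and N heads
# in lockstep yield an N-times transfer rate.

CONFIGS = {
    # (disks, total heads): allowed synchronized-group sizes (= speed factor)
    (2, 12): [12],
    (2, 8):  [8, 4],       # 8x, or two groups of 4 heads at 4x
    (1, 4):  [4],
    (2, 4):  [4, 2],       # 4x, or the single-disk / single-side 2x modes
    (3, 6):  [6, 3, 2],    # 6x, or 3x (one side of each disk), or 2x
}

def speed_factor(disks: int, heads: int, switch_setting: int) -> int:
    allowed = CONFIGS[(disks, heads)]
    if switch_setting not in allowed:
        raise ValueError(f"switch must select one of {allowed}")
    return switch_setting

print(speed_factor(2, 8, 4))   # the 8x drive downgraded to 4x by switch 11
```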
The present invention includes at least three types of architectures, 1P2S4H, 2P4S8H, and 3P6S6H, producing 2, 4, 6, and 8 times the performance of ordinary hard disk drives. Here, only the first architecture, 1P2S4H, in which four read/write heads synchronously access the data of one hard disk, is taken as an example, and the implementation is illustrated with FIG. 1, FIG. 3, and FIG. 4. As shown in FIG. 1, when a file is written, the firmware program (file management) in the control unit 10 determines, based on the currently available sectors (or clusters) of the hard disk and the file size, to which sectors of which tracks the file will be written. A set of data is sent to the voice coil motor (VCM) to move the drive arm and read/write head combination unit 60 to the destination track, and the track number and sector number data are simultaneously sent to the first write/read data processing unit 301, the second write/read data processing unit 302, the third write/read data processing unit 303, and the fourth write/read data processing unit 304. The control unit 10 then divides the file to be written into 4 parts, using one sector size (generally defined as 512 bytes or 4 Kbytes) as the unit and dividing sequentially into the first write region, the second write region, the third write region, and the fourth write region; after this, the contents of the four regions are ordered to obtain the first write data, the second write data, the third write data, and the fourth write data. After the file is divided, these 4 parts of data are sent to the first write/read data processing unit 301, the second write/read data processing unit 302, the third write/read data processing unit 303, and the fourth write/read data processing unit 304, respectively.

For example, when writing a file with a size of 2,047 bytes, the control unit 10 first divides the file into three 512-byte pieces and one 511-byte piece and sends them respectively to the first write/read data processing unit 301, the second write/read data processing unit 302, the third write/read data processing unit 303, and the fourth write/read data processing unit 304. When the four write/read data processing units 301-304 receive the target region and track number data sent by the control unit 10, they compare it against the track numbers read back by the first read/write head (RW-Head-1) 601, the second read/write head (RW-Head-2) 602, the third read/write head (RW-Head-3) 603, and the fourth read/write head (RW-Head-4) 604 (FIG. 3, 345). If any of the four sets of data differ, the position of the read/write heads is wrong and the control unit 10 is notified to adjust it; if they are the same, data writing proceeds. When the first write/read data processing unit 301, the second write/read data processing unit 302, the third write/read data processing unit 303, and the fourth write/read data processing unit 304 receive the first write data, the second write data, the third write data, and the fourth write data segmented by the control unit 10, they generate CRC codes according to the content of the data (FIG. 3, 341), convert the data together with the CRC into serial data (FIG. 3, 344), and then write the data through the first read/write head 601, the second read/write head 602, the third read/write head 603, and the fourth read/write head 604 into the designated sectors of the first-side outer sector 801, the first-side inner ring sector 802, the second-side outer sector 803, and the second-side inner sector 804 of the first hard disk. In the above example, the first write/read data processing unit 301 sends its 512 bytes of data plus CRC to the first read/write head 601, to be written at the specified track number and sector number in the first-side outer ring region 801 of the first hard disk; the second write/read data processing unit 302 sends its 512 bytes plus CRC to the second read/write head 602, written at the designated track of the first-side inner ring sector 802; the third write/read data processing unit 303 sends its 512 bytes plus CRC to the third read/write head 603, written into the second-side outer ring sector 803 of the first hard disk; and the fourth write/read data processing unit 304 sends its 511 bytes plus CRC to the fourth read/write head 604, written at the specified track number and sector number of the second-side inner sector 804. The track numbers and sector numbers of the four are identical; for example, all write to the eighth sector of the 500th track, and in this way the writing of the 2,047-byte file is completed. When the system side issues a file read command, the control unit 10 obtains the track number data of the first storage sector of the file content from the file allocation table (FAT).
It should be noted that in general hard disk drives the FAT contains the read/write head number (for example, the first read/write head or the second read/write head, etc.); in this embodiment, the four read/write heads access at the same time. The control unit 10 sends a set of data to the voice coil motor (VCM), moves the four synchronously accessing read/write heads to the target track number at the same time, and sends the track number and sector number to the first write/read data processing unit 301, the second write/read data processing unit 302, the third write/read data processing unit 303, and the fourth write/read data processing unit 304. When the four write/read data processing units 301-304 receive the target region and track number data sent by the control unit 10, they compare them with the track number and sector number data read back from the first read/write head 601, the second read/write head 602, the third read/write head 603, and the fourth read/write head 604 (FIG. 3, 345). If the two sets of data differ, the position of the read/write heads is wrong and the control unit 10 is notified to adjust (correcting the data sent to the voice coil motor, VCM); if they are the same, reading proceeds. The first read/write head 601 in the drive arm and read/write head combination unit 60 reads the first sector data and CRC content of the file in the first-side outer sector of the first hard disk into the first write/read data processing unit 301; the second read/write head 602 reads the first sector data and CRC content of the file in the first-side inner ring into the second write/read data processing unit 302; the third read/write head 603 reads the first sector data and CRC content of the file in the second-side outer sector into the third write/read data processing unit 303; and the fourth read/write head 604 reads the first sector data and CRC content of the file in the second-side inner ring region into the fourth write/read data processing unit 304. The first write/read data processing unit 301 converts the serial data into parallel data and checks, against the CRC content, that the read sector data is correct (FIG. 3, 343); the second write/read data processing unit 302 converts the serial data into parallel data and checks the read sector data against its CRC content; the third write/read data processing unit 303 does the same; and the fourth write/read data processing unit 304 likewise converts the serial data into parallel data and checks the read sector data against its CRC content, after which each unit individually sends its sector data back to the control unit 10. The control unit 10 merges the four sectors of data into one complete piece and sends it back to the computer system through the interface unit. If the file content is greater than four sectors, the control unit uses the index of the next storage sector of the file and continues to read the contents of the following sectors until all the contents of the file have been read.
Taking the same example as above, after the control unit 10 learns the track position of the file storage from the FAT, it first controls the drive arm and read/write head unit 60 to move the read/write heads to the track position; the first read/write head 601 reads 512 bytes of data into the first write/read data processing unit 301, the second read/write head 602 reads 512 bytes into the second write/read data processing unit 302, the third read/write head 603 reads 512 bytes into the third write/read data processing unit 303, and the fourth read/write head 604 reads 511 bytes into the fourth write/read data processing unit 304. Next, the first write/read data processing unit 301, the second write/read data processing unit 302, the third write/read data processing unit 303, and the fourth write/read data processing unit 304 each check the data by CRC, correct it if there is an error, and send the data to the control unit 10 once it is correct. After the control unit 10 receives the 512-byte, 512-byte, 512-byte, and 511-byte data of the above example, it integrates the data into 2,047 bytes and sends it back to the computer system through the interface unit 20 to complete the file read operation. Differences between the hard disk drives of the present invention and traditional hard disk drives include the following. First, each side of a traditional hard disk has only one read/write head to access data. Second, the read/write heads on the first side and the second side of a traditional hard disk drive do not access data synchronously: a piece of data (a file) may be written to the first side and the second side separately, or only to the first side or the second side. In this case, however, the first read/write head 601, the second read/write head 602, the third read/write head 603, and the fourth read/write head 604 access the whole piece of data synchronously, so the file storage rules must be modified. For example: if a small file uses only one sector, at track number 100 and sector number 03 in the outer ring of the first side of the hard disk, then the corresponding tracks and sectors in the inner ring of the first side, the outer ring of the second side, and the inner ring of the second side are reserved and will not be used by other files. In short, the same track number and same sector number on the first-side outer ring, first-side inner ring, second-side outer ring, and second-side inner ring of the hard disk can only be used for reading/writing the same file. Those familiar with hard disk drive technology know that bad sectors inevitably arise at the hard drive production stage. The hard drive factory marks these bad sectors and replaces them with reserved sectors on the same track; finally, all the bad sectors are sorted out and the bad-sector list, the P-List (Primary Defect List), is generated. After the hard drive leaves the factory, the user cannot see the bad-sector list, and these bad sectors do not affect the user. FIG. 4 is a low-level format flow chart, in which steps S210 to S290 differ from those of a general hard disk drive: this embodiment uses 4 read/write heads to access synchronously, while in a general hard disk drive only a single read/write head accesses at a time. Step S295 is one of the features of the present invention.
This step S295 reorders the sector numbers and generates the P-List. The rule is as follows: under the same cylinder, if a certain region on one side (such as the outer sector on the first side) has a bad sector (for example, sector number 03), then when the step S295 reordering is performed, the same sector number (sector 03) in the other three regions (the first-side inner ring sector, the second-side outer ring sector, and the second-side inner ring sector) is marked "not used" (NU), so that the sector numbers accessed by the first, second, third, and fourth read/write heads are the same, achieving the purpose of improved access performance. The steps for performing low-level formatting of the hard drive are as follows. (S205) The configuration of the hard disk is set. (S210) Starting from track 00: the control unit 10 sends vector data to the voice coil motor of the drive arm and read/write head combination unit 60, and the voice coil motor pushes the read/write heads to the outermost track 00 of the hard disk. The control unit 10 simultaneously sends the track number and sector number data to the first write/read data processing unit 301, the second write/read data processing unit 302, the third write/read data processing unit 303, and the fourth write/read data processing unit 304. Because the four read/write heads act at the same time, the first read/write head 601, the second read/write head 602, the third read/write head 603, and the fourth read/write head 604 are moved simultaneously to track 00; the four read/write heads then read the track number and sector number and send them back to the four write/read data processing units 301-304 to confirm that the position of the read/write heads is correct, and any error is sent back to the control unit 10 for correction. (S220) Starting from sector 00: the read/write heads read the track number and sector number information and send it back to the first write/read data processing unit 301, the second write/read data processing unit 302, the third write/read data processing unit 303, and the fourth write/read data processing unit 304 to determine that the read/write heads are in the correct position. (S230) Start the sector data write/read test: the control unit 10 of FIG. 1 sends test data to the first write/read data processing unit 301, the second write/read data processing unit 302, the third write/read data processing unit 303, and the fourth write/read data processing unit 304. The four units write the test data and CRC through the first read/write head 601, the second read/write head 602, the third read/write head 603, and the fourth read/write head 604 into the first-side outer ring sector 801, the first-side inner ring sector 802, the second-side outer ring sector 803, and the second-side inner ring sector 804 of the first hard disk, respectively, and the first read/write head 601, the second read/write head 602, the third read/write head 603, and the fourth read/write head 604 then read the data back from the first-side outer ring sector 801, the first-side inner ring sector 802, the second-side outer ring sector 803, and the second-side inner ring sector 804 of the first hard disk.
The read-back data is then compared with the written data for consistency. Commonly used test data include AAh, 55h, 00h, FFh, and random numbers. (S240) Bad sector? If the data read is the same as the data written, the sector is a usable sector; skip to step (S260). If the data read differs from the data written, or the CRC cannot detect and correct the error, it is a bad sector; the drive manufacturer usually conducts 3-6 retests, and when it is determined that the sector cannot be used (recovered), skip to step (S250). (S250) Mark the sector as bad (NG), then skip to (S260) to continue with the next sector test. Because the first read/write head 601, the second read/write head 602, the third read/write head 603, and the fourth read/write head 604 simultaneously test corresponding sectors of the first hard disk's first-side outer ring sector 801, first-side inner ring sector 802, second-side outer ring sector 803, and second-side inner ring sector 804, when a write/read test failure is marked "bad" there are several possible conditions: (1) only one of 801, 802, 803, and 804 is defective; (2) at least one of 801, 802, 803, and 804 is defective. (S260) Last sector? If it is the last sector, skip to (S280) for the next track test; otherwise skip to (S270). (S270) Sector = sector + 1; continue with the write/read-back test of the next sector; skip to (S230). (S280) Last track? If it is the last track, skip to (S295); otherwise skip to (S290). (S290) Track = track + 1; continue the write/read test of the next track. (S295) Reorder the sector numbers and generate the P-List: after the control unit completes the low-level formatting, each track has a sector status table recording the final test result of every sector on the track. Step S295 compares, for the same track of the first hard disk, the sector status tables of the first-side outer ring sector 801, the first-side inner ring sector 802, the second-side outer ring sector 803, and the second-side inner ring sector 804, and the available sectors are reordered.
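Steps S230 to S250 amount to a per-sector write/verify loop with a bounded retest. A minimal sketch, assuming abstract write_sector/read_sector callables supplied by the drive under test; the patterns (AAh, 55h, 00h, FFh, plus a random pattern) and the 3-6 retest policy follow the text, everything else is illustrative:

```python
import os

PATTERNS = [b"\xAA", b"\x55", b"\x00", b"\xFF"]   # AAh, 55h, 00h, FFh
SECTOR = 512
RETESTS = 3                                        # manufacturers retest 3-6 times

def test_sector(write_sector, read_sector, track: int, sector: int) -> bool:
    """Return True if the sector is usable (go to S260), False to mark NG (S250)."""
    for attempt in range(1 + RETESTS):
        ok = True
        for p in PATTERNS + [os.urandom(1)]:       # fixed patterns plus a random one
            data = p * SECTOR
            write_sector(track, sector, data)      # S230: write test data
            if read_sector(track, sector) != data: # read back and compare
                ok = False
                break
        if ok:
            return True
    return False
```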
In one embodiment, Tables 2 to 5 are the sector status tables for track 100 of the first hard disk after low-level formatting, where Table 2 is the first-side outer magnetic zone, Table 3 is the first-side inner magnetic zone, Table 4 is the second-side outer zone, and Table 5 is the second-side inner zone; Tables 6 to 9 are the corresponding status tables for track 100 after the step S295 reordering, in the same order. The description is as follows. (1) Assuming the total number of sectors per track is 64, the ordering runs from sector number 00 to sector number 63, and 8 sectors per track are reserved. (2) After low-level formatting is completed, the status tables for track number 100 of the first hard disk are shown in Tables 2 to 5, where S-00 is the first sector, S-63 is the 64th sector, and R-00 to R-07 are the reserved (replacement) sectors. Table 2 (first-side outer ring sectors): no bad sectors after low-level formatting. Table 3 (first-side inner ring sectors): S-03 is a bad sector. Table 4 (second-side outer ring sectors): S-59 is a bad sector. Table 5 (second-side inner ring sectors): S-03 and S-16 are bad sectors.

TABLE 2: First hard disk, first-side outer ring region, track number 100
S-00 S-01 S-02 S-03 S-04 S-05 S-06 S-07
S-08 S-09 S-10 S-11 S-12 S-13 S-14 S-15
S-16 S-17 S-18 S-19 S-20 S-21 S-22 S-23
S-24 S-25 S-26 S-27 S-28 S-29 S-30 S-31
S-32 S-33 S-34 S-35 S-36 S-37 S-38 S-39
S-40 S-41 S-42 S-43 S-44 S-45 S-46 S-47
S-48 S-49 S-50 S-51 S-52 S-53 S-54 S-55
S-56 S-57 S-58 S-59 S-60 S-61 S-62 S-63
R-00 R-01 R-02 R-03 R-04 R-05 R-06 R-07

TABLE 3: First hard disk, first-side inner ring region, track number 100 (S-03 is bad)
S-00 S-01 S-02 S-03(NG) S-04 S-05 S-06 S-07
S-08 S-09 S-10 S-11 S-12 S-13 S-14 S-15
S-16 S-17 S-18 S-19 S-20 S-21 S-22 S-23
S-24 S-25 S-26 S-27 S-28 S-29 S-30 S-31
S-32 S-33 S-34 S-35 S-36 S-37 S-38 S-39
S-40 S-41 S-42 S-43 S-44 S-45 S-46 S-47
S-48 S-49 S-50 S-51 S-52 S-53 S-54 S-55
S-56 S-57 S-58 S-59 S-60 S-61 S-62 S-63
R-00 R-01 R-02 R-03 R-04 R-05 R-06 R-07

TABLE 4: First hard disk, second-side outer ring region, track number 100 (S-59 is bad)
S-00 S-01 S-02 S-03 S-04 S-05 S-06 S-07
S-08 S-09 S-10 S-11 S-12 S-13 S-14 S-15
S-16 S-17 S-18 S-19 S-20 S-21 S-22 S-23
S-24 S-25 S-26 S-27 S-28 S-29 S-30 S-31
S-32 S-33 S-34 S-35 S-36 S-37 S-38 S-39
S-40 S-41 S-42 S-43 S-44 S-45 S-46 S-47
S-48 S-49 S-50 S-51 S-52 S-53 S-54 S-55
S-56 S-57 S-58 S-59(NG) S-60 S-61 S-62 S-63
R-00 R-01 R-02 R-03 R-04 R-05 R-06 R-07

TABLE 5: First hard disk, second-side inner ring region, track number 100 (S-03 and S-16 are bad)
S-00 S-01 S-02 S-03(NG) S-04 S-05 S-06 S-07
S-08 S-09 S-10 S-11 S-12 S-13 S-14 S-15
S-16(NG) S-17 S-18 S-19 S-20 S-21 S-22 S-23
S-24 S-25 S-26 S-27 S-28 S-29 S-30 S-31
S-32 S-33 S-34 S-35 S-36 S-37 S-38 S-39
S-40 S-41 S-42 S-43 S-44 S-45 S-46 S-47
S-48 S-49 S-50 S-51 S-52 S-53 S-54 S-55
S-56 S-57 S-58 S-59 S-60 S-61 S-62 S-63
R-00 R-01 R-02 R-03 R-04 R-05 R-06 R-07
(3) After step S295 reorders the sector serial numbers, Table 6 (first-side outer ring sectors) results. All of the original sectors are usable, but to meet the requirement of synchronous access, (1) original sector S-03 is re-marked X-03 NU, original S-16 is re-marked X-16 NU, and original S-59 is re-marked X-59 NU (NU means "not used"); (2) apart from these 3 modified marks, all other sectors are renumbered in sequence.

TABLE 6: First hard disk, first-side outer ring region, track number 100 (after reordering)
S-00 S-01 S-02 X-03(NU) S-03 S-04 S-05 S-06
S-07 S-08 S-09 S-10 S-11 S-12 S-13 S-14
X-16(NU) S-15 S-16 S-17 S-18 S-19 S-20 S-21
S-22 S-23 S-24 S-25 S-26 S-27 S-28 S-29
S-30 S-31 S-32 S-33 S-34 S-35 S-36 S-37
S-38 S-39 S-40 S-41 S-42 S-43 S-44 S-45
S-46 S-47 S-48 S-49 S-50 S-51 S-52 S-53
S-54 S-55 S-56 X-59(NU) S-57 S-58 S-59 S-60
S-61 S-62 S-63 R-00 R-01 R-02 R-03 R-04

(4) After step S295 reorders the sector serial numbers, Table 7 (first-side inner ring sectors) results. Only original sector S-03 is bad, but to meet the requirement of synchronous access, (1) original sector S-16 is marked X-16 NU and original S-59 is marked X-59 NU, while original S-03 keeps the mark X-03 NG; (2) apart from these 3 modified marks, all other sectors are renumbered in sequence.

TABLE 7: First hard disk, first-side inner ring region, track number 100 (after reordering)
S-00 S-01 S-02 X-03(NG) S-03 S-04 S-05 S-06
S-07 S-08 S-09 S-10 S-11 S-12 S-13 S-14
X-16(NU) S-15 S-16 S-17 S-18 S-19 S-20 S-21
S-22 S-23 S-24 S-25 S-26 S-27 S-28 S-29
S-30 S-31 S-32 S-33 S-34 S-35 S-36 S-37
S-38 S-39 S-40 S-41 S-42 S-43 S-44 S-45
S-46 S-47 S-48 S-49 S-50 S-51 S-52 S-53
S-54 S-55 S-56 X-59(NU) S-57 S-58 S-59 S-60
S-61 S-62 S-63 R-00 R-01 R-02 R-03 R-04

(5) After step S295 reorders the sector serial numbers, Table 8 (second-side outer ring sectors) results. Only original sector S-59 is bad, but to meet the requirement of synchronous access, (1) original sector S-03 is marked X-03 NU and original S-16 is marked X-16 NU, while original S-59 keeps the mark X-59 NG; (2) apart from these 3 modified marks, all other sectors are renumbered in sequence.
TABLE 8: First hard disk, second-side outer ring region, track number 100 (after reordering)
S-00 S-01 S-02 X-03(NU) S-03 S-04 S-05 S-06
S-07 S-08 S-09 S-10 S-11 S-12 S-13 S-14
X-16(NU) S-15 S-16 S-17 S-18 S-19 S-20 S-21
S-22 S-23 S-24 S-25 S-26 S-27 S-28 S-29
S-30 S-31 S-32 S-33 S-34 S-35 S-36 S-37
S-38 S-39 S-40 S-41 S-42 S-43 S-44 S-45
S-46 S-47 S-48 S-49 S-50 S-51 S-52 S-53
S-54 S-55 S-56 X-59(NG) S-57 S-58 S-59 S-60
S-61 S-62 S-63 R-00 R-01 R-02 R-03 R-04

(6) After step S295 reorders the sector serial numbers, Table 9 (second-side inner ring sectors) results. Only original sectors S-03 and S-16 are bad, but to meet the requirement of synchronous access, (1) original sector S-03 keeps the mark X-03 NG and original S-16 keeps the mark X-16 NG, while original S-59 is marked X-59 NU; (2) apart from these 3 modified marks, all other sectors are renumbered in sequence.

TABLE 9: First hard disk, second-side inner ring region, track number 100 (after reordering)
S-00 S-01 S-02 X-03(NG) S-03 S-04 S-05 S-06
S-07 S-08 S-09 S-10 S-11 S-12 S-13 S-14
X-16(NG) S-15 S-16 S-17 S-18 S-19 S-20 S-21
S-22 S-23 S-24 S-25 S-26 S-27 S-28 S-29
S-30 S-31 S-32 S-33 S-34 S-35 S-36 S-37
S-38 S-39 S-40 S-41 S-42 S-43 S-44 S-45
S-46 S-47 S-48 S-49 S-50 S-51 S-52 S-53
S-54 S-55 S-56 X-59(NU) S-57 S-58 S-59 S-60
S-61 S-62 S-63 R-00 R-01 R-02 R-03 R-04

Conclusion: after the original sector status tables are reordered as in the above embodiment, on every region and every track, each of the sectors accessed by the first read/write head, the second read/write head, the third read/write head, and the fourth read/write head is usable and carries the same sector number (sequence). We call this Sector-Access Synchronization; in other words, the 4 read/write heads can simultaneously access sector data of the same sequence and the same sector number, thereby achieving 4 times the access performance. Integrating the sector status tables of all the tracks yields the P-List of the hard drive. Implementing sector-access synchronization is not mandatory; even without it, the four read/write heads can individually complete access to the sector data, and such an implementation still falls within the scope of the patent claims.
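The step S295 rule illustrated by Tables 2 to 9 can be stated compactly: take the union of the bad sector numbers across the four regions of a track, retire that union everywhere (NG where physically bad, NU where merely skipped for synchronization), and renumber the surviving cells sequentially into the reserve. A minimal sketch reproducing the tables' example; all names are illustrative, and bad sectors are assumed to lie among the original 64:

```python
# A minimal sketch of the S295 reordering for one track.

SECTORS, RESERVED = 64, 8

def reorder(bad_by_region):
    """bad_by_region: dict of region name -> set of bad physical sector numbers."""
    skipped = set().union(*bad_by_region.values())   # {3, 16, 59} in the example
    maps = {}
    for region, bad in bad_by_region.items():
        table, logical = [], 0
        for phys in range(SECTORS + RESERVED):
            if phys in skipped:
                table.append(f"X-{phys:02d} " + ("NG" if phys in bad else "NU"))
            elif logical < SECTORS:
                table.append(f"S-{logical:02d}")     # renumbered in sequence
                logical += 1
            else:
                # remaining reserve, renumbered from R-00
                table.append(f"R-{phys - SECTORS - len(skipped):02d}")
        maps[region] = table
    return maps

m = reorder({"A-outer": set(), "A-inner": {3},
             "B-outer": {59}, "B-inner": {3, 16}})
print(m["A-inner"][3])    # X-03 NG : physically bad, retired everywhere
print(m["A-outer"][3])    # X-03 NU : good but skipped to stay synchronized
```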
Regarding the data transmission rate: according to FIG. 1 and the above-described embodiments, in this case each hard disk must be equipped with 2 read/write heads, so 12 hard disks must be equipped with 24 read/write heads, and the 24 read/write heads must be coupled to the 24 write/read data processing units of the write/read data processing unit group 30; when the 24 read/write heads access hard disk data synchronously, the result is 24 times the data transfer rate. Likewise, as shown in FIG. 1, when one hard disk is equipped with 4 read/write heads, 12 hard disks must be equipped with 48 read/write heads, and the 48 read/write heads must be coupled to 48 write/read data processing units; when the 48 read/write heads access hard disk data synchronously, the result is 48 times the data transfer rate. Also, referring to FIG. 2C, in this case one hard disk must be equipped with 6 read/write heads, so 12 hard disks carry 72 read/write heads, and the 72 read/write heads must be coupled to 72 write/read data processing units; when the 72 read/write heads access hard disk data synchronously, the result is 72 times the data transfer rate, the so-called multiplied speed. The base data transfer rate is about 150 MB/s to 200 MB/s, and the 24× speed is about 7200 MB/s to 9600 MB/s, which basically meets the PCI-e 6.0 transmission specification of 7877 MB/s (actually 7755 MB/s), while the 48× and 72× speeds can meet the requirements of future generations of transmission interfaces and transmission specifications. Furthermore, this case includes two sets of transmission interfaces 20, two sets of control units 10, two sets of write/read data processing unit groups 30, two sets of permanent magnet combinations 50, two sets of drive arm and read/write head combination units 60, two sets of rotating shaft motors 70, two sets of hard disk groups 80, and so on; in other words, with 6 read/write heads per hard disk and a total of 144 read/write heads accessing simultaneously, the above data transfer rate further reaches 144 times that of a traditional hard disk drive. The data transmission speed of a hard disk drive is related to the rotation speed of the spindle motor and the access time from the read/write head to the magnetic track. Today, the data transmission speed of a 2.5″ hard drive is about 150 MB/s to 200 MB/s. With the original spindle motor speed and read/write head access time unchanged, after implementing this patented technology a single-platter hard disk drive can reach data transfer speeds of 900 MB/s to 1200 MB/s, which is higher than the SATA 3.0 transmission speed, and a dual-platter hard disk drive can reach a data transmission speed of 1,800 MB/s to 2,400 MB/s, which is about three times the data transmission speed of SATA 3.0. Although the present invention has been described with reference to the above embodiments, it will be apparent to one of ordinary skill in the art that modifications to the described embodiments may be made without departing from the spirit of the invention. Accordingly, the scope of the invention is defined by the attached claims, not by the above detailed description. | 62,623 |
11861174 | DETAILED DESCRIPTION Non-Volatile Memory Express (NVME) is a storage interface specification for communication between hosts and storage devices (e.g., SSDs on a Peripheral Component Interconnect Express (PCIe) bus). According to the NVME specification, a storage device may handle thousands of IO operations in parallel. To provide this benefit to enterprise-class data centers, NVME may be extended over fabrics for increased scalability and shareability. In this regard, NVME over fabrics (NVMEOF) is a flexible transport abstraction layer that provides for a consistent definition of NVME over a wide range of storage networking fabrics, such as Ethernet and Fibre Channel (FC). A storage device compatible with the NVME specification and able to process requests (e.g., read requests, write requests, administrative requests, etc.) consistent with and/or provided according to the NVME specification is referred to as an "NVME storage device" (also referred to herein as a "storage device"). Examples of an "NVME storage device" may include solid-state drives (SSDs) compatible with the NVME specification. A host may be a computing system or device that may access data stored in and write data to one or more NVME storage devices. In some examples, the host may be a server providing services to client(s) based on the data stored at one or more of the NVME storage devices. The NVME specification defines both an interface (e.g., a register-level interface) and a command protocol used to communicate with the NVME storage devices. In a system utilizing the NVME specification, one or more NVME storage devices (e.g., including port(s) of the NVME storage device(s)) may be configured to communicate with a host. Communication between the host and one or more NVME storage devices may be implemented by an NVME controller. The NVME controller may be a storage array controller at a front-end that can manage one or more NVME storage devices, such as SSDs, at a back-end. A host may be connected to a host port on the NVME controller, thereby associating the host port with the host. In some examples, the host port may be a physical port acting as an interface between the host and the NVME controller. The interface between the NVME controller and the NVME storage device may be based on several queue pairs (i.e., paired submission and completion queues) shared between the NVME controller (e.g., including port(s) of the NVME controller) and the NVME storage device (e.g., including port(s) of the NVME storage device). The queue pairs may be located either in the host memory or in the memory provided by the NVME storage device. In some examples, the NVME specification may allow up to 64K individual queue pairs per NVME storage device, and each queue pair can have up to 64K entries. Once the queue pairs are configured, they may be used for communication between the NVME controller and an NVME storage device using the command protocol. Every new entry may be submitted to an NVME storage device using a submission command via a submission queue. When the submission command is processed, an entry (that has been previously associated with the submission queue from which the command was retrieved) may be put on a completion queue using a completion command, and an interrupt may be generated. There may be separate queue pairs for administration operations (e.g., creating and deleting queues or updating firmware on the device) and for IO operations (e.g., read and write operations).
Separate queue pairs may avoid excessive delay of IO operations due to long-running administration operations. Each queue for IO operations between the NVME controller and an NVME storage device may include both read requests and write requests. Generally, NVME storage devices can process IO operations at a faster rate as compared to the NVME controller. However, since a single NVME controller at the front-end may manage multiple NVME storage devices at the back-end, the processing load at the NVME controller may increase manifold with the increase in processing load in one or more of the NVME storage devices. Thus, the NVME controller may not be able to process requests to the NVME storage devices at an optimal rate, and consequently, Input-Output Operations per Second (IOPS) between the NVME controller and the NVME storage devices may be reduced, thereby adversely affecting performance. Further, in some NVME storage devices, such as SSDs, read requests are processed significantly faster compared to write requests. As each queue may include a mix of both read and write requests, the processing of read requests may be unduly delayed until the write requests are processed at the NVME storage devices. Additionally, the NVME controller may not prioritize outstanding requests at the NVME storage devices. The increase in outstanding requests that are pending for processing may lead to choking of IO operations at the NVME storage devices. As a result, there may be increased latency at the NVME storage devices and timeouts in application(s) running in the hosts. Examples described herein provide dynamic prioritization of read IO queues between the NVME controller and the NVME storage devices based on the number of read requests, consequently improving IOPS for storage applications. The examples described herein may include selecting an active host port at the NVME controller that has not been fully utilized and creating a candidate list of NVME storage devices that are associated with that host port. The candidate list may include bottleneck NVME storage devices for which prioritization of the read IO queue could be considered. Examples described herein may create the candidate list based on various measures including, but not limited to, utilization, throughput, IO request completions, busy time periods, etc., associated with the NVME storage devices. A priority rank may be assigned to the read IO queue at each NVME storage device included in the candidate list based on the number of read requests in that read IO queue. Some examples described herein also assign a priority rank to read IO queues based on the utilization level of the associated storage device, thereby providing more granularity in prioritizing the read IO queues. In this manner, the read IO queues between the NVME controller and the NVME storage devices may be prioritized for processing based on the number of read requests and other factors as described herein. Prioritizing the read IO queues at one or more NVME storage devices may reduce the latency in processing IO operations from the hosts via the NVME storage devices and thereby reduce timeouts in applications running in the hosts. FIG. 1 depicts an example system 100 including an NVME controller 110 (hereinafter also referred to as "controller 110") that facilitates connecting hosts 102 to communicate with NVME storage devices 104 (hereinafter also referred to as "storage devices 104").
In some examples, the storage devices 104 and the controller 110 may be included in a storage array, and the controller 110 may serve as a storage array controller of the storage array. The system 100 illustrated in FIG. 1 may include a plurality of NVME storage devices 104 (labeled as NVME storage devices 104-1 through 104-P) and a plurality of hosts 102 (labeled as hosts 102-1 through 102-N). Each of the NVME storage devices 104 may be accessed by a corresponding subset of the hosts 102. For example, a first subset of the hosts 102 can communicate with an NVME storage device 104-1, a second subset of the hosts 102 can communicate with an NVME storage device 104-2, and so on. In some examples, a given host of the hosts 102 can communicate with two or more NVME storage devices 104 (i.e., the given host may belong to two or more subsets of the hosts). In some examples, the controller 110 may be attached to, be part of, be associated with, and/or be otherwise related to a fabric (e.g., NVME fabrics) to which the hosts 102 and NVME storage devices 104 are communicatively connected. The controller 110 may include at least one processor 112 communicatively coupled to a machine-readable storage medium 114 including at least analysis instructions 116 and prioritization instructions 118 that, when executed by the at least one processor 112, cause the controller 110 to perform actions described herein in relation to the controller 110. In some examples, the instructions of the controller 110 may be executed in a switch (e.g., embedded in a container), in a virtual machine (VM), or in an NVME storage device (e.g., the NVME storage device 104-1). The controller 110 may facilitate connecting the hosts 102 to the NVME storage devices 104-1 to 104-P. The hosts 102 may communicate with the NVME storage device(s) based on a mapping. For example, in FIG. 1, the mapping may indicate that the hosts labeled 102-1, . . . , 102-N can communicate with the NVME storage devices 104. The controller 110 may include host ports 106-1, . . . , 106-N, also referred to as host ports 106. Each of the hosts 102 may connect with a host port, from among the host ports 106, thereby associating each of the host ports 106 with a particular host. Each of the hosts 102 associated with a host port, from the host ports 106, may be enabled to communicate with an NVME storage device of the plurality of NVME storage devices 104. The controller 110 may include the analysis instructions 116 and the prioritization instructions 118 to perform one or more functionalities of the controller 110 as described herein. In other examples, functionalities described herein in relation to the controller 110 may be implemented via hardware or any combination of hardware and machine-executable instructions. The combination of hardware and machine-executable instructions may be implemented in a number of different ways. For example, the machine-executable instructions may include processor-executable instructions stored on a non-transitory machine-readable storage medium, and the hardware may include at least one processor (e.g., at least one processing resource, CPU, circuitry, etc.) to execute those instructions. In examples described herein, a single computing device (e.g., a storage array) may include a machine-readable storage medium storing the instructions and the processor (or other hardware) to execute the instructions, or the machine-readable storage medium storing the instructions may be separate from and accessible by the computing device and the processor. In some examples, a sampling interval may be configured for the controller 110.
The sampling interval is indicative of a time interval at which the controller 110 is to perform one or more functionalities for prioritizing one or more of the read IO queues 108-1 to 108-P between the controller 110 and the NVME storage devices 104-1 to 104-P. The sampling interval may be a predefined value set by a user. The sampling interval may be, for example, 1800 seconds, 3600 seconds, and the like. In some examples, the prioritization instructions 118 may create two pairs of IO queues between the controller 110 and each NVME storage device 104-1 to 104-P. One pair of queues may include two submission queues, and the other pair of queues may include two completion queues, for example. One submission queue and one completion queue may be dedicated to read operations (the read IO queues 108-1 to 108-P), and the other submission queue and completion queue may be dedicated to write operations (the write IO queues, not shown in FIG. 1). The read IO queues 108-1 to 108-P may be stored in a memory unit of the NVME storage devices 104-1 to 104-P, or alternatively stored in a memory unit of the controller 110 or of the hosts 102-1 to 102-N. The analysis instructions 116 may determine a utilization time of each of the plurality of host ports 106-1 to 106-N. Although, in the description hereinafter, the operations/functionalities are described with reference to the host port 106-1 and the storage device 104-1, similar operations/functionalities may also be performed in respect of each of the other host ports 106-2 to 106-N and each of the storage devices 104. The analysis instructions 116 may determine a throughput of the host port 106-1 based on a number of IO request completions at the host port 106-1 over the sampling interval. The IO request completions at the host port 106-1 may refer to IO requests serviced or processed at the host port 106-1 during the sampling interval. In some examples, the throughput of the host port 106-1 may be a ratio of the number of IO request completions at the host port 106-1 to the sampling interval. Further, the analysis instructions 116 may determine an average service time of the host port 106-1. The average service time is indicative of the average time taken for processing an IO operation (read or write) by the host port 106-1. Processing an IO operation at the host port 106-1 may include inserting an IO request (a read request or write request from the host 102) in a submission queue and receiving a response for the IO request at the completion queue from the storage device 104-1, for example. The average service time of the host port 106-1 may be computed as a ratio of a busy time period of the host port 106-1 to the number of IO request completions at the host port 106-1 over the sampling interval. The busy time period of the host port 106-1 refers to a time duration for which the host port 106-1 remains unavailable for further processing/receiving IO requests from the hosts 102, such as the host 102-1. The analysis instructions 116 may compute the utilization time of the host port 106-1 as a product of the throughput of the host port 106-1 and the average service time of the host port 106-1; a minimal sketch of this computation is shown below. The utilization time of each of the host ports 106-2 to 106-N may also be determined in a similar manner. The analysis instructions 116 may compare the utilization time of the host port 106-1 with a host port utilization threshold. In some examples, the host port utilization threshold may be expressed in terms of a percentage of the sampling interval for which the host port 106-1 is utilized.
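The following Python sketch is a minimal, hypothetical illustration of the host-port computation described above; the function name and the sample numbers are illustrative and not from the patent.

```python
def host_port_utilization(io_completions, busy_time_s, sampling_interval_s):
    """Utilization time of a host port over one sampling interval."""
    if io_completions == 0:
        return 0.0                                        # idle port: no IO observed
    throughput = io_completions / sampling_interval_s     # completions per second
    avg_service_time = busy_time_s / io_completions       # seconds per completion
    # The product algebraically reduces to busy_time_s / sampling_interval_s.
    return throughput * avg_service_time

u = host_port_utilization(io_completions=90_000,
                          busy_time_s=1_500,
                          sampling_interval_s=1_800)
if u == 0.0:
    print("host port idle; analyze the next host port")
elif u >= 0.98:                                           # example 98% threshold
    print("host port fully utilized; analyze the next host port")
else:
    print("host port eligible; build the candidate list")  # here: u is about 0.83
```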
For example, the host port utilization threshold may be 98% of the sampling interval. The analysis instructions 116 may determine that a host port is fully utilized if the utilization time of the host port is greater than the utilization threshold (e.g., 98%) and, in that case, analyze the next host port 106-2, for example. In some examples, the analysis instructions 116 may determine whether the host port 106-1 is idle based on incoming IO requests from the host. For instance, the analysis instructions 116 may determine that the host port 106-1 is not receiving IO requests if the host port throughput, the average service time, or the host port utilization time is equivalent to zero. In response to determining that the host port 106-1 is not receiving IO requests, the analysis instructions 116 may analyze the next host port 106-2, for example. In response to determining that the host port 106-1 is neither fully utilized nor idle, the analysis instructions 116 may create a candidate list of storage devices. The candidate list may include NVME storage devices for which prioritization of read IO queues could be considered. The candidate list may be created based on measures including the utilization, the throughput, the busy time period, and the number of IO request completions of each storage device 104-1 to 104-P. For example, the analysis instructions 116 may determine a number of IO request completions for each storage device 104-1 to 104-P over a sampling interval. The IO request completions at the storage device may refer to IO requests serviced or processed at the storage device during the sampling interval. Servicing or processing of IO requests may include completion of reading data from or writing data to the storage device 104-1. The analysis instructions 116 may also determine a visit ratio of the storage device 104-1. The visit ratio of the storage device 104-1 may refer to the number of IO request completions at the storage device 104-1 per unit time of the sampling interval. Further, the analysis instructions 116 may determine a throughput of the storage device 104-1 based on the visit ratio. In some examples, the throughput of the storage device 104-1 may be computed as a product of the visit ratio of the storage device 104-1 and the throughput of the host port 106-1. The analysis instructions 116 may determine a service demand for the storage device 104-1 as a product of the visit ratio of the storage device 104-1 and the average service time of the storage device 104-1. The average service time of the storage device may refer to the time taken for the storage device 104-1 to receive an IO request (read request or write request) and process the IO request. Further, the analysis instructions 116 may determine a utilization of the storage device 104-1 as a product of the throughput of the host port 106-1 and the service demand of the storage device 104-1. Likewise, the throughputs, average service times, and utilizations of the other storage devices 104-2 to 104-P may also be determined. In some examples, the analysis instructions 116 may determine an average throughput, an average utilization, and an average IO request completion of the storage devices 104-1 to 104-P. The average throughput may be a ratio of the total throughput of the storage devices 104-1 to 104-P to the total number of storage devices 104-1 to 104-P. The average utilization may be a ratio of the total utilization of the storage devices 104-1 to 104-P to the total number of storage devices 104-1 to 104-P.
The average IO request completions may be a ratio of the total IO request completions of the storage devices 104-1 to 104-P to the total number of storage devices. The average service time of the storage devices 104-1 to 104-P may be a ratio of the total average service time of the storage devices 104-1 to 104-P to the number of storage devices 104-1 to 104-P. The analysis instructions 116 may create the candidate list by including one or more of the storage devices 104-1 to 104-P based on a comparison of the performance of a given storage device 104-1 with the average performance of the storage devices 104-1 to 104-P. The performance may be characterized by the utilizations, throughputs, busy time periods, and numbers of IO request completions as determined above. For example, the analysis instructions 116 may determine whether the utilization of the storage device 104-1 is greater than the average utilization of the storage devices 104-1 to 104-P. The analysis instructions 116 may determine whether the throughput of the storage device 104-1 is less than the average throughput of the storage devices 104-1 to 104-P. The analysis instructions 116 may determine whether the IO request completions of the storage device 104-1 are less than the average IO request completions of the storage devices 104-1 to 104-P. Further, the analysis instructions 116 may determine whether the busy time period of the storage device 104-1 is greater than the average service time of the storage devices. Based on the above determinations, the analysis instructions 116 may create the candidate list. For example, if the utilization is greater than the average utilization, the throughput and IO request completions of the storage device 104-1 are less than the average throughput and average IO request completions of the storage devices 104-1 to 104-P, and the busy time period of the storage device 104-1 is greater than the average service time of the storage devices 104-1 to 104-P, then the analysis instructions 116 may include the storage device 104-1 in the candidate list. While the examples described herein discuss the use of utilization, throughput, IO request completions, and busy time periods, other examples may use additional or alternative measures, not discussed here, to create the candidate list. The prioritization instructions 118 may determine a number of read requests in a read IO queue at each storage device in the candidate list. A read request refers to an IO request from a host 102-1 to read data from one of the storage devices 104-1. Based on the number of read requests, the prioritization instructions 118 may assign a priority rank to the read IO queue 108-1 at the storage device 104-1. The priority rank may indicate the priority of a read IO queue 108-1 for processing at a storage device 104-1. Examples of the priority ranks may include 'URGENT', 'HIGH', 'MEDIUM', 'LOW', and the like. In such examples, the highest priority may be 'URGENT', and the lowest priority may be 'LOW'. In some examples, the prioritization instructions 118 may identify a read IO queue having the highest number of read requests in a storage device from the candidate list. In some examples, the prioritization instructions 118 may assign a highest priority rank (i.e., 'URGENT') to the identified read IO queue. For each storage device in the candidate list, the prioritization instructions 118 may determine an average time for processing the read requests.
In particular, the prioritization instructions 118 may determine whether there is a change in the average time for processing read requests by the storage device 104-1 over two successive sampling intervals. In some examples, the average time for processing a first set of read requests by the storage device 104-1 during a first sampling interval may be compared with the average time for processing a second set of read requests by the storage device 104-1 in a second sampling interval, where the first and second sampling intervals are successive sampling intervals. Based on the comparison, the prioritization instructions 118 may determine the change in the average time for processing the read requests. In some examples, the change may indicate an increase or decrease in the average time for processing the read requests by the storage device 104-1. In some examples, for each storage device in the candidate list, the prioritization instructions 118 may determine whether the number of read requests in a read IO queue is greater than a number of write requests in a write IO queue. A write request refers to an IO request from a host to write data in one of the storage devices. The number of read requests and the number of write requests may be determined using the read-write ratio between the host port 106-1 and the storage device 104-1. In response to determining that there is an increase in the average time for processing the read requests by the storage device 104-1 and that the number of read requests is greater than the number of write requests at the storage device 104-1, the prioritization instructions 118 may determine a quantity of read IO queues that have already been assigned the highest priority rank, i.e., 'URGENT', in the storage devices included in the candidate list. Based on the quantity of 'URGENT' read IO queues, the prioritization instructions 118 may adjust priority rank assignments to ensure that an excessive number of read IO queues are not assigned the highest priority rank. In some examples, the prioritization instructions 118 may determine whether the quantity of read IO queues having the highest priority rank is less than a threshold quantity. The threshold quantity may be half of the total quantity of storage devices in the candidate list, for example. In response to determining that the quantity of the read IO queues having the highest priority rank is less than the threshold quantity, the prioritization instructions 118 may determine whether the utilization of the storage device is in a threshold utilization range. In some examples, the threshold utilization range may refer to one or more ranges of predefined utilization values that are set based on user input. For example, a first threshold utilization range may include utilization values of 95%-100%, a second threshold utilization range may include 75%-95%, and so on. The prioritization instructions 118 may determine a match between the utilization of the storage device 104-1 and one of the ranges of utilization values. The prioritization instructions 118 may assign a priority rank to the read IO queue based on the match. For example, a user may set the highest priority rank ('URGENT') for a range of utilization values of 95%-100%. In response to determining that the storage device 104-1 is associated with a utilization of 98%, the prioritization instructions 118 may assign the highest priority rank to the read IO queue 108-1 of that storage device.
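The range-based rank assignment described in this example can be illustrated with a small mapping function. The Python sketch below is hypothetical and simply encodes the example ranges summarized later in Table 1; the function name and the fallback behavior for unmatched utilizations are assumptions for the example.

```python
def rank_from_utilization(utilization, io_pattern="random"):
    """Map a device utilization (0.0 to 1.0) to a read IO queue priority rank."""
    if utilization >= 0.95:
        return "URGENT"                  # first threshold utilization range
    if 0.75 <= utilization < 0.95:
        return "HIGH"                    # second range
    if 0.50 <= utilization < 0.75:
        return "MEDIUM"                  # third range
    if utilization < 0.50:
        return "LOW"                     # fourth range
    # With user-configured ranges that do not cover every value, the IO-pattern
    # check of blocks 532-534 applies: sequential read patterns get no rank.
    return None if io_pattern == "sequential" else "LOW"

assert rank_from_utilization(0.98) == "URGENT"   # the 98% example above
```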
In some examples, the prioritization instructions 118 may determine that the quantity of read IO queues having the highest priority rank is less than the threshold quantity. Further, the prioritization instructions 118 may determine that the utilization of the storage device is not in a threshold utilization range. For example, the utilization of the storage device may be 40%, which is neither in the first threshold utilization range nor in the second threshold utilization range in the above example. In such examples, the prioritization instructions 118 may determine whether the read IO pattern is a sequential pattern or not. In response to determining that the read IO pattern is a sequential pattern, the prioritization instructions 118 may assign a lowest priority rank, or may not assign any priority rank, to the read IO queue at the storage device. In some examples, in response to determining that the average time for processing the read requests is decreasing or not increasing, the prioritization instructions 118 may assign a lowest priority rank, or may not assign a priority rank, to the read IO queue 108-1 at the storage device 104-1. For example, if the average time for processing a read request is 0.2 ms (milliseconds) less than the average time for processing the previous read request, the prioritization instructions 118 may determine that the read IO queue need not be immediately processed at the storage device. In another example, in response to determining that the number of read requests is not greater than the number of write requests at the storage device 104-1, the prioritization instructions 118 may assign the lowest priority rank, or not assign a priority rank, to the read IO queue 108-1 at the storage device 104-1. The prioritization instructions 118 may accordingly determine whether the read IO queues have to be prioritized or not and assign a priority rank accordingly. Based on the priority rank of the read IO queue, the controller 110 may prioritize the processing of the read requests in the read IO queue at the storage device 104-1. For example, the read IO queue with the highest priority rank ('URGENT') may be processed first. In this manner, as described above, the prioritization instructions 118 may prioritize the read IO queues 108-1 to 108-P at the storage devices 104 based on the number of read requests and the utilization. Likewise, in a similar manner, the prioritization instructions 118 may dynamically adjust the priority rank of read IO queues at the storage devices included in the candidate list. Thus, for each storage device included in the candidate list, a priority rank for the read IO queues may be determined, based on which the processing of the read requests is performed. FIG. 2 is a block diagram of a computing system 200 including a processor 202 and a machine-readable storage medium 204 encoded with example instructions 206, 208, 210, and 212 to prioritize read IO queues between an NVME controller (such as the controller 110 of FIG. 1) and an NVME storage device (such as the storage device 104-1 of FIG. 1), in accordance with an example. In some examples, the machine-readable storage medium 204 may be accessed by the processor 202. The processor 202 may execute instructions (i.e., programming or code) stored on the machine-readable storage medium 204. The instructions 206, 208, 210, and 212 of FIG. 2, when executed by the processor 202, may implement various aspects of prioritizing read IO queues between the controller and the storage device.
In some examples, the instructions 206 and 208 may be included within the analysis instructions 116, and the instructions 210 and 212 may be included within the prioritization instructions 118 of FIG. 1. In some examples, the computing system 200 may serve as or may be included in (e.g., as part of) an NVME controller (e.g., the NVME controller 110 of FIG. 1). For ease of illustration, FIG. 2 will be described with reference to FIG. 1. In certain examples, the instructions 206-212 may be executed for performing the functionalities of the NVME controller 110 and one or more methods, such as the methods 300, 400, 500A, and 500B described below with reference to FIGS. 3, 4, 5A, and 5B. In certain examples, as an alternative or in addition to executing the instructions 206-212, the processor 202 may include at least one integrated circuit, control logic, electronic circuitry, or combinations thereof that include a number of electronic components for performing the functionalities described herein as being performed by the controller 110. The instructions 206, when executed by the processor 202, may determine a utilization time of the host port 106-1 in the NVME controller 110. The host port 106-1 is associated with a host 102-1 and is to communicate with an NVME storage device 104-1. In response to determining that the utilization time of the host port 106-1 is lower than a host port utilization threshold, the instructions 208 may create a candidate list of NVME storage devices. The candidate list may be created based on measures including utilizations, throughputs, busy time periods, and IO request completions of the NVME storage devices 104. In some examples, the instructions may include determining the utilization, throughput, busy time period, and IO request completions of each NVME storage device 104-1 to 104-P and determining an average utilization, average throughput, and average IO request completions of all the NVME storage devices 104. The instructions may further include comparing the individual utilization, throughput, and IO request completions with the average utilization, average throughput, and average IO request completions of the NVME storage devices. The busy time period of the NVME storage device may be compared with the average service time of the NVME storage devices. If the utilization of the NVME storage device is greater than the average utilization of the NVME storage devices, the throughput and IO request completions at the NVME storage device are less than the average throughput and average IO request completions of all the NVME storage devices, and the busy time period is greater than the average service time of the NVME storage devices, then the NVME storage device may be included in the candidate list. For the NVME storage device 104-1 included in the candidate list, the instructions 210, when executed by the processor 202, may determine the number of read requests in the read IO queue at the NVME storage device. The number of read requests may include the number of outstanding read requests (i.e., the queue depth) in the read IO queue. An outstanding read request may refer to a read request in the read IO queue that is pending for processing at the NVME storage device. The instructions 212, when executed by the processor 202, may assign a priority rank to the read IO queue at the NVME storage device based on the number of read requests. In some examples, the instructions may include identifying the read IO queue with the highest number of read requests and assigning the highest priority rank to that read IO queue.
In other examples, the instructions may include identifying the read IO queue with the highest number of read requests and determining the utilization of the NVME storage device before assigning the priority rank. The instructions 206-212 may include various instructions to execute at least a part of the methods described below with reference to FIGS. 3, 4, 5A, and 5B. Also, although not shown in FIG. 2, the machine-readable storage medium 204 may also include additional program instructions to perform various other method blocks described below with reference to FIGS. 3-5B. FIGS. 3-5B depict flowcharts of example methods 300, 400, 500A, and 500B for prioritizing read IO queues between NVME storage devices (e.g., the NVME storage devices 104 of FIG. 1) and an NVME controller (e.g., the NVME controller 110 of FIG. 1). For ease of illustration, the execution of the example methods 300, 400, 500A, and 500B is described in detail below with reference to FIG. 1. Although the below description is described with reference to the NVME controller 110 of FIG. 1, other applications or devices suitable for the execution of the methods 300, 400, 500A, and 500B may be utilized. Furthermore, although the below description is described with reference to the NVME storage device 104-1 of FIG. 1, the methods 300, 400, 500A, and 500B are applicable to other NVME storage devices. In some examples, each of the methods 300, 400, 500A, and 500B may be executed for each NVME storage device present in the system 100. The method steps at various blocks depicted in FIGS. 3-5B may be performed by the NVME controller 110. In some examples, each of the methods 300, 400, 500A, and 500B at each such method block may be executed by the computing system 200 via the processor 202 that executes the instructions 206-212 stored in the non-transitory machine-readable storage medium 204. Additionally, the implementation of the methods 300, 400, 500A, and 500B is not limited to such examples. Although each of the flowcharts of FIGS. 3-5B shows a specific order of performance of certain functionalities, the methods 300, 400, 500A, and 500B are not limited to such order. For example, the functionalities shown in succession in the flowcharts may be performed in a different order, may be executed concurrently or with partial concurrence, or a combination thereof. In FIG. 3, at block 302, the method 300 may include determining a utilization time of the host port 106-1 in the NVME controller 110. The host port 106-1 is associated with a host 102-1 and is to communicate with an NVME storage device 104-1. The utilization time may be determined in the following example manner. In some examples, a throughput of the host port 106-1 may be determined based on a number of IO request completions at the host port 106-1 over the sampling interval. In some examples, the throughput of the host port 106-1 is a ratio of the number of IO request completions at the host port 106-1 to the sampling interval. Further, an average service time of the host port 106-1 may be determined. The average service time is indicative of the average time taken for servicing an IO request at the host port 106-1. The average service time of the host port 106-1 may be computed as a ratio of a busy time period of the host port 106-1 to the number of IO request completions at the host port 106-1 over the sampling interval. The busy time period of the host port 106-1 refers to a time duration for which the host port 106-1 remains unavailable for further processing/receiving IO requests from the host 102-1.
The utilization time of the host port 106-1 may be computed as a product of the throughput of the host port 106-1 and the average service time of the host port 106-1. In response to determining that the utilization time of the host port 106-1 is lower than a host port utilization threshold, at block 304, the method 300 may include creating a candidate list of NVME storage devices. The candidate list may be created based on measures including the utilizations, throughputs, busy time periods, and IO request completions of the storage devices 104. In some examples, a storage device 104 may be grouped or included in the candidate list if the utilization of the NVME storage device is greater than the average utilization of the NVME storage devices, the throughput of the NVME storage device is less than the average throughput of the NVME storage devices, the busy time period of the NVME storage device is greater than the average service time of the NVME storage devices, and the IO request completions of the NVME storage device are less than the average IO request completions of the NVME storage devices. At block 306, the method 300 may include, for the storage device, such as the storage device 104-1, included in the candidate list, determining a number of read requests in a read IO queue at the storage device 104-1. At block 308, the method 300 may include assigning a priority rank to the read IO queue at each storage device included in the candidate list based on the number of read requests, as will be described further with reference to FIG. 5B. FIGS. 4A and 4B depict a method 400 as a flowchart, which may be useful for creating a candidate list of storage devices, in accordance with an example. In some implementations, the method 400 may be useful for performing at least part of block 304 of the method 300. As depicted in FIG. 4A, at block 402, the method 400 may include obtaining a latency input and configuring a sampling interval. The latency input may refer to a maximum time delay for IO operations that is requested by a workload executing on the host 102. The latency input may be received from the workload executing on the host 102, for example. Each workload may provide a different latency input, which may be 1 ms or 2 ms, for example. The sampling interval may be received as an input from the host 102. At block 404, the method 400 may include determining a number of host ports 106-1 to 106-N in the NVME controller 110. The number of host ports 106-1 to 106-N may be determined based on the number of hardware connections between the hosts 102 and the controller 110. The hosts 102 and the controller 110 may be connected using Small Computer System Interface (SCSI)-based connections, Fibre Channel (FC)-based connections, or Network File System (NFS)-based connections, for example. At block 406, the method 400 may include determining a number of NVME storage devices 104-1 to 104-P associated with the NVME controller 110. The NVME storage devices 104-1 to 104-P may be connected based on SCSI or PCIe connections, for example. In some examples, the storage devices 104 may register with the NVME controller 110 using a registration request and thereby associate with the NVME controller 110. At block 408, the method 400 may include creating two pairs of queues between the NVME controller 110 and an NVME storage device, such as the NVME storage device 104-1. One pair of the queues may include a submission queue for read and write requests, and the other pair of the queues may include a completion queue for read and write requests; a hypothetical model of these per-device queues is sketched below.
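As a rough illustration of the queues created at block 408, the following Python sketch models one read-dedicated pair and one write-dedicated pair per storage device; the class and field names are hypothetical and are not taken from the NVME specification or the patent.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class QueuePair:
    submission: deque = field(default_factory=deque)  # requests awaiting the device
    completion: deque = field(default_factory=deque)  # responses awaiting the controller

@dataclass
class DeviceQueues:
    read: QueuePair = field(default_factory=QueuePair)   # e.g., read IO queue 108-1
    write: QueuePair = field(default_factory=QueuePair)  # write IO queue

    def submit(self, request, is_read):
        # Route the request to the read- or write-dedicated submission queue.
        pair = self.read if is_read else self.write
        pair.submission.append(request)

# One such structure per NVME storage device behind the controller.
queues = {f"device-{i}": DeviceQueues() for i in range(4)}
queues["device-0"].submit({"op": "read", "lba": 0, "blocks": 8}, is_read=True)
```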
At block 410, the method 400 may include determining a throughput of the host port 106-1 based on the IO request completions over the sampling interval. In some examples, the throughput of the host port 106-1 is a ratio of the number of IO request completions at the host port 106-1 to the sampling interval. At block 412, the method 400 may include determining an average service time of the host port 106-1 based on a busy time period of the host port 106-1 and the IO request completions. In some examples, the average service time of the host port 106-1 may be computed as a ratio of the busy time period of the host port 106-1 to the number of IO request completions at the host port 106-1 over the sampling interval. The busy time period of the host port 106-1 refers to a time duration for which the host port 106-1 remains unavailable for further processing/receiving IO requests from the host 102-1. At block 414, the method 400 may include computing a utilization time of the host port 106-1, abbreviated as T(U) host port, as a product of the throughput of the host port 106-1 and the average service time of the host port 106-1. At block 416, the method 400 may include determining whether the host port 106-1 is idle based on the IO requests from the host. For example, the determination may be made based on the throughput, utilization, or average service time of the host port. For example, the host port 106-1 may not be receiving IO requests if the host port throughput, the average service time, or the host port utilization time is equivalent to zero. In response to determining that the host port 106-1 is not receiving IO requests from a host ("NO" at block 416), the method may determine that no action has to be taken with respect to the host port 106-1. In some examples, the method may include selecting another host port (e.g., the host port 106-2) and performing the blocks 410-416 for that host port. If the host port 106-1 is receiving IO requests from a host ("YES" at block 416), the method 400 proceeds to block 418, which may include comparing T(U) host port with a host port utilization threshold, abbreviated as T(U) host port threshold. In response to determining that T(U) host port for the host port 106-1 is equal to or greater than the T(U) host port threshold ("NO" at block 418), the method 400 may determine that no action has to be taken with respect to the host port 106-1. In some examples, the method may include selecting another host port (e.g., the host port 106-2) and performing the blocks 410-416 for that host port. In response to determining that the T(U) host port for the host port 106-1 is less than the T(U) host port threshold ("YES" at block 418), the method 400 may include creating a candidate list of NVME storage devices corresponding to the host port 106-1. The candidate list may include NVME storage devices for which prioritization of the read IO queues could be considered. At block 420, the method 400 may include determining a number of IO request completions at the NVME storage device 104-1. At block 422, the method 400 may include determining a visit ratio of the NVME storage device 104-1. The visit ratio of the storage device 104-1 is defined as the number of IO request completions by the storage device 104-1 per unit time of the sampling interval. FIG. 4B depicts a flowchart of the method 400 continued from FIG. 4A, in accordance with the example. At block 424, the method 400 may include determining a throughput of the NVME storage device 104-1 based on the visit ratio.
In some examples, the throughput of the storage device 104-1 may be computed as a product of the visit ratio of the storage device 104-1 and the throughput of the host port 106-1. At block 426, the method 400 may include determining an average service time of the NVME storage device 104-1. The average service time may be based on a busy time period of the NVME storage device 104-1 and the number of IO request completions. For example, the average service time may be a ratio of the busy time period of the NVME storage device to the IO request completions at the NVME storage device. The busy time period of the storage device 104-1 may indicate a time period for which the storage device 104-1 may remain busy, i.e., unable to process new IO requests, during the sampling interval. At block 428, the method 400 may include determining a total service demand of the NVME storage device based on the visit ratio and the average service time of the NVME storage device. For example, the total service demand may be computed as a product of the visit ratio and the average service time of the NVME storage device. At block 430, the method 400 may include determining a utilization of the NVME storage device based on the throughput and the service demand. For example, the utilization may be computed as a product of the service demand of the NVME storage device and the throughput of the host port 106-1, consistent with the computation described in relation to FIG. 1. At block 432, the method 400 may include determining whether the NVME storage device may be included in the candidate list. For example, the method may compare the utilization of the NVME storage device 104-1 with the average utilization of the NVME storage devices 104-1 to 104-P. The method may also compare the throughput of the NVME storage device 104-1 with the average throughput of the NVME storage devices 104-1 to 104-P. Further, the method 400 may compare the busy time period of the NVME storage device 104-1 with the average service time of the NVME storage devices. Additionally, the IO request completions of the NVME storage device may be compared with the average IO request completions of the NVME storage devices 104-1 to 104-P. At block 434, the method may include grouping or including the NVME storage device in the candidate list in response to determining that the utilization of the NVME storage device 104-1 is greater than the average utilization of the NVME storage devices 104-1 to 104-P, the throughput of the NVME storage device 104-1 is less than the average throughput of the NVME storage devices 104-1 to 104-P, the busy time period of the NVME storage device 104-1 is greater than the average service time of the NVME storage devices 104-1 to 104-P, and the IO request completions of the NVME storage device 104-1 are less than the average IO request completions of the NVME storage devices 104-1 to 104-P ("YES" at block 432). If one or more of the conditions at block 432 are not satisfied ("NO" at block 432), the method 400 may include not taking an action, i.e., the storage device 104-1 is not included in the candidate list at block 436. In some examples, the method may include selecting another NVME storage device (e.g., the storage device 104-2) and performing the blocks 420 to 432; a consolidated sketch of this candidate-list test appears below. FIG. 5A depicts a method 500A for prioritizing the read IO queues at the NVME storage devices in the candidate list. At block 502, the method 500A may include determining the number of read requests in the read IO queue of each NVME storage device 104-1 to 104-P. In some examples, the number of read requests may refer to the number of outstanding read requests in the read IO queue of each NVME storage device 104-1 to 104-P.
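Before continuing with the method 500A, the candidate-list test of blocks 420-434 can be consolidated into a short sketch. The following Python code is illustrative only; the statistics class and its field names are assumptions, and a guard against zero completions is added for safety.

```python
from dataclasses import dataclass

@dataclass
class DeviceStats:
    utilization: float    # block 430
    throughput: float     # block 424, completions per second
    busy_time: float      # seconds busy within the sampling interval
    completions: int      # block 420

def avg_service_time(d):
    return d.busy_time / max(d.completions, 1)   # guard against zero completions

def candidate_list(devices):
    """devices: mapping of device name -> DeviceStats for one sampling interval."""
    n = len(devices)
    avg_util = sum(d.utilization for d in devices.values()) / n
    avg_tput = sum(d.throughput for d in devices.values()) / n
    avg_comp = sum(d.completions for d in devices.values()) / n
    avg_serv = sum(avg_service_time(d) for d in devices.values()) / n
    return [name for name, d in devices.items()
            if d.utilization > avg_util       # busier than average ...
            and d.throughput < avg_tput       # ... yet pushing less work through,
            and d.completions < avg_comp      # completing fewer requests,
            and d.busy_time > avg_serv]       # and busy longer than the average service time
```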
In some examples, the read IO queue with the highest number of read requests may be assigned the highest priority rank. Each NVME storage device 104-1 to 104-P may include a read IO queue with a similar or varying number of read requests. At block 504, the method 500A may include sorting the candidate list based on the number of read requests in the respective read IO queues. The NVME storage device with the read IO queue having the highest number of read requests may be first, and the NVME storage device with the read IO queue having the least number of read requests may be last, for example. At block 506, for each of the NVME storage devices in the sorted candidate list, the method 500A may include comparing the number of read requests in the read IO queue with the number of write requests in the write IO queue. The numbers of read requests and write requests may be determined based on the read-write ratio, which may be the ratio of read requests to write requests between the controller and the storage device, for example. At block 506, the method 500A may also include determining whether the average time for processing read requests in the read IO queue is increasing or not. For example, the method 500A may include comparing the average time ("CURRENT T(R)AVG") for processing a first set of read requests by the storage device 104-1 during the current sampling interval with the average time ("PREVIOUS T(R)AVG") for processing a second set of read requests by the storage device 104-1 in the previous sampling interval. In some examples, at block 506, the method may further include determining a block size of the read requests, i.e., whether the block size is a small block size (less than 16K) or a large block size (greater than 16K). In some examples, at block 506, the method may also include determining the IO pattern of the read IO queue, i.e., whether the IO pattern is a random pattern or a sequential pattern, which may be used for priority rank assignment (described in relation to FIG. 5B). In response to determining that the number of read requests is more than the number of write requests and that there is an increase in the average time for processing read requests by the storage device 104-1 ("YES" at block 506), the method 500A may determine whether a priority rank has to be assigned to the queue or not. The assignment of a priority rank may be performed depending on the quantity of read IO queues at the storage devices in the candidate list that have already been assigned the highest priority rank, for example. At block 508, the method 500A may include determining whether the quantity of such read IO queues is less than or equal to a threshold quantity. The threshold quantity may be a predefined number configured based on a user input. For example, the threshold quantity may be half the number of storage devices in the candidate list. In response to determining that the quantity of read IO queues with the highest priority rank is more than the threshold quantity ("NO" at block 508), at block 510, the method 500A may not take an action, i.e., may not perform the assignment of a priority rank to the read IO queue of that storage device. The method 500A may select the next storage device in the sorted candidate list and perform the method blocks 504 and 506, for example; a sketch of this gating logic follows.
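The gating logic of blocks 504-510, with the actual rank assignment of block 512 deferred to FIG. 5B, might look roughly as follows in Python; the dictionary keys and the function name are hypothetical.

```python
def queues_to_rank(candidates, urgent_count):
    """candidates: per-device dicts; returns names whose read IO queue should be ranked."""
    threshold_quantity = len(candidates) // 2        # example: half the candidate list
    to_rank = []
    # Block 504: sort so the queue with the most outstanding reads is considered first.
    for dev in sorted(candidates, key=lambda d: d["outstanding_reads"], reverse=True):
        reads_dominate = dev["outstanding_reads"] > dev["outstanding_writes"]   # block 506
        latency_rising = dev["cur_avg_read_time"] > dev["prev_avg_read_time"]   # block 506
        if not (reads_dominate and latency_rising):
            continue                                 # block 514: no (or lowest) rank
        if urgent_count > threshold_quantity:
            continue                                 # block 510: URGENT quota exceeded
        to_rank.append(dev["name"])                  # block 512: rank via FIG. 5B
    return to_rank
```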
In response to determining that the quantity of read IO queues with the highest priority rank is less than or equal to the threshold quantity ("YES" at block 508), the method 500A, at block 512, may include assigning a priority rank to the read IO queue based on the utilization of the associated storage device (described further in relation to FIG. 5B). Further, at block 506, in response to determining that the number of read requests is not more than the number of write requests or that the average time for processing the read requests is not increasing ("NO" at block 506), the method 500A, at block 514, may include not assigning a priority rank to the read IO queue at the storage device. In some examples, the method 500A may include assigning the lowest priority rank to the read IO queue. Further, the method 500A may include selecting the next storage device from the sorted candidate list and performing the method block 506. FIG. 5B depicts a flow diagram of a method 500B for assigning a priority rank to the read IO queue at each NVME storage device, in accordance with an example. The method 500B of FIG. 5B may be an example implementation of block 512 of FIG. 5A. The method 500B may include determining whether the utilization of the storage device is in a predefined threshold utilization range. In some examples, the predefined threshold utilization ranges may be configured based on user input. From block 516 to block 530, the method 500B may include determining whether the utilization of the storage device is in one of the predefined threshold utilization ranges. For example, a first threshold utilization range may cover utilizations of 95% or more. A second threshold range may cover the 75%-95% utilization range. A third threshold range may cover 50%-75% utilization. A fourth threshold range may cover utilizations below 50%. Based on a match between the utilization of the storage device and a threshold utilization range, a priority rank may be assigned to the read IO queue of that storage device. The priority rank may indicate the level of urgency for processing the read requests. Examples of a priority rank may include 'URGENT', 'HIGH', 'MEDIUM', 'LOW', and the like. Table 1 depicts an example set of predefined threshold utilization ranges and the priority ranks for the respective ranges.

TABLE 1. Example priority rank assignment based on utilization levels
Utilization range                 Priority rank
Utilization >= 95%                URGENT
75% <= Utilization < 95%          HIGH
50% <= Utilization < 75%          MEDIUM
Utilization < 50%                 LOW

At block 516, the method may include determining whether the utilization of the storage device 104-1 is in the first threshold utilization range (e.g., greater than or equal to 95%) or not. In response to determining that the storage device utilization is greater than or equal to 95% ("YES" at block 516), at block 518, the method 500B may include assigning the highest priority rank, i.e., the 'URGENT' priority rank. If the utilization is not in the first threshold utilization range ("NO" at block 516), at block 520, the method 500B may include determining whether the utilization of the storage device 104-1 is in the second threshold utilization range (e.g., greater than or equal to 75% and less than 95%) or not. In response to determining that the storage device utilization is greater than or equal to 75% and less than 95% ("YES" at block 520), at block 522, the method 500B may include assigning the second-highest priority rank, i.e., 'HIGH' priority.
If the utilization is not in the second threshold utilization range ("NO" at block 520), at block 524, the method 500B may include determining whether the utilization of the storage device 104-1 is in the third threshold utilization range (e.g., greater than or equal to 50% and less than 75%) or not. In response to determining that the storage device utilization is greater than or equal to 50% and less than 75% ("YES" at block 524), at block 526, the method 500B may include assigning the third-highest priority rank, i.e., 'MEDIUM' priority. Further, if the utilization is not in the third threshold utilization range ("NO" at block 524), at block 528, the method 500B may include determining whether the utilization of the storage device 104-1 is in the fourth threshold utilization range (e.g., less than 50%) or not. In response to determining that the storage device utilization is less than 50% ("YES" at block 528), at block 530, the method 500B may include assigning the fourth-highest priority rank, i.e., 'LOW' priority. In some examples, the utilization of the storage device may not match any predefined threshold utilization range ("NO" at block 528). In such examples, at block 532, the method 500B may include determining whether the IO pattern of the read IO queue at the storage device is a sequential pattern or a random pattern. In response to determining that the IO pattern is a sequential pattern ("YES" at block 532), at block 534, the method 500B may not assign a priority rank to the read IO queue. In some examples, the method 500B, at block 534, may include assigning the lowest priority rank to the read IO queue. In this manner, the read IO queues between the controller 110 and the storage devices in the candidate list may be dynamically prioritized. Examples are described herein with reference to FIGS. 1-5B. It should be noted that the description and figures merely illustrate the principles of the present subject matter along with examples described herein and should not be construed as limiting the present subject matter. Although some examples may be described herein with reference to a single NVME storage device, examples may be utilized for several NVME storage devices. Furthermore, any functionality described herein as performed by a component (e.g., an NVME controller, an NVME storage device, or a host) of a system may be performed by at least one processor of the component executing instructions (stored on a machine-readable storage medium) to perform the functionalities described herein. Various implementations of the present subject matter have been described above by referring to several examples. The terminology used herein is for the purpose of describing particular examples only and is not intended to be limiting. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The term "plurality," as used herein, is defined as two or more than two. The term "another," as used herein, is defined as at least a second or more. The term "connected," as used herein, is defined as coupled or associated, whether directly without any intervening elements or indirectly with at least one intervening element, unless otherwise indicated. Two elements can be connected mechanically, electrically, or communicatively linked through a communication channel, pathway, network, or system. The term "and/or" as used herein refers to and encompasses any and all possible combinations of the associated listed items.
The term “based on” means based at least in part on. It will also be understood that, although the terms first, second, third, etc. may be used herein to describe various elements, these elements should not be limited by these terms, as these terms are only used to distinguish one element from another unless stated otherwise or the context indicates otherwise. In examples described herein, functionalities described as being performed by “instructions” may be understood as functionalities that may be performed by those instructions when executed by a processor. In other examples, functionalities described in relation to instructions may be implemented by any combination of hardware and programming. As used herein, a “computing device” may be a server, storage device, storage array, desktop or laptop computer, switch, router, or any other processing device or equipment including a processor. In examples described herein, a processor may include, for example, one processor or multiple processors included in a single computing device or distributed across multiple computing devices. As used herein, a “processor” may be at least one of a central processing unit (CPU), a semiconductor-based microprocessor, a graphics processing unit (GPU), a field-programmable gate array (FPGA) configured to retrieve and execute instructions, other electronic circuitry suitable for the retrieval and execution of instructions stored on a machine-readable storage medium, or a combination thereof. In examples described herein, a processing resource may fetch, decode, and execute instructions stored on a storage medium to perform the functionalities described in relation to the instructions stored on the storage medium. In other examples, the functionalities described in relation to any instructions described herein may be implemented in the form of electronic circuitry, in the form of executable instructions encoded on a machine-readable storage medium, or a combination thereof. The storage medium may be located either in the computing device executing the machine-readable instructions, or remote from but accessible to the computing device (e.g., via a computer network) for execution. In the examples illustrated inFIGS.1-5B, NVME controller110may be implemented by one machine-readable storage medium, or multiple machine-readable storage media. As used herein, a “machine-readable storage medium” may be any electronic, magnetic, optical, or other physical storage apparatus to contain or store information such as executable instructions, data, and the like. For example, any machine-readable storage medium described herein may be any of RAM, EEPROM, volatile memory, non-volatile memory, flash memory, a storage drive (e.g., an HDD, an SSD), any type of storage disc (e.g., a compact disc, a DVD, etc.), or the like, or a combination thereof. Further, any machine-readable storage medium described herein may be non-transitory. In examples described herein, a machine-readable storage medium or media may be part of an article (or article of manufacture). All of the features disclosed in this specification (including any accompanying claims, abstract, and drawings), and/or all of the elements of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or elements are mutually exclusive. The foregoing description of various examples has been presented for purposes of illustration and description.
The foregoing description is not intended to be exhaustive or to limit the disclosure to the examples disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of various examples. The examples discussed herein were chosen and described in order to explain the principles and the nature of various examples of the present disclosure and its practical application, to enable one skilled in the art to utilize the present disclosure in various examples and with various modifications as are suited to the particular use contemplated. The features of the examples described herein may be combined in all possible combinations of methods, apparatus, systems, and computer program products. | 59,679 |
11861175 | DETAILED DESCRIPTION Aspects of the present disclosure relate generally to latency in data storage systems and, more specifically, to controlling latency injection in data storage systems. While the present disclosure is not necessarily limited to such applications, various aspects of the disclosure may be appreciated through a discussion of various examples using this context. Latency in data storage systems refers to the delay between input of a read or write request to the storage system and completion of the corresponding read or write operation. Latency arises for a variety of reasons due to the particular storage technology and/or overall architecture and operation of a storage system. Latency typically varies with the current operational state of a system and this, in turn, can vary based on parameters of workloads serviced by the system. In many data storage systems, the overall processing time associated with servicing a read/write request in the system is greater than the read/write time apparent to the user. This is due to internal management processes occurring in the background which are transparent to system users. In some systems, such as solid-state drives (SSDs), background processing is required for internal maintenance of the system storage. For example, flash-based SSDs perform various background processes, such as garbage-collection, wear-levelling, error-checking and recalibration, for maintenance of the flash memory. These internal maintenance processes require data to be relocated within the flash storage and lead to so-called “write amplification” whereby the total number of internal I/O (input/output) operations is amplified in comparison with the number of write requests received by the system. Stored data may also be copied to multiple storage locations (e.g., multiple nodes of a distributed storage system) to add redundancy and provide resilience against localized system failures. Many storage systems also cache write data for destaging from the cache to underlying storage, and a write-completion message is sent to the user when the data has been cached. The latency apparent to system users is thus less than the true processing time in such systems. As a result, background processes can build up to the point where user I/O rates are not sustainable. User I/Os must then be throttled to allow background processes to catch up. In current SSDs, for example, when resource usage levels (e.g., cache utilization and RTU (Ready-to-Use flash block) levels) reach critical thresholds, all read/write requests are delayed before processing to allow background processes to catch up. For the reasons explained above, latency can vary for individual read/write requests based on current operating parameters of a storage system, with some requests being delayed significantly more than others. SSDs, for example, typically have low average latency but also exhibit high tail latencies.FIG.1of the accompanying drawings shows a typical SSD latency distribution, where the long latency tail is due to those I/Os which take much longer than average to complete. These long tail latencies can also result in bandwidth fluctuations over longer periods of time. Embodiments of the present disclosure may improve techniques for managing latency in data storage systems. These techniques can include controlling read/write operations in a data storage system. 
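To make the write-amplification idea concrete, the internal write volume can be related to the user write volume as a simple ratio; the sketch below uses invented counts purely for illustration:

    # Illustrative write-amplification arithmetic; all figures are assumed.
    logical_pages = 1_000      # pages written by user requests
    relocated_pages = 3_000    # pages re-written by background processes
    W = (logical_pages + relocated_pages) / logical_pages
    print(W)                   # 4.0: each user page costs ~4 physical page writes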
An I/O interface can receive read and write requests to the system, and an I/O controller can control retrieval and storage of data in the system in response to read and write requests. The I/O controller can send completion messages, confirming storage of data specified in write requests, to the I/O interface. For each of at least some write requests to the system, the I/O controller can calculate, based on operating parameters of the system, a total processing time associated with servicing the write request in the system and determine an actual time taken to store data specified in the write request. If this actual time is less than the total calculated processing time, the controller can delay sending the completion message for the write request. Embodiments of this disclosure can inject latency by delaying completion messages for write requests which complete faster than the total calculated processing time for servicing those requests in the system. The total processing time calculated for a write request can accommodate background processing time based on relevant operating parameters of the system, such as average write times (e.g., average flash page write times), write amplification levels where background processes require additional data writes for redundancy and/or internal maintenance purposes, and cache destaging times in systems where write data is cached. Latency is injected after a write request completes, rather than before the request is admitted for processing, based on whether the write completed faster than the total processing time associated with that particular request. This creates opportunities to perform outstanding background activities without increasing the latency tail. Delaying write completion messages reduces the rate at which new user I/Os are generated, and as long as there is background work to perform while I/Os are delayed, the throughput and average latency of the system are not affected. This can provide smoother throughput, injecting latency more gradually by detecting when the workload is not sustainable, and also tighten the latency distribution by injecting latency into write operations that complete fastest. Tail latencies can thus be significantly reduced without affecting overall system throughput. The I/O controller can be adapted to delay sending a completion message by a time τ dependent on the difference δ between the total processing time calculated for a write request and the actual time for storing the write data, such that τ is greater for larger δ. The amount of latency injected is thus adapted to individual requests, with requests that complete fastest being delayed the most while those which take longer to complete are penalized less, or not at all. In some embodiments, latency may be injected for all write requests which complete faster than their calculated total processing time. However, the I/O controller can be selectively operable in one of two modes. In the first mode, completion messages for write requests are not delayed. In the second mode, the completion message for a write request is delayed if its actual completion time is less than the total processing time calculated for that request. Here, the controller monitors resource usage in the storage system, such as cache utilization levels and/or amount of free storage (e.g., RTU levels in flash-based systems) as appropriate, and switches from the first to the second operating mode if resource usage satisfies a threshold condition. 
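The mechanism just described can be summarized in a short sketch. The helper callables (calc_total_processing_time, store_data, send_completion) and the high_usage flag are assumptions made for illustration, not elements defined by the disclosure:

    import time

    def service_write(request, high_usage, calc_total_processing_time,
                      store_data, send_completion, F=lambda delta: delta):
        """Hypothetical write path: inject latency only in the second mode."""
        t_pw = calc_total_processing_time(request)  # total servicing time, incl. background work
        start = time.monotonic()
        store_data(request)                         # e.g., write into the cache
        t_aw = time.monotonic() - start             # actual time taken to store the data
        if high_usage and t_aw < t_pw:
            time.sleep(F(t_pw - t_aw))              # delay completion by tau = F(delta)
        send_completion(request)

Here the high_usage flag stands for the threshold condition on resource usage, so the function applies the first mode (no delay) when resources are healthy and switches to the second, latency-injecting mode otherwise.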
This allows latency injection to be switched on and off dynamically as required based on resource availability in the system. In some embodiments, latency injection can be tailored according to operating parameters associated with particular storage regions (e.g., storage partitions or volumes assigned to different workloads). Operating parameters can be defined for each of a plurality of storage regions in the system, and the controller is then adapted to calculate the total processing time associated with servicing a write request based on the operating parameters for a storage region used for storing data specified in that write request. Different storage regions here may be defined by logical or physical address ranges as appropriate in a given system. Since the cost of data writes, in terms of processing overhead, is typically far greater than that for read operations, injecting latency after completion of write requests alone is highly effective in reducing tail latencies. However, some embodiments of the present disclosure may apply similar latency injection techniques to read operations, in particular for NAND flash-based storage systems in which data is periodically re-read in the background within the storage for internal management purposes, such as error-checking, threshold voltage recalibration, data refresh and so on. The I/O controller may thus be further adapted, for each of at least some read requests, to calculate a total processing time associated with servicing the read request based on operating parameters of the system, and to determine an actual time taken to read data specified in the read request. If that actual read time is less than the total processing time for the read request, the controller can then delay sending the read data to the I/O interface. Here, the controller can calculate the total processing time associated with servicing the read request to include time taken to re-read data units corresponding to the read request in the storage. Similar techniques can be applied here to those for write requests described above. For example, read latency can be adaptive—depending on the difference between the actual and calculated processing cost—and can be implemented dynamically based on resource usage levels. Embodiments of this disclosure may improve fairness by discriminating between read and write components of the workload and by injecting latency in proportion to the true cost of user I/Os. In some embodiments, latency can be injected in a way that aims to equalize the latency of the I/O requests. It is to be understood that the aforementioned advantages are example advantages and should not be construed as limiting. Embodiments of the present disclosure can contain all, some, or none of the aforementioned advantages while remaining within the spirit and scope of the present disclosure. Turning now to the figures,FIG.1is a chart100illustrating a typical SSD latency distribution. In this distribution, the long latency tail can be due to those I/Os which take much longer than average to complete. These long tail latencies can also result in bandwidth fluctuations over longer periods of time. FIG.2is a block diagram illustrating a system200for latency injection, according to some embodiments of the present disclosure. The system200includes storage21(which may include one or more storage media) and control components, represented here by system controller23. 
The components of system controller23can handle user I/Os associated with read and write requests from system users. The system controller23can include an I/O interface24for receiving read and write requests to the system and an I/O controller25for controlling retrieval and storage of data in storage21in response to the read and write requests. For a read request arriving at the I/O interface24, the I/O controller25retrieves the requested data from storage21and returns the read data to the user via I/O interface24. For a write request, the I/O controller25stores the data specified in the request in system storage21and then sends a completion message, confirming storage of the specified data, to the I/O interface24for return to the user. System controller23includes memory26for storing various system metadata used by I/O controller25. This metadata includes some form of address map recording locations of user data in storage21, such as an LBA/PBA (Logical Block Address to Physical Block Address) map that can map logical addresses used by system users to physical addresses in the storage21. The system metadata can also include various operating parameters/metrics of the storage system200. These operating parameters may comprise fixed and/or variable parameters (depending on the particular type and architecture of system200as described below) and are used by I/O controller25in performing a latency injection process for write operations in the system. This latency injection process may be applied to some or all write requests received by the system, as explained below. In general, the storage21may comprise one or more storage components and may comprise one or more types of memory/storage technology, including cache memory, solid-state storage such as NAND flash memory, resistive memory technology such as PCM (Phase Change Memory), HDDs (hard disk drives), tape storage, and so on. Storage21may, for instance, include tiered or distributed storage having multiple storage devices of one or more types. In some embodiments, storage21includes a cache memory for temporary storage of data, along with underlying main storage which may itself include multiple storage components of one or more types. The controller memory26may be provided separately from system storage21or may be implemented within this storage21. For example, memory26may be implemented by a volatile cache memory (e.g., DRAM (Dynamic Random Access Memory), SDRAM (Synchronous DRAM), or FCRAM (Fast-Cycle DRAM)) from which stored data is backed up to underlying, non-volatile storage on power-down of the system. I/O controller25may include control logic which is implemented in hardware, software, or a combination thereof. For example, functionality of controller25may be implemented by software, including one or more program modules, which configure one or more processors of system controller23to perform operations described herein. In some embodiments, hardware implementations (e.g., in hard-wired logic and/or programmable logic circuits such as FPGAs (Field-Programmable Gate Arrays)) can be employed for functionality of controller25, and suitable implementations will be apparent to those skilled in the art from the description of operations herein.
WhileFIG.2shows components involved in operation of the storage system200, system200may include various other components such as additional interfaces (e.g., between I/O controller25and one or more storage components of the system200), one or more buffers for buffering of read/write data, and logic modules implementing functionality such as data compression, ECC (Error-Correction Code) processing, and so on. Depending on the particular architecture of the storage system, functionality of the I/O controller25may be implemented by one or more logic modules which may be integrated with system storage21or provided separately therefrom. For example, functionality of I/O controller25may be implemented, in whole or in part, by a high-level controller for multiple storage devices (e.g., in a tiered or RAID (Redundant Array of Independent Devices)-type storage system). Particular functionality may also be implemented by one or more control modules which are integrated with system storage (e.g., embedded/on-chip controllers for local control of individual storage components). FIG.3is a flow diagram illustrating a latency injection process300that can be performed in system200for a write request, according to some embodiments of the present disclosure. Operation30represents receipt of the write request by I/O interface24. At operation31, I/O controller25can retrieve the (current) system operating parameters from controller memory26. At operation32, the I/O controller25can calculate, based on the operating parameters retrieved at operation31, a total processing time (denoted here by Tpw) for the write request. This time Tpwis an estimate of the total processing time associated with servicing the write request in storage system200. The time Tpwthus depends on parameters of that particular write request (e.g., file size and potentially other parameters discussed below) and accounts for time associated with processes occurring in the background, as described in more detail below. As indicated at operation33, the data specified in the write request is stored in system storage21, and the I/O controller25determines the actual time, denoted by Taw, taken to store the write data. In decision operation34, controller25determines whether this actual time Tawis less than the total processing time Tpwcalculated at operation32for the write request. If so, then at operation35the controller25injects latency τ before sending the completion message for the write request to I/O interface24at operation36. The completion message for the write request is thus delayed by a time τ. If Taw≥Tpwat decision operation34, then no latency is injected, and the write completion message is sent immediately at operation36. Calculation of the total processing time Tpwat operation32ofFIG.3can depend on the architecture and storage technology in system200. The principles here are illustrated schematically inFIGS.4through6for three example scenarios. FIG.4is a schematic diagram illustrating a system400in which system storage21includes a non-volatile cache41and a main storage42, according to some embodiments of the present disclosure. The I/O controller25of this system400is adapted to store data specified in a write request in the cache41for destaging to main storage42. In this scenario, the actual time Tawtaken to store data (operation33ofFIG.3) can include the time taken to store that data in the cache41. This initial data write can be performed very quickly, typically in a few microseconds.
However, destaging of this data to main storage42may take orders of magnitude longer. Here, therefore, the total processing time Tpwassociated with servicing the write request can include at least the time taken to destage data to the main storage42for that write request. Tpwmay also include the initial write time Taw, if not deemed negligible in comparison with destaging time in the system, and may further account for any additional background processing performed in the main storage42. FIG.5is a schematic diagram illustrating a system500in which there is additional background processing in a main storage, according to some embodiments of the present disclosure. Here, data units are re-written within a main storage56for internal management purposes (e.g., due to internal maintenance processes in solid-state memory and/or to add data redundancy for protection against localized system failures). In this case, therefore, after destaging of write data from cache41, additional background writes are performed in main storage56. Here, I/O controller25is adapted to calculate the total processing time Tpwassociated with servicing the write request to include the time taken to re-write data units corresponding to the write request in the main storage56. For a given write request, the number of re-written data units (e.g., flash pages or other fundamental write-units for the storage technology in question) corresponding to a write request depends on the particular background processes and the file-size of the request. For example, if data is re-written within storage56for redundancy purposes only, and with a redundancy factor f, then for X units of cached write data, the total number of data units written to storage56will be f X (initial destaging plus (f−1)X re-written units), subject to appropriate adjustment for any additional processing, such as data compression, data units added by ECC coding processes, etc., where employed. Tpwcan then account for time taken to write all these data units. Similarly, if background re-writes are performed for internal maintenance purposes (without redundancy), resulting in an average write amplification factor of W, then Tpwcan include time taken to write WX units (e.g., initial write of X units and time taken to re-write (W−1)X units, which can include time taken to read the (W−1)X units). This can be subject to appropriate adjustment for data compression, etc. Write amplification W may vary with operational state of a storage system and can be calculated periodically based on current system operating parameters. Write amplification, data compression, and redundancy factors may also vary for different user workloads, and this can be accommodated as described further below. FIG.6is a schematic diagram illustrating a system600in which write data is not cached, but data is re-written within a storage61for internal management purposes, according to some embodiments of the present disclosure. For example, the data can be re-written within storage61to add redundancy. In this example, storage61can include n storage nodes, S1to Sn. Data can be written directly to one storage node, here S1, and time for this write operation can determine the actual write time Tawfor a write request. Additional data can then be written to the other storage nodes S2to Sn, and the total processing time Tpwassociated with servicing the write request is calculated to include time taken for these extra write operations. 
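As a rough sketch of these calculations, Tpw can be estimated by scaling the logical write volume by the compression ratio R, the write amplification factor W, and the redundancy factor f. Treating the factors as independent multipliers is a simplifying assumption of this illustration, as is the 25 μs per-page cost:

    PAGE_WRITE_COST_S = 25e-6   # assumed average cost of one physical page write

    def estimate_t_pw(logical_pages: float, W: float = 1.0, f: float = 1.0,
                      R: float = 1.0) -> float:
        # Physical pages = (logical pages / compression ratio R), scaled by
        # write amplification W and redundancy factor f.
        physical_pages = (logical_pages / R) * W * f
        return physical_pages * PAGE_WRITE_COST_S

    # estimate_t_pw(8, W=4, R=2) -> 16 pages x 25 us = 400 us

The nature of the additional redundant data itself is described next.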
The additional data written to ensure redundancy can be one or multiple copies of the original data and may include parity metadata or error correction codes. While basic system architectures are described above, a given storage system may employ one or a combination of such architectures. For example, different background processes may be performed in different storage components, and/or a given storage device may perform data re-writes for both internal maintenance and redundancy purposes. In each case, however, I/O controller25can calculate a total processing time Tpwfor servicing a write request based on relevant operating parameters of the system. Such operating parameters may include cache destaging times, the number of data units that can be written in parallel to storage components, redundancy and/or write amplification factors, compression ratios, additional ECC data, etc., as appropriate for a given system. Factors such as write amplification which vary with the operational state of a storage system can be calculated periodically based on current system parameters (e.g., using internally-logged statistics for numbers of logical data units received from users and physical data units written to storage). By injecting latency into write requests as described above, I/O controller25creates time for background processes to catch up with user I/Os. This significantly reduces the likelihood that background processes will back up to a critical point at which all user I/Os need to be throttled up-front, with consequent disruption to user workloads. By injecting latency after write requests complete, controller25can target those requests that complete faster than their total servicing time in the system. This tightens the latency distribution for the system, reducing tail latencies without affecting overall system throughput. FIG.7is a block diagram illustrating components of an SSD700, according to some embodiments of the present disclosure. The SSD700includes a memory controller71and flash storage implemented here by NAND flash memory72. Flash memory72may include one or more storage channels each providing a bank of flash storage dies on one or more chips/packages of chips, and may comprise SLC (single-level cell) dies, MLC (multi-level cell) dies, or a combination thereof. Memory controller71includes an I/O interface73, an I/O controller74, a flash-link interface75, and controller memory76for storing system metadata. A write cache77is provided for caching write data to be destaged to flash memory72. In some embodiments, the SSD700may also include a read buffer78, as indicated by dashed lines inFIG.7, for buffering of data read from flash memory72. Controller memory76and read buffer78may be implemented, for example, in DRAM or in a persistent memory type such as MRAM (Magnetoresistive RAM) or PCM (Phase-Change Memory). Write cache77may be implemented by non-volatile memory such as MRAM and PCM or by a battery-backed DRAM that ensures data persistency. I/O controller74can control storage and retrieval of data in flash memory72and maintain an LBA/PBA map in controller memory76, thereby mapping locations of logical (user) data blocks to physical block addresses in the flash. I/O controller74also monitors resource usage in the system, such as utilization levels in write cache77and the amount of free storage (e.g., RTU levels) in flash memory72, and records these levels in memory76.
Various other operating parameters can be stored in memory76for use in the latency injection scheme as described below. In some embodiments, I/O controller74can be selectively operable in one of two modes.FIG.8is a flow diagram illustrating a latency injection process800for write requests in the SSD700, according to some embodiments of the present disclosure. Operation80represents receipt of a write request at I/O interface73. At operation81, the I/O controller74determines whether resource usage in the system satisfies a threshold condition signifying high resource usage levels. In this example, the threshold condition is satisfied if cache utilization exceeds a first threshold or if RTU levels drop below a second threshold. If the threshold condition is not satisfied at operation81, then the write data is cached at operation82and the completion message is sent to the I/O interface immediately. This is illustrated at operation83. Hence, the completion message is not delayed for this write request and no latency is injected. However, if resource usage is deemed high at decision81, then process800proceeds to operation84. Here, I/O controller74retrieves a “latency floor” (FW) for write operations, which is stored as an operating parameter in controller memory76. This latency floor FWcan be defined as the average cost (in terms of time) to perform all physical I/O in flash memory72associated with storing a single logical block write. The average cost includes all inline write I/Os and all I/O during background processes and is updated periodically as described below. At operation85, I/O controller74calculates the total processing time Tpwfor servicing the write request in the system. This calculation, explained in greater detail below, uses the current write latency floor FW, a write amplification factor W, and I/O metadata such as the write-request file size and any applicable compression ratio R for the write request. The write amplification W and compression ratio R can be stored as operating parameters in controller memory76. At operation86, the write data can be stored in write cache77, and I/O controller74can determine the actual time Tawtaken to cache the write data. At operation87, I/O controller74determines whether this actual time Tawis less than the total processing time Tpwcalculated at operation85for the write request. If not, then operation proceeds to operation83, and the completion message is sent immediately. No latency is therefore injected. However, if Taw<Tpwat operation87, then, at operation88, the controller injects latency τ before sending the completion message for the write request at operation83. Here, the latency τ is a function F of the difference δ between the total processing time Tpwand the actual write time Tawfor the write request, where τ is greater for larger δ. In this embodiment, F is the identity function, so τ=δ, whereby completion of the write request is delayed by (Tpw−Taw). A calculation of the total processing time Tpwat operation85is illustrated in the following example for an SSD700that has 10 flash devices connected over 10 lanes. Each flash device supports up to 4 parallel writes, thereby allowing up to 10×4=40 flash pages to be written in parallel to flash memory72. If the average program write latency (e.g., the average time to write a program unit of 40 flash pages) is 1 ms, a physical flash page can be destaged from write cache77and written to flash on average every 1000/40=25 μs. If the write amplification factor is W=4, then the latency floor FW=4×25 μs=100 μs.
For a 128 kB write request with compression ratio R=2, Tpwcan be calculated as Tpw=(128 kB/16 kB/2)×100 μs=400 μs for a flash page size of 16 kB. This corresponds to an initial write (destaging) time of (128 kB/16 kB/2)×25 μs=100 μs, plus a (128 kB/16 kB/2)×(4−1)×25 μs=300 μs cost of background re-writes due to garbage collection (here ignoring read latency, block erase latency, and all other background activities except garbage collection). Assuming the cache write takes Taw=10 μs, the latency τ injected at operation88is given by τ=100 μs+300 μs−10 μs=390 μs. FIG.9is a flow diagram illustrating a process900of monitoring system parameters stored in controller memory76, according to some embodiments of the present disclosure. At operation90, I/O controller74can retrieve the physical page-write statistics logged to flash memory72(e.g., the number of physical page writes completed in flash memory72for user data over the current operating period). At operation92, I/O controller74can read the user page-write stats (logged in controller memory76), which can be the number of logical page writes for the current operating period. At operation94, the I/O controller74can retrieve the garbage collection (GC) stats from flash memory72. The GC stats can be the number of flash pages rewritten due to garbage collection over the current period. The ratio of (physical page-writes+GC page-writes) to logical page-writes gives the current write amplification factor W. While garbage collection is the predominant contributor to write amplification, writes due to other background processes can be considered in some embodiments. For example, writes due to processes for dealing with read-disturb blocks, background error scrub, re-calibration of threshold voltage shift values, etc. can be considered, in which case statistics for these additional processes can also be retrieved at operation94. In some embodiments, predicting the future number of physical pages to be relocated, based on the amount of valid data in the flash blocks next in line for garbage collection, might be preferable to using past garbage collection statistics. At operation96, I/O controller74can then calculate the write latency floor FWfor the current period and update this value in memory76. I/O controller74can then wait (operation98) for the current operating period to expire, whereupon process900can revert to operation90, and parameters can again be updated. It will be seen that the above embodiment injects write latency dynamically as required based on resource usage in system700, and the write latency floor can be adaptive to the current operating state of the system. The latency injected can then be tailored to individual write requests, whereby requests which complete faster than their total estimated processing time are delayed by the appropriate amount. In some embodiments, the disclosed techniques can avoid indiscriminately delaying all user I/Os, which would exacerbate the tail of the latency distribution. For example, the disclosed techniques can differentiate between read and write components of user I/Os, injecting latency into write operations in proportion to their true cost. Since write bandwidth can impact read bandwidth, restricting write bandwidth when required can also improve read throughput and reduce read latency. The effect of the above technique on the SSD latency distribution is illustrated schematically inFIGS.10A and10B.
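Before turning to those distributions, the arithmetic of the worked example and of process900can be checked in a few lines; the page-write counts in the second part are invented for illustration:

    # Checking the worked example above; all values are taken from the text.
    per_page_us = 1000 / 40                      # 40 parallel page writes per 1 ms program: 25 us
    latency_floor_us = 4 * per_page_us           # FW = W x 25 us = 100 us, with W = 4
    physical_pages = 128 / 16 / 2                # 128 kB request, 16 kB pages, R = 2: 4 pages
    t_pw_us = physical_pages * latency_floor_us  # Tpw = 400 us
    tau_us = t_pw_us - 10                        # Taw = 10 us cache write, so tau = 390 us

    # Write amplification as computed by process900 (counts here are assumed):
    physical_writes, gc_writes, logical_writes = 5_000, 11_000, 4_000
    W_current = (physical_writes + gc_writes) / logical_writes   # = 4.0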
FIGS.10A and10Bare charts1000and1010illustrating latency distributions, according to some embodiments of the present disclosure.FIG.10Aillustrates how the technique operates in relation to the initial latency distribution (without latency injection).FIG.10Bshows the transformation effected by the latency injection technique. In chart1010, the long latency tail has been removed, and the distribution tightened, without affecting overall system throughput. While injecting latency into write requests has a significant effect on latency distributions, similar techniques may be applied to read requests in some embodiments. In particular, where data units are re-read within the system storage due to internal management processes (e.g., error checking, recalibration of threshold voltage shift values, etc.) or require periodic data refresh (e.g., due to read disturb or charge loss effects), the I/O controller74may delay completion of read requests if the actual time taken to read data specified in a read request is less than a total processing time (including background processes) associated with servicing the read request in the system. The total processing time associated with servicing the read request then includes time taken by background processes to re-read data units corresponding to the read request in the system storage.FIG.11shows an implementation of this process in the SSD700ofFIG.7. FIG.11is a flow diagram illustrating a process1100of read latency injection, according to some embodiments of the present disclosure. Operation110represents receipt of a read request at I/O interface73of SSD700. At operation111, I/O controller74determines whether resource usage in the system satisfies the threshold condition described above. If not, the required data is simply read at operation112and sent immediately to I/O interface73at operation113. If the resource usage does satisfy the threshold condition, however, process1100proceeds to operation114where I/O controller74retrieves a latency floor, denoted by FR, for read operations from controller memory76. Like FW, the read latency floor FRis defined as the average cost (in terms of time) associated with a physical page read, including associated background read processes, in flash memory72. This can be calculated at operation96of monitoring process900(FIG.9) based on physical and logical page read counts and background read statistics retrieved along with the write stats at operations90to94. At operation115, the controller calculates the total processing time Tprfor servicing the read request by multiplying the number of physical pages in the read request by the read latency floor FR. At operation116, the requested data is read from flash memory72and the actual read time Taris determined. In decision operation117, the controller then determines whether this actual time Taris less than the total processing time Tprfor the read request. If not, then operation proceeds to operation113and the read data is sent immediately. If Tar<Tprat operation117, then at operation118controller74injects latency by storing the read data, for a time τ, in read buffer78before sending the data to I/O interface73. As for write latency, the read latency τ may be a function of the difference δ between the total processing time Tprand the actual read time Tar(e.g., τ=δ). The embodiments described above can be adapted to support multi-tenancy and further improve quality-of-service by considering user workloads individually.
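A read-side counterpart to the earlier write sketch, mirroring operations114-118, might look as follows; the helpers (read_data, send_to_interface) are assumed, and a sleep stands in for holding the data in read buffer78:

    import time

    def service_read(request, num_pages, read_latency_floor_s, read_data,
                     send_to_interface):
        """Hypothetical read path: Tpr = physical pages x read latency floor FR."""
        t_pr = num_pages * read_latency_floor_s  # total processing time for the read
        start = time.monotonic()
        data = read_data(request)                # actual flash read
        t_ar = time.monotonic() - start          # actual read time
        if t_ar < t_pr:
            time.sleep(t_pr - t_ar)              # buffer the data for tau = delta
        send_to_interface(data)

Per-region operating parameters, discussed next, could supply different latency floors FR and FW to such paths.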
Operating parameters can be defined for each of a plurality of storage regions in the system. I/O controller74can then calculate the total processing time Tpw(and Tprfor read latency injection) for servicing a write (or read) request based on the operating parameters for a storage region used for storing the data for that request. In SSD700, for example, I/O controller74may assign different LBA ranges to different user workloads, and each LBA range may have its own set of parameters to describe the user workload and compute the read/write latency floors. Alternatively, or in addition, the controller may split the LBA space into a number of ranges (e.g., 100 GB ranges) where each range has its own set of parameters. It will be appreciated that the techniques described above can complement existing methods of up-front latency injection, allowing latency to be injected more gradually based on whether ongoing I/O workloads are sustainable. This mitigates the need for up-front latency injection and the significant disruption to user workloads that results. Various other changes and modifications can be made to the embodiments described above. By way of example, various other resource usage conditions may be used for switching between latency modes. For instance, some embodiments may switch write latency injection in and out based on cache utilization only. Various other functions may also be envisaged for calculating the latency τ and/or the read/write latency floors. For example, τ could be a monotonically increasing function of the aforementioned difference δ, subject to a minimum latency applied to all requests for which Taw<Tpw. In general, where features are described herein with reference to control apparatus, corresponding features may be provided in a storage system employing such apparatus, and in a latency control method embodying the disclosure. Operations of flow diagrams may be performed in a different order from that shown, and some operations may be performed in parallel as appropriate. FIG.12is a block diagram illustrating an exemplary computer system1200that can be used in implementing one or more of the methods, tools, components, and any related functions described herein (e.g., using one or more processor circuits or computer processors of the computer). In some embodiments, the major components of the computer system1200comprise one or more processors502, a memory subsystem504, a terminal interface512, a storage interface516, an input/output device interface514, and a network interface518, all of which can be communicatively coupled, directly or indirectly, for inter-component communication via a memory bus503, an input/output bus508, a bus interface unit507, and an input/output bus interface unit510. The computer system1200contains one or more general-purpose programmable central processing units (CPUs)502A,502B, and502-N, herein collectively referred to as the CPU502. In some embodiments, the computer system1200contains multiple processors typical of a relatively large system; however, in other embodiments the computer system1200can alternatively be a single CPU system. Each CPU502may execute instructions stored in the memory subsystem504and can include one or more levels of on-board cache. The memory504can include a random-access semiconductor memory, storage device, or storage medium (either volatile or non-volatile) for storing or encoding data and programs.
In some embodiments, the memory504represents the entire virtual memory of the computer system1200and may also include the virtual memory of other computer systems coupled to the computer system1200or connected via a network. The memory504is conceptually a single monolithic entity, but in other embodiments the memory504is a more complex arrangement, such as a hierarchy of caches and other memory devices. For example, memory may exist in multiple levels of caches, and these caches may be further divided by function, so that one cache holds instructions while another holds non-instruction data, which is used by the processor or processors. Memory can be further distributed and associated with different CPUs or sets of CPUs, as is known in any of various so-called non-uniform memory access (NUMA) computer architectures. Components of systems illustrated herein (e.g., systems200and/or400-700) can be included within the memory504in the computer system1200. However, in other embodiments, some or all of these components may be on different computer systems and may be accessed remotely (e.g., via a network). The computer system1200may use virtual addressing mechanisms that allow the programs of the computer system1200to behave as if they only have access to a large, single storage entity instead of access to multiple, smaller storage entities. Thus, components of the memory504are not necessarily all completely contained in the same storage device at the same time. Further, although these components are illustrated as being separate entities, in other embodiments some of these components, portions of some of these components, or all of these components may be packaged together. In an embodiment, components of systems200and400-700include instructions that execute on the processor502or instructions that are interpreted by instructions that execute on the processor502to carry out the functions as further described in this disclosure. In another embodiment, the components of systems200and400-700are implemented in hardware via semiconductor devices, chips, logical gates, circuits, circuit cards, and/or other physical hardware devices in lieu of, or in addition to, a processor-based system. In another embodiment, the components of systems200and400-700include data in addition to instructions. Although the memory bus503is shown inFIG.12as a single bus structure providing a direct communication path among the CPUs502, the memory subsystem504, the display system506, the bus interface507, and the input/output bus interface510, the memory bus503can, in some embodiments, include multiple different buses or communication paths, which may be arranged in any of various forms, such as point-to-point links in hierarchical, star or web configurations, multiple hierarchical buses, parallel and redundant paths, or any other appropriate type of configuration. Furthermore, while the input/output bus interface510and the input/output bus508are shown as single respective units, the computer system1200may, in some embodiments, contain multiple input/output bus interface units510, multiple input/output buses508, or both. Further, while multiple input/output interface units are shown, which separate the input/output bus508from various communications paths running to the various input/output devices, in other embodiments some or all of the input/output devices may be connected directly to one or more system input/output buses. 
The computer system1200may include a bus interface unit507to handle communications among the processor502, the memory504, a display system506, and the input/output bus interface unit510. The input/output bus interface unit510may be coupled with the input/output bus508for transferring data to and from the various input/output units. The input/output bus interface unit510communicates with multiple input/output interface units512,514,516, and518, which are also known as input/output processors (IOPs) or input/output adapters (IOAs), through the input/output bus508. The display system506may include a display controller. The display controller may provide visual, audio, or both types of data to a display device505. The display system506may be coupled with a display device505, such as a standalone display screen, computer monitor, television, or a tablet or handheld device display. In alternate embodiments, one or more of the functions provided by the display system506may be on board a processor502integrated circuit. In addition, one or more of the functions provided by the bus interface unit507may be on board a processor502integrated circuit. In some embodiments, the computer system1200is a multi-user mainframe computer system, a single-user system, or a server computer or similar device that has little or no direct user interface but receives requests from other computer systems (clients). Further, in some embodiments, the computer system1200is implemented as a desktop computer, portable computer, laptop or notebook computer, tablet computer, pocket computer, telephone, smart phone, network switches or routers, or any other appropriate type of electronic device. It is noted thatFIG.12is intended to depict the representative major components of an exemplary computer system1200. In some embodiments, however, individual components may have greater or lesser complexity than as represented inFIG.12. Components other than or in addition to those shown inFIG.12may be present, and the number, type, and configuration of such components may vary. In some embodiments, the data storage and retrieval processes described herein could be implemented in a cloud computing environment, which is described below with respect toFIGS.13and14. It is to be understood that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein is not limited to a cloud computing environment. Rather, embodiments of the present disclosure are capable of being implemented in conjunction with any other type of computing environment now known or later developed. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models. Characteristics are as follows: On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider.
Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs). Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher-level of abstraction (e.g., country, state, or datacenter). Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time. Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service. Service Models are as follows: Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations. Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls). Deployment Models are as follows: Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises. Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises. 
Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services. Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds). A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure that includes a network of interconnected nodes. FIG.13is a block diagram illustrating a cloud computing environment1300, according to some embodiments of the present disclosure. As shown, cloud computing environment1300includes one or more cloud computing nodes610with which local computing devices used by cloud consumers, such as, for example, personal digital assistant (PDA) or cellular telephone620A, desktop computer620B, laptop computer620C, and/or automobile computer system620D may communicate. Nodes610may communicate with one another. They may be grouped (not shown) physically or virtually, in one or more networks, such as Private, Community, Public, or Hybrid clouds as described hereinabove, or a combination thereof. This allows cloud computing environment1300to offer infrastructure, platforms and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device. It is understood that the types of computing devices620A-620D shown inFIG.13are intended to be illustrative only and that computing nodes610and cloud computing environment1300can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser). FIG.14is a block diagram illustrating a set of functional abstraction model layers1400provided by the cloud computing environment1300, according to some embodiments of the present disclosure. It should be understood in advance that the components, layers, and functions shown inFIG.14are intended to be illustrative only and embodiments of the disclosure are not limited thereto. As depicted, the following layers and corresponding functions are provided: Hardware and software layer710includes hardware and software components. Examples of hardware components include: mainframes711; RISC (Reduced Instruction Set Computer) architecture-based servers712; servers713; blade servers714; storage devices715; and networks and networking components716. In some embodiments, software components include network application server software717and database software718. Virtualization layer720provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers721; virtual storage722; virtual networks723, including virtual private networks; virtual applications and operating systems724; and virtual clients725. In one example, management layer730provides the functions described below. Resource provisioning731provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and Pricing732provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources.
In one example, these resources may include application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal733provides access to the cloud computing environment for consumers and system administrators. Service level management734provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment735provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA. Workloads layer740provides examples of functionality for which the cloud computing environment can be utilized. Examples of workloads and functions that can be provided from this layer include: mapping and navigation741; software development and lifecycle management742; virtual classroom education delivery743; data analytics processing744; transaction processing745; and latency injection746. The present disclosure may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. 
A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a standalone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure. Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. 
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be accomplished as one step, executed concurrently, substantially concurrently, in a partially or wholly temporally overlapping manner, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. The descriptions of the various embodiments of the present disclosure have been presented for purposes of illustration but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. Although the present disclosure has been described in terms of specific embodiments, it is anticipated that alterations and modifications thereof will become apparent to those skilled in the art. Therefore, it is intended that the following claims be interpreted as covering all such alterations and modifications as fall within the true spirit and scope of the present disclosure. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the various embodiments. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "includes" and/or "including," when used in this specification, specify the presence of the stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. 
In the previous detailed description of example embodiments of the various embodiments, reference was made to the accompanying drawings (where like numbers represent like elements), which form a part hereof, and in which is shown by way of illustration specific example embodiments in which the various embodiments may be practiced. These embodiments were described in sufficient detail to enable those skilled in the art to practice the embodiments, but other embodiments may be used and logical, mechanical, electrical, and other changes may be made without departing from the scope of the various embodiments. In the previous description, numerous specific details were set forth to provide a thorough understanding of the various embodiments. However, the various embodiments may be practiced without these specific details. In other instances, well-known circuits, structures, and techniques have not been shown in detail in order not to obscure embodiments. When different reference numbers comprise a common number followed by differing letters (e.g., 100a, 100b, 100c) or punctuation followed by differing numbers (e.g., 100-1, 100-2, or 100.1, 100.2), use of the reference character only without the letter or following numbers (e.g., 100) may refer to the group of elements as a whole, any subset of the group, or an example specimen of the group. As used herein, "a number of" when used with reference to items, means one or more items. For example, "a number of different types of networks" is one or more different types of networks. Further, the phrase "at least one of," when used with a list of items, means different combinations of one or more of the listed items can be used, and only one of each item in the list may be needed. In other words, "at least one of" means any combination of items and number of items may be used from the list, but not all of the items in the list are required. The item can be a particular object, a thing, or a category. For example, without limitation, "at least one of item A, item B, and item C" may include item A, item A and item B, or item B. This example also may include item A, item B, and item C or item B and item C. Of course, any combinations of these items can be present. In some illustrative examples, "at least one of" can be, for example, without limitation, two of item A; one of item B; ten of item C; four of item B and seven of item C; or other suitable combinations. | 64,291 |
11861176 | DETAILED DESCRIPTION Systems and methods are described for smoothing out the latency of IO operations processed by a distributed storage system. One way of attempting to provide a better user experience for users of distributed storage systems is by providing a QoS feature that allows users to set a QoS that guarantees a particular level of performance for volumes of the distributed storage system. For example, QoS may guarantee a particular level of performance by provisioning minimum, maximum, and/or burst levels of input/output operations per second (IOPS) to the volumes. A distributed storage system may divide time into IO processing intervals and periodically set a target number of IO operations that can be performed per IO processing interval based on various metrics (e.g., a measure of load on the distributed storage system and/or a measure of block storage fullness). While proper settings for various QoS parameters may enhance overall performance of the distributed storage system, spikes in IOPS may occur at the beginning of each IO processing interval as described further below with reference to FIG. 3. Spikes in IOPS for any volume hosted by a slice service of a storage node may reduce the efficiency of all other volumes hosted by the same slice service and increase the latency of each of the IO operations being processed concurrently. Meanwhile, the volumes are under-utilized for the remainder of the IO processing interval after the target IOPS has been achieved. Embodiments described herein seek to address or at least mitigate the above-mentioned inefficiencies by smoothing out latency spikes that would otherwise be experienced during input/output (IO) processing intervals of a distributed storage system. Various embodiments of the present technology provide for a wide range of technical effects, advantages, and/or improvements to computing systems and components. For example, various embodiments may include technical effects, advantages, and/or improvements relating to one or more of (i) distribution of latency more evenly over IO processing intervals of a distributed storage system; (ii) efficiency of processing of IO operations by the distributed storage system; and (iii) use of non-routine and unconventional computer operations to enhance the use of metrics to leverage average real-time data on a continuous scale rather than on an interval scale. In the context of various examples described herein, latency is distributed among IO operations to more evenly spread processing of the IO operations over an IO processing interval. A target latency for input/output (IO) operations for a volume of multiple volumes of a distributed storage system is periodically calculated. The periodic calculation may include determining a target number of IO operations per second (target IOPS) to be processed during the IO processing interval for the volume and determining the target latency by dividing the IO processing interval by the target IOPS. As IO operations are received by the distributed storage system for the volume, a latency may be associated with each IO operation based on the target latency, and the IO operation may be added to the back of a queue. Subsequently, during monitoring of the queue, responsive to expiration of a time period that is based on a time at which a given IO operation at the head of the queue was received and the assigned latency, the given IO operation is removed from the queue and processed. 
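For concreteness, the following minimal Python sketch illustrates the target-latency derivation summarized above; the 500 ms interval and all names are illustrative assumptions rather than the patent's actual implementation:

INTERVAL_S = 0.5  # assumed IO processing interval of 500 ms

def target_latency_s(target_ops_per_interval: int) -> float:
    # Spacing assigned to each IO so the target count is spread over the interval.
    return INTERVAL_S / target_ops_per_interval

# A target of 1,000 IO operations per 500 ms interval yields a 0.5 ms
# latency associated with each arriving IO operation.
print(target_latency_s(1000))  # 0.0005

Under these assumptions, each arriving IO operation is stamped with this spacing and queued, and the queue monitor releases it once the assigned delay has elapsed.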
In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, to one skilled in the art that embodiments of the present disclosure may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form. Terminology Brief definitions of terms used throughout this application are given below. A "computer" or "computer system" may be one or more physical computers, virtual computers, or computing devices. As an example, a computer may be one or more server computers, cloud-based computers, cloud-based clusters of computers, virtual machine instances or virtual machine computing elements such as virtual processors, storage and memory, data centers, storage devices, desktop computers, laptop computers, mobile devices, or any other special-purpose computing devices. Any reference to "a computer" or "a computer system" herein may mean one or more computers, unless expressly stated otherwise. Depending upon the particular context, a "client" may be used herein to refer to a physical or virtual machine or a process running thereon. A client process may be responsible for storing, retrieving, and deleting data in the system. A client process may address pieces of data depending on the nature of the storage system and the format of the data stored. For example, the client process may reference data using a client address. The client address may take different forms. For example, in a storage system that uses file storage, the client may reference a particular volume or partition, and a file name. With object storage, the client address may be a unique object name. For block storage, the client address may be a volume or partition, and a block address. Clients may communicate with metadata, corresponding to the slice services and the volume(s) residing on the slice services, using different protocols, such as SCSI, iSCSI, FC, common Internet file system (CIFS), network file system (NFS), HTTP, web-based distributed authoring and versioning (WebDAV), or a custom protocol. Each client may be associated with a volume. In some examples, only one client accesses data in a volume. In some examples, multiple clients may access data in a single volume. As used herein, "telemetry data" generally refers to performance, configuration, load, and other system data of a monitored system. Telemetry data may refer to one data point or a range of data points. Non-limiting examples of telemetry data for a distributed storage system include latency, utilization, a number of input output operations per second (IOPS), a slice service (SS) load, Quality of Service (QoS) settings, or any other performance related information. As used herein, "slice service load" or "SS load" generally refers to a measure of volume load per storage node of a distributed storage system. As described further below, IO operations may be throttled by the storage operating system of the distributed storage system depending upon and responsive to observation of the SS load exceeding various predefined or configurable thresholds. In one embodiment, SS load is a measure of cache (e.g., primary cache and secondary cache) capacity utilization in bytes (e.g., percent full of 8 gigabytes (GB) in the primary cache and a relatively large number of GB in the secondary cache). 
Depending upon the particular implementation, the SS load may be the maximum between the fullness of the primary cache and the secondary cache (e.g., a maximum among all services hosting a given volume). According to one embodiment, these two metrics, along with perceived latency, may be the inputs into the SS load calculation. For example, SS load may be the maximum value among primary cache fullness, secondary cache fullness, and latency. The terms "connected" or "coupled" and related terms are used in an operational sense and are not necessarily limited to a direct connection or coupling. Thus, for example, two devices may be coupled directly, or via one or more intermediary media or devices. As another example, devices may be coupled in such a way that information can be passed therebetween, while not sharing any physical connection with one another. Based on the disclosure provided herein, one of ordinary skill in the art will appreciate a variety of ways in which connection or coupling exists in accordance with the aforementioned definition. If the specification states a component or feature "may", "can", "could", or "might" be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic. As used in the description herein and throughout the claims that follow, the meaning of "a," "an," and "the" includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise. The phrases "in an embodiment," "according to one embodiment," and the like generally mean the particular feature, structure, or characteristic following the phrase is included in at least one embodiment of the present disclosure, and may be included in more than one embodiment of the present disclosure. Importantly, such phrases do not necessarily refer to the same embodiment. The terms "component", "module", "system," and the like as used herein are intended to refer to a computer-related entity, either a software-executing general-purpose processor, hardware, firmware, or a combination thereof. For example, a component may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers. Also, these components can execute from various non-transitory, computer readable media having various data structures stored thereon. The components may communicate via local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems via the signal). Example Operating Environment FIG. 1 is a block diagram illustrating an environment 100 in which various embodiments may be implemented. 
In various examples described herein, an administrator (e.g., user 112) of a distributed storage system (e.g., cluster 135) or a managed service provider responsible for multiple distributed storage systems of the same or multiple customers may monitor various telemetry data of the distributed storage system or multiple distributed storage systems via a browser-based interface presented on a client (e.g., computer system 110). In the context of the present example, the environment 100 is shown including a data center 130, a cloud 120, a computer system 110, and a user 112. The data center 130, the cloud 120, and the computer system 110 may be coupled in communication via a network 105, which, depending upon the particular implementation, may be a Local Area Network (LAN), a Wide Area Network (WAN), or the Internet. The data center 130 may represent an enterprise data center (e.g., an on-premises customer data center) that is built, owned, and operated by a company, or the given data center may be managed by a third party (or a managed service provider) on behalf of the company, which may lease the equipment and infrastructure. Alternatively, the data center 130 may represent a colocation data center in which a company rents space of a facility owned by others and located off the company premises. Data center 130 is shown including a distributed storage system (e.g., cluster 135) and a collector 138. Those of ordinary skill in the art will appreciate that additional IT infrastructure may be part of the data center 130; however, discussion of such additional IT infrastructure is unnecessary to the understanding of the various embodiments described herein. Turning now to the cluster 135, it includes multiple storage nodes 136a-n and an API 137. In the context of the present example, the multiple storage nodes 136a-n are organized as a cluster and provide a distributed storage architecture to service storage requests issued by one or more clients (e.g., computer system 110) of the cluster. The data served by the storage nodes 136a-n may be distributed across multiple storage units embodied as persistent storage devices, including but not limited to hard disk drives, solid state drives, flash memory systems, or other storage devices. A non-limiting example of a storage node 136 is described in further detail below with reference to FIG. 2. The API 137 may provide an interface through which the cluster 135 is configured and/or queried by external actors (e.g., the collector 138, clients, and a cloud-based, centralized monitoring system (e.g., monitoring system 122)). Depending upon the particular implementation, the API 137 may represent a Representational State Transfer (REST)ful API that uses Hypertext Transfer Protocol (HTTP) methods (e.g., GET, POST, PATCH, DELETE, and OPTIONS) to indicate its actions. Depending upon the particular embodiment, the API 137 may provide access to various telemetry data (e.g., performance, configuration, and other system data) relating to the cluster 135 or components thereof. In one embodiment, API calls may be used to obtain information regarding a custom, proprietary, or standardized measure of the overall load or overall performance (e.g., IOPS) of a particular storage node 136 or to obtain information regarding the overall load or performance of multiple storage nodes 136. 
As those skilled in the art will appreciate, various other types of telemetry data, including, but not limited to, measures of latency, utilization, load, and/or performance at various levels (e.g., the cluster level, the storage node level, or the storage node component level), may be made available via the API 137 and/or used internally by various monitoring modules. The collector 138 may be implemented locally within the same data center in which the cluster 135 resides and may periodically poll for telemetry data of the cluster 135 via the API 137. Depending upon the particular implementation, the polling may be performed at a predetermined or configurable interval (e.g., 60 seconds). The collector 138 may locally process and/or aggregate the collected telemetry data over a period of time by data point values and/or by ranges of data point values and provide frequency information regarding the aggregated telemetry data retrieved from the cluster 135 to the centralized monitoring system for local use or analysis by the user 112. In the context of the present example, the cloud 120, which may represent a private or public cloud accessible (e.g., via a web portal) to an administrator (e.g., user 112) associated with a managed service provider, includes the monitoring system 122 that may be used to facilitate evaluation and/or selection of new QoS settings. System Metrics and Load of a Distributed Storage System A distributed storage system (e.g., cluster 135) may include a performance manager or other system metric monitoring and evaluation functionality that can monitor clients' use of the distributed storage system's resources. In addition, the performance manager and/or a QoS system (e.g., a QoS module) may be involved in the regulation of a client's use of the distributed storage system. The client's use of the distributed storage system can be adjusted based upon one or more of system metrics, the client's QoS settings, and the load of the distributed storage system. System metrics may be various measurable attributes of the distributed storage system that may directly represent, or be used to calculate, a load of the distributed storage system, which, as described in greater detail below, can be used to throttle clients of the distributed storage system. System metrics are metrics that reflect the use of the system or components of the distributed storage system by all clients. System metrics can include metrics associated with the entire distributed storage system or with components within the distributed storage system. For example, system metrics can be calculated at the system level, cluster level, node level, service level, or drive level. Space utilization is one example of a system metric. The cluster space utilization reflects how much space is available for a particular cluster, while the drive space utilization metric reflects how much space is available for a particular drive. Space utilization metrics can also be determined at the system level (e.g., the percentage of configured block storage remaining available within the distributed storage system), the service level, and the node level. Other examples of system metrics include measured or aggregated metrics such as read latency, write latency, IOPS, read IOPS, write IOPS, I/O size, write cache capacity, dedupe-ability, compressibility, total bandwidth, read bandwidth, write bandwidth, read/write ratio, workload type, data content, data type, etc. IOPS can be real input/output operations per second that are measured for a cluster or drive. 
Bandwidth may be the amount of data that is being transferred between clients and the volume. Read latency may represent the time taken for the distributed storage system to read data from a volume and return the data to a client. Write latency may represent the time taken for the distributed storage system to write data and return a success indicator to the client. Workload type can indicate whether IO access is sequential or random. The data type can identify the type of data being accessed/written, e.g., text, video, images, audio, etc. The write cache capacity may refer to a write cache of a node, a block server, or a volume server. The write cache may be implemented in the form of a relatively fast memory that is used to store data before it is written to storage. As noted above, each of these metrics can be independently calculated for the system, a cluster, a node, etc. In addition, these values can also be calculated at a client level. IOPS may be calculated based on latency and the number of concurrent outstanding read and/or write operations that may be queued (QueueDepth) by the distributed storage system as follows: IOPS = QueueDepth/Latency. Bandwidth may be calculated based on QueueDepth, latency, and I/O size as follows: Bandwidth = (QueueDepth*IOSize)/Latency, where IOSize is the average I/O size over a period of time (typically falling between 4 KB and 32 KB, inclusive). System metrics may be calculated over a period of time, e.g., 250 milliseconds (ms), 500 ms, 1 second (s), etc., that may be referred to herein as a sample period or sampling interval. Accordingly, different values such as a min, max, standard deviation, average, etc., can be calculated for each system metric. One or more of the metrics may directly represent and/or be used to calculate a value that represents a load of the distributed storage system. Loads can be calculated for the distributed storage system as a whole, for individual components, for individual services, and/or individual clients. System load values may then be used by the QoS system to determine whether and how clients are to be throttled. In some embodiments, performance for individual clients may be adjusted based upon the monitored system metrics. For example, based on a number of factors (e.g., system metrics and client QoS settings), a number of IO operations that can be performed by a particular client over a period of time (referred to herein as an IO processing interval) may be managed. As described in further detail below, the performance manager and/or the QoS system may regulate the number of IO operations that are performed by delaying IO operations submitted by clients by some amount, for example, as determined by a calculated target latency, before allowing the IO operations to proceed. In one implementation, responsive to a client exceeding a QoS bound assigned to it and/or responsive to the aggregate of IO operations received from all clients during a particular IO processing interval exceeding certain defined limits, the performance manager and/or the QoS system may regulate the number of IOPS that are performed by locking a client out of a volume for different amounts of time to manage how many IOPS can be performed by the client. 
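The two formulas above can be expressed directly in code. The following Python sketch is illustrative only; the function names and units are assumptions, not part of the patent:

def iops(queue_depth: int, latency_s: float) -> float:
    # IOPS = QueueDepth / Latency (latency expressed in seconds).
    return queue_depth / latency_s

def bandwidth_bytes_per_s(queue_depth: int, latency_s: float, io_size_bytes: float) -> float:
    # Bandwidth = (QueueDepth * IOSize) / Latency.
    return (queue_depth * io_size_bytes) / latency_s

# Eight outstanding 4 KB operations at 2 ms average latency:
print(iops(8, 0.002))                         # 4000.0 IOPS
print(bandwidth_bytes_per_s(8, 0.002, 4096))  # 16384000.0 bytes/s (~16 MB/s)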
For example, in accordance with an approach referred to herein as IOPS push back and described further below with reference to FIG. 5, when a client is heavily restricted, the client may be locked out of accessing a volume for 450 ms of every 500 ms, and when the client is not heavily restricted, the client may be blocked out of a volume for 50 ms of every 500 ms. As such, in this example, the lockout effectively manages the number of IOPS that the client may perform every 500 ms. Although examples using IOPS are described, other metrics may also be used, as will be described in more detail below. Client Quality of Service (QoS) Parameter Settings In addition to system metrics, client quality of service (QoS) parameters can be used to affect how a client uses the distributed storage system. Unlike metrics, client QoS parameters are not measured values, but rather represent variables that can be set to define the desired QoS bounds for a client. Client QoS parameters can be set by an administrator or a client. In one implementation, client QoS parameters include minimum, maximum, and max burst values. Using IOPS as an example, a minimum IOPS value is a proportional amount of performance of a cluster for a client. Thus, the minimum IOPS is not a guarantee that the volume will always perform at this minimum IOPS value. When a volume is in an overload situation, the minimum IOPS value is the minimum number of IOPS that the distributed storage system attempts to provide the client. However, based upon cluster performance, an individual client's IOPS may be lower or higher than the minimum value during an overload situation. In one implementation, the distributed storage system can be provisioned such that the sum of the minimum IOPS across all clients can be sustained for all clients at a given time. In this situation, each client should be able to perform at or above its minimum IOPS value. The distributed storage system, however, can also be provisioned such that the sum of the minimum IOPS across all clients cannot be sustained for all clients. In this case, if the distributed storage system becomes overloaded through the use of all clients, the client's realized IOPS can be less than the client's minimum IOPS value. In failure situations, the distributed storage system may also throttle users such that their realized IOPS are less than their minimum IOPS value. A maximum IOPS parameter is the maximum sustained IOPS value over an extended period of time. The burst IOPS parameter is the maximum IOPS value that a client can "burst" above the maximum IOPS parameter for a short period of time based upon credits. In one implementation, credits for a client are accrued when a given client is operating under its respective maximum IOPS parameter. Accordingly, clients may be limited to use of the distributed storage system in accordance with their respective maximum IOPS and burst IOPS parameters. For example, a given client may not be able to use the distributed storage system's full resources, even if they are available, but rather, may be bounded by the respective maximum IOPS and burst IOPS parameters of the given client. In some embodiments, client QoS parameters can be changed at any time by the client, an administrator, and/or by automated means. Example Storage Node FIG. 2 is a block diagram illustrating a storage node 200 in accordance with an embodiment of the present disclosure. Storage node 200 represents a non-limiting example of storage nodes 136a-n. 
In the context of the present example, storage node 200 may include a storage operating system (OS) 210, one or more slice services 220a-n, and one or more block services 216a-q. The storage OS 210 may provide access to data stored by the storage node 200 via various protocols (e.g., small computer system interface (SCSI), Internet small computer system interface (iSCSI), fibre channel (FC), common Internet file system (CIFS), network file system (NFS), hypertext transfer protocol (HTTP), web-based distributed authoring and versioning (WebDAV), or a custom protocol). A non-limiting example of the storage OS 210 is NetApp Element Software (e.g., the SolidFire Element OS) based on Linux and designed for SSDs and scale-out architecture with the ability to expand up to 100 storage nodes. In the context of the present example, the storage OS 210 also includes a QoS module 215, a workload monitoring module 212, and a system metric monitoring module 213. The QoS module 215 may be responsible for, among other things, applying QoS settings (e.g., as requested by clients) to one or more volumes utilized by the clients, periodically calculating a target latency for IO operations, placing the IO operations on a queue with an assigned latency based on the target latency, and monitoring the queue to cause the IO operations to be processed within the assigned latency. A non-limiting example of periodic latency calculation is described in further detail below with reference to FIG. 4. A non-limiting example of IO operation queuing and latency assignment is described in further detail below with reference to FIG. 6. A non-limiting example of IO queue monitoring processing is described in further detail below with reference to FIG. 8. While various examples herein may be described with reference to a minimum IOPS, a maximum IOPS, and a burst IOPS as an example set of QoS settings, it is to be appreciated that the various approaches for automated tuning of QoS settings described herein are equally applicable to various other individual QoS settings and to sets of one or more QoS settings, including, but not limited to, a read latency parameter, a write latency parameter, a total IOPS parameter, a read IOPS parameter, a write IOPS parameter, an I/O size parameter, a total bandwidth parameter, a read bandwidth parameter, a write bandwidth parameter, and a read/write IOPS ratio parameter. While in the context of the present example, a single instance of the QoS module 215 is shown within the storage OS 210, an instance of the QoS module 215 may alternatively be implemented within each of the slice services 220a-n. The workload monitoring module 212 may be responsible for monitoring and evaluating information (e.g., IOPS) indicative of a workload to which the storage node 200 is exposed. While various examples described herein may be described in the context of a total number of IOPS, it is to be appreciated that the various approaches for automated tuning of QoS settings described herein are equally applicable to other individual characteristics of a workload or sets of one or more workload characteristics, including, but not limited to, a number of read IOPS, a number of write IOPS, a proportion of read IOPS to write IOPS, an I/O size, and a statistical measure of any of the foregoing over a period of time. The system metric monitoring module 213 may be responsible for monitoring and calculating a measure of load on the cluster as a whole and/or at various levels or layers of the cluster or the storage node 200. 
For example, metrics may be available for individual or groups of storage nodes (e.g., storage nodes 136a-n), individual or groups of volumes 221, individual or groups of slice services 220, and/or individual or groups of block services 216. The QoS module 215, the workload monitoring module 212, and the system metric monitoring module 213 may periodically monitor and collect QoS settings, workload characteristics, and/or system metrics every sampling period (e.g., 500 ms). In addition, the QoS module 215 may periodically (e.g., every sampling period) make use of the collected data/metrics directly or indirectly to perform a latency calculation for use in connection with distributing latency among multiple IO operations as described below. While various examples described herein may be described with reference to the use of SS load as an example system load metric, it is to be appreciated that the various approaches for automated tuning of QoS settings described herein are equally applicable to various other individual system metrics and to sets of one or more system metrics, including, but not limited to, a read latency metric, a write latency metric, an IOPS metric, a read IOPS metric, a write IOPS metric, a total bandwidth metric, a read bandwidth metric, a write bandwidth metric, a read/write IOPS ratio metric, a read/write latency metric, and a read/write bandwidth ratio metric. Turning now to the slice services 220a-n, each slice service 220 may include one or more volumes (e.g., volumes 221a-x, volumes 221c-y, and volumes 221e-z). Client systems (not shown) associated with an enterprise may store data to one or more volumes, retrieve data from one or more volumes, and/or modify data stored on one or more volumes. The slice services 220a-n and/or the client system may break data into data blocks. Block services 216a-q and slice services 220a-n may maintain mappings between an address of the client system and the eventual physical location of the data block in respective storage media of the storage node 200. In one embodiment, volumes 221 include unique and uniformly random identifiers to facilitate even distribution of a volume's data throughout a cluster (e.g., cluster 135). The slice services 220a-n may store metadata that maps between client systems and block services 216. For example, slice services 220 may map between the client addressing used by the client systems (e.g., file names, object names, block numbers, etc., such as Logical Block Addresses (LBAs)) and block layer addressing (e.g., block identifiers) used in block services 216. Further, block services 216 may map between the block layer addressing (e.g., block identifiers) and the physical location of the data block on one or more storage devices. The blocks may be organized within bins maintained by the block services 216 for storage on physical storage devices (e.g., SSDs). A bin may be derived from the block ID for storage of a corresponding data block by extracting a predefined number of bits from the block identifier. In some embodiments, the bin may be divided into buckets or "sublists" by extending the predefined number of bits extracted from the block identifier. A bin identifier may be used to identify a bin within the system. The bin identifier may also be used to identify a particular block service 216a-q and associated storage device (e.g., SSD). A sublist identifier may identify a sublist within the bin, which may be used to facilitate network transfer (or syncing) of data among block services in the event of a failure or crash of the storage node 200. 
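As a rough illustration of the bin and sublist derivation just described, the following Python sketch extracts leading bits from a 128-bit block identifier; the bit widths and identifier length are hypothetical choices for illustration, not values given in the patent:

BIN_BITS = 8      # hypothetical: 256 bins
SUBLIST_BITS = 4  # hypothetical: 16 sublists per bin
ID_BITS = 128     # hypothetical block identifier width

def bin_id(block_id: int) -> int:
    # The bin is the predefined number of leading bits of the block ID.
    return block_id >> (ID_BITS - BIN_BITS)

def sublist_id(block_id: int) -> int:
    # The sublist extends the extracted bits by SUBLIST_BITS more bits.
    return (block_id >> (ID_BITS - BIN_BITS - SUBLIST_BITS)) & ((1 << SUBLIST_BITS) - 1)

block_id = 0xDEADBEEF << 96  # a hypothetical 128-bit identifier
print(bin_id(block_id), sublist_id(block_id))  # 222 10

Because volume data is addressed by uniformly random identifiers, such bit extraction tends to spread blocks evenly across bins, and therefore across block services and SSDs.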
Accordingly, a client can access data using a client address, which is eventually translated into the corresponding unique identifiers that reference the client's data at the storage node 200. For each volume 221 hosted by a slice service 220, a list of block identifiers may be stored with one block identifier for each logical block on the volume. Each volume may be replicated between one or more slice services 220 and/or storage nodes 200, and the slice services for each volume may be synchronized between each of the slice services hosting that volume. Accordingly, failover protection may be provided in case a slice service 220 fails, such that access to each volume may continue during the failure condition. The above structure allows storing of data evenly across the cluster of storage devices (e.g., SSDs), which allows for performance metrics to be used to manage load in the cluster. For example, if the cluster is under a load meeting or exceeding a particular threshold, clients can be throttled or locked out of a volume by, for example, the storage OS 210 reducing the amount of read or write data that is being processed by the storage node 200. In some examples, data may be locally stored in a primary cache in a storage controller's dynamic random-access memory (DRAM) after a host read to improve read performance. Additionally, a secondary cache consisting of a set of SSD drives may be used in combination with the primary cache. The set of SSD drives may be logically grouped together as a volume to facilitate movement of data to and from the secondary cache via simple volume I/O mechanisms. After data is cached and stored on the SSDs, subsequent reads of that data are performed on the SSD cache, thereby eliminating the need to access the HDD volume. In some embodiments, IO operations may be throttled by the storage OS 210 depending upon and responsive to one or more system metrics (e.g., SS load and/or a measure of fullness of block storage) exceeding various predefined or configurable thresholds. A graph illustrating an example of IOPS push back is described below with reference to FIG. 5. Latency Distribution FIG. 3 is a graph 300 illustrating the difference between an IO processing approach that distributes latency among multiple IO operations for an IO processing interval and one that does not. The graph 300 depicts three IO processing intervals (e.g., target IOPS interval 310) during which a distributed storage system processes a target number of IO operations per second (e.g., target IOPS 320) per IO processing interval. In the context of the present example, the target IOPS interval 310 is 500 ms, but it may be shorter or longer. In some examples, the target IOPS 320 is periodically calculated for the next IO processing interval based on various metrics (e.g., a measure of load on the distributed storage system and/or a measure of block storage fullness) collected during a sample period (which may be the same duration as the target IOPS interval). In the context of the present example, the solid lines illustrate IOPS over time in a scenario in which latency distribution is not performed. Since latencies are not assigned to the IO operations, spikes in IOPS may be observed at the beginning of each target IOPS interval as numerous pending IO operations (some carried over from a prior target IOPS interval) are processed concurrently. 
As noted above, such spikes in IOPS for any volume (e.g., volume 221a) hosted by a slice service (e.g., slice service 220a) of a storage node (e.g., storage node 200) may reduce the efficiency of all other volumes (e.g., volumes 221b-x) hosted by the same slice service and increase the latency of each of the IO operations being processed concurrently. Meanwhile, the volumes are under-utilized for the remainder of the IO processing interval after the target IOPS 320 has been achieved. Embodiments described herein seek to address or at least mitigate the above-mentioned inefficiencies by smoothing out latency spikes that would otherwise be experienced during the IO processing intervals. In the context of the present example, the dashed lines illustrate IOPS over time in a scenario in which a latency distribution approach in accordance with various embodiments described herein is performed. As can be seen, the slope or steepness of the dashed lines as IOPS is increasing during the IO processing intervals is less than that of the corresponding solid lines, thereby more uniformly spreading the performance of the IO operations over the IO processing intervals and lessening latency spikes. One potential way to achieve such a result would be to shrink the target IOPS interval and the sample period to zero; however, such an approach would increase overhead costs due to more frequent sampling. As described in further detail below, latency distribution may instead be accomplished by maintaining the notion of a target IOPS interval and a sampling period and adding on top of them the new features of calculating a target latency and applying latencies, based on the target latency, to IO operations as they are received by the distributed storage system. Target Latency Calculation FIG. 4 is a flow diagram illustrating a set of operations for periodically determining a target latency for an IO processing interval in accordance with an embodiment of the present disclosure. The various blocks of the flow diagrams of FIGS. 4, 6, and 8 may be executed by a computing device (e.g., a processor, processing circuit, and/or other suitable component, for example, of a storage node (e.g., storage node 200)). In the context of various examples, latency calculation, IO operation queuing and latency assignment, and IO queue monitoring may be described with reference to a QoS module (e.g., QoS module 215) implemented within a storage operating system (e.g., storage operating system 210) or slice services (e.g., slice services 220a-n) of each storage node of a distributed storage system (e.g., cluster 135). At decision block 410, a determination is made regarding whether a sample period has expired. If so, processing continues with block 420; otherwise, processing loops back to decision block 410. During the sample period (e.g., 500 ms), various data may be collected and metrics (e.g., a measure of volume load per storage node of the distributed storage system and/or a measure of fullness of block storage available within the distributed storage system) may be calculated. In one embodiment, the metrics may be based on data collected during the sample period. Alternatively, if hysteresis is desired, the metrics may be based in part (e.g., 30%) on accumulated past data from prior sample periods and based in part (e.g., 70%) on data gathered from the current sample period. In this manner, the metrics may leverage average real-time data on a continuous scale rather than on an interval scale. A non-limiting example of the measure of load is SS load. 
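A minimal Python sketch of the hysteresis option just described, treating the 30%/70% weights from the text as configurable assumptions, might blend each metric as follows:

PAST_WEIGHT = 0.3     # weight on accumulated data from prior sample periods
CURRENT_WEIGHT = 0.7  # weight on data from the current sample period

def blended_metric(accumulated: float, current_sample: float) -> float:
    # Exponentially weights history so the metric varies on a continuous scale.
    return PAST_WEIGHT * accumulated + CURRENT_WEIGHT * current_sample

# Example: SS load accumulated at 40 with a current sample of 60 yields 54.
print(blended_metric(40.0, 60.0))  # 54.0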
At block 420, a volume hosted by the slice service for which the QoS module is performing the latency calculation is identified. At block 430, a target IOPS for the volume is determined for the next IO processing interval (e.g., target IOPS interval 310). According to one embodiment, the target IOPS is based on one or both of a target SS load and a target fullness of block storage of the distributed storage system as described further below with reference to FIG. 5. At block 440, a target latency for IO operations to be processed during the next IO processing interval is determined. According to one embodiment, the target latency may be determined based on the target IOPS and the IO processing interval. For example, the target latency for IO operations for the volume during the next IO processing interval may be calculated by dividing the duration of the IO processing interval by the target number of IO operations to be processed during that interval. At decision block 450, a determination is made regarding whether another volume is to be processed. If so, processing loops back to block 420, where another volume hosted by the slice service is identified; otherwise, processing for the volumes hosted by the slice service is complete. According to one embodiment, all volumes or a subset of all volumes of a slice service may be processed. While, for simplicity, in the context of the present example, the periodic latency calculation is described with reference to a single slice service within a single storage node, it is to be appreciated that the periodic latency calculation may be performed for each slice service of each storage node of the distributed storage system. While in the context of the present example and in the context of subsequent flow diagrams, a number of enumerated blocks are included, it is to be understood that embodiments may include additional blocks before, after, and/or in between the enumerated blocks. Similarly, in some embodiments, one or more of the enumerated blocks may be omitted or performed in a different order. IOPS Push Back FIG. 5 is a graph illustrating IOPS push back 500 in accordance with an embodiment of the present disclosure. In one embodiment, the IOPS push back 500 is performed according to a percent loaded/full calculation (e.g., determined by a system metric monitoring module (e.g., system metric monitoring module 213) and/or by a QoS module (e.g., QoS module 215)). The percent loaded/full calculation may involve selecting a maximum target IOPS that satisfies both a target SS load 530 for a given slice service (e.g., slice service 220a) and a target fullness 540 relating to a target fullness of block storage of the distributed storage system as a whole. In various embodiments, the target SS load 530 generally represents a value on a scale of 0-100 indicative of a load at or below which it is desired to operate the given slice service. Similarly, the target fullness 540 may represent a value on a scale of 0-100 indicative of a fullness at or below which it is desired to maintain block storage of the distributed storage system. As shown in the present example, when the target SS load 530 is within a first range (e.g., between 0 and 37, inclusive), the storage OS (e.g., storage OS 210) does not throttle the volumes residing on the particular slice service. 
When the target SS load 530 is within a second range (e.g., between 38 and 59, inclusive), the storage OS may throttle multiple volumes (e.g., all of volumes 221a-x) residing on the particular slice service linearly from the maximum IOPS value 320 (e.g., 4,000 IOPS) to the minimum IOPS value 310 (e.g., 1,000 IOPS) based on the client QoS settings. If, instead, the target SS load 530 is within a third range (e.g., between 60 and 100, inclusive), the storage OS may throttle multiple volumes (e.g., all volumes 221a-x) residing on the particular slice service using an inverse exponential function towards 0. In the context of the present example, for its part, the target fullness 540 may further limit the target IOPS within a first range (e.g., between 0 and 50, inclusive), in which the storage OS linearly increases push back on the target IOPS as the target fullness 540 increases. As shown in the present example, the target fullness 540 operates as the limiting factor for target IOPS when the value of the target fullness 540 is in the first range between 0 and 50, inclusive, whereas the target SS load 530 operates as the limiting factor for target IOPS when the value of the target fullness 540 is in a second range between 51 and 100, inclusive. IO Operation Queuing and Latency Assignment FIG. 6 is a flow diagram illustrating a set of operations for performing IO operation queuing and latency assignment in accordance with an embodiment of the present disclosure. At decision block 610, a determination is made regarding whether an IO operation has been received. If so, processing continues with block 620; otherwise, processing loops back to decision block 610. As described further below, in some examples, multiple IO sizes may be supported (e.g., IO operations involving block sizes of 4 kilobytes (KB), 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, 512 KB, and/or 1,024 KB of data). As depicted in the performance curve 700 of FIG. 7, the cost of performing a single IO operation of different IO sizes may vary. For example, as indicated by the performance curve 700, the cost of performing an IO operation on an 8 KB block is less than performing two IO operations on 4 KB blocks. Similarly, the cost of performing an IO operation on a 16 KB block is less than performing four IO operations on 4 KB blocks. As can be seen in FIG. 7, the amount of cost savings begins to taper off at about IO operations having an IO size of 64 KB. At block 620, the IO operation is placed on an IO queue (e.g., queue 615) for completion based on the target latency for the volume at issue. Depending upon the particular implementation, IO operations may be assigned a queue latency equal to the target latency previously calculated (e.g., at block 440 of FIG. 4). In some embodiments, the queue latency assigned to an IO operation may be adjusted and/or normalized based on various factors. For example, the average/expected latency of the remaining IO path (i.e., the total path length) that the operation is expected to endure may be taken into consideration, for example, by subtracting the average/expected latency of the remaining IO path from the target latency. In this manner, just enough extra queue latency may be dynamically added to achieve a smooth distribution of IO operation processing from end to end. 
As a non-limiting example of taking into consideration the average/expected latency of the remaining IO path, consider a scenario in which 4K IO operations (i.e., IO operations of size 4 KB) have been taking ~4 ms to process, and the target latency for the current IO processing interval is 10 ms. In this example, a queue latency of 6 ms may be assigned to each 4K IO, since it is expected that each will experience another 4 ms of latency as it traverses the remaining IO path. The target latency may additionally or alternatively be normalized to account for different IO sizes, for example, by normalizing the average latency down to the smallest IO size supported by the distributed storage system. According to one embodiment, a normalization factor may be determined based on a particular performance curve (e.g., performance curve 700) indicative of the relative costs of performing IO operations of different sizes in the context of the particular distributed storage system. In one embodiment, the normalization factor may be equal to the IO operation cost as indicated by the particular performance curve. Using the performance curve 700 as a non-limiting example, the normalization factor for 4K IO operations may be 1, the normalization factor for 8K IO operations may be 0.8, the normalization factor for 16K IO operations may be 0.65, and so on. Assuming the use of a normalization factor alone, the queue latency assigned to a particular IO operation may be determined by multiplying the target latency by the normalization factor. After any adjustments and/or normalization have been applied, the resulting queue latency may then be associated with or assigned to the IO operation by including information indicative of an absolute time or a relative time within the queue entry. For example, the queue entry for a particular IO operation may include a time stamp indicative of the time the IO operation was received plus the latency, or the queue entry may include a countdown timer field initially set to the latency. Information identifying the object issuing the IO operation may also be stored within the queue entry to facilitate subsequent notification to the object when the IO operation is permitted to proceed. While in the context of the present example all IO operations are shown as being queued as a result of block 620, it is to be understood that IO operations may be assigned a queue latency of 0 ms, thereby causing such IO operations to move through the QoS system synchronously without being placed on the queue 615. For instance, in the context of the example described above in connection with block 620 relating to taking into consideration the average/expected latency, assume now that the IO path latencies have averaged ~15 ms rather than ~4 ms. In such a scenario, subtracting the IO path latency from the target latency (e.g., 10 ms) yields a negative number, which may result in assigning a queue latency of 0 ms to each 4K IO, effectively making all future IOs move through the QoS system synchronously (i.e., no queueing). When an IO operation is not placed on the queue 615, a token may be synchronously issued to the object that made the IO request to allow the object to complete the read or write operation via the slice service hosting the volume with which the read or write operation is associated. For those IO operations that are placed on the queue 615, they may be processed as described further below with reference to FIG. 8. 
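Pulling the pieces of FIG. 6 together, the following Python sketch illustrates one plausible shape of queue-latency assignment and release; the normalization table, the combination of the path-latency subtraction with the normalization factor, and the helper names are assumptions for illustration rather than the patent's implementation:

import time
from collections import deque

NORMALIZATION = {4096: 1.0, 8192: 0.8, 16384: 0.65}  # per a performance curve
io_queue = deque()  # entries are (release_time, io_op)

def queue_latency_s(target_latency_s, avg_path_latency_s, io_size_bytes):
    # Subtract expected remaining IO path latency, scale by the size-based
    # normalization factor, and clamp at zero (zero means no queuing).
    base = max(0.0, target_latency_s - avg_path_latency_s)
    return base * NORMALIZATION.get(io_size_bytes, 1.0)

def enqueue(io_op, target_latency_s, avg_path_latency_s, io_size_bytes):
    delay = queue_latency_s(target_latency_s, avg_path_latency_s, io_size_bytes)
    if delay == 0.0:
        issue_token(io_op)  # proceed synchronously, bypassing the queue
    else:
        io_queue.append((time.monotonic() + delay, io_op))

def monitor():
    # Release IO operations whose assigned latency has elapsed (cf. FIG. 8).
    while io_queue and io_queue[0][0] <= time.monotonic():
        _, io_op = io_queue.popleft()
        issue_token(io_op)  # asynchronously allow the IO to proceed

def issue_token(io_op):
    print("token issued for", io_op)

With the example values from the text (10 ms target latency, ~4 ms expected path latency), a 4K IO receives a 6 ms queue latency; with ~15 ms path latency, the clamped result is 0 ms and the IO proceeds synchronously.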
IO Queue Monitoring

FIG. 8 is a flow diagram illustrating a set of operations for monitoring an IO queue in accordance with an embodiment of the present disclosure. At decision block 810, a determination is made regarding whether an IO queue (e.g., queue 815) contains an IO operation. If so, processing continues with decision block 820; otherwise, processing loops back to decision block 810.

At decision block 820, it is determined whether the IO operation has been delayed sufficiently. If so, processing continues with block 830; otherwise, processing loops back to decision block 820. The IO operation may be queued with information indicative of a latency assigned to the IO operation. According to one embodiment, the assigned latency value for the IO operation may be represented as an absolute time or a relative time (e.g., a countdown timer) from the time at which the IO operation was received. In such situations, when the absolute time is within a predetermined or configurable threshold (e.g., 5 ms) of the current time, or when the countdown timer is within the predetermined or configurable threshold, the QoS module may determine the IO operation has been delayed sufficiently.

At block 830, the IO operation is processed. According to one embodiment, the IO operation is removed from the head of the queue and a token is asynchronously issued to the object that made the IO request to allow the object to complete the read or write operation via the slice service hosting the volume with which the read or write operation is associated.

While in the context of the examples of FIG. 6 and FIG. 8 all IO operations are described as being temporarily queued and tokens authorizing the IO operations are issued asynchronously after the IO operations have been sufficiently delayed, in alternative embodiments, short-cut processing may be performed in some circumstances. For example, prior to queuing IO operations onto an empty IO queue, they may be evaluated to determine whether they are within the predetermined or configurable threshold; if so, a token may be synchronously issued to the requesting object; otherwise, the IO operation may be queued as described above.

Example Computer System

Embodiments of the present disclosure include various steps, which have been described above. The steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a processing resource (e.g., a general-purpose or special-purpose processor) programmed with the instructions to perform the steps. Alternatively, depending upon the particular implementation, various steps may be performed by a combination of hardware, software, firmware and/or by human operators. Embodiments of the present disclosure may be provided as a computer program product, which may include a non-transitory machine-readable storage medium embodying thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process.
The machine-readable medium may include, but is not limited to, fixed (hard) drives, magnetic tape, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, and semiconductor memories, such as ROMs, random access memories (RAMs), programmable read-only memories (PROMs), erasable PROMs (EPROMs), electrically erasable PROMs (EEPROMs), flash memory, magnetic or optical cards, or other types of media/machine-readable media suitable for storing electronic instructions (e.g., computer programming code, such as software or firmware).

Various methods described herein may be practiced by combining one or more non-transitory machine-readable storage media containing the code according to embodiments of the present disclosure with appropriate special purpose or standard computer hardware to execute the code contained therein. An apparatus for practicing various embodiments of the present disclosure may involve one or more computers (e.g., physical and/or virtual servers) (or one or more processors within a single computer) and storage systems containing or having network access to computer program(s) coded in accordance with various methods described herein, and the method steps associated with embodiments of the present disclosure may be accomplished by modules, routines, subroutines, or subparts of a computer program product.

FIG. 9 is a block diagram that illustrates a computer system 900 in which or with which an embodiment of the present disclosure may be implemented. Computer system 900 may be representative of all or a portion of the computing resources associated with a storage node (e.g., storage node 136), a collector (e.g., collector 138), a monitoring system (e.g., monitoring system 122) or an administrative workstation or client (e.g., computer system 110). Notably, components of computer system 900 described herein are meant only to exemplify various possibilities. In no way should example computer system 900 limit the scope of the present disclosure.

In the context of the present example, computer system 900 includes a bus 902 or other communication mechanism for communicating information, and a processing resource (e.g., a hardware processor 904) coupled with bus 902 for processing information. Hardware processor 904 may be, for example, a general-purpose microprocessor.

Computer system 900 also includes a main memory 906, such as a random-access memory (RAM) or other dynamic storage device, coupled to bus 902 for storing information and instructions to be executed by processor 904. Main memory 906 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 904. Such instructions, when stored in non-transitory storage media accessible to processor 904, render computer system 900 into a special-purpose machine that is customized to perform the operations specified in the instructions.

Computer system 900 further includes a read-only memory (ROM) 908 or other static storage device coupled to bus 902 for storing static information and instructions for processor 904. A storage device 910, e.g., a magnetic disk, optical disk or flash disk (made of flash memory chips), is provided and coupled to bus 902 for storing information and instructions.
Computer system 900 may be coupled via bus 902 to a display 912, e.g., a cathode ray tube (CRT), Liquid Crystal Display (LCD), Organic Light-Emitting Diode Display (OLED), Digital Light Processing Display (DLP) or the like, for displaying information to a computer user. An input device 914, including alphanumeric and other keys, is coupled to bus 902 for communicating information and command selections to processor 904. Another type of user input device is cursor control 916, such as a mouse, a trackball, a trackpad, or cursor direction keys for communicating direction information and command selections to processor 904 and for controlling cursor movement on display 912. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.

Removable storage media 940 can be any kind of external storage media, including, but not limited to, hard-drives, floppy drives, IOMEGA® Zip Drives, Compact Disc Read-Only Memory (CD-ROM), Compact Disc Re-Writable (CD-RW), Digital Video Disk Read-Only Memory (DVD-ROM), USB flash drives and the like.

Computer system 900 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware or program logic which, in combination with the computer system, causes or programs computer system 900 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 900 in response to processor 904 executing one or more sequences of one or more instructions contained in main memory 906. Such instructions may be read into main memory 906 from another storage medium, such as storage device 910. Execution of the sequences of instructions contained in main memory 906 causes processor 904 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.

The term “storage media” as used herein refers to any non-transitory media that store data or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media or volatile media. Non-volatile media includes, for example, optical, magnetic or flash disks, such as storage device 910. Volatile media includes dynamic memory, such as main memory 906. Common forms of storage media include, for example, a flexible disk, a hard disk, a solid-state drive, a magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, or any other memory chip or cartridge.

Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 902. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.

Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor 904 for execution. For example, the instructions may initially be carried on a magnetic disk or solid-state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem.
A modem local to computer system 900 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 902. Bus 902 carries the data to main memory 906, from which processor 904 retrieves and executes the instructions. The instructions received by main memory 906 may optionally be stored on storage device 910 either before or after execution by processor 904.

Computer system 900 also includes a communication interface 918 coupled to bus 902. Communication interface 918 provides a two-way data communication coupling to a network link 920 that is connected to a local network 922. For example, communication interface 918 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 918 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface 918 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.

Network link 920 typically provides data communication through one or more networks to other data devices. For example, network link 920 may provide a connection through local network 922 to a host computer 924 or to data equipment operated by an Internet Service Provider (ISP) 926. ISP 926 in turn provides data communication services through the worldwide packet data communication network now commonly referred to as the “Internet” 928. Local network 922 and Internet 928 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 920 and through communication interface 918, which carry the digital data to and from computer system 900, are example forms of transmission media.

Computer system 900 can send messages and receive data, including program code, through the network(s), network link 920 and communication interface 918. In the Internet example, a server 930 might transmit a requested code for an application program through Internet 928, ISP 926, local network 922 and communication interface 918. The received code may be executed by processor 904 as it is received, or stored in storage device 910, or other non-volatile storage for later execution.
DETAILED DESCRIPTION

In some memory systems, a host device may transmit a verify command to a memory device. The verify command may indicate a set of data stored at the memory device and a request to check a reliability or validity of the set of data. The memory device may read the set of data based on or in response to the verify command to determine whether the set of data is valid or not. In some examples, the memory device may read a logic value corresponding to the data and compare the logic value with a threshold value to determine whether the set of data is valid. The memory device may transmit a verify command status response to the host device indicating whether the set of data is valid. If the set of data is not valid, the host device may issue read and write commands to re-write the set of data.

In some examples, the host device may set an enable early recovery (EER) bit in a verify error recovery mode page corresponding to the verify command. The memory device may perform an error recovery procedure according to a duration that is based on or in response to a value of the EER bit. The host device may set the EER bit based on or in response to a target level of reliability for the set of data. In some examples, however, the EER bit may not provide sufficient granularity for the host device to accurately indicate the target level of reliability. Moreover, the EER bit may indicate a duration of an error handling operation, but the EER bit may not correspond to the target level of reliability for the set of data. Additionally or alternatively, setting the EER bit and performing the error handling operation may result in increased latency and relatively high power consumption.

Systems, devices, and techniques are described to provide for the host device to indicate one or more target levels of reliability and one or more corresponding error management operations via the verify command, which may provide for finer granularity and reduced latency and power consumption. The host device may determine a target level of reliability for a set of data based on or in response to one or more commands to be executed for the set of data, a mode of operation corresponding to the set of data, physical stressors the set of data may experience, or the like. In some examples, the target level of reliability may be selected from a set of possible target levels of reliability. A bit, a bit field, or an attribute associated with the verify command may be configured to indicate the target level of reliability. The bit, the bit field, or the attribute may indicate one or more error management operations associated with the target level of reliability, or the memory device may be configured with a set of error management operations corresponding to each target level of reliability. The memory device may read and verify the set of data and perform the indicated error management operations based on or in response to the verify command and a current level of reliability of the data. The error management operations may include an error correction operation according to a reduced error correction capability, a read margin adjustment operation, a refresh operation, an error handling operation, or any combination thereof, in accordance with the target level of reliability.
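One way to picture the association just described between target reliability levels and error management operations is as a lookup table configured at the memory device. The Python sketch below is purely illustrative: the level values, their count, and the operation names are assumptions for illustration, not an encoding defined by this disclosure.

    # Hypothetical association between target reliability levels and the
    # error management operations performed during a verify; the level
    # values and operation names below are illustrative assumptions only.
    ERROR_MANAGEMENT_BY_LEVEL = {
        0: (),                                     # validity check only
        1: ("reduced_capability_ecc",),
        2: ("reduced_capability_ecc", "read_margin_adjust"),
        3: ("reduced_capability_ecc", "read_margin_adjust",
            "refresh", "error_handling"),
    }

    def operations_for(target_level):
        # A memory device configured with such a table could select its
        # error management operations directly from the indicated level.
        return ERROR_MANAGEMENT_BY_LEVEL.get(target_level, ())

    print(operations_for(2))   # ('reduced_capability_ecc', 'read_margin_adjust')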
Indicating a target level of reliability and corresponding error management operations via a verify command as described herein may provide for reduced power consumption and reduced latency, thereby improving error management procedures and a reliability of data stored at the memory device.

Features of the disclosure are initially described in the context of systems, devices, and circuits with reference to FIGS. 1 through 3. Features of the disclosure are described in the context of a flow diagram with reference to FIG. 4. These and other features of the disclosure are further illustrated by and described in the context of apparatus diagrams and flowcharts that relate to configurable verify level with reference to FIGS. 5 through 8.

FIG. 1 illustrates an example of a system 100 that supports a configurable verify level in accordance with examples as disclosed herein. The system 100 includes a host system 105 coupled with a memory system 110. A memory system 110 may be or include any device or collection of devices, where the device or collection of devices includes at least one memory array. For example, a memory system 110 may be or include a Universal Flash Storage (UFS) device, an embedded Multi-Media Controller (eMMC) device, a flash device, a universal serial bus (USB) flash device, a secure digital (SD) card, a solid-state drive (SSD), a hard disk drive (HDD), a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), or a non-volatile DIMM (NVDIMM), among other possibilities. The system 100 may be included in a computing device such as a desktop computer, a laptop computer, a network server, a mobile device, a vehicle (e.g., airplane, drone, train, automobile, or other conveyance), an Internet of Things (IoT) enabled device, an embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or any other computing device that includes memory and a processing device.

The system 100 may include a host system 105, which may be coupled with the memory system 110. In some examples, this coupling may include an interface with a host system controller 106, which may be an example of a controller or control component configured to cause the host system 105 to perform various operations in accordance with examples as described herein. The host system 105 may include one or more devices, and in some cases may include a processor chipset and a software stack executed by the processor chipset. For example, the host system 105 may include an application configured for communicating with the memory system 110 or a device therein. The processor chipset may include one or more cores, one or more caches (e.g., memory local to or included in the host system 105), a memory controller (e.g., NVDIMM controller), and a storage protocol controller (e.g., peripheral component interconnect express (PCIe) controller, serial advanced technology attachment (SATA) controller). The host system 105 may use the memory system 110, for example, to write data to the memory system 110 and read data from the memory system 110. Although one memory system 110 is shown in FIG. 1, the host system 105 may be coupled with any quantity of memory systems 110. The host system 105 may be coupled with the memory system 110 via at least one physical host interface.
The host system 105 and the memory system 110 may in some cases be configured to communicate via a physical host interface using an associated protocol (e.g., to exchange or otherwise communicate control, address, data, and other signals between the memory system 110 and the host system 105). Examples of a physical host interface may include, but are not limited to, a SATA interface, a UFS interface, an eMMC interface, a PCIe interface, a USB interface, a Fiber Channel interface, a Small Computer System Interface (SCSI), a Serial Attached SCSI (SAS), a Double Data Rate (DDR) interface, a DIMM interface (e.g., DIMM socket interface that supports DDR), an Open NAND Flash Interface (ONFI), and a Low Power Double Data Rate (LPDDR) interface. In some examples, one or more such interfaces may be included in or otherwise supported between a host system controller 106 of the host system 105 and a memory system controller 115 of the memory system 110. In some examples, the host system 105 may be coupled with the memory system 110 (e.g., the host system controller 106 may be coupled with the memory system controller 115) via a respective physical host interface for each memory device 130 included in the memory system 110, or via a respective physical host interface for each type of memory device 130 included in the memory system 110.

The memory system 110 may include a memory system controller 115 and one or more memory devices 130. A memory device 130 may include one or more memory arrays of any type of memory cells (e.g., non-volatile memory cells, volatile memory cells, or any combination thereof). Although two memory devices 130-a and 130-b are shown in the example of FIG. 1, the memory system 110 may include any quantity of memory devices 130. Further, if the memory system 110 includes more than one memory device 130, different memory devices 130 within the memory system 110 may include the same or different types of memory cells.

The memory system controller 115 may be coupled with and communicate with the host system 105 (e.g., via the physical host interface) and may be an example of a controller or control component configured to cause the memory system 110 to perform various operations in accordance with examples as described herein. The memory system controller 115 may also be coupled with and communicate with memory devices 130 to perform operations such as reading data, writing data, erasing data, or refreshing data at a memory device 130, among other such operations, which may generically be referred to as access operations. In some cases, the memory system controller 115 may receive commands from the host system 105 and communicate with one or more memory devices 130 to execute such commands (e.g., at memory arrays within the one or more memory devices 130). For example, the memory system controller 115 may receive commands or operations from the host system 105 and may convert the commands or operations into instructions or appropriate commands to achieve the desired access of the memory devices 130. In some cases, the memory system controller 115 may exchange data with the host system 105 and with one or more memory devices 130 (e.g., in response to or otherwise in association with commands from the host system 105). For example, the memory system controller 115 may convert responses (e.g., data packets or other signals) associated with the memory devices 130 into corresponding signals for the host system 105. The memory system controller 115 may be configured for other operations associated with the memory devices 130.
For example, the memory system controller 115 may execute or manage operations such as wear-leveling operations, garbage collection operations, error control operations such as error-detecting operations or error-correcting operations, encryption operations, caching operations, media management operations, background refresh, health monitoring, and address translations between logical addresses (e.g., logical block addresses (LBAs)) associated with commands from the host system 105 and physical addresses (e.g., physical block addresses) associated with memory cells within the memory devices 130.

The memory system controller 115 may include hardware such as one or more integrated circuits or discrete components, a buffer memory, or a combination thereof. The hardware may include circuitry with dedicated (e.g., hard-coded) logic to perform the operations ascribed herein to the memory system controller 115. The memory system controller 115 may be or include a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), a digital signal processor (DSP)), or any other suitable processor or processing circuitry.

The memory system controller 115 may also include a local memory 120. In some cases, the local memory 120 may include read-only memory (ROM) or other memory that may store operating code (e.g., executable instructions) executable by the memory system controller 115 to perform functions ascribed herein to the memory system controller 115. In some cases, the local memory 120 may additionally or alternatively include static random access memory (SRAM) or other memory that may be used by the memory system controller 115 for internal storage or calculations, for example, related to the functions ascribed herein to the memory system controller 115. Additionally or alternatively, the local memory 120 may serve as a cache for the memory system controller 115. For example, data may be stored in the local memory 120 if read from or written to a memory device 130, and the data may be available within the local memory 120 for subsequent retrieval for or manipulation (e.g., updating) by the host system 105 (e.g., with reduced latency relative to a memory device 130) in accordance with a cache policy.

Although the example of the memory system 110 in FIG. 1 has been illustrated as including the memory system controller 115, in some cases, a memory system 110 may not include a memory system controller 115. For example, the memory system 110 may additionally or alternatively rely upon an external controller (e.g., implemented by the host system 105) or one or more local controllers 135, which may be internal to memory devices 130, respectively, to perform the functions ascribed herein to the memory system controller 115. In general, one or more functions ascribed herein to the memory system controller 115 may in some cases instead be performed by the host system 105, a local controller 135, or any combination thereof. In some cases, a memory device 130 that is managed at least in part by a memory system controller 115 may be referred to as a managed memory device. An example of a managed memory device is a managed NAND (MNAND) device.

A memory device 130 may include one or more arrays of non-volatile memory cells.
For example, a memory device 130 may include NAND (e.g., NAND flash) memory, ROM, phase change memory (PCM), self-selecting memory, other chalcogenide-based memories, ferroelectric random access memory (RAM) (FeRAM), magneto RAM (MRAM), NOR (e.g., NOR flash) memory, Spin Transfer Torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), electrically erasable programmable ROM (EEPROM), or any combination thereof. Additionally or alternatively, a memory device 130 may include one or more arrays of volatile memory cells. For example, a memory device 130 may include RAM memory cells, such as dynamic RAM (DRAM) memory cells and synchronous DRAM (SDRAM) memory cells.

In some examples, a memory device 130 may include (e.g., on a same die or within a same package) a local controller 135, which may execute operations on one or more memory cells of the respective memory device 130. A local controller 135 may operate in conjunction with a memory system controller 115 or may perform one or more functions ascribed herein to the memory system controller 115. For example, as illustrated in FIG. 1, a memory device 130-a may include a local controller 135-a and a memory device 130-b may include a local controller 135-b.

In some cases, a memory device 130 may be or include a NAND device (e.g., NAND flash device). A memory device 130 may be or include a memory die 160. For example, in some cases, a memory device 130 may be a package that includes one or more dies 160. A die 160 may, in some examples, be a piece of electronics-grade semiconductor cut from a wafer (e.g., a silicon die cut from a silicon wafer). Each die 160 may include one or more planes 165, and each plane 165 may include a respective set of blocks 170, where each block 170 may include a respective set of pages 175, and each page 175 may include a set of memory cells.

In some cases, a NAND memory device 130 may include memory cells configured to each store one bit of information, which may be referred to as single level cells (SLCs). Additionally or alternatively, a NAND memory device 130 may include memory cells configured to each store multiple bits of information, which may be referred to as multi-level cells (MLCs) if configured to each store two bits of information, as tri-level cells (TLCs) if configured to each store three bits of information, as quad-level cells (QLCs) if configured to each store four bits of information, or more generically as multiple-level memory cells. Multiple-level memory cells may provide greater density of storage relative to SLC memory cells but may, in some cases, involve narrower read or write margins or greater complexities for supporting circuitry.

In some cases, planes 165 may refer to groups of blocks 170, and in some cases, concurrent operations may take place within different planes 165. For example, concurrent operations may be performed on memory cells within different blocks 170 so long as the different blocks 170 are in different planes 165. In some cases, an individual block 170 may be referred to as a physical block, and a virtual block 180 may refer to a group of blocks 170 within which concurrent operations may occur. For example, concurrent operations may be performed on blocks 170-a, 170-b, 170-c, and 170-d that are within planes 165-a, 165-b, 165-c, and 165-d, respectively, and blocks 170-a, 170-b, 170-c, and 170-d may be collectively referred to as a virtual block 180.
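The die/plane/block/page organization and the virtual-block grouping just described can be pictured with a toy data model. The sketch below is illustrative only: the counts of planes, blocks, and pages are arbitrary, and virtual_block() is a hypothetical helper name, not part of any actual device interface.

    from dataclasses import dataclass, field
    from typing import List

    # Minimal model of the hierarchy described above (sizes are illustrative):
    # a die contains planes, a plane contains blocks, a block contains pages.

    @dataclass
    class Page:
        data: bytes = b""

    @dataclass
    class Block:
        pages: List[Page] = field(default_factory=lambda: [Page() for _ in range(64)])

    @dataclass
    class Plane:
        blocks: List[Block] = field(default_factory=lambda: [Block() for _ in range(4)])

    @dataclass
    class Die:
        planes: List[Plane] = field(default_factory=lambda: [Plane() for _ in range(4)])

    def virtual_block(die: Die, block_addr: int) -> List[Block]:
        # A "virtual block" groups the same-numbered block from each plane,
        # so concurrent operations can target one block per plane.
        return [plane.blocks[block_addr] for plane in die.planes]

    print(len(virtual_block(Die(), 0)))   # one block per plane, here 4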
In some cases, a virtual block may include blocks 170 from different memory devices 130 (e.g., including blocks in one or more planes of memory device 130-a and memory device 130-b). In some cases, the blocks 170 within a virtual block may have the same block address within their respective planes 165 (e.g., block 170-a may be “block 0” of plane 165-a, block 170-b may be “block 0” of plane 165-b, and so on). In some cases, performing concurrent operations in different planes 165 may be subject to one or more restrictions, such as concurrent operations being performed on memory cells within different pages 175 that have the same page address within their respective planes 165 (e.g., related to command decoding, page address decoding circuitry, or other circuitry being shared across planes 165).

In some cases, a block 170 may include memory cells organized into rows (pages 175) and columns (e.g., strings, not shown). For example, memory cells in a same page 175 may share (e.g., be coupled with) a common word line, and memory cells in a same string may share (e.g., be coupled with) a common digit line (which may alternatively be referred to as a bit line).

For some NAND architectures, memory cells may be read and programmed (e.g., written) at a first level of granularity (e.g., at the page level of granularity) but may be erased at a second level of granularity (e.g., at the block level of granularity). That is, a page 175 may be the smallest unit of memory (e.g., set of memory cells) that may be independently programmed or read (e.g., programmed or read concurrently as part of a single program or read operation), and a block 170 may be the smallest unit of memory (e.g., set of memory cells) that may be independently erased (e.g., erased concurrently as part of a single erase operation). Further, in some cases, NAND memory cells may be erased before they can be re-written with new data. Thus, for example, a used page 175 may in some cases not be updated until the entire block 170 that includes the page 175 has been erased.

In some cases, a memory system controller 115 or a local controller 135 may perform operations (e.g., as part of one or more media management algorithms) for a memory device 130, such as wear leveling, background refresh, garbage collection, scrub, block scans, health monitoring, or others, or any combination thereof. For example, within a memory device 130, a block 170 may have some pages 175 containing valid data and some pages 175 containing invalid data. To avoid waiting for all of the pages 175 in the block 170 to have invalid data in order to erase and reuse the block 170, an algorithm referred to as “garbage collection” may be invoked to allow the block 170 to be erased and released as a free block for subsequent write operations. Garbage collection may refer to a set of media management operations that include, for example, selecting a block 170 that contains valid and invalid data, selecting pages 175 in the block that contain valid data, copying the valid data from the selected pages 175 to new locations (e.g., free pages 175 in another block 170), marking the data in the previously selected pages 175 as invalid, and erasing the selected block 170, as sketched below. As a result, the quantity of blocks 170 that have been erased may be increased such that more blocks 170 are available to store subsequent data (e.g., data subsequently received from the host system 105).
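The following is a minimal Python sketch of those garbage-collection steps, assuming a toy representation in which a block is a dict holding a list of pages, each flagged valid or invalid; it illustrates the sequence described above, not any particular device's implementation.

    def garbage_collect(blocks, free_block):
        # Select a block that contains both valid and invalid data.
        victim = next((b for b in blocks
                       if any(p["valid"] for p in b["pages"])
                       and any(not p["valid"] for p in b["pages"])), None)
        if victim is None:
            return None                      # no block mixes valid and invalid data
        for page in victim["pages"]:
            if page["valid"]:
                # Copy valid data to a free page in another block, then
                # mark the original page invalid.
                free_block["pages"].append({"valid": True, "data": page["data"]})
                page["valid"] = False
        victim["pages"] = []                 # erase: the victim becomes a free block
        return victim

    # Example: one block holding a mix of valid and invalid pages.
    blocks = [{"pages": [{"valid": True, "data": "a"}, {"valid": False, "data": "b"}]}]
    free = {"pages": []}
    garbage_collect(blocks, free)
    print(free["pages"])                     # [{'valid': True, 'data': 'a'}]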
The system 100 may include any quantity of non-transitory computer readable media that support configurable verify level. For example, the host system 105, the memory system controller 115, or a memory device 130 may include or otherwise may access one or more non-transitory computer readable media storing instructions (e.g., firmware) for performing the functions ascribed herein to the host system 105, memory system controller 115, or memory device 130. For example, such instructions, if executed by the host system 105 (e.g., by the host system controller 106), by the memory system controller 115, or by a memory device 130 (e.g., by a local controller 135), may cause the host system 105, memory system controller 115, or memory device 130 to perform one or more associated functions as described herein.

In some cases, a memory system 110 may utilize a memory system controller 115 to provide a managed memory system that may include, for example, one or more memory arrays and related circuitry combined with a local (e.g., on-die or in-package) controller (e.g., local controller 135). An example of a managed memory system is a managed NAND (MNAND) system.

In some cases, the host system 105 (e.g., a host system controller 106) may transmit a verify command to the memory system 110. The verify command may indicate a set of data stored at a memory device 130 and a request to check a reliability or validity of the set of data. The memory system 110 (e.g., the memory system controller 115, a local controller 135 associated with the memory device 130, or some other component within the memory system 110) may read the set of data based on or in response to the verify command to determine whether the set of data is valid or not. In some examples, the memory system 110 may read a logic value corresponding to the data and compare the logic value with a threshold value to determine whether the set of data is valid. The memory system 110 may transmit a verify command status response to the host system 105 indicating whether the set of data is valid. If the set of data is not valid, the host system 105 may issue a read and write operation to re-write the set of data.

Additionally or alternatively, the host system 105 may set an EER bit in a verify error recovery mode page corresponding to the verify command. The memory system 110 may perform an error recovery procedure, and a duration of the error recovery procedure may be based on or in response to a value of the EER bit. Setting the EER bit and performing the error recovery procedures may, however, result in relatively high power consumption and latency associated with performing a verify operation. Moreover, the EER bit may indicate a duration of an error handling operation, but the EER bit may not correspond to maintaining or ensuring a target level of reliability for the set of data.

Systems, devices, and techniques are described to provide for the host system 105 to indicate one or more target levels of reliability and one or more corresponding error management operations via the verify command. The host system 105 may determine a target level of reliability for a set of data based on or in response to one or more commands to be executed for the set of data, a mode of operation corresponding to the set of data, physical stressors the set of data may experience, or the like. In some examples, the target level of reliability may be selected from a set of possible target levels of reliability. A bit, a bit field, or an attribute associated with the verify command may be configured to indicate the target level of reliability.
The bit, the bit field, or the attribute may indicate one or more error management operations associated with the target level of reliability, or the memory system 110 may be configured with a set of error management operations corresponding to each target level of reliability. The memory system 110 may read and verify the set of data stored by the memory device 130. The memory system 110 may perform the indicated error management operations based on or in response to the verify command and a current level of reliability of the data. The error management operations may include an error correction operation according to a reduced error correction capability, a read margin adjustment operation, a refresh operation, an error handling operation, or any combination thereof, in accordance with the target level of reliability. Indicating a target level of reliability and corresponding error management operations via a verify command, as described herein, may provide for reduced power consumption and reduced latency, thereby improving error management procedures and a reliability of data stored at the memory device 130.

FIG. 2 illustrates an example of a system 200 that supports a configurable verify level in accordance with examples as disclosed herein. The system 200 may be an example of a system 100 as described with reference to FIG. 1 or aspects thereof. The system 200 may include a memory system 210 configured to store data received from the host system 205 and to send data to the host system 205, if requested by the host system 205 using access commands (e.g., read commands or write commands). The system 200 may implement aspects of the system 100 as described with reference to FIG. 1. For example, the memory system 210 and the host system 205 may be examples of the memory system 110 and the host system 105, respectively.

The memory system 210 may include memory devices 240 to store data transferred between the memory system 210 and the host system 205, e.g., in response to receiving access commands from the host system 205, as described herein. The memory devices 240 may include one or more memory devices as described with reference to FIG. 1. For example, the memory devices 240 may include NAND memory, PCM, self-selecting memory, 3D cross point, other chalcogenide-based memories, FeRAM, MRAM, NOR (e.g., NOR flash) memory, STT-MRAM, CBRAM, RRAM, or OxRAM.

The memory system 210 may include a storage controller 230 for controlling the passing of data directly to and from the memory devices 240, e.g., for storing data, retrieving data, and determining memory locations in which to store data and from which to retrieve data. The storage controller 230 may communicate with memory devices 240 directly or via a bus (not shown) using a protocol specific to each type of memory device 240. In some cases, a single storage controller 230 may be used to control multiple memory devices 240 of the same or different types. In some cases, the memory system 210 may include multiple storage controllers 230, e.g., a different storage controller 230 for each type of memory device 240. In some cases, a storage controller 230 may implement aspects of a local controller 135 as described with reference to FIG. 1.

The memory system 210 may additionally include an interface 220 for communication with the host system 205 and a buffer 225 for temporary storage of data being transferred between the host system 205 and the memory devices 240.
The interface 220, buffer 225, and storage controller 230 may be for translating data between the host system 205 and the memory devices 240, e.g., as shown by a data path 250, and may be collectively referred to as data path components.

Using the buffer 225 to temporarily store data during transfers may allow data to be buffered as commands are being processed, thereby reducing latency between commands and allowing arbitrary data sizes associated with commands. This may also allow bursts of commands to be handled, and the buffered data may be stored or transmitted (or both) once a burst has stopped. The buffer 225 may include relatively fast memory (e.g., some types of volatile memory, such as SRAM or DRAM) or hardware accelerators or both to allow fast storage and retrieval of data to and from the buffer 225. The buffer 225 may include data path switching components for bi-directional data transfer between the buffer 225 and other components.

The temporary storage of data within a buffer 225 may refer to the storage of data in the buffer 225 during the execution of access commands. That is, upon completion of an access command, the associated data may no longer be maintained in the buffer 225 (e.g., may be overwritten with data for additional access commands). In addition, the buffer 225 may be a non-cache buffer. That is, data may not be read directly from the buffer 225 by the host system 205. For example, read commands may be added to a queue without an operation to match the address to addresses already in the buffer 225 (e.g., without a cache address match or lookup operation).

The memory system 210 may additionally include a memory system controller 215 for executing the commands received from the host system 205 and controlling the data path components in the moving of the data. The memory system controller 215 may be an example of the memory system controller 115 as described with reference to FIG. 1. A bus 235 may be used to communicate between the system components.

In some cases, one or more queues (e.g., a command queue 260, a buffer queue 265, and a storage queue 270) may be used to control the processing of the access commands and the movement of the corresponding data. This may be beneficial, e.g., if more than one access command from the host system 205 is processed concurrently by the memory system 210. The command queue 260, buffer queue 265, and storage queue 270 are depicted at the interface 220, memory system controller 215, and storage controller 230, respectively, as examples of a possible implementation. However, queues, if used, may be positioned anywhere within the memory system 210.

Data transferred between the host system 205 and the memory devices 240 may take a different path in the memory system 210 than non-data information (e.g., commands, status information). For example, the system components in the memory system 210 may communicate with each other using a bus 235, while the data may use the data path 250 through the data path components instead of the bus 235. The memory system controller 215 may control how and if data is transferred between the host system 205 and the memory devices 240 by communicating with the data path components over the bus 235 (e.g., using a protocol specific to the memory system 210).

If a host system 205 transmits access commands to the memory system 210, the commands may be received by the interface 220, e.g., according to a protocol (e.g., a UFS protocol or an eMMC protocol). Thus, the interface 220 may be considered a front end of the memory system 210.
Upon receipt of each access command, the interface 220 may communicate the command to the memory system controller 215, e.g., via the bus 235. In some cases, each command may be added to a command queue 260 by the interface 220 to communicate the command to the memory system controller 215.

The memory system controller 215 may determine whether an access command has been received based on or in response to the communication from the interface 220. In some cases, the memory system controller 215 may determine that the access command has been received by retrieving the command from the command queue 260. The command may be removed from the command queue 260 after it has been retrieved therefrom, e.g., by the memory system controller 215. In some cases, the memory system controller 215 may cause the interface 220, e.g., via the bus 235, to remove the command from the command queue 260.

Upon the determination that an access command has been received, the memory system controller 215 may execute the access command. For a read command, this may mean obtaining data from the memory devices 240 and transmitting the data to the host system 205. For a write command, this may mean receiving data from the host system 205 and moving the data to the memory devices 240. In either case, the memory system controller 215 may use the buffer 225 for, among other things, temporary storage of the data being received from or sent to the host system 205. The buffer 225 may be considered a middle end of the memory system 210. In some cases, buffer address management (e.g., pointers to address locations in the buffer 225) may be performed by hardware (e.g., dedicated circuits) in the interface 220, buffer 225, or storage controller 230.

To process a write command received from the host system 205, the memory system controller 215 may first determine whether the buffer 225 has sufficient available space to store the data associated with the command. For example, the memory system controller 215 may determine, e.g., via firmware (e.g., controller firmware), an amount of space within the buffer 225 that may be available to store data associated with the write command.

In some cases, a buffer queue 265 may be used to control a flow of commands associated with data stored in the buffer 225, including write commands. The buffer queue 265 may include the access commands associated with data currently stored in the buffer 225. In some cases, the commands in the command queue 260 may be moved to the buffer queue 265 by the memory system controller 215 and may remain in the buffer queue 265 while the associated data is stored in the buffer 225. In some cases, each command in the buffer queue 265 may be associated with an address at the buffer 225. That is, pointers may be maintained that indicate where in the buffer 225 the data associated with each command is stored. Using the buffer queue 265, multiple access commands may be received sequentially from the host system 205 and at least portions of the access commands may be processed concurrently.

If the buffer 225 has sufficient space to store the write data, the memory system controller 215 may cause the interface 220 to transmit an indication of availability to the host system 205 (e.g., a “ready to transfer” indication), e.g., according to a protocol (e.g., a UFS protocol or an eMMC protocol). As the interface 220 subsequently receives from the host system 205 the data associated with the write command, the interface 220 may transfer the data to the buffer 225 for temporary storage using the data path 250.
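A minimal Python sketch of that write-admission flow follows, assuming a toy buffer of byte slots and a buffer-queue entry that pairs a command with the buffer address of its staged data; all names and the status strings are illustrative, not the actual design or protocol vocabulary.

    from dataclasses import dataclass, field

    @dataclass
    class BufferQueueEntry:
        # Pairs a command with the buffer address where its data is (or
        # will be) staged, mirroring the pointer bookkeeping described above.
        command: dict
        buffer_addr: int

    @dataclass
    class Buffer:
        capacity: int
        slots: dict = field(default_factory=dict)   # buffer_addr -> bytearray

        def available(self):
            return self.capacity - sum(len(b) for b in self.slots.values())

    def admit_write(cmd, buf, buffer_queue):
        # Check space, reserve a slot, record the command in the buffer
        # queue, and tell the host it may transfer the write data.
        if buf.available() < cmd["length"]:
            return "BUSY"                            # insufficient space; retry later
        addr = max(buf.slots, default=-1) + 1        # next free slot address
        buf.slots[addr] = bytearray(cmd["length"])   # reserve space for the data
        buffer_queue.append(BufferQueueEntry(cmd, addr))
        return "READY_TO_TRANSFER"

    # Example: a 4 KB write admitted into an 8 KB buffer.
    buf, bq = Buffer(capacity=8192), []
    print(admit_write({"op": "write", "length": 4096}, buf, bq))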
In some cases, the interface 220 may obtain from the buffer 225 or buffer queue 265 the location within the buffer 225 to store the data. The interface 220 may indicate to the memory system controller 215, e.g., via the bus 235, if the data transfer to the buffer 225 has been completed.

Once the write data has been stored in the buffer 225 by the interface 220, the data may be transferred out of the buffer 225 and stored in a memory device 240. This may be done using the storage controller 230. For example, the memory system controller 215 may cause the storage controller 230 to retrieve the data out of the buffer 225 using the data path 250 and transfer the data to a memory device 240. The storage controller 230 may be considered a back end of the memory system 210. The storage controller 230 may indicate to the memory system controller 215, e.g., via the bus 235, that the data transfer to a memory device of the memory devices 240 has been completed.

In some cases, a storage queue 270 may be used to aid with the transfer of write data. For example, the memory system controller 215 may push (e.g., via the bus 235) write commands from the buffer queue 265 to the storage queue 270 for processing. The storage queue 270 may include entries for each access command. In some examples, the storage queue 270 may additionally include a buffer pointer (e.g., an address) that may indicate where in the buffer 225 the data associated with the command is stored and a storage pointer (e.g., an address) that may indicate the location in the memory devices 240 associated with the data. In some cases, the storage controller 230 may obtain from the buffer 225, buffer queue 265, or storage queue 270 the location within the buffer 225 from which to obtain the data. The storage controller 230 may manage the locations within the memory devices 240 to store the data (e.g., performing wear-leveling, garbage collection, and the like). The entries may be added to the storage queue 270, e.g., by the memory system controller 215. The entries may be removed from the storage queue 270, e.g., by the storage controller 230 or memory system controller 215 upon completion of the transfer of the data.

To process a read command received from the host system 205, the memory system controller 215 may again first determine whether the buffer 225 has sufficient available space to store the data associated with the command. For example, the memory system controller 215 may determine, e.g., via firmware (e.g., controller firmware), an amount of space within the buffer 225 that may be available to store data associated with the read command.

In some cases, the buffer queue 265 may be used to aid with buffer storage of data associated with read commands in a similar manner as discussed above with respect to write commands. For example, if the buffer 225 has sufficient space to store the read data, the memory system controller 215 may cause the storage controller 230 to retrieve the data associated with the read command from a memory device 240 and store the data in the buffer 225 for temporary storage using the data path 250. The storage controller 230 may indicate to the memory system controller 215, e.g., via the bus 235, in response to completion of the data transfer to the buffer 225. In some cases, the storage queue 270 may be used to aid with the transfer of read data. For example, the memory system controller 215 may push the read command to the storage queue 270 for processing.
In some cases, the storage controller 230 may obtain from the buffer 225 or storage queue 270 the location within the memory devices 240 from which to retrieve the data. In some cases, the storage controller 230 may obtain from the buffer queue 265 the location within the buffer 225 to store the data. In some cases, the storage controller 230 may obtain from the storage queue 270 the location within the buffer 225 to store the data. In some cases, the memory system controller 215 may move the command processed by the storage queue 270 back to the command queue 260.

Once the data has been stored in the buffer 225 by the storage controller 230, the data may be transferred out of the buffer 225 and sent to the host system 205. For example, the memory system controller 215 may cause the interface 220 to retrieve the data out of the buffer 225 using the data path 250 and transmit the data to the host system 205, e.g., according to a protocol (e.g., a UFS protocol or an eMMC protocol). For example, the interface 220 may process the command from the command queue 260 and may indicate to the memory system controller 215, e.g., via the bus 235, that the data transmission to the host system 205 has been completed.

The memory system controller 215 may execute received commands according to an order (e.g., a first-in, first-out order, according to the order of the command queue 260). For each command, the memory system controller 215 may cause data corresponding to the command to be moved into and out of the buffer 225, as discussed above. As the data is moved into and stored within the buffer 225, the command may remain in the buffer queue 265. A command may be removed from the buffer queue 265, e.g., by the memory system controller 215, if the processing of the command has been completed (e.g., if data corresponding to the access command has been transferred out of the buffer 225). If a command is removed from the buffer queue 265, the address previously storing the data associated with that command may be available to store data associated with a new command.

The memory system controller 215 may additionally be configured for operations associated with the memory devices 240. For example, the memory system controller 215 may execute or manage operations such as wear-leveling operations, garbage collection operations, error control operations such as error-detecting operations or error-correcting operations, encryption operations, caching operations, media management operations, background refresh, health monitoring, and address translations between logical addresses (e.g., LBAs) associated with commands from the host system 205 and physical addresses (e.g., physical block addresses) associated with memory cells within the memory devices 240. That is, the host system 205 may issue commands indicating one or more LBAs and the memory system controller 215 may identify one or more physical block addresses indicated by the LBAs. In some cases, one or more contiguous LBAs may correspond to noncontiguous physical block addresses. In some cases, the storage controller 230 may be configured to perform one or more of the above operations in conjunction with or instead of the memory system controller 215. In some cases, the memory system controller 215 may perform the functions of the storage controller 230 and the storage controller 230 may be omitted.
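The logical-to-physical translation just mentioned can be illustrated with a toy mapping; a real flash translation layer is far more involved, so the sketch below is only an illustration of how contiguous LBAs may resolve to noncontiguous physical block addresses. The table contents are arbitrary.

    # Toy logical-to-physical map: LBA -> (physical block, page); the
    # entries are arbitrary and for illustration only.
    l2p = {0: (2, 17), 1: (5, 3), 2: (1, 44)}

    def translate(lbas):
        # Resolve each logical block address to its physical location.
        return [l2p[lba] for lba in lbas]

    # Contiguous LBAs 0..2 land in noncontiguous physical locations.
    print(translate([0, 1, 2]))   # [(2, 17), (5, 3), (1, 44)]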
In some cases, the host system 205 may transmit a verify command to the memory system 210. The verify command may indicate a set of data stored in a memory device 240 of the memory system 210 and a request to verify a reliability of the set of data. The memory system 210 may read data from the media and transmit a verify command status response to the host system 205 indicating the validity of the data based on or in response to the verify command. In some examples, if the data is not valid, the host system 205 may perform a read and write procedure to write back the data.

In some examples, the host system 205 may set an EER bit in a verify error recovery mode page corresponding to the verify command. The EER bit may indicate a request for the memory system 210 to perform an error handling operation for the set of data and a duration for performing the error handling operation. For example, an EER bit set to one may indicate a shorter duration for performing the error handling operation than an EER bit set to zero. The host system 205 may set the EER bit based on or in response to a target level of reliability for the set of data. In some examples, however, the EER bit may not provide sufficient granularity for the host system 205 to accurately indicate the target level of reliability. Additionally or alternatively, setting the EER bit to detect marginal data may prevent the memory system 210 from recovering data. In such cases, the host system 205 may read data after resetting the EER bit and write back the read data to the memory system 210 (e.g., to a memory device 240 in the memory system 210) to force a refresh and restore data reliability. The extra read and write operations associated with the EER bit may result in increased latency and relatively high power consumption.

In some examples, the host system 205 may indicate a target level of reliability and one or more error management operations corresponding to the target level of reliability via the verify command, which may provide for finer granularity, reduced latency, and reduced power consumption. The host system 205 may determine the target level of reliability for a set of data based on or in response to one or more commands to be executed for the set of data, a mode of operation corresponding to the set of data, physical stressors the set of data may experience, or the like. In some examples, the target level of reliability may be selected from a set of possible target levels of reliability. A bit, a bit field, or an attribute associated with the verify command may be configured to indicate the target level of reliability.
Indicating a target level of reliability and corresponding error management operations via a verify command, as described herein, may provide for reduced power consumption and reduced latency, thereby improving error management procedures and a reliability of data stored at the memory device. FIG.3illustrates an example of a system300that supports a configurable verify level in accordance with examples as disclosed herein. The system300may be an example of a system100or a system200as described with reference toFIGS.1and2. The system300may include a memory system310and a host device305. The memory system310may include a command handler315and a memory device330configured to store data. The system300may support an interface for the host device305to transmit a verify command320(e.g., among other commands, such as read or write commands) to the memory system310and receive a status report from the memory system310in response to the verify command320. The memory system310may include the command handler315which may function as an interface between the host device305and memory system310. The command handler315may be an example of one or a combination of exemplary devices as described with reference toFIG.2. For example, the command handler315may be an example of an interface220, a memory system controller215, a storage controller230, or a combination thereof. The memory system310may include a memory device330, which may be an example of the memory devices described with reference toFIGS.1and2. The host device305may transmit one or more commands, such as read commands, write commands, or verify commands320, which may indicate operations to be performed by the memory system310. The host device305may transmit the commands to the command handler315, and the command handler315may decode the commands and access the relevant data stored in the memory device330or forward the commands to the memory device330(e.g., a controller of the memory device330). The host device305may transmit the verify command320to the memory system310to request the memory system310to check for errors in data stored in the memory device330. In some examples, the host device305may transmit the verify command320to determine whether the data is corrupted by other operations performed on the memory device330, or the host device305may transmit the verify command320to identify which data, of a set of data that may be duplicated in the memory device330, is associated with a highest reliability. The verify command320may indicate a starting logical block address in the memory device330and a quantity of contiguous logical blocks of data (e.g., a range, or set, of data) that are targeted for the corresponding verify operation. The command handler315may receive and decode the verify command320to identify the set of data. The command handler315, the memory system310, the memory device330, or any combination thereof may read the indicated set of data to determine a level of reliability associated with the set of data. In some examples, the command handler315may determine a reliability of the data by reading a logic level associated with the set of data and comparing the logic level with a threshold level, which may be referred to as a read margin. The command handler315may thereby determine whether there are errors in the set of data or not (e.g., whether the data is valid or not). The command handler315may transmit a verify command status response325to the host device305to indicate whether the set of data is valid or not. 
In some examples, the status response 325 may be transmitted via a UFS protocol information unit (UPIU). The host device 305 may thereby receive the status response 325 indicating a quality or reliability of the set of data without initiating a read operation, which may reduce processing and latency. In some examples, the host device 305 may determine, prior to issuing the verify command 320, a target level of reliability for the set of data based on one or more operations to be performed for the set of data. The target level of reliability may correspond to a level of reliability that may provide for the set of data to retain stored information properly during the one or more operations or other physical stressors, such as an over-the-air (OTA) update operation, or rework on a board of the memory device 330 (e.g., if the set of data is subject to extended time in high temperatures, soldering or other rework to the memory device 330, or other stressors). In some cases, if the status response 325 indicates that the set of data is not valid, the host device 305 may initiate one or more error management procedures for the set of data. In some cases, however, it may be beneficial for the host device 305 to utilize the verify command 320 to determine whether the target level of reliability is met for the set of data and to indicate error management recovery procedures for correcting any errors in the set of data. In some cases, the host device 305 may utilize a verify error recovery mode page corresponding to the verify command 320 to indicate error recovery parameters the memory device 330 may use while performing the verify operation. The verify error recovery mode page may be configured to convey the one or more error recovery parameters to the memory device 330 for use in conjunction with the verify command 320. The host device 305 may, in some cases, set one or more of the error recovery parameters in the verify error recovery mode page based on or in response to the target level of reliability for the set of data. For example, the host device 305 may set an EER bit in the verify error recovery mode page high or low (e.g., 1 or 0) based on or in response to the target level of reliability for the set of data. The EER bit may indicate a rate of an error recovery procedure to be performed by the memory device 330 in response to the verify command 320. The error recovery procedure may include one or more reads of the set of data by the memory device 330. If the memory device 330 identifies that the data is not corrupt after a first read, the memory device 330 may terminate the error recovery procedure and indicate that the data is valid to the command handler 315. The command handler 315 may transmit the status response 325 indicating a status of GOOD to the host device 305. If the memory device 330 identifies an error, the memory device 330 may continue to read the data for a configured quantity of reads (e.g., re-read attempts) using different array and error correcting code (ECC) settings to recover the set of data. If the host device 305 sets the EER bit low in the verify error recovery mode page, the memory device 330 will perform an error recovery procedure that may reduce a risk of error mis-detection, which may be a relatively slow error recovery procedure. That is, the memory device 330 may perform each of the configured quantity of reads before using ECC. If the EER bit is set high, the memory device 330 will perform a relatively expedient error recovery procedure to correct the identified errors in the data.
That is, the memory device 330 may not exhaust the quantity of reads before using ECC, which may increase a probability of an error mis-detection but may reduce latency and power consumption. The host device 305 may set the EER bit high if the host device 305 determines a relatively high target level of reliability for the set of data (e.g., if the set of data includes critical data, or is subject to relatively high stressors, such as extended time in high temperatures, soldering or other rework to a board, or other stressors). If the set of data is marginally reliable and the EER bit is set high, the memory device 330 may refresh the set of data (e.g., re-write or correct the set of data) more often than if the EER bit is not set. The host device 305 may set the EER bit low if the host device 305 determines a relatively low target level of reliability for the set of data. If the memory device 330 identifies an error in the set of data while performing the verify operation based on or in response to the EER bit, the memory device 330 will indicate the error to the command handler 315. The command handler 315 may transmit a failed status response 325 to the host device 305 (e.g., a status response of CHECK CONDITION). In some cases, in response to a failed status report, the host device 305 may issue a read command to retrieve data from the memory device 330 and re-write the data in response to the failed verify command, which may further increase latency and power consumption. Setting the EER bit in the verify error recovery mode page may thereby result in relatively large power consumption and increased latency associated with the verify operation. Additionally, the EER bit indicates a rate at which the memory device 330 is to perform an error recovery procedure. The EER bit does not directly correspond to a reliability of the set of data or to different error management operations that may be used to recover the set of data if an error is present. That is, although the host device 305 may adjust a duration of error recovery and power consumption associated with performing error recovery by setting the EER bit, the host device 305 may not indicate the target level of reliability to the memory system 310 using the EER bit. Additionally or alternatively, the EER bit may not provide sufficient granularity for the host device 305. For example, the EER bit may not provide for the host device 305 to indicate more than two levels of error recovery. As described herein, a command descriptor block (CDB) of the verify command 320 may indicate a target level of reliability for the set of data and one or more error management operations corresponding to the target level of reliability. As such, the memory device 330 may perform the indicated error management operations in response to the verify command 320 to verify that the set of data meets the target level of reliability. By using the verify command 320 to indicate different levels of verify operations, the host device 305 may improve latency and power consumption associated with verify operations. For example, the host device 305 may balance reliability, power consumption, and latency by selecting the target level of reliability from a set of configured target levels of reliability based on or in response to one or more commands to be executed by the memory device 330, a mode of operation of the memory device 330 or the memory system 310, other physical stressors the memory device 330 may experience, a time available for performing the verify operation, a current reliability of the set of data, or any combination thereof.
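The contrast between the two EER settings can be sketched as a recovery loop, assuming hypothetical primitives read_once(attempt), which performs one read under a particular array setting, and apply_ecc(data), which returns corrected data or None. With EER low, every configured re-read runs before ECC is applied; with EER high, ECC is tried as soon as a read fails:

from typing import Callable, Optional, Tuple

def recover(read_once: Callable[[int], Tuple[bytes, bool]],
            apply_ecc: Callable[[bytes], Optional[bytes]],
            max_reads: int, eer_high: bool) -> Optional[bytes]:
    """Return recovered data, or None (which would surface as CHECK CONDITION)."""
    data = b""
    for attempt in range(max_reads):
        data, clean = read_once(attempt)   # each attempt may use different array settings
        if clean:
            return data                    # no error detected; status GOOD
        if eer_high:                       # expedient path: do not exhaust the re-reads
            corrected = apply_ecc(data)
            if corrected is not None:
                return corrected           # faster, at some risk of mis-detection
    return apply_ecc(data)                 # slow path: ECC only after all re-reads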
Each target level of reliability may correspond to one or more error management operations, such as an error correction operation according to a reduced error correction capability, a read margin adjustment operation, a refresh operation, an error handling operation, one or more other error management operations, or any combination thereof. The reduced error correction capability may correspond to the error correction operation being performed in a configured time period. The error correction operation may represent an example of one or more types of error correction operations that use ECC to correct errors in the set of data. For example, the memory device 330 may use low-density parity-check (LDPC) codes or other error handling algorithms to adjust a read margin for each read attempt for the set of data. In some examples, the error correction operations may take more time to recover data that is more marginal than other data that is less marginal. The reduced error correction capability may indicate a configured time period for performing the error correction operation, which may reduce a level of marginality of the data that may be recovered. However, the configured time period may reduce latency and power consumption associated with performing the error correction operation. The read margin adjustment operation may correspond to the memory device 330 adjusting a read margin during one or more marginal reads to verify the set of data (e.g., check a reliability of the set of data). The read margin, which may be referred to as a threshold read margin, may correspond to a reference voltage value that may indicate a logic value stored in a memory cell. For example, if a potential that is read from a memory cell (e.g., a single-level memory cell) is above the threshold read margin, the memory device 330 determines that the memory cell stores a logic 1. If the potential is below the threshold read margin, the memory device 330 determines that the memory cell stores a logic 0. In some examples, errors may distort a threshold voltage distribution of voltages corresponding to each logic value, such that a potential associated with a memory cell that stores a logic value of 1 may be below the threshold read margin, or vice versa. By performing the read margin adjustment operation, the memory device 330 may increase and/or decrease the threshold read margin for one or more reads to determine how marginal the set of data is, which may indicate a level of reliability of the set of data. The read margin adjustment operation may indicate a validity of the data to indicate whether the set of data will be accurately stored in the future (e.g., after operations are performed for the set of data or the data is exposed to physical stressors). A quantity by which the threshold read margin may be adjusted (e.g., an adjustment range in volts, or millivolts) during the operation may correspond to the target level of reliability. For example, a relatively high target level of reliability may correspond to a greater adjustment range than a relatively low target level of reliability. In some examples, the adjustment range may be configured for each target level of reliability. Additionally or alternatively, the adjustment range for each target level of reliability may be determined for the memory device 330 at design time or based on or in response to one or more defined parameters or settings for the memory device 330.
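A read margin adjustment operation of the kind described above might be sketched as a symmetric sweep of the threshold, where the widest offset at which the data still reads back clean serves as a proxy for its reliability. The primitive read_clean(threshold_mv) and the millivolt parameters are assumptions for illustration; per the description, the sweep range itself would be tied to the target level of reliability:

def estimate_margin(read_clean, nominal_mv: int, step_mv: int, max_steps: int) -> int:
    """Return the widest offset (in mV) around the nominal read margin at
    which the set of data still reads back without error, sweeping both up
    and down. More marginal data fails at smaller offsets."""
    widest = 0
    for k in range(1, max_steps + 1):
        offset = k * step_mv
        if read_clean(nominal_mv + offset) and read_clean(nominal_mv - offset):
            widest = offset
        else:
            break
    return widest

# A higher target level of reliability could map to a larger max_steps,
# i.e., a greater adjustment range, as described above.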
The host device 305 may thereby transmit the verify command 320 indicating a target level of reliability from a set of target levels of reliability for a verify operation performed for a set of data in the memory device 330. The verify command 320 may indicate a request for the memory device 330 to perform one or more of the error management operations that correspond to the target level of reliability, which may reduce latency and power consumption for verify operations. The verify command and the target levels of reliability are described in further detail elsewhere herein, including with reference to FIG. 4.

FIG. 4 illustrates an example of a flow diagram 400 that supports a configurable verify level in accordance with examples as disclosed herein. The flow diagram 400 may illustrate a process that may be implemented by a system 100 (or one or more components thereof), a system 200 (or one or more components thereof), or a system 300 (or one or more components thereof) as described with reference to FIGS. 1-3. The flow diagram may illustrate a process for performing verify operations based on or in response to a configurable verify command. Aspects of the flow diagram 400 may be implemented by one or more controllers, among other components (e.g., a memory system controller of a memory system, a command handler of the memory system, a host system controller of a host system). Additionally or alternatively, aspects of the flow diagram 400 may be implemented as instructions stored in memory (e.g., firmware stored in a memory coupled with the memory system). For example, the instructions, if executed by a controller, may cause the controller to perform the operations of the flow diagram 400. Alternative examples of the following may be implemented in which some operations are performed in a different order than described or are not performed at all. In some cases, operations may include features not mentioned below, or additional operations may be added. At 405, a target level of reliability for a set of data stored in a memory device may be determined. In some examples, the host device may determine the target level of reliability for the set of data. The host device may determine the target level of reliability based on or in response to one or more commands to be executed on the set of data, a mode of operation associated with the memory device or the memory system that includes the memory device, time available for the host device to execute the verify command and corresponding verify operation, a size of the set of data (e.g., a logic range), physical stressors associated with the set of data, or the like. That is, the target level of reliability may be based on or in response to a tradeoff between time and reliability for the set of data. In some examples, the memory device may indicate a level of reliability for the set of data to the host device, and the host device may determine the target level of reliability based on or in response to the indicated level of reliability. At 410, a command may be received. In some examples, the memory device may receive the command from the host device based on or in response to the host device determining the target level of reliability. The command may be a verify command and may indicate the target level of reliability for the set of data. The command may indicate a request to perform one or more error management operations for the set of data based on or in response to the target level of reliability.
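The host-side determination at 405 amounts to a policy tradeoff. The sketch below is one hypothetical mapping from operating context to a target level; none of the inputs or thresholds are specified by the description, and the four levels anticipate the examples at 420 through 435:

def choose_target_level(critical: bool, stress_expected: bool, time_budget_ms: int) -> int:
    """Pick a target level of reliability (1-4) by weighing reliability
    needs against the time available for the verify operation."""
    if critical and stress_expected:
        return 4                                  # both operations plus conditional refresh
    if critical or stress_expected:
        return 3 if time_budget_ms > 50 else 2    # add the refresh path only if time allows
    return 1                                      # single, low-cost operation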
The memory device may identify, in the command (e.g., in a CDB of the command), a bit, a bit field, or an attribute configured to indicate the target level of reliability. An example verify command CDB is illustrated in Table 1.

TABLE 1
Example Verify Command CDB
Byte 0: OPERATION CODE (2Fh)
Byte 1: VRPROTECT = 000b (bits 7-5), DPO = 0b (bit 4), Reserved (bits 3-2), BYTCHK = 0b (bit 1), Obsolete = 0b (bit 0)
Bytes 2-5: LOGICAL BLOCK ADDRESS
Byte 6: Reserved (bits 7-5), GROUP NUMBER = 00000b (bits 4-0)
Bytes 7-8: VERIFICATION LENGTH
Byte 9: CONTROL = 00h

As illustrated by the example verify command CDB in Table 1, a verify command CDB may include a set of bits configured to indicate an LBA for the set of data selected for the verify operation. For example, the verify command CDB may indicate a least significant bit and a most significant bit corresponding to the LBA for the set of data. The verify command CDB may additionally or alternatively include an operation code indicative of the type of command and one or more sets of reserved bits. In some examples, the bit, the bit field, or both may be selected from the set of reserved bits within the verify command CDB to convey the target level of reliability and the one or more error management operations for the verify operation. If the target level of reliability is conveyed via the bit field, each target level of reliability and each error management operation may be associated with one or more bits in the bit field, and the host device may select, or indicate, which target level or error management operations are to be performed during the verify operation using a bit mask to select the respective bits. In other words, each target level of reliability may correspond to a different combination of bits in the bit field. In some examples, the target level of reliability and the one or more corresponding error management operations may be indicated via an attribute associated with the verify command. An example attribute dedicated for indicating a level of reliability for a verify operation is illustrated in Table 2. The attribute may provide for the host device to indicate the target level of reliability and corresponding error management operations without changing or altering the verify command CDB. In some examples, the attribute may include parameters for a UFS device.

TABLE 2
Example Verify Command Attribute
Name: bVerifyOpQualifiers
Access Property: Read / Persistent
Size: 1
Type: D
Description: 00 = Normal; Etc.

At 415, the target level of reliability may be determined from a set of target levels of reliability. In some examples, the memory device may determine which target level of reliability of the set of target levels of reliability is indicated for the set of data based on or in response to the command (e.g., based on or in response to a value of the bit, a value of the bit field, or one or more parameters conveyed via the attribute). The set of target levels of reliability may be configured for the memory device, and each target level of reliability of the set may correspond to a respective set of the one or more error management operations. Additionally or alternatively, the memory device may determine the one or more error management operations based on an indication in the verify command. The one or more error management operations may include an error correction operation according to a reduced error correction capability, a read margin adjustment operation, a refresh operation, an error handling operation, one or more other error management operations, or any combination thereof.
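To make the CDB encoding concrete, the following sketch packs a 10-byte verify CDB per Table 1 and carries the target level of reliability in reserved bits of byte 6 (bits 7-5). The choice of those particular reserved bits is an assumption for illustration; the description allows any reserved bit or bit field to be used:

VERIFY_OPCODE = 0x2F

def build_verify_cdb(lba: int, blocks: int, target_level: int) -> bytes:
    """Pack a VERIFY(10)-style CDB following Table 1, with the target level
    of reliability placed in reserved bits of byte 6 (an assumed placement)."""
    if not 0 <= target_level <= 7:
        raise ValueError("three reserved bits can carry levels 0 through 7")
    cdb = bytearray(10)
    cdb[0] = VERIFY_OPCODE
    cdb[1] = 0x00                         # VRPROTECT = 000b, DPO = 0b, BYTCHK = 0b
    cdb[2:6] = lba.to_bytes(4, "big")     # LOGICAL BLOCK ADDRESS, bytes 2-5
    cdb[6] = (target_level & 0x7) << 5    # reserved bits 7-5 carry the level; GROUP NUMBER = 0
    cdb[7:9] = blocks.to_bytes(2, "big")  # VERIFICATION LENGTH, bytes 7-8
    cdb[9] = 0x00                         # CONTROL
    return bytes(cdb)

# e.g., build_verify_cdb(lba=0x1000, blocks=8, target_level=2)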
The error management operations may represent examples of the error management operations described with reference to FIG. 3. The memory device may thereby identify one or more error management operations corresponding to the target level of reliability for the set of data based on or in response to the command. At 420, if the target level of reliability is a first target level of reliability of the set of target levels of reliability, one of the read margin adjustment operation or the error correction operation may be performed. In some examples, the memory device may perform the one of the read margin adjustment operation or the error correction operation for the set of data based on or in response to the command and the first target level of reliability, where the first target level of reliability may correspond to one of the read margin adjustment operation or the error correction operation. At 425, if the target level of reliability is a second target level of reliability of the set of target levels of reliability, both the read margin adjustment operation and the error correction operation may be performed. In some examples, the memory device may perform the read margin adjustment operation and the error correction operation for the set of data based on or in response to the command and the second target level of reliability, where the second target level of reliability may correspond to both of the read margin adjustment operation and the error correction operation. At 430, if the target level of reliability is a third target level of reliability of the set of target levels of reliability, one of the read margin adjustment operation or the error correction operation may be performed. In some examples, the memory device may perform the one of the read margin adjustment operation or the error correction operation for the set of data based on or in response to the command and the third target level of reliability, where the third target level of reliability may correspond to one of the read margin adjustment operation or the error correction operation. The third target level of reliability may additionally or alternatively correspond to a threshold read margin associated with a refresh operation for the set of data. At 435, if the target level of reliability is a fourth target level of reliability of the set of target levels of reliability, both the read margin adjustment operation and the error correction operation may be performed. In some examples, the memory device may perform the read margin adjustment operation and the error correction operation for the set of data based on or in response to the command and the fourth target level of reliability, where the fourth target level of reliability may correspond to both of the read margin adjustment operation and the error correction operation. The fourth target level of reliability may additionally or alternatively correspond to a threshold read margin associated with a refresh operation for the set of data. At 440, it may be determined whether a read margin for the set of data satisfies the threshold read margin corresponding to the third and fourth target levels of reliability. In some examples, the memory device may determine whether the read margin satisfies the threshold read margin by determining whether a logic level (e.g., a potential) of the set of data is above or below a threshold logic level corresponding to the threshold read margin.
The memory device may determine whether the read margin satisfies the threshold read margin after performing the read margin adjustment operation, the error correction operation, or both, corresponding to the third or fourth target level of reliability. At 445, if the read margin fails to satisfy the threshold read margin, the refresh operation may be performed for the set of data. In some examples, the memory device may perform the refresh operation based on or in response to the read margin failing to satisfy the threshold and determining that the target level of reliability is the third or fourth target level of reliability. The read margin failing to satisfy the threshold may indicate one or more errors are present in the set of data. The refresh operation may be performed to ensure that the errors in the set of data are fixed, such that the set of data may achieve the third or fourth target level of reliability. Although four example target levels of reliability are illustrated, it is to be understood that any quantity of target levels of reliability may be configured and indicated via the verify command. Additionally or alternatively, the target levels of reliability may correspond to any error management operations not shown, or any combination of error management operations. In one example, the verify command may indicate a fifth target level of reliability. The fifth target level of reliability may be similar to the host device setting the EER bit low (e.g., to 0) in the verify error recovery mode page, as described with reference to FIG. 3. At 450, if the read margin satisfies the threshold read margin, or if the target level of reliability is the first or second target level of reliability, or after the refresh operation is performed, an indication of a level of reliability of the set of data may be transmitted. In some examples, the memory device may transmit the indication of the level of reliability of the set of data to the host device based on or in response to the verify command and performing the one or more error management operations. The indication of the level of reliability may, in some examples, be transmitted via a response UPIU. The response UPIU may indicate a status response of GOOD, BUSY, or CHECK CONDITION. A status response of GOOD may indicate that the set of data is valid, or not corrupt, and the error management operations were performed successfully. A status response of BUSY may be transmitted if the memory device is still processing the verify command. A status response of CHECK CONDITION may indicate that a failure occurred during the verify operation. The memory device may transmit a SENSE KEY in addition to the status response of CHECK CONDITION to indicate, to the host device, a type of failure that occurred during the verify operation. As such, the verify command status response may indicate a level of reliability of the set of data after the verify operations and error management operations are performed. In some examples, the host device may determine a second target level of reliability for the set of data based on or in response to the indicated level of reliability. In such cases, the host device may transmit a second verify command to the memory device based on or in response to the second target level of reliability.
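Pulled together, the device-side branches at 420 through 450 can be sketched as a dispatch over the four example levels. The ops object with margin_adjust(), bounded_ecc(), refresh(), and read_margin_satisfied() is hypothetical (assumed to raise RuntimeError on failure); the statuses mirror the GOOD and CHECK CONDITION responses described above:

def perform_verify(level: int, ops) -> str:
    """Run the error management operations configured for the indicated
    target level, refreshing the data at levels 3 and 4 if its read margin
    fails the threshold, then report a status for the response UPIU."""
    if level not in (1, 2, 3, 4):
        raise ValueError(f"unknown target level: {level}")
    try:
        if level in (1, 3):
            ops.margin_adjust()          # levels 1 and 3: one of the two operations
        else:                            # levels 2 and 4: both operations
            ops.margin_adjust()
            ops.bounded_ecc()            # ECC under a reduced (time-bounded) capability
        if level in (3, 4) and not ops.read_margin_satisfied():
            ops.refresh()                # restore the data so it meets the target level
        return "GOOD"
    except RuntimeError:
        return "CHECK CONDITION"         # a sense key would accompany this in practice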
A system as described herein may thereby support configurable verify procedures such that a host device may transmit a verify command to a memory device indicating a target level of reliability from a set of configured target levels of reliability. The memory device may perform one or more error management operations configured for the indicated target level of reliability. Such verification techniques may provide for the memory device to verify a set of data and perform configured levels of error management based on a status of the data and a target level of reliability for the data. Verify operations performed according to the configurable verify levels may reduce latency and power consumption and improve a reliability of stored data as compared with other verification techniques in which a verify command may indicate one or two operations to be performed for a set of data.

FIG. 5 shows a block diagram 500 of a memory system 520 that supports a configurable verify level in accordance with examples as disclosed herein. The memory system 520 may be an example of aspects of a memory system as described with reference to FIGS. 1 through 4. The memory system 520, or various components thereof, may be an example of means for performing various aspects of configurable verify level as described herein. For example, the memory system 520 may include a command component 525, a target reliability component 530, an error management component 535, a reliability component 540, a read margin component 545, a refresh component 550, or any combination thereof. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses). The command component 525 may be configured as or otherwise support a means for receiving, from a host device, a command indicating a target level of reliability associated with a set of data stored in a memory device, where the command indicates a request to perform one or more error management operations for the set of data based on the target level of reliability. The target reliability component 530 may be configured as or otherwise support a means for determining the target level of reliability from a set of target levels of reliability each corresponding to a respective one or more error management operations based on the command, where the target level of reliability corresponds to the one or more error management operations. The error management component 535 may be configured as or otherwise support a means for performing the one or more error management operations for the set of data based on the target level of reliability. The reliability component 540 may be configured as or otherwise support a means for transmitting, to the host device in response to the command, an indication of a level of reliability associated with the set of data based on performing the one or more error management operations. In some examples, the target reliability component 530 may be configured as or otherwise support a means for identifying, in the command, a bit, a bit field, or an attribute configured to indicate the target level of reliability, where determining the target level of reliability from the set of target levels of reliability is based on a value of the bit, a value of the bit field, or one or more parameters conveyed via the attribute.
In some examples, the command is a verify command including a set of reserved bits within a CDB, and the bit, the bit field, or both, are selected from the set of reserved bits to convey the target level of reliability corresponding to a verify operation. In some examples, the one or more error management operations include an error correction operation according to a reduced error correction capability, a read margin adjustment operation, a refresh operation, an enable early recovery operation, or any combination thereof, in accordance with the target level of reliability. In some examples, the reduced error correction capability corresponds to the error correction operation being performed in a configured time period. In some examples, performing the read margin adjustment operation includes adjusting a read margin to verify the set of data. In some examples, the target reliability component 530 may be configured as or otherwise support a means for determining that the target level of reliability is a first target level of reliability of the set of target levels of reliability based on the command, where the first target level of reliability corresponds to the read margin adjustment operation or the error correction operation according to the reduced error correction capability. In some examples, the error management component 535 may be configured as or otherwise support a means for performing one of the read margin adjustment operation or the error correction operation in accordance with the first target level of reliability. In some examples, the target reliability component 530 may be configured as or otherwise support a means for determining that the target level of reliability is a second target level of reliability of the set of target levels of reliability based on the command, where the second target level of reliability corresponds to the read margin adjustment operation and the error correction operation according to the reduced error correction capability. In some examples, the error management component 535 may be configured as or otherwise support a means for performing the read margin adjustment operation and the error correction operation in accordance with the second target level of reliability. In some examples, the target reliability component 530 may be configured as or otherwise support a means for determining that the target level of reliability is a third target level of reliability of the set of target levels of reliability based on the command, where the third target level of reliability corresponds to the refresh operation and one of the read margin adjustment operation or the error correction operation according to the reduced error correction capability, and where the third target level of reliability indicates a threshold read margin associated with the refresh operation. In some examples, the error management component 535 may be configured as or otherwise support a means for performing one of the read margin adjustment operation or the error correction operation in accordance with the third target level of reliability. In some examples, the read margin component 545 may be configured as or otherwise support a means for determining whether a read margin associated with the set of data satisfies the threshold read margin based on the third target level of reliability and performing the one of the read margin adjustment operation or the error correction operation.
In some examples, the refresh component 550 may be configured as or otherwise support a means for performing the refresh operation for the set of data based on determining that the read margin associated with the set of data fails to satisfy the threshold read margin. In some examples, the target reliability component 530 may be configured as or otherwise support a means for determining that the target level of reliability is a fourth target level of reliability of the set of target levels of reliability based on the command, where the fourth target level of reliability corresponds to the read margin adjustment operation, the error correction operation according to the reduced error correction capability, and the refresh operation, and where the fourth target level of reliability indicates a threshold read margin associated with the refresh operation. In some examples, the error management component 535 may be configured as or otherwise support a means for performing the read margin adjustment operation and the error correction operation in accordance with the fourth target level of reliability. In some examples, the read margin component 545 may be configured as or otherwise support a means for determining whether a read margin associated with the set of data satisfies the threshold read margin based on the fourth target level of reliability and performing the read margin adjustment operation and the error correction operation. In some examples, the refresh component 550 may be configured as or otherwise support a means for performing the refresh operation for the set of data based on determining that the read margin associated with the set of data fails to satisfy the threshold read margin. In some examples, the command component 525 may be configured as or otherwise support a means for receiving, from the host device, a second command indicating a second target level of reliability associated with the set of data based on the indication of the level of reliability of the set of data and the one or more error management operations.

FIG. 6 shows a block diagram 600 of a host system 620 that supports a configurable verify level in accordance with examples as disclosed herein. The host system 620 may be an example of aspects of a host system as described with reference to FIGS. 1 through 4. The host system 620, or various components thereof, may be an example of means for performing various aspects of configurable verify level as described herein. For example, the host system 620 may include a target reliability component 625, a command component 630, a reliability component 635, an error management component 640, or any combination thereof. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses). The target reliability component 625 may be configured as or otherwise support a means for determining a target level of reliability associated with a set of data stored in a memory device coupled with a memory system. The command component 630 may be configured as or otherwise support a means for transmitting, to the memory device, a command indicating the target level of reliability, where the command indicates a request to perform one or more error management operations for the set of data based on the target level of reliability.
The reliability component 635 may be configured as or otherwise support a means for receiving, from the memory device in response to the command, an indication of a level of reliability associated with the set of data based on the one or more error management operations. In some examples, the target reliability component 625 may be configured as or otherwise support a means for transmitting, in the command, a bit, a bit field, or an attribute configured to indicate the target level of reliability selected from a set of target levels of reliability, where the target level of reliability is indicated via a value of the bit, a value of the bit field, or one or more parameters conveyed via the attribute. In some examples, the command is a verify command including a set of reserved bits within a CDB, and the bit, the bit field, or both, are selected from the set of reserved bits to convey the target level of reliability corresponding to a verify operation. In some examples, the target level of reliability is selected from a set of target levels of reliability. In some examples, the one or more error management operations include an error correction operation according to a reduced error correction capability, a read margin adjustment operation, a refresh operation, an enable early recovery operation, or any combination thereof, in accordance with the selected target level of reliability. In some examples, the reduced error correction capability corresponds to the error correction operation performed in a configured time period. In some examples, the read margin adjustment operation corresponds to an adjustment of a read margin to verify the set of data. In some examples, to support transmitting the command, the command component 630 may be configured as or otherwise support a means for transmitting the command indicating a first target level of reliability of the set of target levels of reliability, where the first target level of reliability corresponds to one of the read margin adjustment operation or the error correction operation according to the reduced error correction capability. In some examples, to support transmitting the command, the command component 630 may be configured as or otherwise support a means for transmitting the command indicating a second target level of reliability of the set of target levels of reliability, where the second target level of reliability corresponds to the read margin adjustment operation and the error correction operation according to the reduced error correction capability. In some examples, to support transmitting the command, the command component 630 may be configured as or otherwise support a means for transmitting the command indicating a third target level of reliability of the set of target levels of reliability, where the third target level of reliability corresponds to the refresh operation and one of the read margin adjustment operation or the error correction operation according to the reduced error correction capability, and where the third target level of reliability indicates a threshold read margin associated with the refresh operation.
In some examples, to support transmitting the command, the command component 630 may be configured as or otherwise support a means for transmitting the command indicating a fourth target level of reliability of the set of target levels of reliability, where the fourth target level of reliability corresponds to the read margin adjustment operation, the error correction operation according to the reduced error correction capability, and the refresh operation, and where the fourth target level of reliability indicates a threshold read margin associated with the refresh operation. In some examples, the command component 630 may be configured as or otherwise support a means for transmitting, to the memory device, a second command indicating a second target level of reliability associated with the set of data based on the indication of the level of reliability of the set of data and the one or more error management operations.

FIG. 7 shows a flowchart illustrating a method 700 that supports a configurable verify level in accordance with examples as disclosed herein. The operations of method 700 may be implemented by a memory system or its components as described herein. For example, the operations of method 700 may be performed by a memory system as described with reference to FIGS. 1 through 5. In some examples, a memory system may execute a set of instructions to control the functional elements of the device to perform the described functions. Additionally or alternatively, the memory system may perform aspects of the described functions using special-purpose hardware. At 705, the method may include receiving, from a host device, a command indicating a target level of reliability associated with a set of data stored in a memory device, where the command indicates a request to perform one or more error management operations for the set of data based on the target level of reliability. The operations of 705 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 705 may be performed by a command component 525 as described with reference to FIG. 5. At 710, the method may include determining the target level of reliability from a set of target levels of reliability each corresponding to a respective one or more error management operations based on the command, where the target level of reliability corresponds to the one or more error management operations. The operations of 710 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 710 may be performed by a target reliability component 530 as described with reference to FIG. 5. At 715, the method may include performing the one or more error management operations for the set of data based on the target level of reliability. The operations of 715 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 715 may be performed by an error management component 535 as described with reference to FIG. 5. At 720, the method may include transmitting, to the host device in response to the command, an indication of a level of reliability associated with the set of data based on performing the one or more error management operations. The operations of 720 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 720 may be performed by a reliability component 540 as described with reference to FIG. 5.
In some examples, an apparatus as described herein may perform a method or methods, such as the method 700. The apparatus may include features, circuitry, logic, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for receiving, from a host device, a command indicating a target level of reliability associated with a set of data stored in a memory device, where the command indicates a request to perform one or more error management operations for the set of data based on the target level of reliability, determining the target level of reliability from a set of target levels of reliability each corresponding to a respective one or more error management operations based on the command, where the target level of reliability corresponds to the one or more error management operations, performing the one or more error management operations for the set of data based on the target level of reliability, and transmitting, to the host device in response to the command, an indication of a level of reliability associated with the set of data based on performing the one or more error management operations. Some examples of the method 700 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for identifying, in the command, a bit, a bit field, or an attribute configured to indicate the target level of reliability, where determining the target level of reliability from the set of target levels of reliability may be based on a value of the bit, a value of the bit field, or one or more parameters conveyed via the attribute. In some examples of the method 700 and the apparatus described herein, the command may be a verify command including a set of reserved bits within a CDB, and the bit, the bit field, or both, may be selected from the set of reserved bits to convey the target level of reliability corresponding to a verify operation. In some examples of the method 700 and the apparatus described herein, the one or more error management operations include an error correction operation according to a reduced error correction capability, a read margin adjustment operation, a refresh operation, an enable early recovery operation, or any combination thereof, in accordance with the target level of reliability. In some examples of the method 700 and the apparatus described herein, the reduced error correction capability corresponds to the error correction operation being performed in a configured time period and performing the read margin adjustment operation includes adjusting a read margin to verify the set of data. Some examples of the method 700 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for determining that the target level of reliability may be a first target level of reliability of the set of target levels of reliability based on the command, where the first target level of reliability corresponds to the read margin adjustment operation or the error correction operation according to the reduced error correction capability and performing one of the read margin adjustment operation or the error correction operation in accordance with the first target level of reliability.
Some examples of the method 700 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for determining that the target level of reliability may be a second target level of reliability of the set of target levels of reliability based on the command, where the second target level of reliability corresponds to the read margin adjustment operation and the error correction operation according to the reduced error correction capability and performing the read margin adjustment operation and the error correction operation in accordance with the second target level of reliability. Some examples of the method 700 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for determining that the target level of reliability may be a third target level of reliability of the set of target levels of reliability based on the command, where the third target level of reliability corresponds to the refresh operation and one of the read margin adjustment operation or the error correction operation according to the reduced error correction capability, and where the third target level of reliability indicates a threshold read margin associated with the refresh operation, performing one of the read margin adjustment operation or the error correction operation in accordance with the third target level of reliability, and determining whether a read margin associated with the set of data satisfies the threshold read margin based on the third target level of reliability and performing the one of the read margin adjustment operation or the error correction operation. Some examples of the method 700 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for performing the refresh operation for the set of data based on determining that the read margin associated with the set of data fails to satisfy the threshold read margin. Some examples of the method 700 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for determining that the target level of reliability may be a fourth target level of reliability of the set of target levels of reliability based on the command, where the fourth target level of reliability corresponds to the read margin adjustment operation, the error correction operation according to the reduced error correction capability, and the refresh operation, and where the fourth target level of reliability indicates a threshold read margin associated with the refresh operation, performing the read margin adjustment operation and the error correction operation in accordance with the fourth target level of reliability, and determining whether a read margin associated with the set of data satisfies the threshold read margin based on the fourth target level of reliability and performing the read margin adjustment operation and the error correction operation. Some examples of the method 700 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for performing the refresh operation for the set of data based on determining that the read margin associated with the set of data fails to satisfy the threshold read margin.
Some examples of the method 700 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for receiving, from the host device, a second command indicating a second target level of reliability associated with the set of data based on the indication of the level of reliability of the set of data and the one or more error management operations.

FIG. 8 shows a flowchart illustrating a method 800 that supports a configurable verify level in accordance with examples as disclosed herein. The operations of method 800 may be implemented by a host system or its components as described herein. For example, the operations of method 800 may be performed by a host system as described with reference to FIGS. 1 through 4 and 6. In some examples, a host system may execute a set of instructions to control the functional elements of the device to perform the described functions. Additionally or alternatively, the host system may perform aspects of the described functions using special-purpose hardware. At 805, the method may include determining a target level of reliability associated with a set of data stored in a memory device coupled with a memory system. The operations of 805 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 805 may be performed by a target reliability component 625 as described with reference to FIG. 6. At 810, the method may include transmitting, to the memory device, a command indicating the target level of reliability, where the command indicates a request to perform one or more error management operations for the set of data based on the target level of reliability. The operations of 810 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 810 may be performed by a command component 630 as described with reference to FIG. 6. At 815, the method may include receiving, from the memory device in response to the command, an indication of a level of reliability associated with the set of data based on the one or more error management operations. The operations of 815 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 815 may be performed by a reliability component 635 as described with reference to FIG. 6. In some examples, an apparatus as described herein may perform a method or methods, such as the method 800. The apparatus may include features, circuitry, logic, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for determining a target level of reliability associated with a set of data stored in a memory device coupled with a memory system, transmitting, to the memory device, a command indicating the target level of reliability, where the command indicates a request to perform one or more error management operations for the set of data based on the target level of reliability, and receiving, from the memory device in response to the command, an indication of a level of reliability associated with the set of data based on the one or more error management operations.
Some examples of the method 800 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for transmitting, in the command, a bit, a bit field, or an attribute configured to indicate the target level of reliability selected from a set of target levels of reliability, where the target level of reliability may be indicated via a value of the bit, a value of the bit field, or one or more parameters conveyed via the attribute. In some examples of the method 800 and the apparatus described herein, the command may be a verify command including a set of reserved bits within a CDB, and the bit, the bit field, or both, may be selected from the set of reserved bits to convey the target level of reliability corresponding to a verify operation. In some examples of the method 800 and the apparatus described herein, the target level of reliability may be selected from a set of target levels of reliability and the one or more error management operations include an error correction operation according to a reduced error correction capability, a read margin adjustment operation, a refresh operation, an enable early recovery operation, or any combination thereof, in accordance with the selected target level of reliability. In some examples of the method 800 and the apparatus described herein, the reduced error correction capability corresponds to the error correction operation performed in a configured time period and the read margin adjustment operation corresponds to an adjustment of a read margin to verify the set of data. In some examples of the method 800 and the apparatus described herein, transmitting the command may include operations, features, circuitry, logic, means, or instructions for transmitting the command indicating a first target level of reliability of the set of target levels of reliability, where the first target level of reliability corresponds to one of the read margin adjustment operation or the error correction operation according to the reduced error correction capability. In some examples of the method 800 and the apparatus described herein, transmitting the command may include operations, features, circuitry, logic, means, or instructions for transmitting the command indicating a second target level of reliability of the set of target levels of reliability, where the second target level of reliability corresponds to the read margin adjustment operation and the error correction operation according to the reduced error correction capability. In some examples of the method 800 and the apparatus described herein, transmitting the command may include operations, features, circuitry, logic, means, or instructions for transmitting the command indicating a third target level of reliability of the set of target levels of reliability, where the third target level of reliability corresponds to the refresh operation and one of the read margin adjustment operation or the error correction operation according to the reduced error correction capability, and where the third target level of reliability indicates a threshold read margin associated with the refresh operation.
In some examples of the method 800 and the apparatus described herein, transmitting the command may include operations, features, circuitry, logic, means, or instructions for transmitting the command indicating a fourth target level of reliability of the set of target levels of reliability, where the fourth target level of reliability corresponds to the read margin adjustment operation, the error correction operation according to the reduced error correction capability, and the refresh operation, and where the fourth target level of reliability indicates a threshold read margin associated with the refresh operation. Some examples of the method 800 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for transmitting, to the memory device, a second command indicating a second target level of reliability associated with the set of data based on the indication of the level of reliability of the set of data and the one or more error management operations. It should be noted that the methods described above describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Further, portions from two or more of the methods may be combined. Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Some drawings may illustrate signals as a single signal; however, the signal may represent a bus of signals, where the bus may have a variety of bit widths. The terms “electronic communication,” “conductive contact,” “connected,” and “coupled” may refer to a relationship between components that supports the flow of signals between the components. Components are considered in electronic communication with (or in conductive contact with or connected with or coupled with) one another if there is any conductive path between the components that can, at any time, support the flow of signals between the components. At any given time, the conductive path between components that are in electronic communication with each other (or in conductive contact with or connected with or coupled with) may be an open circuit or a closed circuit based on the operation of the device that includes the connected components. The conductive path between connected components may be a direct conductive path between the components or the conductive path between connected components may be an indirect conductive path that may include intermediate components, such as switches, transistors, or other components. In some examples, the flow of signals between the connected components may be interrupted for a time, for example, using one or more intermediate components such as switches or transistors. The term “coupling” refers to a condition of moving from an open-circuit relationship between components in which signals are not presently capable of being communicated between the components over a conductive path to a closed-circuit relationship between components in which signals are capable of being communicated between components over the conductive path.
If a component, such as a controller, couples other components together, the component initiates a change that allows signals to flow between the other components over a conductive path that previously did not permit signals to flow. The term “isolated” refers to a relationship between components in which signals are not presently capable of flowing between the components. Components are isolated from each other if there is an open circuit between them. For example, two components separated by a switch that is positioned between the components are isolated from each other if the switch is open. If a controller isolates two components, the controller effects a change that prevents signals from flowing between the components using a conductive path that previously permitted signals to flow. The terms “if,” “when,” “based on,” or “based at least in part on” may be used interchangeably. In some examples, if the terms “if,” “when,” “based on,” or “based at least in part on” are used to describe a conditional action, a conditional process, or connection between portions of a process, the terms may be interchangeable. The term “in response to” may refer to one condition or action occurring at least partially, if not fully, as a result of a previous condition or action. For example, a first condition or action may be performed, and a second condition or action may at least partially occur as a result of the previous condition or action occurring (whether directly after or after one or more other intermediate conditions or actions occurring after the first condition or action). Additionally, the terms “directly in response to” or “in direct response to” may refer to one condition or action occurring as a direct result of a previous condition or action. In some examples, a first condition or action may be performed, and a second condition or action may occur directly as a result of the previous condition or action occurring independent of whether other conditions or actions occur. In some examples, a first condition or action may be performed and a second condition or action may occur directly as a result of the previous condition or action occurring, such that no other intermediate conditions or actions occur between the earlier condition or action and the second condition or action or a limited quantity of one or more intermediate steps or actions occur between the earlier condition or action and the second condition or action. Any condition or action described herein as being performed “based on,” “based at least in part on,” or “in response to” some other step, action, event, or condition may additionally or alternatively (e.g., in an alternative example) be performed “in direct response to” or “directly in response to” such other condition or action unless otherwise specified. The devices discussed herein, including a memory array, may be formed on a semiconductor substrate, such as silicon, germanium, silicon-germanium alloy, gallium arsenide, gallium nitride, etc. In some examples, the substrate is a semiconductor wafer. In some other examples, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOP), or epitaxial layers of semiconductor materials on another substrate. The conductivity of the substrate, or sub-regions of the substrate, may be controlled through doping using various chemical species including, but not limited to, phosphorus, boron, or arsenic.
Doping may be performed during the initial formation or growth of the substrate, by ion-implantation, or by any other doping means. A switching component or a transistor discussed herein may represent a field-effect transistor (FET) and comprise a three-terminal device including a source, drain, and gate. The terminals may be connected to other electronic elements through conductive materials, e.g., metals. The source and drain may be conductive and may comprise a heavily-doped, e.g., degenerate, semiconductor region. The source and drain may be separated by a lightly-doped semiconductor region or channel. If the channel is n-type (i.e., majority carriers are electrons), then the FET may be referred to as an n-type FET. If the channel is p-type (i.e., majority carriers are holes), then the FET may be referred to as a p-type FET. The channel may be capped by an insulating gate oxide. The channel conductivity may be controlled by applying a voltage to the gate. For example, applying a positive voltage or negative voltage to an n-type FET or a p-type FET, respectively, may result in the channel becoming conductive. A transistor may be “on” or “activated” if a voltage greater than or equal to the transistor's threshold voltage is applied to the transistor gate. The transistor may be “off” or “deactivated” if a voltage less than the transistor's threshold voltage is applied to the transistor gate. The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “exemplary” used herein means “serving as an example, instance, or illustration” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details to provide an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form to avoid obscuring the concepts of the described examples. In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a hyphen and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label. The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations.
For example, the various illustrative blocks and components described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). As used herein, including in the claims, “or” as used in a list of items (for example, a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.” Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, non-transitory computer-readable media can comprise RAM, ROM, electrically erasable programmable read-only memory (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media. The description herein is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. 
Thus, the disclosure is not limited to the examples and designs described herein but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
DETAILED DESCRIPTION

Aspects of the present disclosure are directed to applying a hybrid error recovery process in a memory sub-system. A memory sub-system can be a storage device, a memory module, or a combination of a storage device and memory module. Examples of storage devices and memory modules are described below in conjunction with FIG. 1. In general, a host system can utilize a memory sub-system that includes one or more components, such as memory devices that store data. The host system can provide data to be stored at the memory sub-system and can request data to be retrieved from the memory sub-system. A memory sub-system can include high density non-volatile memory devices where retention of data is desired when no power is supplied to the memory device. One example of non-volatile memory devices is a negative-and (NAND) memory device. Another example of non-volatile memory devices is a three-dimensional cross-point (“3D cross-point”) memory device that is a cross-point array of non-volatile memory that can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Other examples of non-volatile memory devices are described below in conjunction with FIG. 1. A non-volatile memory device is a package of one or more dies. Each die can consist of one or more planes. For some types of non-volatile memory devices (e.g., NAND devices), each plane consists of a set of physical blocks. Each block consists of a set of pages. Each page consists of a set of memory cells (“cells”). A cell is an electronic circuit that stores information. Depending on the cell type, a cell can store one or more bits of binary information, and has various logic states that correlate to the number of bits being stored. The logic states can be represented by binary values, such as “0” and “1”, or combinations of such values. For example, a single level cell (SLC) can store one bit of information and has two logic states. The various logic states have corresponding threshold voltage levels. A threshold voltage (VT) is the voltage applied to the cell circuitry (e.g., control gate voltage at which a transistor becomes conductive) to set the state of the cell. A cell is set to one of its logic states based on the VT that is applied to the cell. For example, if a high VT is applied to an SLC, a charge will be present in the cell, setting the SLC to store a logic 0. If a low VT is applied to the SLC, a charge will be absent in the cell, setting the SLC to store a logic 1. Certain memory devices have threshold voltage programming distributions that move or “drift” higher over time. At a given read voltage level (i.e., a value of the voltage applied to a memory cell as part of a read operation), if the threshold voltage programming distributions move, then certain reliability statistics can also be affected. One example of a reliability statistic is a raw bit error rate (RBER). The RBER can be defined as the ratio of the number of erroneous bits to the number of all bits stored in a unit of the memory sub-system, where the unit can be the entire memory sub-system, a die of the memory device, a collection of codewords, or any other meaningful portion of the memory sub-system. A read operation can be performed with a read voltage level. The read voltage level or value (herein the “read voltage level”) can be a particular voltage that is applied to memory cells of a memory device to read the data stored at the memory cells.
For example, if a threshold voltage of a particular memory cell is identified as being below the read voltage level that is applied to the particular memory cell, then the data stored at the particular memory cell can be a particular value (e.g., 1). If the threshold voltage of the particular memory cell is identified as being above the read voltage level, then the data stored at the particular memory cell can be another value (e.g., 0). Thus, the read voltage level can be applied to memory cells to determine values stored at the memory cells. In a conventional memory sub-system, when the threshold voltage programming distributions of a memory cell change, the application of the read voltage level can be inaccurate relative to the changed threshold voltage. For example, a memory cell can be programmed to have a threshold voltage below the read voltage level. The programmed threshold voltage can change over time and can shift to be above the read voltage level. For example, the threshold voltage of the memory cell can shift from initially being below the read voltage level to being above the read voltage level. As a result, when the read voltage level is applied to the memory cell, the data stored at the memory cell can be misread or misinterpreted to be at a wrong value as compared to the value as originally stored when the threshold voltage had not yet shifted. The speed or rate of the drift of the threshold voltage programming distributions and corresponding RBER of a conventional memory sub-system can be affected by one or more characteristics of the memory sub-system, such as cycling conditions, changes in die temperature, and read/write disturb conditions. For example, a set (e.g., establishing a “1” value) and reset (e.g., establishing a “0” value) cycling in a 3D cross-point system can degrade the memory cells having wider threshold voltage distributions. The set distribution can have a first leading edge (E1) and a second trailing edge (E2). Further, the reset distribution can have a first leading edge (E3) and a second trailing edge (E4). Cycling conditions can cause longer edge tails between adjacent programming distributions (e.g., the E2 and E3 tails). Longer edge tails, particularly the E2 and E3 tails, can be caused by severe write disturb, read disturb, or both. In addition, temperature conditions and delays between cycles can cause degradation variation from memory device to memory device. For example, higher temperatures and longer delays between cycles can lead to greater threshold voltage drift and degradation. The threshold voltage drift and degradation cause errors during the performance of a memory access operation (e.g., a read operation, write operation etc.) at a memory device. For example, while performing a read operation, a memory sub-system controller can misread bits representing data stored at the memory device (i.e., the stored value is read incorrectly). In another example, one or more bits representing data stored at the memory device can contain errors (i.e., the value is stored incorrectly). Either situation can result in an error during performance of a read operation (e.g., a memory access operation error). Upon detecting that a memory access operation error has occurred, the memory sub-system controller can perform an error correction operation to correct the errors in the data and perform the memory access operation again to access the corrected data. 
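The relationship between threshold voltage drift, a fixed read voltage level, and RBER described above can be made concrete with a small sketch. All voltage values and the drift amount below are invented for illustration; the convention that a threshold below the read voltage level reads as a 1 follows the description above.

```python
# Illustrative numbers only: models the misread scenario described above.
# A cell programmed below the read voltage level stores a 1; above it, a 0.

def read_cell(threshold_v: float, read_level_v: float) -> int:
    return 1 if threshold_v < read_level_v else 0

def rber(read_bits, expected_bits) -> float:
    """Raw bit error rate: erroneous bits divided by all bits read."""
    errors = sum(r != e for r, e in zip(read_bits, expected_bits))
    return errors / len(expected_bits)

read_level = 2.0  # volts; illustrative
programmed = [1.6, 1.8, 1.9, 2.4, 2.6]  # programmed thresholds (V)
expected = [read_cell(v, read_level) for v in programmed]  # [1, 1, 1, 0, 0]

drift = 0.3  # uniform upward drift over time (V); illustrative
drifted = [v + drift for v in programmed]
observed = [read_cell(v, read_level) for v in drifted]
# The 1.8 V and 1.9 V cells drift past the 2.0 V read level and are
# misread as 0s, so the RBER here is 2/5 = 0.4.
print(rber(observed, expected))
```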
In some instances, an error correction operation can be a memory scrubbing operation, where the memory sub-system controller corrects an error in the data and writes the corrected data back to the memory device. To address errors due to threshold voltage drift, conventional memory sub-systems typically employ a predefined error recovery process including a preset sequence of read retry operations at different read retry voltage levels (hereinafter also referred to as “demarcation voltages”) to enable error correction and data recovery. The error recovery process can include the execution of a sequence of multiple read retry operations to re-read data as part of an error correction process. In an embodiment, the error recovery process can include the use of multiple different read retry demarcation voltages including a base value (e.g., read retry demarcation voltage 1 (Vt1)), a second value offset from the base value (e.g., read retry demarcation voltage 2 (Vt2)), and a third value offset from the base value (e.g., read retry demarcation voltage 3 (Vt3)). The three read retry demarcation voltages are intended to cover the voltage drift range over different periods of time. For example, read retry operation 1 (i.e., execution of a read retry operation at Vt1) is employed during a first time period of the error recovery process (e.g., a time range of 1 microsecond to a few seconds), read retry operation 2 (i.e., execution of the read retry operation at Vt2) is employed during a second time period (e.g., a time range of a few seconds to a few hours), and read retry operation 3 (i.e., execution of a read retry operation at Vt3) is employed during a third time period exceeding a few hours to cover longer voltage drifts. However, executing a fixed sequence of read retry operations can result in the application of a read retry demarcation voltage that causes read corruption. For example, applying a higher-than-optimal read retry demarcation voltage to one or more memory cells of a memory device can result in a reset cell that is incorrectly read as a set cell, and thus will be further pushed lower to the set cell region. This can cause a further mixing of reset and set threshold voltage distributions and data corruption. There is thus an impact on the reliability of data of the memory device. In addition, as the memory sub-system seeks to reduce impact on the reliability of data, the memory sub-system can perform additional sequences of read retry operations at lower demarcation voltages to ensure that the data is read correctly. However, performing additional read retry operations can impact performance by utilizing more resources of the memory device. Thus, conventional memory sub-systems face a trade-off between achieving high performance and high reliability of the memory device when using the same predefined error recovery process to address errors for all memory access operations.

Aspects of the present disclosure address the above and other deficiencies by providing a memory sub-system that manages the execution of a hybrid error recovery process. In certain embodiments, a memory sub-system controller can receive a request to perform a memory access operation on a set of memory cells of a memory device. The memory sub-system controller can determine the request type of the memory access operation (e.g., whether the request to perform the memory access operation is from a host system or a background operation such as a data refresh operation).
The memory sub-system controller can perform an error recovery operation based on the request type. For example, the memory sub-system controller can perform one type of error recovery operation based on whether the request is from the host system, and the memory sub-system controller can perform another type of error recovery operation based on whether the request is from a background operation. In one example, performing the error recovery operation for a request from the host system can include determining a set of demarcation voltages associated with the request from the host system. The set of demarcation voltages can include a number of demarcation voltages that is lower than a number of demarcation voltages associated with the request from the background operation. A starting voltage in the set of demarcation voltages can further correspond to a higher voltage than a starting voltage in a set of demarcation voltages associated with the request from the background operation. The memory sub-system controller can perform a read retry operation of a set of read retry operations on the set of memory cells using the set of demarcation voltages. The memory sub-system controller can determine an error rate associated with performing the read retry operation. The memory sub-system controller can determine that the error rate satisfies an error rate threshold (e.g., the read retry operation results in an error such as a UECC error, or uncorrectable error correction code error). In response to determining that the error rate satisfies the error rate threshold, the memory sub-system controller can perform another read retry operation using a sequential demarcation voltage of the set of demarcation voltages. In some embodiments, the memory sub-system controller can determine the error rate associated with performing each read retry operation of the set of read retry operations. In response to determining that the error rate associated with performing each read retry operation of the set of read retry operations satisfies the error rate threshold (e.g., each read retry operation results in an error such as a UECC), the memory sub-system controller can perform the error recovery operation associated with a request from a background operation. In one example, performing the error recovery operation for a request from a background operation can include determining a set of demarcation voltages associated with the request from the background operation. The set of demarcation voltages can include a number of demarcation voltages that is greater than the number of demarcation voltages associated with the request from the host system. A starting voltage in the set of demarcation voltages can further correspond to a lower voltage than a starting voltage in the set of demarcation voltages associated with the request from the host system. The memory sub-system controller can perform a read retry operation of a set of read retry operations on the set of memory cells using the set of demarcation voltages. The memory sub-system controller can determine an error rate associated with performing the read retry operation. The memory sub-system controller can determine that the error rate satisfies an error rate threshold (e.g., the read retry operation results in an error such as a UECC). 
In response to determining that the error rate satisfies the error rate threshold, the memory sub-system controller can perform another read retry operation using a sequential demarcation voltage of the set of demarcation voltages. Advantages of the present disclosure include, but are not limited to, improving the reliability and performance of memory devices by managing a hybrid error recovery process based on a request type for each memory access operation (e.g., from a host system or from a background operation). By having different sequences of read retry operations depending on the request type for each memory access operation, a memory sub-system can balance the trade-off between reliability and performance that conventional memory sub-systems face. Advantageously, the memory sub-system can have one type of error recovery process for memory access operations from a host system aimed at enhancing performance by reducing the number of read retry operations and increasing the demarcation voltage levels. The memory sub-system can have another type of error recovery process for memory access operations from a background operation aimed at increasing data reliability by increasing the number of read retry operations and decreasing demarcation voltage levels. Accordingly, both the performance and reliability of the memory device can be improved by applying a hybrid error recovery process that takes into account the type of memory access operation.

FIG. 1 illustrates an example computing system 100 that includes a memory sub-system 110 in accordance with some embodiments of the present disclosure. The memory sub-system 110 can include media, such as one or more volatile memory devices (e.g., memory device 140), one or more non-volatile memory devices (e.g., memory device 130), or a combination of such. A memory sub-system 110 can be a storage device, a memory module, or a combination of a storage device and memory module. Examples of a storage device include a solid-state drive (SSD), a flash drive, a universal serial bus (USB) flash drive, an embedded Multi-Media Controller (eMMC) drive, a Universal Flash Storage (UFS) drive, a secure digital (SD) card, and a hard disk drive (HDD). Examples of memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), and various types of non-volatile dual in-line memory modules (NVDIMMs). The computing system 100 can be a computing device such as a desktop computer, laptop computer, network server, mobile device, a vehicle (e.g., airplane, drone, train, automobile, or other conveyance), Internet of Things (IoT) enabled device, embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or such computing device that includes memory and a processing device. The computing system 100 can include a host system 120 that is coupled to one or more memory sub-systems 110. In some embodiments, the host system 120 is coupled to multiple memory sub-systems 110 of different types. FIG. 1 illustrates one example of a host system 120 coupled to one memory sub-system 110. As used herein, “coupled to” or “coupled with” generally refers to a connection between components, which can be an indirect communicative connection or direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc. The host system 120 can include a processor chipset and a software stack executed by the processor chipset.
The processor chipset can include one or more cores, one or more caches, a memory controller (e.g., NVDIMM controller), and a storage protocol controller (e.g., PCIe controller, SATA controller). The host system 120 uses the memory sub-system 110, for example, to write data to the memory sub-system 110 and read data from the memory sub-system 110. The host system 120 can be coupled to the memory sub-system 110 via a physical host interface. Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), a double data rate (DDR) memory bus, Small Computer System Interface (SCSI), a dual in-line memory module (DIMM) interface (e.g., DIMM socket interface that supports Double Data Rate (DDR)), etc. The physical host interface can be used to transmit data between the host system 120 and the memory sub-system 110. The host system 120 can further utilize an NVM Express (NVMe) interface to access components (e.g., memory devices 130) when the memory sub-system 110 is coupled with the host system 120 by the physical host interface (e.g., PCIe bus). The physical host interface can provide an interface for passing control, address, data, and other signals between the memory sub-system 110 and the host system 120. FIG. 1 illustrates a memory sub-system 110 as an example. In general, the host system 120 can access multiple memory sub-systems via a same communication connection, multiple separate communication connections, and/or a combination of communication connections. The memory devices 130, 140 can include any combination of the different types of non-volatile memory devices and/or volatile memory devices. The volatile memory devices (e.g., memory device 140) can be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM). Some examples of non-volatile memory devices (e.g., memory device 130) include a negative-and (NAND) type flash memory and write-in-place memory, such as a three-dimensional cross-point (“3D cross-point”) memory device, which is a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory cells can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. NAND type flash memory includes, for example, two-dimensional NAND (2D NAND) and three-dimensional NAND (3D NAND). Each of the memory devices 130 can include one or more arrays of memory cells. One type of memory cell, for example, single level cells (SLC) can store one bit per cell. Other types of memory cells, such as multi-level cells (MLCs), triple level cells (TLCs), quad-level cells (QLCs), and penta-level cells (PLCs) can store multiple bits per cell. In some embodiments, each of the memory devices 130 can include one or more arrays of memory cells such as SLCs, MLCs, TLCs, QLCs, PLCs or any combination of such. In some embodiments, a particular memory device can include an SLC portion, and an MLC portion, a TLC portion, a QLC portion, or a PLC portion of memory cells.
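For reference, the bits-per-cell values implied by the cell types named above can be tabulated as follows; the figures are the standard ones (one bit for SLC through five for PLC), and the snippet is a sketch for orientation, not code from this disclosure.

```python
# Standard bits-per-cell values for the cell types named above. A cell
# storing n bits of binary information has 2**n logic states.
BITS_PER_CELL = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4, "PLC": 5}

def logic_states(cell_type: str) -> int:
    return 2 ** BITS_PER_CELL[cell_type]

assert logic_states("SLC") == 2   # matches the two SLC logic states above
assert logic_states("QLC") == 16
```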
The memory cells of the memory devices 130 can be grouped as pages that can refer to a logical unit of the memory device used to store data. With some types of memory (e.g., NAND), pages can be grouped to form blocks. Some types of memory, such as 3D cross-point, can group pages across dice and channels to form management units (MUs). Although non-volatile memory components such as a 3D cross-point array of non-volatile memory cells and NAND type flash memory (e.g., 2D NAND, 3D NAND) are described, the memory device 130 can be based on any other type of non-volatile memory, such as read-only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide based memories, ferroelectric transistor random-access memory (FeTRAM), ferroelectric random access memory (FeRAM), magneto random access memory (MRAM), Spin Transfer Torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), negative-or (NOR) flash memory, or electrically erasable programmable read-only memory (EEPROM). A memory sub-system controller 115 (or controller 115 for simplicity) can communicate with the memory devices 130 to perform operations such as reading data, writing data, or erasing data at the memory devices 130 and other such operations. The memory sub-system controller 115 can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The hardware can include digital circuitry with dedicated (i.e., hard-coded) logic to perform the operations described herein. The memory sub-system controller 115 can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or other suitable processor. The memory sub-system controller 115 can include a processing device, which includes one or more processors (e.g., processor 117), configured to execute instructions stored in a local memory 119. In the illustrated example, the local memory 119 of the memory sub-system controller 115 includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory sub-system 110, including handling communications between the memory sub-system 110 and the host system 120. In some embodiments, the local memory 119 can include memory registers storing memory pointers, fetched data, etc. The local memory 119 can also include read-only memory (ROM) for storing micro-code. While the example memory sub-system 110 in FIG. 1 has been illustrated as including the memory sub-system controller 115, in another embodiment of the present disclosure, a memory sub-system 110 does not include a memory sub-system controller 115, and can instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory sub-system). In general, the memory sub-system controller 115 can receive commands or operations from the host system 120 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory devices 130.
The memory sub-system controller 115 can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical address (e.g., a logical block address (LBA), namespace) and a physical address (e.g., physical MU address, physical block address) that are associated with the memory devices 130. The memory sub-system controller 115 can further include host interface circuitry to communicate with the host system 120 via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access the memory devices 130 as well as convert responses associated with the memory devices 130 into information for the host system 120. The memory sub-system 110 can also include additional circuitry or components that are not illustrated. In some embodiments, the memory sub-system 110 can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the memory sub-system controller 115 and decode the address to access the memory devices 130. In some embodiments, the memory devices 130 include local media controllers 135 that operate in conjunction with memory sub-system controller 115 to execute operations on one or more memory cells of the memory devices 130. An external controller (e.g., memory sub-system controller 115) can externally manage the memory device 130 (e.g., perform media management operations on the memory device 130). In some embodiments, memory sub-system 110 is a managed memory device, which is a raw memory device 130 having control logic (e.g., local media controller 135) on the die and a controller (e.g., memory sub-system controller 115) for media management within the same memory device package. An example of a managed memory device is a managed NAND (MNAND) device. The memory sub-system 110 includes an error recovery management component 113 that can be used to apply a hybrid error recovery process for a memory device (e.g., the memory device 130). In some embodiments, the memory sub-system controller 115 includes at least a portion of the error recovery management component 113. In some embodiments, the error recovery management component 113 is part of the host system 120, an application, or an operating system. In other embodiments, local media controller 135 includes at least a portion of the error recovery management component 113 and is configured to perform the functionality described herein. The error recovery management component 113 can receive a request to perform a memory access operation on a set of memory cells of the memory device. The error recovery management component 113 can determine a request type associated with the memory access operation. For example, the request type could be one of a host input/output (I/O) operation or a background operation. In response to determining that the request type associated with the memory access operation is a certain request type (e.g., a host I/O operation), the error recovery management component 113 can perform an error recovery operation associated with the certain request type. In response to determining that the request type associated with the memory access operation is another request type (e.g., a background operation), the error recovery management component 113 can perform an error recovery operation associated with the other request type.
The error recovery management component 113 can determine the request type associated with the memory access operation based on a bit of a set of bits for the memory access operation. The request types can include a request from a host system and a request from a background operation. Performing the error recovery operation for a request from the host system can include determining a set of demarcation voltages associated with the request from the host system. The set of demarcation voltages can include a number of demarcation voltages that is less than a number of demarcation voltages associated with a request from the background operation. The set of demarcation voltages can further include a base value demarcation voltage that corresponds to a higher voltage than a base value demarcation voltage of a set of demarcation voltages associated with a request from the background operation. The error recovery management component 113 can perform a read operation (e.g., a read retry operation) of a set of read operations on the set of memory cells using a demarcation voltage of the set of demarcation voltages. The error recovery management component 113 can determine an error rate associated with performing the read operation. In response to determining that the error rate associated with performing the read operation satisfies an error rate threshold (e.g., the read operation results in a read error or UECC error), the error recovery management component 113 can perform another read operation on the set of memory cells using a sequential demarcation voltage of the set of demarcation voltages. In some embodiments, the error recovery management component 113 can determine the error rate associated with performing each read operation of the set of read operations. In response to determining that the error rate associated with performing each read operation of the set of read operations satisfies the error rate threshold (e.g., the read operation results in a read error or UECC error), the error recovery management component 113 can perform the error recovery operation associated with the other request type (i.e., the background operation). Performing the error recovery operation for a request from the background operation can include determining a set of demarcation voltages associated with the request from the background operation. The set of demarcation voltages can include a number of demarcation voltages that is greater than the number of demarcation voltages associated with the request from the host system. The set of demarcation voltages can further include a base value demarcation voltage that corresponds to a lower voltage than the base value demarcation voltage of the set of demarcation voltages associated with the request from the host system. The error recovery management component 113 can perform a read operation (e.g., a read retry operation) on the set of memory cells using a demarcation voltage of the set of demarcation voltages. The error recovery management component 113 can determine an error rate associated with performing the read operation. In response to determining that the error rate associated with performing the read operation satisfies an error rate threshold (e.g., the read operation results in a read error or UECC error), the error recovery management component 113 can perform another read operation on the set of memory cells using a sequential demarcation voltage of the set of demarcation voltages.
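To make the hybrid flow just described concrete, the sketch below decodes a hypothetical request-type flag bit and then walks an escalating sequence of demarcation voltages until a read retry passes the error rate test. The flag-bit position, the voltage values, and the read_at() hook are placeholder assumptions; actual sequences would come from media characterization of the memory device, as noted above.

```python
# Illustrative sketch of the hybrid error recovery flow; not an
# implementation of the claimed controller. All constants are hypothetical.

REQUEST_TYPE_BIT = 0x01  # assumed flag bit: 0 = host I/O, 1 = background

# Host sequence: fewer retries, higher base (starting) demarcation voltage.
# Background sequence: more retries, lower base demarcation voltage.
HOST_SEQ = [2.02, 2.02, 2.03]                          # volts, illustrative
BACKGROUND_SEQ = [2.01, 2.01, 2.01, 2.02, 2.02, 2.03]  # volts, illustrative

def request_type_of(op_bits: int) -> str:
    """Decode the assumed request-type flag bit of an operation encoding."""
    return "background" if op_bits & REQUEST_TYPE_BIT else "host"

def recover(cells, request_type, read_at, error_rate_threshold):
    """Step through the per-type demarcation sequence until a retry passes."""
    seq = HOST_SEQ if request_type == "host" else BACKGROUND_SEQ
    for vdm in seq:
        error_rate = read_at(cells, vdm)  # one read retry at this voltage
        if error_rate < error_rate_threshold:
            return vdm, error_rate        # recovery succeeded
    if request_type == "host":
        # Every host-sequence retry still failed the error rate test
        # (e.g., a UECC error on the final retry): fall back to the longer,
        # lower-starting background sequence, as described above.
        return recover(cells, "background", read_at, error_rate_threshold)
    return None  # uncorrectable with the configured sequences
```

Note the design choice mirrored from the text: the host-oriented sequence favors latency (fewer, higher-voltage retries) and only escalates to the reliability-oriented background sequence once it is exhausted.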
Further details with regards to the operations of the error recovery management component 113 are described below.

FIG. 2 illustrates an example graph of threshold voltage programming distributions with drifts over time for a memory device, in accordance with some embodiments of the present disclosure. Threshold voltage programming distributions for memory cells can drift over time due to one or more characteristics of a memory sub-system, such as cycling conditions, changes in die temperature, and read/write disturb conditions. FIG. 2 illustrates the threshold voltage programming distributions for two regions of memory cells, a set cell region 210 and a reset cell region 220. The set cell region 210 can establish a logic state value of “1,” and the reset cell region 220 can establish a logic state value of “0.” Each region can experience a threshold voltage that drifts over time. For example, the threshold voltage programming distribution for the set cell region 210 can drift from drift 250 to drift 260 and further. In conventional memory sub-systems, there can be an error recovery process that includes the use of multiple different read retry demarcation voltages. For example, as illustrated in FIG. 2, there can be an error recovery process that includes a base demarcation voltage 201, a second demarcation voltage 202, and a third demarcation voltage 203. Each demarcation voltage can cover the voltage drift range over different periods of time. For example, the demarcation voltage 201 can cover a time range of 1 microsecond to a few seconds. The demarcation voltage 202 can cover a time range of a few seconds to a few hours. The demarcation voltage 203 can cover a time range of a few hours to longer periods of time. However, as discussed herein above, executing a fixed sequence of read retry operations can result in the application of a read retry demarcation voltage that causes read corruption. For example, applying a higher-than-optimal read retry demarcation voltage, such as the demarcation voltage 202, to one or more memory cells of the set cell region 210 can result in an incorrectly read cell.

FIG. 3 is a flow diagram of an example method 300 for managing a hybrid error recovery flow process in a memory device, in accordance with some embodiments of the present disclosure. The method 300 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 300 is performed by the error recovery management component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.

At operation 304, the processing logic receives a request to perform a memory access operation on a set of memory cells of a memory device. The memory access operation can include operations such as read, write, wear leveling, data refresh, and erase operations.
In some embodiments, the processing logic can receive the request from a host system or a controller component of the memory device. In some embodiments, in response to receiving the request to perform the memory access operation, the processing logic can determine that the set of memory cells is associated with a disturb error, e.g., a write disturb error and/or a read disturb error. In some embodiments, in response to receiving the request to perform the memory access operation, the processing logic can determine that the set of memory cells is associated with a data degradation.

At operation 306, the processing logic determines a request type associated with the memory access operation. In some embodiments, determining the request type associated with the memory access operation can include determining the request type based on a bit of a set of bits representing the memory access operation. For example, the memory access operation can include a bit indicating the request type based on a fixed value represented by the bit (e.g., 0 or 1). In some embodiments, the request type can include a request from the host system and/or a request from the controller component to perform a background operation. In some embodiments, the processing logic can determine that the request type associated with the memory access operation is a request from the host system. In some embodiments, the processing logic can determine that the request type associated with the memory access operation is a request to perform a background operation.

At operation 308, the processing logic performs an error recovery process associated with the request from the host system. In some embodiments, the processing logic performs the error recovery process in response to determining that the request type associated with the memory access operation is the request from the host system. In some embodiments, performing the error recovery operation associated with the request from the host system can include identifying a set of demarcation voltages associated with the request from the host system. For example, the set of demarcation voltages associated with the request from the host system can be based on predefined media characterization of the memory device. The set of demarcation voltages associated with the request from the host system can vary based on the characteristics of the memory device. In some embodiments, the set of demarcation voltages can include a number of demarcation voltages less than a number of demarcation voltages associated with the request to perform the background operation. For example, as illustrated in FIG. 2, the set of demarcation voltages associated with the request from the host system can include a set of read operations (e.g., read retry operations) including demarcation voltage 202 and demarcation voltage 203 (e.g., demarcation voltage 202—demarcation voltage 202—demarcation voltage 203). In some embodiments, a base demarcation voltage (i.e., starting value demarcation voltage) of the set of demarcation voltages can correspond to a higher voltage than a base demarcation voltage of a set of demarcation voltages associated with the request to perform the background operation. In some embodiments, the processing logic can perform a read retry operation of the set of read retry operations. The processing logic can perform the read retry operation using a demarcation voltage of the set of demarcation voltages.
For example, the processing logic can perform the read retry operation using the base demarcation voltage. Performing the read retry can include reading the data stored at the set of memory cells using the base demarcation voltage. In some embodiments, the processing logic can determine an error rate associated with performing the read retry operation. In some embodiments, the processing logic can determine that the error rate associated with performing the read retry operation satisfies an error rate threshold. Determining that the error rate satisfies the error rate threshold can include determining that the error rate associated with performing the read retry operation resulted in a read error or a UECC error. In some embodiments, in response to determining the error rate satisfies the error rate threshold, the processing logic can perform another read retry operation of the set of read operations on the set of memory cells. In some embodiments, the other read retry operation can be performed using a sequential demarcation voltage of the set of demarcation voltages. For example, as illustrated in FIG. 2, the sequential demarcation voltage to base demarcation voltage 202 is demarcation voltage 203. In some embodiments, the processing logic can determine the error rate associated with performing the other read retry operation. If the processing logic determines that the error rate associated with performing the other read retry operation satisfies the error rate threshold, the processing logic can perform a third read retry operation on the set of memory cells. In some embodiments, the processing logic can repeat the steps at operation 308 for each read retry operation of the set of read retry operations. In some embodiments, the processing logic can determine an error rate for a final read retry operation of the set of read retry operations. In response to determining that the error rate for the final read retry operation satisfies the error rate threshold, the processing logic can perform an error recovery process associated with the request to perform the background operation. Performing the error recovery process associated with the request to perform the background operation can include identifying the set of demarcation voltages associated with the request to perform the background operation. For example, the set of demarcation voltages associated with the request to perform the background operation can be based on predefined media characterization of the memory device. The set of demarcation voltages associated with the request to perform the background operation can vary based on the characteristics of the memory device. In some embodiments, the set of demarcation voltages can include a number of demarcation voltages greater than the number of demarcation voltages associated with the request from the host system. For example, as illustrated in FIG. 2, the set of demarcation voltages associated with the request to perform the background operation can include a set of read retry operations including demarcation voltage 201, demarcation voltage 202, and demarcation voltage 203 (e.g., demarcation voltage 201—demarcation voltage 201—demarcation voltage 201—demarcation voltage 202—demarcation voltage 202—demarcation voltage 203). In some embodiments, a base demarcation voltage (i.e., starting value demarcation voltage) of the set of demarcation voltages can correspond to a lower voltage than the base demarcation voltage of the set of demarcation voltages associated with the request from the host system.
For example, as illustrated in FIG. 2, the base demarcation voltage can be demarcation voltage 201. In some embodiments, the processing logic can perform a read retry operation of the set of read retry operations. The processing logic can perform the read retry operation using a demarcation voltage of the set of demarcation voltages. For example, the processing logic can perform the read retry operation using the base demarcation voltage. Performing the read retry can include reading the data stored at the set of memory cells using the base demarcation voltage. In some embodiments, the processing logic can determine an error rate associated with performing the read retry operation. In some embodiments, the processing logic can determine that the error rate associated with performing the read retry operation satisfies the error rate threshold. Determining that the error rate satisfies the error rate threshold can include determining that the error rate associated with performing the read retry operation resulted in a read error or a UECC error. In some embodiments, in response to determining the error rate satisfies the error rate threshold, the processing logic can perform another read retry operation of the set of read operations on the set of memory cells. In some embodiments, the other read retry operation can be performed using a sequential demarcation voltage of the set of demarcation voltages. For example, as illustrated in FIG. 2, the sequential demarcation voltage to base demarcation voltage 201 is demarcation voltage 202. In some embodiments, the processing logic can determine the error rate associated with performing the other read retry operation. If the processing logic determines that the error rate associated with performing the other read retry operation satisfies the error rate threshold, the processing logic can perform a third read retry operation on the set of memory cells. In some embodiments, the processing logic can repeat the steps at operation 308 for each read retry operation of the set of read retry operations.

At operation 310, the processing logic performs the error recovery process associated with the request to perform the background operation. In some embodiments, the processing logic performs the error recovery process in response to determining that the request type associated with the memory access operation is the request to perform the background operation.

FIG. 4 is a flow diagram of an example method 400 for managing a hybrid error recovery flow process in a memory device, in accordance with some embodiments of the present disclosure. The method 400 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 400 is performed by the error recovery management component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.
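Tying the operations of method 300 together, a compact driver might look like the following sketch, which reuses the hypothetical request_type_of() and recover() helpers from the earlier sketch and is likewise illustrative only.

```python
# Illustrative driver for operations 304-310, reusing the hypothetical
# request_type_of() and recover() helpers sketched earlier.

def handle_memory_access_error(op_bits, cells, read_at, error_rate_threshold):
    rtype = request_type_of(op_bits)  # operation 306: decode request type
    # Operation 308 or 310: run the recovery process matching the request
    # type; recover() itself escalates from the host sequence to the
    # background sequence when every host-sequence retry fails.
    return recover(cells, rtype, read_at, error_rate_threshold)
```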
At operation 404, the processing logic receives a request to perform a memory access operation on a set of memory cells of a memory device. The memory access operation can include operations such as read, write, wear leveling, data refresh, and erase operations. In some embodiments, the processing logic can receive the request from a host system or a controller component of the memory device. In some embodiments, in response to receiving the request to perform the memory access operation, the processing logic can determine that the set of memory cells is associated with a disturb error, e.g., a write disturb error and/or a read disturb error. In some embodiments, in response to receiving the request to perform the memory access operation, the processing logic can determine that the set of memory cells is associated with a data degradation.

At operation 406, the processing logic determines a request type associated with the memory access operation. In some embodiments, determining the request type associated with the memory access operation can include determining the request type based on a bit of a set of bits representing the memory access operation. For example, the memory access operation can include a bit indicating the request type based on a fixed value represented by the bit (e.g., 0 or 1). In some embodiments, the request type can include a request from the host system and/or a request from the controller component to perform a background operation. In some embodiments, the processing logic can determine that the request type associated with the memory access operation is a request from the host system. In some embodiments, the processing logic can determine that the request type associated with the memory access operation is a request to perform a background operation.

At operation 408, the processing logic performs an error recovery process associated with the request from the host system. In some embodiments, the processing logic performs the error recovery process in response to determining that the request type associated with the memory access operation is the request from the host system. In some embodiments, performing the error recovery operation associated with the request from the host system can include identifying a set of demarcation voltages associated with the request from the host system. For example, the set of demarcation voltages associated with the request from the host system can be based on predefined media characterization of the memory device. The set of demarcation voltages associated with the request from the host system can vary based on the characteristics of the memory device. In some embodiments, the set of demarcation voltages can include a number of demarcation voltages less than a number of demarcation voltages associated with the request to perform the background operation. For example, as illustrated in FIG. 2, the set of demarcation voltages associated with the request from the host system can include a set of read operations (e.g., read retry operations) including demarcation voltage 202 and demarcation voltage 203 (e.g., demarcation voltage 202—demarcation voltage 202—demarcation voltage 203). In some embodiments, a base demarcation voltage (i.e., starting value demarcation voltage) of the set of demarcation voltages can correspond to a higher voltage than a base demarcation voltage of a set of demarcation voltages associated with the request to perform the background operation.
For example, as illustrated in FIG. 2, the base demarcation voltage can be demarcation voltage 202. In some embodiments, the processing logic can perform a read retry operation of the set of read retry operations. The processing logic can perform the read retry operation using a demarcation voltage of the set of demarcation voltages. For example, the processing logic can perform the read retry operation using the base demarcation voltage. Performing the read retry operation can include reading the data stored at the set of memory cells using the base demarcation voltage. In some embodiments, the processing logic can determine an error rate associated with performing the read retry operation. In some embodiments, the processing logic can determine that the error rate associated with performing the read retry operation satisfies an error rate threshold. Determining that the error rate satisfies the error rate threshold can include determining that the error rate associated with performing the read retry operation resulted in a read error or a UECC error. In some embodiments, in response to determining that the error rate satisfies the error rate threshold, the processing logic can perform another read retry operation of the set of read retry operations on the set of memory cells. In some embodiments, the other read retry operation can be performed using a sequential demarcation voltage of the set of demarcation voltages. For example, as illustrated in FIG. 2, the sequential demarcation voltage to base demarcation voltage 202 is demarcation voltage 203. In some embodiments, the processing logic can determine the error rate associated with performing the other read retry operation. If the processing logic determines that the error rate associated with performing the other read retry operation satisfies the error rate threshold, the processing logic can perform a third read retry operation on the set of memory cells. In some embodiments, the processing logic can repeat these steps at operation 408 for each read retry operation of the set of read retry operations.

At operation 410, the processing logic determines an error rate associated with performing a set of read retry operations. In some embodiments, the processing logic determines the error rate associated with performing the set of read retry operations in response to performing the error recovery process associated with the request from the host system. In some embodiments, determining the error rate associated with performing the set of read retry operations can include determining an error rate for each read retry operation of the set of read retry operations. The processing logic can determine that an error rate for a final read retry operation of the set of read retry operations satisfies the error rate threshold (e.g., performing the read retry operation results in a read error or a UECC error).

At operation 412, the processing logic performs an error recovery process associated with the request to perform the background operation. In some embodiments, the processing logic performs the error recovery process associated with the request to perform the background operation in response to determining that the error rate associated with performing the set of read retry operations satisfies the error rate threshold. Performing the error recovery process associated with the request to perform the background operation can include identifying the set of demarcation voltages associated with the request to perform the background operation.
For example, the set of demarcation voltages associated with the request to perform the background operation can be based on a predefined media characterization of the memory device. The set of demarcation voltages associated with the request to perform the background operation can vary based on the characteristics of the memory device. In some embodiments, the set of demarcation voltages can include a number of demarcation voltages greater than the number of demarcation voltages associated with the request from the host system. For example, as illustrated in FIG. 2, the set of demarcation voltages associated with the request to perform the background operation can be used in a set of read retry operations including demarcation voltage 201, demarcation voltage 202, and demarcation voltage 203 (e.g., demarcation voltage 201—demarcation voltage 201—demarcation voltage 201—demarcation voltage 202—demarcation voltage 202—demarcation voltage 203). In some embodiments, a base demarcation voltage (i.e., a starting value demarcation voltage) of the set of demarcation voltages can correspond to a lower voltage than the base demarcation voltage of the set of demarcation voltages associated with the request from the host system. For example, as illustrated in FIG. 2, the base demarcation voltage can be demarcation voltage 201. In some embodiments, the processing logic can perform a read retry operation of the set of read retry operations. The processing logic can perform the read retry operation using a demarcation voltage of the set of demarcation voltages. For example, the processing logic can perform the read retry operation using the base demarcation voltage. Performing the read retry operation can include reading the data stored at the set of memory cells using the base demarcation voltage. In some embodiments, the processing logic can determine an error rate associated with performing the read retry operation. In some embodiments, the processing logic can determine that the error rate associated with performing the read retry operation satisfies the error rate threshold. Determining that the error rate satisfies the error rate threshold can include determining that the error rate associated with performing the read retry operation resulted in a read error or a UECC error. In some embodiments, in response to determining that the error rate satisfies the error rate threshold, the processing logic can perform another read retry operation of the set of read retry operations on the set of memory cells. In some embodiments, the other read retry operation can be performed using a sequential demarcation voltage of the set of demarcation voltages. For example, as illustrated in FIG. 2, the sequential demarcation voltage to base demarcation voltage 201 is demarcation voltage 202. In some embodiments, the processing logic can determine the error rate associated with performing the other read retry operation. If the processing logic determines that the error rate associated with performing the other read retry operation satisfies the error rate threshold, the processing logic can perform a third read retry operation on the set of memory cells. In some embodiments, the processing logic can repeat these steps at operation 412 for each read retry operation of the set of read retry operations.
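The hybrid flow of method 400 can be summarized by combining the request-type dispatch of operation 406 with the two recovery flows, reusing the read_with_retries sketch shown earlier. The voltage sets and callables are caller-supplied assumptions rather than values taken from the description.

```python
def hybrid_error_recovery(request_is_host, cells, host_voltages,
                          background_voltages, threshold,
                          read_cells, error_rate):
    """Hybrid recovery: a host request tries the shorter host voltage set
    first (operation 408) and escalates to the longer background set only
    if every host-set retry fails (operations 410-412); a background
    request uses the background set directly (operation 310)."""
    if request_is_host:
        result = read_with_retries(cells, host_voltages, threshold,
                                   read_cells, error_rate)
        if result is not None:
            return result
    # escalate (or go directly) to the background flow, whose base
    # demarcation voltage is lower and whose voltage set is longer
    return read_with_retries(cells, background_voltages, threshold,
                             read_cells, error_rate)
```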
FIG. 5 illustrates an example machine of a computer system 500 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed. In some embodiments, the computer system 500 can correspond to a host system (e.g., the host system 120 of FIG. 1) that includes, is coupled to, or utilizes a memory sub-system (e.g., the memory sub-system 110 of FIG. 1) or can be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to the error recovery management component 113 of FIG. 1). In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment. The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

The example computer system 500 includes a processing device 502, a main memory 504 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or RDRAM, etc.), a static memory 506 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system 518, which communicate with each other via a bus 530. Processing device 502 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 502 can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. The processing device 502 is configured to execute instructions 526 for performing the operations and steps discussed herein. The computer system 500 can further include a network interface device 508 to communicate over the network 520.

The data storage system 518 can include a machine-readable storage medium 524 (also known as a computer-readable medium) on which is stored one or more sets of instructions 526 or software embodying any one or more of the methodologies or functions described herein. The instructions 526 can also reside, completely or at least partially, within the main memory 504 and/or within the processing device 502 during execution thereof by the computer system 500, the main memory 504 and the processing device 502 also constituting machine-readable storage media. The machine-readable storage medium 524, data storage system 518, and/or main memory 504 can correspond to the memory sub-system 110 of FIG. 1.
In one embodiment, the instructions 526 include instructions to implement functionality corresponding to an error recovery management component (e.g., the error recovery management component 113 of FIG. 1). While the machine-readable storage medium 524 is shown in an example embodiment to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term "machine-readable storage medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term "machine-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.

Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems.

The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.

The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below.
In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein.

The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory components, etc.

In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. | 63,916 |
11861179 | DETAILED DESCRIPTION

XIP, as described above, can introduce security challenges. In some cases, the components are controllers. In other cases, the components are any suitable components that access and/or execute data or executable code from flash storage. XIP code of the flash storage may be verified during a boot operation of the controller, but not at a time of reading the code (e.g., run-time) from the flash storage for execution by the controller. For example, some approaches for verifying the XIP code of the flash storage at run-time, such as approaches that verify an entirety of the XIP code or the flash storage, may degrade performance of the controller by an amount sufficient to render the controller unsuitable for certain application environments. The absence of run-time verification may create a run-time attack vector in which, after the validation of the code during the boot operation, a malicious actor could modify the code that is to be executed by the controller or inject malicious code into a data stream provided from the flash storage to the controller. Some approaches exist for mitigating these attacks, such as validation of the code at run-time. However, such approaches can be disadvantageous in that they require comparatively large amounts of time and storage capacity (and therefore incur the physical device space cost associated with such storage), and they increase latency in operation of the controller.

Aspects of this description provide for validating integrity of code snippets during the boot operation of the controller. For example, for each block of data on the flash storage, a digest may be formed that is an output of a hash function. A block of data, as used herein, is a unit of data of the flash storage having a programmed size. A bit pattern, based on a randomly generated number, may be determined for each region of the flash storage, and a portion of the digest may be preserved as the snippet according to that bit pattern. A region of the flash storage, as used herein, is an addressable portion of the flash storage that is defined by a start offset and an end offset, and may vary in size and/or location, from one implementation to another, in the flash storage. Data, and a block of data as described above, may belong to only one region of the flash storage such that regions do not overlap or conflict. In some examples, the randomly generated number, and therefore the bit pattern, may be changed responsive to a device or system reset. The preserved bits of the digest may be stored as the snippet for a respective block of data of the flash storage. The bit pattern may vary in length from one digest to another. In this way, a greater level of security may be applied to some blocks of data of the flash storage than to other blocks of data.

At run-time, a hash of a block of data being provided to the controller for execution may be determined, and data bits of that hash, as determined by the bit pattern, compared to the stored snippet for the respective block of data on the flash storage. Responsive to a determination that the data is a match, execution of the data may proceed. Responsive to a determination that the data is not a match, execution of the data may be halted. In other examples, the flash storage may receive and/or provide a bus fault responsive to a determination that the data is not a match.
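The snippet formation described above can be sketched in Python. The packed-integer output and the bit-ordering convention are illustrative assumptions; the description requires only that the digest bits selected by the bit pattern be preserved.

```python
def extract_snippet(digest: bytes, bit_pattern: int, digest_bits: int) -> int:
    """Preserve only the digest bits whose positions hold a logical 1 in
    `bit_pattern` (a mask of width `digest_bits`), packing them into an
    integer snippet. Bit position 0 is taken as the least significant
    bit of the digest, an arbitrary convention chosen for illustration."""
    value = int.from_bytes(digest, "big")
    snippet = 0
    for pos in range(digest_bits):
        if (bit_pattern >> pos) & 1:          # pattern bit is logical 1
            snippet = (snippet << 1) | ((value >> pos) & 1)
    return snippet
```

Because the pattern can carry a different number of logical-1 bits for different regions, the same function yields longer snippets, and therefore a greater level of security, for some blocks than for others.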
FIG. 1 is a block diagram of an electronic device 100, in accordance with various examples. In at least some examples, the electronic device 100 includes a flash storage device 102 and a circuit 104. The electronic device 100 may be generally representative of any suitable device that includes the flash storage device 102 and the circuit 104, such as a computer, a notebook, a smartphone, a sensor, a wearable device, a tablet device, an Internet of Things device, or the like. The flash storage device 102 is off-device storage from a perspective of the circuit 104 (e.g., the flash storage device 102 and the circuit 104 are not in a same electrical component package). The circuit 104 may be a system on a chip (SoC) or other circuit that includes a combination of components. For example, the circuit 104 may include a flash interface 106, a data integrity engine 108, and a controller 110. Although not shown in FIG. 1, in some examples, the circuit 104 also includes a cache, such as for temporary storage of data for, or by, the controller 110. The circuit 104 may also include additional circuitry (not shown) such as an on-the-fly decryption circuit that decrypts encrypted data received from the flash storage device 102 via the flash interface 106 for use by the data integrity engine 108 and/or the controller 110.

The flash interface 106 is any suitable interface that enables the data integrity engine 108 and/or the controller 110 to interact with, communicate with, and/or read data from the flash storage device 102. In some examples, the flash interface 106 is implemented according to a serial peripheral interface (SPI) methodology, such as quad-SPI or octal-SPI.

The data integrity engine 108 is configured to determine data integrity of data provided by the flash storage device 102 to the controller 110. Data integrity, as used herein, may indicate fidelity of the data across a period of time, such as indicating, at a second time at which the data (or a portion of the data) is analyzed, whether the data remains intact (e.g., unchanged) since a first time that the data (or a portion of the data) was analyzed. For each block of the flash storage device 102, the data integrity engine 108 may determine a snippet. As used herein, a snippet is an extract of data. For example, a snippet determined by the data integrity engine 108 for a block of the flash storage device 102 may include an extract or portion of data of the digest (e.g., hash result) of that block of the flash storage device 102. In some examples, the snippet is extracted from the block according to a determined bit pattern. The snippet may be in plain text, encrypted, hashed, or in any other suitable format. An amount of data of the block that is included in the snippet (e.g., a length or size of the determined bit pattern) may be programmable. In this way, a first snippet for a first block of the flash storage device 102 and belonging to a first region may have a different size than a second snippet of a second block of the flash storage device 102 belonging to a second region. A snippet of larger size, or greater length, than another snippet may provide a greater level of security (or a higher level of confidence in determined integrity) than the other snippet. In some examples, the data integrity engine 108 determines a snippet for each block of the flash storage device 102, or each block of the flash storage device 102 that includes data, during a boot or startup sequence of the electronic device 100, circuit 104, and/or controller 110. The data integrity engine 108 may store each snippet locally, such as in a storage circuit (not shown) of the data integrity engine 108.
Responsive to the controller 110 accessing data from the flash storage device 102, the data integrity engine 108 may validate the integrity of the data. For example, the data integrity engine 108 may obtain a test snippet corresponding to a block of the flash storage device 102 in which the data is stored. In some examples, the test snippet is formed in a manner substantially the same as the snippet described above. The data integrity engine 108 may then compare the test snippet to the snippet previously stored for that block. Responsive to a determination that the test snippet matches the stored snippet, the controller 110 may receive the data and/or execute the data, such as if the data is executable code. Responsive to a determination that the test snippet does not match the stored snippet, the data integrity engine 108 may prevent the controller 110 from receiving the data and/or executing the data, or may provide an indication of a validation failure to the controller 110 to enable the controller 110 to make, or solicit the providing of, a determination for how to proceed with respect to the data.

The controller 110 may be any suitable controller, microcontroller, processor, or logic that receives, manipulates, processes, executes, or otherwise interacts with data or executable code. In at least some examples, the controller 110 provides control information to the data integrity engine 108. For example, the controller 110 may provide control information to the data integrity engine 108 including the randomly generated number, a number of bits for the bit pattern, etc. In some examples, the controller 110 provides a signal to the data integrity engine 108 that causes the data integrity engine 108 to reset, changing the bit pattern and/or the number of bits for the bit pattern.

In at least some examples, the data integrity engine 108 determining whether the snippet stored by the data integrity engine 108 matches data from a read block of the flash storage device 102 provides for determining integrity and/or validation of the data. The determination of integrity or validation of the data may indicate whether the data remains intact (e.g., unchanged) since the boot or startup sequence, or reset, of the electronic device 100, circuit 104, data integrity engine 108, and/or controller 110. In some examples, the determination is not absolute. For example, in implementations in which a snippet contains fewer than all bits of data of a block of the flash storage device 102, the determination may be with respect to the data bits included in the snippet, with those data bits functioning as a proxy for all data bits of the block for the purpose of determining integrity and/or validation of the data. Generally, a level of confidence in the determined integrity of data of a block of the flash storage device 102 may be related to a size of the snippet, where a larger snippet corresponds to a higher level of confidence in the integrity of the data on which the snippet is based. By basing the determination on snippets of the data of blocks of the flash storage device 102, the storage overhead of the data integrity engine 108 and the latency in the controller 110 acting on data from the flash storage device 102 may be reduced in comparison to basing the determination on the full data of the blocks of the flash storage device 102. In some examples, the bit pattern, and therefore the snippets, may be refreshed. For example, the controller 110 may reset the data integrity engine 108, or the electronic device 100, circuit 104, data integrity engine 108, and/or controller 110 may be reset.
Responsive to the reset, the controller 110 may provide another random number to the data integrity engine 108, and the data integrity engine 108 may determine a new bit pattern based on that random number. The data integrity engine 108 may determine new snippets for each block of the flash storage device 102 based on the new bit pattern, without the electronic device 100, circuit 104, and/or controller 110 performing the secure boot sequence described above. In this way, the data security and integrity validation provided by the data integrity engine 108 may be strengthened by periodically refreshing the snippets so that they are determined according to a new bit pattern based on a new random number.

FIG. 2 is a block diagram of the data integrity engine 108, in accordance with various examples. In at least some examples, the data integrity engine 108 includes storage 202 and a processing circuit 204. The storage 202 may be referred to as a local memory for the processing circuit 204, for example, such that the storage 202 and the processing circuit 204 may be implemented in a same electrical component package, in a same integrated circuit (IC), on a same semiconductor die, etc. In at least some examples, the storage 202 is a random-access memory (RAM). The processing circuit 204 may be any component suitable for performing data processing according to instructions stored in the storage 202 or programmed to the processing circuit 204, such as a processor, field programmable gate array (FPGA), controller, microcontroller, logic circuit, application-specific integrated circuit (ASIC), etc.

In operation, the data integrity engine 108 determines snippets of data of blocks of the flash storage device 102 of FIG. 1 and validates the integrity of data read from the flash storage device 102 by the controller 110 based on those determined snippets. For example, during a boot or startup sequence of the electronic device 100, circuit 104, and/or controller 110, the data integrity engine 108 may determine a digest for each block of the flash storage device 102. A digest, as used herein, may be a value determined based on the data of a block of the flash storage device 102 and a unique identifier of the flash storage device 102. In some examples, the unique identifier is a key or signature of the flash storage device 102, such as a public key of the flash storage device 102. In some implementations, the digest for a block of the flash storage device 102 is determined by keyed-hashing of the data of the block. For example, the data integrity engine 108 may perform keyed-hash message authentication code (HMAC) hashing in which the data integrity engine 108 determines a hash value (e.g., the digest) for a block of the flash storage device 102 based on the data of the block and the unique identifier of the flash storage device 102. In some examples, the hashing is performed according to HMAC Secure Hash Algorithm (SHA) 256 encrypted hashing. In some examples, the hash value is determined by the processing circuit 204. In other examples, the hash value is determined by a dedicated circuit (not shown) such as an accelerator (e.g., a hash accelerator specifically programmed to perform a specific type of hashing).
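Because the description names HMAC-SHA256, the per-block digest can be illustrated with Python's standard hmac and hashlib modules; treating the unique identifier of the flash storage device as the HMAC key follows the keyed-hashing description above.

```python
import hashlib
import hmac

def block_digest(block_data: bytes, device_unique_id: bytes) -> bytes:
    """Keyed digest of one flash block: HMAC-SHA256 over the block data,
    keyed with the device's unique identifier (e.g., its public key)."""
    return hmac.new(device_unique_id, block_data, hashlib.sha256).digest()
```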
By determining the hash value according to both the data of the block of the flash storage device 102 and the unique identifier of the flash storage device 102, the validation by the data integrity engine 108 of the integrity of data read from the flash storage device 102 both authenticates the data as being unchanged since the boot or startup sequence and provides attestation that, at run-time, the flash storage device 102 is a genuine and/or authenticated device.

The data integrity engine 108 determines a bit pattern based on a random number. In some examples, the random number is received from the controller 110. In other examples, the random number is provided by a random number generator (not shown), such as a pseudorandom binary sequence (PRBS) circuit or other circuit suitable for hardware-based generation and provision of a random number. In yet other examples, the random number is provided through executable code or instructions, such as executed by the processing circuit 204. In at least some examples, the random number may be changed each time the electronic device 100 is power cycled, such as each time the boot or startup sequence of the electronic device 100, circuit 104, and/or controller 110 is executed. Based on the random number, the data integrity engine 108 determines a bit pattern. For example, the bit pattern may be a sequence of data bits where a portion of the data bits have a value of logical 1 and a portion of the data bits have a value of logical 0. Bits of the digest are preserved according to the bit pattern (or, alternatively, bits of the digest are discarded according to the bit pattern). For example, the processing circuit 204 processes the digest to preserve bits of the digest having a position corresponding to a bit of the bit pattern that has a value of logical 1. In other examples, the processing circuit 204 processes the digest to discard bits of the digest having a position corresponding to a bit of the bit pattern that has a value of logical 0. In some examples, the same random number, and therefore the same bit pattern, is used for each block of the flash storage device 102. In other examples, different random numbers are used for at least some of the blocks of the flash storage device 102. Each snippet may be stored in the storage 202 for subsequent recall by the data integrity engine 108 for validating the integrity of data read from the flash storage device 102.

At run-time of the controller 110, the data integrity engine 108 determines a digest, as described above, for a block of the flash storage device 102 that includes data requested by the controller 110 and that is to be provided to the controller 110. Subsequent to determining the digest, the data integrity engine 108 obtains a previously stored snippet corresponding to the block of the flash storage device 102 from the storage 202 and compares the snippet to a portion of the newly determined digest. Responsive to determining that the snippet matches the portion of the newly determined digest, the data integrity engine 108 determines that the block of the flash storage device 102 is validated (e.g., the block passes the integrity validation of the data integrity engine 108). Responsive to determining that the snippet does not match the portion of the newly determined digest, the data integrity engine 108 determines that the block of the flash storage device 102 is not validated (e.g., the block fails the integrity validation of the data integrity engine 108).
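One possible derivation of the bit pattern from the random number, with a programmable number of preserved bits, is sketched below. The derivation scheme (seeding a pseudorandom generator and sampling positions) is purely an assumption made for illustration; the description requires only that the pattern be based on a random number.

```python
import random

def make_bit_pattern(seed: int, digest_bits: int, preserved_bits: int) -> int:
    """Derive a bit pattern from a random-number seed by choosing
    `preserved_bits` positions out of `digest_bits` to mark with a
    logical 1 (preserve); all other positions stay logical 0 (discard)."""
    rng = random.Random(seed)                 # reproducible for a given seed
    pattern = 0
    for pos in rng.sample(range(digest_bits), preserved_bits):
        pattern |= 1 << pos
    return pattern
```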
FIG. 3 is a block diagram of a method 300, in accordance with various examples. In at least some examples, the method 300 is implemented in a device, such as the electronic device 100 of FIG. 1, or a system. Accordingly, reference may be made to at least some components of FIG. 1 in describing the method 300. For example, the method 300 may be implemented by various components of the circuit 104, including at least the data integrity engine 108. In at least some examples, the method 300 is a boot sequence, such as a secure boot sequence, or method for the electronic device 100, circuit 104, and/or controller 110.

At operation 302, a random number is generated. In some examples, the random number is generated by the data integrity engine 108, as described above. In other examples, the random number is generated by the controller 110 and provided to the data integrity engine 108. At operation 304, the data integrity engine 108 is configured. For example, the controller 110 provides information to the data integrity engine 108 to specify the regions and the number of bits per region for the snippets. At operation 306, the controller 110 reads a signature of the flash storage device 102. In at least some examples, the signature is the unique identifier of the flash storage device 102, such as a public authentication key or a private authentication key of the flash storage device 102.

At operation 308, the controller 110 reads a block of the flash storage device 102. The block of the flash storage device 102 may be read, in some examples, via the flash interface 106 or by any other suitable process. In at least some examples, the data integrity engine 108 hangs off a data line or bus between the flash storage device 102 and the controller 110 such that the data integrity engine 108 also receives the block. At operation 310, the data integrity engine 108 determines and stores a snippet for the block. In at least some examples, the snippet may be determined and stored as described above with respect to FIG. 2. For example, the data integrity engine 108 may determine a digest for the block and preserve (or discard) bits of the digest according to a bit pattern to determine the snippet, the details of which are not repeated here with respect to FIG. 3. At operation 312, the block is added to a signature calculation. In at least some examples, the block may be read from the flash storage device 102 a single time for both determination of the snippet by the data integrity engine 108 and addition to the signature calculation. The signature calculation may be determined according to any suitable process, the scope of which is not limited herein.

At operation 314, a determination is made as to whether the block is a last block of the flash storage device 102. Responsive to the block not being the last block of the flash storage device 102, the method 300 returns to operation 308 and reads a next block of the flash storage device 102. Responsive to the block being the last block of the flash storage device 102, the method 300 proceeds to operation 316. At operation 316, the signature is verified. The signature may be verified by the data integrity engine 108 according to any suitable process, the scope of which is not limited herein. At operation 318, the secure boot sequence ends and the circuit 104 transitions into normal operation. In normal operation, the controller 110 may request data from the flash storage device 102, and the integrity of the data may be verified by the data integrity engine 108 according to the determined snippets prior to execution or manipulation of the data by the controller 110.
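The per-block loop of operations 308 through 316 can be sketched by reusing the block_digest and extract_snippet functions from the earlier sketches. The SHA-256 accumulator stands in for the signature calculation, which the description leaves unspecified, and verify_signature is a hypothetical callable.

```python
import hashlib

def secure_boot_scan(blocks, device_unique_id, bit_pattern, digest_bits,
                     verify_signature):
    """Walk every flash block once: store a snippet per block (operations
    308-310) while accumulating the blocks into the signature calculation
    (operation 312), then verify the signature (operation 316)."""
    snippet_table = []
    signature_input = hashlib.sha256()        # placeholder accumulator
    for block in blocks:                      # operation 308: read a block
        digest = block_digest(block, device_unique_id)
        snippet_table.append(                 # operation 310: store snippet
            extract_snippet(digest, bit_pattern, digest_bits))
        signature_input.update(block)         # operation 312
    if not verify_signature(signature_input.digest()):   # operation 316
        raise RuntimeError("secure boot signature verification failed")
    return snippet_table                      # operation 318: normal operation
```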
While the operations of the method 300 described herein have been described and labeled with numerical reference, in various examples, the method 300 includes additional operations that are not recited herein. In some examples, any one or more of the operations recited herein include one or more sub-operations. In some examples, any one or more of the operations recited herein is omitted. In some examples, any one or more of the operations recited herein is performed in an order other than that presented herein (e.g., in a reverse order, substantially simultaneously, overlapping, etc.). Each of these alternatives falls within the scope of the present description.

FIG. 4 is a block diagram of a method 400, in accordance with various examples. In at least some examples, the method 400 is implemented in a device, such as the electronic device 100 of FIG. 1, or a system. Accordingly, reference may be made to at least some components of FIG. 1 in describing the method 400. For example, the method 400 may be implemented by various components of the circuit 104, including at least the data integrity engine 108. In at least some examples, the method 400 is a method for verifying the integrity of data read from the flash storage device 102 subsequent to completion of a secure boot sequence of the electronic device 100.

At operation 402, a digest of a received block of data is determined. In at least some examples, the block of data is received from the flash storage device 102. The block of data may be received responsive to the controller 110 requesting the block of data, or data included in the block of data, from the flash storage device 102. The data integrity engine 108 may determine the digest for the block as described above with respect to FIG. 2, the details of which are not repeated here with respect to FIG. 4. For example, the data integrity engine 108 may determine the digest by hashing the block according to a crypto-hashing function, such as HMAC-SHA256, based on the block and a unique identifier of the flash storage device 102. At operation 404, the data integrity engine 108 reads a previously stored snippet. The snippet may be read, for example, from a storage circuit of the data integrity engine 108, such as the storage 202 described above with respect to FIG. 2. At operation 406, the data integrity engine 108 compares the read snippet to the corresponding portion of the determined digest to determine whether the compared values are the same. Responsive to determining that the snippet matches the digest portion, the data integrity engine 108 determines that the block of the flash storage device 102 is validated (e.g., the block passes the integrity validation of the data integrity engine 108). Responsive to determining that the snippet does not match the digest portion, the data integrity engine 108 determines that the block of the flash storage device 102 is not validated (e.g., the block fails the integrity validation of the data integrity engine 108).

While the operations of the method 400 described herein have been described and labeled with numerical reference, in various examples, the method 400 includes additional operations that are not recited herein. In some examples, any one or more of the operations recited herein include one or more sub-operations. In some examples, any one or more of the operations recited herein is omitted. In some examples, any one or more of the operations recited herein is performed in an order other than that presented herein (e.g., in a reverse order, substantially simultaneously, overlapping, etc.). Each of these alternatives falls within the scope of the present description.
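Operations 402 through 406 of method 400 can likewise be sketched by reusing block_digest and extract_snippet from the earlier sketches; the snippet_table list and the packed-integer snippet format are the same illustrative assumptions as before.

```python
def validate_block(block_index, block_data, snippet_table,
                   device_unique_id, bit_pattern, digest_bits):
    """Run-time check of method 400: recompute the digest of the received
    block (operation 402), read the previously stored snippet (operation
    404), and compare the selected digest bits to it (operation 406).
    Returns True if the block is validated, False otherwise."""
    digest = block_digest(block_data, device_unique_id)
    test_snippet = extract_snippet(digest, bit_pattern, digest_bits)
    return test_snippet == snippet_table[block_index]
```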
The term "couple" is used throughout the specification. The term may cover connections, communications, or signal paths that enable a functional relationship consistent with this description. For example, if device A generates a signal to control device B to perform an action, in a first example device A is coupled to device B, or in a second example device A is coupled to device B through intervening component C if intervening component C does not substantially alter the functional relationship between device A and device B such that device B is controlled by device A via the control signal generated by device A.

A device that is "configured to" perform a task or function may be configured (e.g., programmed and/or hardwired) at a time of manufacturing by a manufacturer to perform the function and/or may be configurable (or re-configurable) by a user after manufacturing to perform the function and/or other additional or alternative functions. The configuring may be through firmware and/or software programming of the device, through a construction and/or layout of hardware components and interconnections of the device, or a combination thereof.

A circuit or device that is described herein as including certain components may instead be adapted to be coupled to those components to form the described circuitry or device. For example, a structure described as including one or more semiconductor elements (such as transistors), one or more passive elements (such as resistors, capacitors, and/or inductors), and/or one or more sources (such as voltage and/or current sources) may instead include only the semiconductor elements within a single physical device (e.g., a semiconductor die and/or integrated circuit (IC) package) and may be adapted to be coupled to at least some of the passive elements and/or the sources to form the described structure either at a time of manufacture or after a time of manufacture, for example, by an end-user and/or a third-party.

Unless otherwise stated, "about," "approximately," or "substantially" preceding a value means +/−10 percent of the stated value. Modifications are possible in the described examples, and other examples are possible within the scope of the claims. | 26,363 |
11861180 | DETAILED DESCRIPTION

Embodiments provide a memory system that implements data error correction capable of coping with chip failure while maintaining a low latency. In general, according to an embodiment, a memory system includes a plurality of non-volatile memory chips and a controller configured to communicate with a host and control the plurality of non-volatile memory chips. The controller is configured to write a data frame that includes write data and first parity for error detection and correction of the write data into first memory chips of the non-volatile memory chips in a distributed manner. The first memory chips include N (N is a natural number of two or more) memory chips. The controller is configured to write second parity used in restoring data stored in one of the N first memory chips, together with data read from the other N−1 of the N first memory chips, into a second memory chip of the non-volatile memory chips that is different from any of the first memory chips.

Hereinafter, embodiments will be described with reference to the accompanying drawings.

First Embodiment

First, a first embodiment will be described. FIG. 1 illustrates a configuration example of a memory system 1 according to the first embodiment. FIG. 1 also illustrates a configuration example of an information processing system that includes the memory system 1, and a host 2 connected to the memory system 1. The host 2 is an information processing apparatus such as a server or a personal computer.

As illustrated in FIG. 1, the memory system 1 includes a controller 11 and non-volatile memory chips 12. The controller 11 is a device that controls writing of data into the non-volatile memory chip 12 or reading of data from the non-volatile memory chip 12 in response to a command from the host 2. The controller 11 is configured as, for example, a system-on-a-chip (SoC). The non-volatile memory chip 12 is, for example, an SCM. The SCM is a phase-change memory (PCM), a magnetoresistive random access memory (MRAM), a resistive RAM (ReRAM), or a ferroelectric RAM (FeRAM). That is, here, an example is illustrated where the memory system 1 is implemented as an SCM module.

The controller 11 includes an error correction circuit 100. A data error may occur in the non-volatile memory chip 12. The error correction circuit 100 is, for example, a device that detects and corrects an error in read data when the corresponding data is read from the non-volatile memory chip 12. In the memory system 1 according to the first embodiment, regarding a data frame that is written into a plurality of non-volatile memory chips 12 in a distributed manner, data error correction capable of coping with chip failure is implemented by the error correction circuit 100 while the low latency of the memory system 1 is maintained. Hereinafter, this point will be described in detail.

Here, first, descriptions will be made on a comparative method of writing a data frame into the non-volatile memory chips (SCM) 12 in the memory system 1, with reference to FIGS. 2A and 2B. As described above, a data error may occur in the non-volatile memory chip 12. The controller 11 writes data into the non-volatile memory chip 12 in units of a data frame (ECC frame) in which an error correction bit (ECC parity) is added to the data, referred to herein as user data. At the time of reading of data from the non-volatile memory chip 12, the controller 11 detects an error by using the error correction bit. Then, when an error is detected, the controller 11 executes correction of the corresponding detected error.
In general, a write request from the host 2 is performed for, for example, every 64B (bytes) or 128B. According to this unit, the controller 11 calculates an error correction bit for detecting and correcting an error for, for example, every 64B or 128B of data. Meanwhile, in certain cases, the minimum access unit of the non-volatile memory chip 12 may be smaller than the size of an access request from the host 2, and may be, for example, 8B, 16B, etc. In this case, as illustrated in FIG. 2A, the controller 11 may perform memory access a plurality of times so as to write data corresponding to the size of the access request from the host 2 (more specifically, a data frame including user data and an error correction bit) into one non-volatile memory chip 12. However, this method is not suitable for a system that requires low-latency performance.

To address such an issue, as illustrated in FIG. 2B, the controller 11 can access the non-volatile memory chips 12 in a parallel manner so as to reduce latency. That is, a method of writing data into the plurality of non-volatile memory chips 12 in a distributed manner is becoming a mainstream approach. In the memory system 1 according to the first embodiment as well, the controller 11 writes data received from the host 2 (more specifically, a data frame including user data and an error correction bit) into the plurality of non-volatile memory chips 12 in a distributed manner. The user data may be data received from the host 2, or may be data obtained after a predetermined process such as a compression process is executed on data received from the host 2. Data necessary for management in the memory system 1 may be added to the data received from the host 2. That is, the user data may be data obtained based on the data received from the host 2, data including data based on the data received from the host 2, or any data that needs to be written into the plurality of non-volatile memory chips 12 when a write request is received from the host 2, and the user data may be referred to as write data.

In the example illustrated in FIG. 2B, the user data (data) and the error correction bit (ECC parity) are separately arranged in different non-volatile memory chips 12, but may be mixed on the same non-volatile memory chip 12. FIGS. 3A and 3B illustrate examples where user data and an error correction bit are arranged in a mixed manner in one or more non-volatile memory chips 12. FIG. 3A illustrates an example where, in addition to user data, an error correction bit is also written into the non-volatile memory chips 12 in a distributed manner. Meanwhile, FIG. 3B illustrates an example where an error correction bit is collectively written into an area remaining after an end portion of user data is written, so that the user data and the error correction bit are mixed in the same non-volatile memory chip 12. Since the controller 11 manages the storage locations of user data and an error correction bit, the arrangement of the user data and the error correction bit on the plurality of non-volatile memory chips 12 is not limited to the examples illustrated in FIG. 2B and FIGS. 3A and 3B, and may be performed in various ways.
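Dividing one data frame across channels can be sketched as follows; the equal chunk size and the assumption that the frame length is a multiple of the channel count are illustrative simplifications, and the chip-level write commands are outside the scope of the sketch.

```python
def split_frame(frame: bytes, num_channels: int) -> list:
    """Split one data frame (user data plus ECC parity) into equal
    per-channel chunks so that the frame can be written to the
    non-volatile memory chips of `num_channels` channels in parallel."""
    assert len(frame) % num_channels == 0, "illustrative simplification"
    chunk = len(frame) // num_channels
    return [frame[i * chunk:(i + 1) * chunk] for i in range(num_channels)]
```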
Next, FIGS. 4 and 5 illustrate comparative examples, and descriptions will be made on problems when a data frame is written into a plurality of non-volatile memory chips in a distributed manner. FIG. 4 illustrates a state where a data frame including user data and an error correction bit (ECC Parity) is written into 10 non-volatile memory chips connected to 10 channels #1 to #10, respectively, in a distributed manner. The channels #1 to #10 include communication lines (a memory bus) by which a controller communicates with the non-volatile memory chips. To each of the channels #1 to #10, one or more non-volatile memory chips are connected. For example, when a read command is received from a host, the controller executes reading of data from non-volatile memory chips on the basis of a logical address specified by the read command. Here, exemplified is a case where a data frame including data corresponding to an address X (addr X) is read from 10 non-volatile memory chips. T is an error correction ability of an ECC decoder that corrects error bits in the data frame by using an error correction bit included in the corresponding data frame. The error correction ability T is generally about several bits to several tens of bits. Therefore, even when there is a bit error in a data frame (or an input data frame) read from the 10 non-volatile memory chips, when the number of error bits of the entire data frame is equal to or less than the error correction ability T of the ECC decoder (i.e., when the number of error bits ≤ T), a data frame (or an output data frame) in which the corresponding error bits are corrected and no bit error is present may be obtained.

Meanwhile, like FIG. 4, FIG. 5 illustrates a case where a data frame including data corresponding to an address X (addr X) is read from 10 non-volatile memory chips, in which a chip failure occurs in one of the 10 corresponding non-volatile memory chips. The failed chip is a non-volatile memory chip connected to a channel #6. When a chip failure occurs, the portion of the data frame corresponding to the failed chip is entirely lost. Thus, the corresponding portion becomes error bits, and the number of error bits of the entire data frame largely exceeds the error correction ability T of the ECC decoder. Therefore, correction using an error correction bit is no longer possible. When all data frames, not only the data frame including the data corresponding to the address X, are written into the 10 non-volatile memory chips including the corresponding failed chip in a distributed manner, all data in an SCM module may be lost. As the number of non-volatile memory chips on which data frames are written in the distributed manner increases, the risk of data loss increases.

On the basis of these comparative examples, next, descriptions will be made on a method of distributed writing in the memory system 1 according to the first embodiment, with reference to FIG. 6. In the memory system 1 according to the first embodiment, the controller 11 writes a data frame including an error correction bit into the plurality of non-volatile memory chips 12 in a distributed manner, and moreover, further writes the bit XOR (XOR parity) of the corresponding entire data frame into a separate non-volatile memory chip 12, distinct from the plurality of non-volatile memory chips 12 into which the data frame is written. Here, in addition to the 10 channels #1 to #10, to which the 10 non-volatile memory chips 12 on which the data frame is arranged in a distributed manner are connected, a channel #11 to which the non-volatile memory chip 12 on which the XOR parity is written is connected is newly prepared. Hereinafter, the channel to which the non-volatile memory chip 12 on which the XOR parity is written is connected may be referred to as an XOR channel. Writing the XOR parity into the non-volatile memory chip 12 connected to the XOR channel may be referred to as writing the XOR parity into the XOR channel.
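The XOR parity written to the XOR channel is the bitwise XOR across the per-channel chunks of a frame. A minimal sketch, assuming the frame has already been divided into equal-length chunks (for example, by the split_frame sketch above):

```python
def xor_parity(chunks) -> bytes:
    """Bitwise XOR across the per-channel chunks of one data frame;
    the result is the parity written via the XOR channel (#11)."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)
```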
When writing a data frame into the non-volatile memory chips 12, the controller 11 writes the XOR parity of the corresponding data frame via the XOR channel. Meanwhile, when a data frame is read from the non-volatile memory chips 12, and when a chip failure occurs in any of the non-volatile memory chips 12, the controller 11 (more specifically, the error correction circuit 100) restores the data read from the failed chip, using data read from the non-volatile memory chips 12 (on which the data frame is distributed) excluding the failed chip, and data read from the non-volatile memory chip 12 into which the XOR parity is written. Here, the data read from the non-volatile memory chip 12 as the failed chip connected to the channel #6 is restored from the data (the data frame) read from the non-volatile memory chips 12 connected to the channels #1 to #5 and #7 to #10, and the data read from the non-volatile memory chip 12 connected to the channel #11 (XOR parity). That is, in the memory system 1 according to the first embodiment, the error correction circuit 100 of the controller 11 includes an XOR restoration circuit 102 in addition to an ECC decoder 101.

When a chip failure occurs in the non-volatile memory chip 12 connected to the XOR channel (here, the channel #11), loss of the data frame does not occur. Thus, a subsequent process including detection and correction of error bits may be continued. There is a possibility that a bit error may also be present in the data frame whose portion of data read from the failed chip is restored by using the XOR parity. When the number of error bits of the entire data frame is equal to or less than the error correction ability T of the ECC decoder 101 (i.e., when the number of error bits ≤ T), the error correction circuit 100 may obtain a data frame (an output data frame) in which the corresponding error bits are corrected and no bit error is present. Restoring a missing portion in a data frame by using the XOR parity may be applied not only to the failure of the entire non-volatile memory chip 12 but also to a case where only a limited area within the non-volatile memory chip 12 has failed. That is, the chip failure referred to here includes not only the failure of the entire memory chip, but also the failure of only a limited area in the memory chip.

Meanwhile, FIG. 6 illustrates an example where a chip failure occurs in the non-volatile memory chip 12 connected to the channel #6. However, when the failed chip cannot be identified, as illustrated in FIG. 7, the error correction circuit 100 needs to repeat a loop of (1) XOR restoration by the XOR restoration circuit 102, and (2) ECC decoding trial by the ECC decoder 101 ((1)→(2)) until ECC decoding is successful. That is, it is necessary to repeat the above loop of (1)→(2) until the ECC decoding is successful while changing the non-volatile memory chip 12 subjected to the XOR restoration, which is indicated by reference numeral "a1" in FIG. 7. When a data frame is distributed into the 10 non-volatile memory chips 12, there is a possibility that the loop may be repeated 10 times at worst.
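The XOR restoration step itself is the same operation in reverse: the XOR of the surviving chunks and the parity reproduces the missing chunk. A minimal sketch reusing xor_parity from above:

```python
def restore_chunk(surviving_chunks, parity_chunk: bytes) -> bytes:
    """Rebuild the chunk stored on a failed chip from the chunks read
    from the surviving channels and the XOR-channel parity."""
    return xor_parity(list(surviving_chunks) + [parity_chunk])
```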
FIG. 8 is a flow chart illustrating an error correction procedure when the non-volatile memory chips on which a data frame is written in a distributed manner are assumed to be failed chips, one by one. Here, 1 to 10 are channels through which the data frame is written, and 11 is a channel through which the XOR parity is written. The error correction circuit 100 reads data from the SCMs 12 of all the channels (S101). The error correction circuit 100 generates a data frame "Frame #0" from the data of the channels 1 to 10, excluding the XOR channel 11 (S102). The error correction circuit 100 performs ECC decoding on the corresponding generated "Frame #0" (S103). When the ECC decoding of the "Frame #0" is successful (S104: Yes), the error correction circuit 100 terminates the corresponding error correction process on the assumption that the error correction is successful. When the ECC decoding of the "Frame #0" fails (S104: No), the error correction circuit 100 first sets the data of the channel 1 as a restoration target (S105), and generates a data frame "Frame #N" in which the data of a channel N (initially 1) as the restoration target is restored by the bit XOR of the other channels (S106). The error correction circuit 100 performs ECC decoding on the corresponding generated "Frame #N" (S107). When the ECC decoding of the "Frame #N" is successful (S108: Yes), the error correction circuit 100 terminates the corresponding error correction process on the assumption that the error correction is successful. When the ECC decoding of the "Frame #N" fails (S108: No), the error correction circuit 100 determines whether the restoration target is the data of the channel 10 (S109). When the restoration target is not the data of the channel 10 (S109: No), after incrementing the restoration target channel by one (S110), the process of S106 to S108 is repeated. Meanwhile, when the restoration target is the data of the channel 10 (S109: Yes), the error correction circuit 100 terminates the corresponding error correction process on the assumption that the error correction has failed.

As described above, when the non-volatile memory chips are assumed to be a failed chip, one by one, a loop of (1) XOR restoration by the XOR restoration circuit 102, and (2) ECC decoding trial by the ECC decoder 101 ((1)→(2)) is repeated until ECC decoding is successful. This repetition of the loop may increase the latency at the time of reading data. In order to eliminate the above-described loop, in the memory system 1 according to the first embodiment, the error correction circuit 100 performs ECC decoding on a plurality of data frames that may be corrected, in a parallel manner without identifying a failed chip. FIG. 9 illustrates a configuration example of the error correction circuit 100 in the memory system 1 according to the first embodiment.

As illustrated in FIG. 9, for example, in correspondence with writing a data frame into the channels #1 to #10 in a distributed manner, the error correction circuit 100 of the memory system 1 according to the first embodiment includes 10 XOR restoration circuits 102 for performing XOR restoration on the data of each channel. The error correction circuit 100 includes a total of 11 ECC decoders 101, that is, one ECC decoder 101 that performs ECC decoding on a data frame "Frame #0" generated from the data of the channels #1 to #10, and 10 ECC decoders 101 that perform ECC decoding on a data frame "Frame #N" in which the data of any of the channels is restored by the XOR restoration circuit 102. The error correction circuit 100, which includes the corresponding 10 XOR restoration circuits 102 and the corresponding 11 ECC decoders 101, executes error correction on 11 data frames "Frames #0 to #10" in a parallel manner. When the error correction is successful for any one of the data frames, the error correction circuit 100 outputs the data frame.
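The candidate frames handled by the FIG. 9 circuit can be sketched as follows, reusing restore_chunk from above. Here ecc_decode is a hypothetical stand-in for the ECC decoder 101 that returns the corrected frame or None, and the candidates are decoded one at a time for clarity; the FIG. 9 circuit decodes all of them in parallel, while the FIG. 8 flow corresponds to generating and trying them sequentially.

```python
def correct_frame(chunks, parity, ecc_decode):
    """Build Frame #0 (the frame as read) and Frames #1..#N (channel N
    restored from the other channels plus the XOR parity), then return
    the first candidate that the ECC decoder corrects successfully."""
    candidates = [b"".join(chunks)]                    # Frame #0
    for n in range(len(chunks)):                       # Frames #1..#N
        restored = restore_chunk(chunks[:n] + chunks[n + 1:], parity)
        candidates.append(b"".join(chunks[:n] + [restored] + chunks[n + 1:]))
    for frame in candidates:                           # parallel in hardware
        decoded = ecc_decode(frame)
        if decoded is not None:
            return decoded                             # correction successful
    return None                                        # correction failed
```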
FIG. 10 is a flow chart illustrating an error correction procedure by the error correction circuit 100 of the memory system 1 according to the first embodiment. The error correction circuit 100 reads data from the SCMs 12 of all the channels (S201). The error correction circuit 100 generates a data frame "Frame #0" from the data of the channels 1 to 10, excluding the XOR channel 11 (S202). Regarding the data of each of the channels 1 to 10, the error correction circuit 100 generates a data frame "Frame #N" in which the data of a channel N is restored by the bit XOR of the other channels (S203). The error correction circuit 100 performs ECC decoding on the data frames "Frame #0" to "Frame #10" in a parallel manner (S204). The error correction circuit 100 determines whether there is a data frame for which the error correction is successful (S205). When it is determined that there is a data frame for which the error correction is successful (S205: Yes), the error correction circuit 100 outputs any one of the data frames for which the error correction is successful (S206), and then terminates the error correction process, concluding that the error correction is successful. Meanwhile, when it is determined that there is no data frame for which the error correction is successful (S205: No), the error correction circuit 100 terminates the error correction process, concluding that the error correction has failed. Accordingly, in the memory system 1 according to the first embodiment, since the above-described loop for identifying the failed chip is eliminated, it is possible to implement data error correction capable of coping with a chip failure while maintaining low latency. However, as illustrated in FIG. 9, for example, the error correction circuit 100, which includes the 10 XOR restoration circuits 102 and the 11 ECC decoders 101, increases in circuit scale. As the circuit scale of the error correction circuit 100 increases, the chip area of the controller 11 implemented as an SoC, for example, increases. Further, the increase in the chip area of the controller 11 leads to an increase in the cost of the memory system 1. In order to address this issue, the error correction circuit 100 of the memory system 1 according to the first embodiment may be further configured to prevent the increase in circuit scale. Next, this point will be described in detail. There are various coding methods for ECC decoding. In the memory system 1 according to the first embodiment, as an example, the ECC decoder 101 of the error correction circuit 100 employs a Bose-Chaudhuri-Hocquenghem (BCH) code. The outline of BCH code decoding will be described with reference to FIG. 11. As illustrated in FIG. 11, BCH code decoding is executed through four steps (1) to (4), that is, (1) syndrome calculation, (2) error location polynomial calculation, (3) error bit position calculation, and (4) error correction. In (1) syndrome calculation, syndrome calculation is performed on a data frame, and a plurality of syndrome values is output. When all the syndrome values are 0, it is determined that there is no error bit. When any of the syndrome values is not 0, it is determined that there is an error bit. From the number of syndrome values which are not 0, it is possible to determine the number of error bits. The syndrome values may be easily obtained by bit shifts and XOR operations, and both the circuit scale and the latency tend to be small. In (2) error location polynomial calculation, error location polynomials are obtained from the syndrome values obtained by the syndrome calculation. As a specific algorithm, the Berlekamp-Massey method is used in many cases.
This process needs to be repeated as many times as the number of error bits. Since the amount of calculation at one time is small, the circuit scale is small. However, the latency tends to be high due to the iterative processing. In (3) error bit position calculation, the actual error bit positions are obtained by calculating the roots of the error location polynomial. As a specific algorithm, the Chien search method is used in many cases. Since this process performs a round-robin calculation over all bits, the amount of calculation becomes enormous. To perform the error bit position calculation with a low latency, it is necessary to increase the number of bits processed at one time (to increase the degree of parallelism), and thus the circuit scale tends to increase. In (4) error correction, the error at each obtained error bit position is corrected. In general, since it is only necessary to invert the bit at the error location, both the circuit scale and the latency are negligibly small. Therefore, the relationships between the circuit scale and the latency in each step of BCH code decoding are summarized as follows.

(1) syndrome calculation: circuit scale small, latency low
(2) error location polynomial calculation: circuit scale small, latency high
(3) error bit position calculation: circuit scale large, latency low
(4) error correction: circuit scale small, latency low

These are merely tendencies, and vary depending on design parameters. Meanwhile, in a configuration where decoding is performed with a low latency, in certain cases, the circuit scale of (3) the error bit position calculation may reach nearly 90% of the entire circuit scale of the BCH code decoding. In consideration of the relationships between the circuit scale and the latency in each step of BCH code decoding, the error correction circuit 100 of the memory system 1 according to the first embodiment has a configuration with a reduced circuit scale. Specifically, in relation to the configuration example of the error correction circuit 100 of FIG. 9, the portion of the ECC decoders 101 indicated by reference numeral "b1" in FIG. 12 is improved. FIG. 13 illustrates a configuration example of the error correction circuit 100 in which the portion of the ECC decoders 101 is improved. As illustrated in FIG. 13, the ECC decoder 101 is configured to execute, among the four steps of BCH code decoding, only (1) the syndrome calculation and (2) the error location polynomial calculation in a parallel manner, that is, the steps which tend to require a small circuit scale. (3) The error bit position calculation and (4) the error correction subsequent to (3), which tend to require a large circuit scale, are executed sequentially. Since the improvement is made such that (3) the error bit position calculation, which tends to require a large circuit scale, is executed sequentially, an increase in the circuit scale of the error correction circuit 100 is prevented. Since (3) the error bit position calculation and (4) the error correction tend to have a low latency, the influence of executing them sequentially is limited. Meanwhile, the parallel execution of (2) the error location polynomial calculation, which tends to have a high latency, is maintained. Hereinafter, the operation of the ECC decoder 101 in which this improvement is made will be described. The ECC decoder 101 inputs all the data frames that may be corrected successfully, in a parallel manner (C1). The ECC decoder 101 calculates syndrome values for all the data frames in a parallel manner (C2).
The ECC decoder 101 calculates the numbers of error bits for all the syndrome values (C3). When no valid number of error bits is obtained, the correction fails at this point. When a data frame with a syndrome value of 0 is found, that data frame is output without error correction and the process ends (C4). When a data frame with a syndrome value of 0 is not found while a valid number of error bits is obtained, the ECC decoder 101 calculates error location polynomials for all the syndrome values in a parallel manner (C5). Then, the ECC decoder 101 rearranges the error location polynomials in ascending order of the number of error bits (C6). The ECC decoder 101 calculates error bit positions by using the error location polynomial with the smallest number of error bits (C7). When there is no contradiction in the calculated error bit positions, the ECC decoder 101 performs error correction on the basis of the obtained error bit positions (C8), outputs the data, and the process ends. Meanwhile, when the error bit positions cannot be calculated or there is a contradiction in the calculated error bit positions, the ECC decoder 101 recalculates the error bit positions by using the error location polynomial with the next smallest number of error bits (C7). When there is no uncalculated error location polynomial left, the correction fails. FIG. 14 is a flow chart illustrating an error correction procedure by the error correction circuit 100 in which the portion of the ECC decoder 101 is improved. The error correction circuit 100 reads data from the SCMs 12 of all the channels (S301). The error correction circuit 100 generates a data frame "Frame #0" from the data of the channels 1 to 10, excluding the XOR channel 11 (S302). Regarding the data of each of the channels 1 to 10, the error correction circuit 100 generates a data frame "Frame #N" in which the data of a channel N is restored by the bit XOR of the other channels (S303). The error correction circuit 100 calculates the syndromes "Synd #0" to "Synd #10" of the data frames "Frame #0" to "Frame #10" in a parallel manner (S304). The error correction circuit 100 determines whether any of the syndromes "Synd #0" to "Synd #10" is 0 (S305). When it is determined that any one is 0 (S305: Yes), the error correction circuit 100 outputs the data frame "Frame #N" with a syndrome of 0 (S306). When it is determined that none of the syndromes "Synd #0" to "Synd #10" is 0 (S305: No), the error correction circuit 100 calculates the numbers of error bits "t #0" to "t #10" in a parallel manner from the syndromes "Synd #0" to "Synd #10", respectively (S307). The error correction circuit 100 determines whether any of the numbers of error bits "t #0" to "t #10" is successfully calculated (S308). When it is determined that there is no successful calculation (S308: No), the error correction circuit 100 terminates the error correction process, concluding that the error correction has failed. When it is determined that any of the numbers of error bits "t #0" to "t #10" is successfully calculated (S308: Yes), the error correction circuit 100 calculates the error location polynomials "Poly #0" to "Poly #10" in a parallel manner from the syndromes "Synd #0" to "Synd #10", respectively (S309). The error correction circuit 100 determines whether any of the error location polynomials "Poly #0" to "Poly #10" is successfully calculated (S310).
When it is determined that there is no successful calculation (S310: No), the error correction circuit 100 terminates the error correction process, concluding that the error correction has failed. When it is determined that any of the error location polynomials "Poly #0" to "Poly #10" is successfully calculated (S310: Yes), the error correction circuit 100 sorts the successfully calculated error location polynomials "Poly #N" in ascending order of the number of error bits "t #N" (S311). The error correction circuit 100 calculates an error bit position "ErrVec #N" from the uncalculated error location polynomial "Poly #N" with the smallest number of error bits "t #N" (S312). The error correction circuit 100 determines whether the error bit position "ErrVec #N" falls within the data frame range (S313). When it is determined that the error bit position "ErrVec #N" is not within the data frame range (S313: No), the error correction circuit 100 determines whether there is an error location polynomial "Poly #N" for which the error bit position has not been calculated (S314). When it is determined that none remains (S314: No), the error correction circuit 100 terminates the error correction process, concluding that the error correction has failed. When it is determined that one remains (S314: Yes), the error bit position "ErrVec #N" is recalculated from that error location polynomial "Poly #N" (S312). Meanwhile, when it is determined that the error bit position "ErrVec #N" falls within the data frame range (S313: Yes), the error correction circuit 100 corrects the data at the bit position corresponding to the error bit position "ErrVec #N" in the data frame "Frame #N" (S315). Then, the error correction circuit 100 outputs the corrected data frame "Frame #N" (S316), and terminates the error correction process, concluding that the error correction is successful.
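The staged decoding of FIGS. 13 and 14, in which the syndromes and error location polynomials for all candidate frames are computed in parallel while the costly error bit position search runs sequentially in ascending order of error count, can be sketched as follows. This is a hedged outline: `syndrome`, `num_errors`, `berlekamp_massey`, and `chien_search` are hypothetical stand-ins for the corresponding BCH decoding stages, not the patent's circuits.

```python
def correct_shared_backend(frames, syndrome, num_errors, berlekamp_massey, chien_search):
    # (C2) syndromes for all candidate frames (parallel in hardware)
    synds = [syndrome(f) for f in frames]
    # (C4) an all-zero syndrome means an error-free frame: output it as-is
    for f, s in zip(frames, synds):
        if all(v == 0 for v in s):
            return f
    # (C3) number of error bits per frame; None marks an invalid count
    counts = [num_errors(s) for s in synds]
    if all(c is None for c in counts):
        return None                       # correction fails at this point
    # (C5)+(C6) error location polynomials, sorted by ascending error count
    cands = sorted(
        ((c, berlekamp_massey(s), f)
         for c, s, f in zip(counts, synds, frames) if c is not None),
        key=lambda t: t[0],
    )
    # (C7)+(C8) sequential error bit position search, cheapest candidate first
    for count, poly, frame in cands:
        positions = chien_search(poly)    # None if no consistent root set exists
        if positions is not None and all(p < len(frame) * 8 for p in positions):
            data = bytearray(frame)
            for p in positions:           # invert each erroneous bit
                data[p // 8] ^= 1 << (p % 8)
            return bytes(data)
    return None                           # all candidates exhausted: correction failed
```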
FIGS. 15A and 15B illustrate an estimate of the latency required for error correction in the worst case. FIG. 15A illustrates a case where, under the assumption of a failed chip, the XOR calculation [XOR] and the BCH code decoding, that is, (1) syndrome calculation [SYND], (2) error location polynomial calculation [ELP], and (3) error bit position calculation [CS], are executed sequentially and cyclically. Meanwhile, FIG. 15B illustrates a case where a failed chip is not identified, and error correction is attempted on all the data frames for which error correction may be successful, in a parallel manner. In this case, the XOR calculation [XOR] and the BCH code decoding, that is, all of (1) syndrome calculation [SYND], (2) error location polynomial calculation [ELP], and (3) error bit position calculation [CS], are executed in a parallel manner. For example, assuming that the latencies required for the individual calculations are 1 cycle (XOR calculation), 5 cycles (syndrome calculation), 25 cycles (error location polynomial calculation), and 5 cycles (error bit position calculation), the latency in FIG. 15A in the worst case becomes 396 cycles (=(1+5+25+5)×11). Meanwhile, the latency in the case of FIG. 15B is always 36 cycles (=1+5+25+5). FIGS. 16A and 16B illustrate another estimate of the latency required for error correction in the worst case. FIG. 16A is the same as FIG. 15A. Meanwhile, FIG. 16B illustrates a case where a failed chip is not identified, and error correction is attempted on all the data frames for which error correction may be successful, in a parallel manner. In this case, the XOR calculation [XOR] and, of the BCH code decoding, (1) syndrome calculation [SYND] and (2) error location polynomial calculation [ELP] are executed in a parallel manner. In the same manner as described above, for example, assuming that the latencies required for the individual calculations are 1 cycle (XOR calculation), 5 cycles (syndrome calculation), 25 cycles (error location polynomial calculation), and 5 cycles (error bit position calculation), the latency in FIG. 16B in the worst case becomes 86 cycles (=1+5+25+5×11). FIG. 17 illustrates the relationships between the latency and the circuit scale in the cases of FIG. 15A, FIG. 15B, and FIG. 16B. FIG. 17 illustrates an example in which, assuming that each of the latency and the circuit scale is 1 in the case of FIG. 15A, the latency and the circuit scale in the case of FIG. 15B and in the case of FIG. 16B are represented by ratios relative to those in the case of FIG. 15A. As illustrated in FIG. 17, the latency in item "(B)" is 0.09 relative to 1 in item "(A)", resulting in a great improvement in performance, whereas the circuit scale is 11.0 relative to 1 in item "(A)", a significant increase. In contrast, in item "(C)", the latency is 0.22 and the circuit scale is 1.99. In this manner, item "(C)" is practical due to a good balance between the latency and the circuit scale. As described above, in the memory system 1 according to the first embodiment, the error correction circuit 100 parallelizes only the syndrome calculation and the error location polynomial calculation in consideration of the relationships between the circuit scale and the latency in each step of BCH code decoding. Accordingly, the memory system 1 according to the first embodiment implements data error correction capable of coping with a chip failure while maintaining a low latency, without causing a cost increase accompanying an increase in the circuit scale.

Second Embodiment

Next, a second embodiment will be described. An example is illustrated where the memory system according to the second embodiment is also implemented as an SCM module. The same reference numerals are used for the same elements as those in the first embodiment, and descriptions thereof will be omitted. FIG. 18 illustrates an error correction circuit of the memory system 1 according to the second embodiment. A case where a data frame is restored by using the XOR parity of the XOR channel (for example, the channel #11) and ECC decoding is then performed is a limited case. For that reason, it is wasteful to perform ECC decoding every time on all the data frames for which error correction may be made, for example, from the viewpoint of power consumption or heat generation. To address this issue, in the memory system 1 according to the second embodiment, the error correction circuit 100 first performs ECC decoding on the normal data frame, for which XOR restoration is not performed (d1). Then, only when the ECC decoding of that data frame fails, the error correction circuit 100 performs ECC decoding on all the XOR-restored data frames in a parallel manner (d2). In the memory system 1 according to the second embodiment, when a chip failure occurs, ECC decoding is performed through two steps of (1) the normal data frame→(2) the XOR-restored data frames.
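The two-step behavior just described can be sketched as follows, reusing `correct_parallel` from the earlier sketch; `ecc_decode` again stands in for the BCH decoder and returns the corrected frame or None.

```python
def correct_two_step(chunks, parity, ecc_decode):
    # (d1) try the normal frame first; no XOR restoration is performed
    frame0 = b"".join(chunks)
    decoded = ecc_decode(frame0)
    if decoded is not None:
        return decoded
    # (d2) only on failure, fall back to the parallel XOR-restored attempt
    return correct_parallel(chunks, parity, ecc_decode)
```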
Thus, as compared to the memory system 1 according to the first embodiment, the latency may increase when a chip failure occurs, whereas the power consumption or the heat generation may be largely reduced under a normal condition with no chip failure. FIG. 19 is a flow chart illustrating an error correction procedure by the error correction circuit 100 of the memory system 1 according to the second embodiment. The error correction circuit 100 reads data from the SCMs 12 of all the channels (S401). The error correction circuit 100 generates a data frame "Frame #0" from the data of the channels 1 to 10, excluding the XOR channel 11 (S402). The error correction circuit 100 performs ECC decoding on the data frame "Frame #0" (S403). Then, the error correction circuit 100 determines whether the error correction of the data frame "Frame #0" is successful (S404). When it is determined that the error correction is successful (S404: Yes), the error correction circuit 100 outputs the data frame "Frame #0" for which the error correction is successful (S405), and terminates the error correction process, concluding that the error correction is successful. Meanwhile, when it is determined that the error correction of the data frame "Frame #0" has failed (S404: No), regarding the data of each of the channels 1 to 10, the error correction circuit 100 generates a data frame "Frame #N" in which the data of a channel N is restored by the bit XOR of the other channels (S406). Then, the error correction circuit 100 performs ECC decoding on the data frames "Frame #0" to "Frame #10" in a parallel manner (S407). The error correction circuit 100 determines whether there is a data frame for which the error correction is successful (S408). When it is determined that there is a data frame for which the error correction is successful (S408: Yes), the error correction circuit 100 outputs any one of the data frames for which the error correction is successful (S409), and then terminates the error correction process, concluding that the error correction is successful. Meanwhile, when it is determined that there is no data frame for which the error correction is successful (S408: No), the error correction circuit 100 terminates the error correction process, concluding that the error correction has failed. In this manner, the memory system 1 according to the second embodiment may implement data error correction capable of coping with a chip failure while maintaining a low latency without causing a cost increase accompanying an increase in the circuit scale, and may further largely reduce power consumption or heat generation.

Third Embodiment

Next, a third embodiment will be described. An example is illustrated where the memory system according to the third embodiment is also implemented as an SCM module. The same reference numerals are used for the same elements as those in the first embodiment and the second embodiment, and descriptions thereof will be omitted. In the memory system 1, in general, the power consumption of reading data from the non-volatile memory chips 12 is larger than the power consumption of calculation in the controller 11. As mentioned in the second embodiment, a case where a data frame is restored by using the XOR parity of the XOR channel (for example, the channel #11) and ECC decoding is performed thereafter is a limited case. That is, it is highly likely that the XOR parity read from the non-volatile memory chip 12 connected to the XOR channel is not used.
That is, reading the XOR parity from the non-volatile memory chip 12 is highly likely to be wasted in many cases. To address this issue, in the memory system 1 according to the third embodiment, the error correction circuit 100 first does not read the XOR parity, and performs ECC decoding on the data frame read from the non-volatile memory chips 12 connected to the channels excluding the XOR channel. Then, only when the ECC decoding of that data frame fails, the error correction circuit 100 reads the XOR parity, and executes the XOR restoration and the ECC decoding in a parallel manner. FIG. 20 is a flow chart illustrating an error correction procedure by the error correction circuit 100 of the memory system 1 according to the third embodiment. The error correction circuit 100 reads data from the SCMs 12 of all the channels excluding the XOR channel (S501), and generates a data frame "Frame #0" (S502). Then, the error correction circuit 100 performs ECC decoding on the data frame "Frame #0" (S503). The error correction circuit 100 determines whether the error correction of the data frame "Frame #0" is successful (S504). When it is determined that the error correction is successful (S504: Yes), the error correction circuit 100 outputs the data frame "Frame #0" for which the error correction is successful (S505), and terminates the error correction process, concluding that the error correction is successful. When it is determined that the error correction of the data frame "Frame #0" has failed (S504: No), the error correction circuit 100 reads the data (XOR parity) from the SCM 12 of the XOR channel (S506). Regarding the data of each of the channels 1 to 10, the error correction circuit 100 generates a data frame "Frame #N" in which the data of a channel N is restored by the bit XOR of the other channels (S507). Then, the error correction circuit 100 performs ECC decoding on the data frames "Frame #0" to "Frame #10" in a parallel manner (S508). The error correction circuit 100 determines whether there is a data frame for which the error correction is successful (S509). When it is determined that there is a data frame for which the error correction is successful (S509: Yes), the error correction circuit 100 outputs any one of the data frames for which the error correction is successful (S510), and then terminates the error correction process, concluding that the error correction is successful. Meanwhile, when it is determined that there is no data frame for which the error correction is successful (S509: No), the error correction circuit 100 terminates the error correction process, concluding that the error correction has failed. In this manner, the memory system 1 according to the third embodiment may further reduce power consumption under a normal condition with no chip failure. While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the disclosure. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the disclosure. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the disclosure.
11861181

Although the following Detailed Description will proceed with reference being made to illustrative embodiments, many alternatives, modifications, and variations thereof will be apparent in light of this disclosure.

DETAILED DESCRIPTION

Techniques are provided herein for a triple modular redundancy (TMR) memory system configured to provide radiation hardened memory operation. As noted above, memory systems deployed in space-based applications are subjected to relatively high radiation levels which can cause a significant increase in single-bit, multi-bit, and single event functional interrupt (SEFI) errors. In some cases, these errors can render the memory at least temporarily non-functional. Shielding can reduce the radiation exposure, but this approach is impractical due to the added weight (e.g., requiring lead or similarly dense materials), particularly in space-based applications where stringent weight constraints may be imposed. One approach to solving this problem is to modify the design of a commercially available memory controller and/or the associated physical interface (PHY) to a commercial memory chip (e.g., integrated circuit or IC), to include circuitry to correct bit errors. This approach, however, is generally not capable of correcting lengthier sequences of bit errors (e.g., Single Event Functional Interrupt or SEFI conditions). This approach also tends to slow down the operation of the memory, making it incompatible with increasingly faster memory chips such as faster Double Data Rate 4 (DDR4) and DDR5 memory chips. Additionally, PHY development is costly and time consuming. For example, it can take a year or more to develop a PHY. To this end, and in accordance with an embodiment of the present disclosure, a TMR memory system is disclosed which interfaces between a processor and a commercially available memory controller/PHY for the memory chip without requiring modification of the memory controller/PHY. The TMR memory system provides improved reliability for memory operation in higher radiation environments such as space-based applications. The disclosed system employs triple redundant memories and associated circuitry to perform memory scrubbing and error recovery in balance with the execution, by the processor, of mission software from those memories, as will be explained in greater detail below. The disclosed TMR memory system can be used, for instance, with electronic systems in a wide variety of applications including, for example, radar systems and communication systems that can be deployed in space-based applications (e.g., a satellite-based platform) or other high radiation environments, although other applications will be apparent. In accordance with an embodiment, the TMR memory system, which interfaces between a processor and a memory controller/PHY, includes a redundancy comparator configured to detect differences between data stored redundantly in a first memory, a second memory, and a third memory. The redundancy comparator is further configured to identify a memory error based on the detected differences. The memory system also includes an error collection buffer configured to store a memory address associated with the memory error. The memory system further includes a memory scrubber circuit configured to overwrite, at the memory address associated with the memory error, erroneous data with corrected data. The corrected data is based on a majority vote among the three memories.
The memory system further includes a priority arbitrator configured to arbitrate between the memory scrubber overwriting operations and the functional memory accesses associated with software execution performed by a processor configured to utilize the memory system (e.g., the mission application). It will be appreciated that the techniques described herein may provide improved error correction and recovery capabilities, in terms of cost, reliability, and operational speed, compared to systems that provide physical shielding or that require modification of the memory controller/PHY. Numerous embodiments and applications will be apparent in light of this disclosure.

System Architecture

FIG. 1 illustrates an implementation 100 of a TMR memory system, in accordance with certain embodiments of the present disclosure. The implementation 100 is shown to include a processor 110, the TMR system 120, and three redundant memory systems 160a, 160b, and 160c. Each of the redundant memory systems 160 is shown to include a memory controller 130, a memory PHY 140, and a memory chip 150. In some embodiments, these components 130, 140, and 150 may be any suitable chips, including commercially available chips. The redundant memory systems 160 are each configured to store and provide read/write access to a copy of the software and data used by the processor 110, for example in the execution of a mission application (e.g., signal processing, radar processing, communications, etc.). In the absence of any bit errors, the copies of software and data stored in the three memory chips 150a, 150b, and 150c would be identical, and so the redundancy serves to identify such errors based on differences that may be detected. While the use of three memory chips allows for the use of a majority voting scheme, as will be described below, in some embodiments, additional memory chips may be employed for increased redundancy. The operation of the TMR system 120 will be described in greater detail below, but at a high level, the TMR system is configured to interface between the processor 110 and the triple redundant memories and to exploit the redundancy to detect and correct errors (e.g., errors caused by radiation effects or from other sources). FIG. 2 is a block diagram of the TMR memory system 120 of FIG. 1, configured in accordance with certain embodiments of the present disclosure. The TMR system is shown to include a TMR reliability controller 210 and a microcontroller 230. The operation of the TMR reliability controller 210 and the microcontroller 230 will be described in greater detail below. At a high level, however, the TMR reliability controller 210 is configured to allow the processor 110 to read and write data over the data bus 200 to the memory chips 150, through the memory controllers 130 and memory PHYs 140, and to provide increased protection against memory errors. In some embodiments, the microcontroller 230 communicates with the processor 110 over a configuration bus 220 and is configured to adjust various parameters associated with the operation of the TMR reliability controller 210 and to provide overall control of the TMR system 120, as will be explained in greater detail below. FIG. 3 is a block diagram of the TMR reliability controller 210 of FIG. 2, configured in accordance with certain embodiments of the present disclosure. The TMR reliability controller 210 is shown to include an error collection buffer 300, a redundancy comparator 310, a priority arbitrator 320, and a memory scrubber 330. The components of the TMR reliability controller operate under the control of the microcontroller 230.
The redundancy comparator 310 is configured to detect differences between data that is stored redundantly in the three memories 160 and to identify memory errors based on the detected differences. The redundancy comparator 310 accesses the redundant memory chips 150 through the memory controllers 130 and PHYs 140 using any appropriate technique associated with the particular memory controller/PHY that has been selected for use in the application. Errors that are discovered are stored in the error collection buffer 300, which is configured to store the memory address associated with each memory error. In some embodiments, the error collection buffer 300 may be implemented as a circular buffer which can be monitored 340 by the microcontroller 230. In some embodiments, the error collection buffer may be configured to generate an interrupt to the microcontroller 230 to signal the presence of new errors. The memory scrubber circuit 330 is configured to overwrite erroneous data with corrected data, at the memory address associated with the memory error. The corrected data is generated based on a majority vote performed among the first memory 160a, the second memory 160b, and the third memory 160c. A random bit error, caused for example by radiation, is relatively unlikely to occur at the same address and bit position in two or more of the memories, and so a majority vote can be used to correct the error in one of the memories based on a consensus value obtained from the other two memories. Since only the memory address associated with the memory error is stored, the memory overwrite operation is a Read-Modify-Write memory operation, allowing the erroneous data to be re-read, repaired, and written back to the memory at that address. In some embodiments, the rate at which memory scrubbing is performed can be set by the microcontroller 230 through scrub rate signaling 370. For example, the scrub rate can be increased when a SEFI occurs, as will be described below. The memory scrubber circuit is also configured to monitor traffic on the data bus 200 to determine whether a functional memory write is being performed by the processor 110 to a memory address that is in the process of being scrubbed (e.g., about to be overwritten for correction). If that is the case, then the memory scrubber cancels the overwrite, since it will not be necessary and could potentially corrupt the memory if performed after the processor completes the functional write. The priority arbitrator 320 is configured to arbitrate between the overwriting performed by the memory scrubber and the functional memory accesses associated with software execution performed by the processor 110. The priority arbitrator throttles the scrubbing rate based on guidance (priority signaling 360) provided by the microcontroller 230, as will be described below. In some embodiments, the TMR reliability controller 210 may also be configured to employ an Error Correction Coding (ECC) technique of any suitable type to detect and repair errors as an additional mechanism to the memory scrubbing process. This function may be switched on or off based on a mode setting 350 provided by the microcontroller 230.
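As a concrete illustration of the majority vote just described, the consensus of three redundant copies can be computed bitwise. The following is a minimal Python sketch under stated assumptions (word-at-a-time voting on integers; the names are illustrative, not the patent's circuit):

```python
def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority of three words: a result bit is 1 iff at least two inputs have it set."""
    return (a & b) | (a & c) | (b & c)

# A single-bit upset in one copy is outvoted by the other two.
good = 0b1011_0010
corrupted = good ^ 0b0000_1000   # radiation flips one bit in one copy
assert majority_vote(good, good, corrupted) == good
```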
FIG. 4 is a block diagram of the microcontroller 230 of FIG. 2, configured in accordance with certain embodiments of the present disclosure. The microcontroller 230 is shown to include a microprocessor 400, a local memory 410, and an ECC system 430. The microprocessor 400 is configured to monitor 340 the error collection buffer 300 of the TMR reliability controller 210 and trigger operation of the memory scrubber circuit 330 in response to memory errors that have been stored in the buffer. In some embodiments, the monitoring may be performed by reading the buffer. In some embodiments, the monitoring may be accomplished through interrupts generated by the buffer when errors are stored. The microprocessor may clear the errors from the buffer after detection. The microprocessor 400 is also configured to determine whether the errors retrieved from the buffer are relatively simple single-bit errors, or whether the errors are multi-bit errors or are associated with a more serious SEFI condition that requires reinitialization of one or more of the memory controllers 130 in addition to memory scrubbing (e.g., operating in a SEFI recovery mode). The local memory 410 is configured to store copies of the contents 420 of the configuration registers of the memory controllers 130 so that these copies may be used to restore or refresh the memory controllers in response to detection of a SEFI condition. In some embodiments, the microprocessor may increase the scrub rate 370 during SEFI recovery and/or turn off error collection for the memory that is undergoing SEFI recovery, to allow for a faster recovery from this more serious error condition. In some embodiments, the microprocessor may set the priority 360 for the priority arbitrator 320 based on a tradeoff between the overwriting performed by the memory scrubber and the functional memory accesses of the processor 110. That priority may be determined based on guidance from the processor 110, provided on the configuration bus 220, and may be related to mission parameters or other considerations. For example, in some cases, correcting errors may be of primary importance to mission success, and so overwriting by the memory scrubber may be set to a higher priority. However, in other cases, allowing the processor to execute mission software with minimal interruption due to error correction may be of primary importance to mission success, and so functional memory accesses by the processor may be set to a higher priority. In some embodiments, the microcontroller 230 may power cycle the external memory 150a, 150b, or 150c to clear invalid states that are non-recoverable by reset or command. In some embodiments, the microprocessor may control the mode setting 350 to cause the TMR reliability controller to include or exclude ECC functionality as an additional operation to the scrubbing function. Determination of the mode setting may be made, in part, based on guidance from the processor 110, also provided on the configuration bus 220. In some embodiments, the microprocessor may be configured to monitor the rate at which errors are being detected through the error collection buffer 300 and to detect an increase or decrease in those error rates. In response to detection of such a change in error rates, the microprocessor may increase or decrease the scrub rate 370 accordingly. In some embodiments, an ECC system 430 is configured to maintain the integrity of the local memory by employing any suitable ECC technique to detect and correct errors that may occur in the local memory 410.
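Pulling together the SEFI handling attributed to the microcontroller above (raising the scrub rate, pausing error collection, restoring controller configuration registers from local copies, and power cycling as a last resort), one possible recovery sequence can be sketched as follows. All names here are hypothetical placeholders introduced for the example; the actual control logic is not specified at this level of detail.

```python
def recover_from_sefi(mem_id, ctrl, saved_config, scrubber, collector):
    """Hypothetical SEFI recovery sequence for one of the three redundant memories."""
    collector.disable(mem_id)                  # stop logging errors for the recovering memory
    scrubber.set_rate(scrubber.max_rate)       # scrub faster during recovery
    ctrl.write_config_registers(saved_config)  # restore registers from the local-memory copy
    if not ctrl.responds():                    # state non-recoverable by reset or command:
        ctrl.power_cycle()                     # power cycle to clear the invalid state
        ctrl.write_config_registers(saved_config)
    collector.enable(mem_id)
    scrubber.set_rate(scrubber.nominal_rate)   # return to the nominal scrub rate
```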
In some embodiments, the microprocessor 400 may be configured to detect stuck-bit errors, in which a bit remains stuck in a one or zero state despite repeated scrubbing attempts. Stuck bits may be reported back to the processor 110 over the configuration bus 220 so that the processor may attempt to avoid using memory locations that contain stuck bits.

Methodology

FIG. 5 is a flowchart illustrating a methodology 500 for providing TMR radiation hardened memory, in accordance with an embodiment of the present disclosure. As can be seen, the example method 500 includes a number of phases and sub-processes, the sequence of which may vary from one embodiment to another. However, when considered in aggregate, these phases and sub-processes form a process for providing TMR radiation hardened memory, in accordance with certain of the embodiments disclosed herein, for example as illustrated in FIGS. 1-4, as described above. However, other system architectures can be used in other embodiments, as will be apparent in light of this disclosure. To this end, the correlation of the various functions shown in FIG. 5 to the specific components illustrated in the figures is not intended to imply any structural and/or use limitations. Rather, other embodiments may include, for example, varying degrees of integration wherein multiple functionalities are effectively performed by one system. Numerous variations and alternative configurations will be apparent in light of this disclosure. In one embodiment, method 500 commences, at operation 510, by detecting differences between data stored redundantly in a first memory, a second memory, and a third memory. At operation 520, a memory error is identified based on the detected differences, and at operation 530, a memory address associated with the memory error is stored in an error collection memory. At operation 540, memory scrubbing is performed, in which the erroneous data is overwritten with corrected data at the memory address associated with the memory error. In some embodiments, the memory overwrite operation is a Read-Modify-Write memory operation. In some embodiments, the corrected data is generated based on a majority vote performed among the first memory, the second memory, and the third memory. At operation 550, arbitration is performed between the memory scrubbing operation and functional memory access associated with mission software execution (e.g., as performed by a processor configured to utilize the memory system). The arbitration is based on a priority associated with a tradeoff between the error correction activities of the memory scrubber, which increase data reliability, and timely execution of the mission software. Of course, in some embodiments, additional operations may be performed, as previously described in connection with the system. For example, if the memory error is associated with a Single Event Functional Interrupt (SEFI) condition, as opposed to a single-bit error, the rate of operation of the memory scrubber circuit may be increased. In some embodiments, copies of the configuration registers of the controllers associated with the first, second, and third memories may be stored in a local memory, and the configuration registers may be restored from the local memory copies in response to a determination that the memory error is associated with a SEFI condition. In some embodiments, the overwriting may be cancelled in response to a detection that a functional write is being performed at the memory address associated with the memory error.
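Putting operations 510 through 550 together, one scrub pass can be sketched as below, reusing `majority_vote` from the earlier sketch. The memory, buffer, and arbiter objects are hypothetical stand-ins introduced only for this example.

```python
def scrub_pass(mems, error_buffer, arbiter):
    """One pass of methodology 500: detect (510/520), record (530), repair (540) under arbitration (550)."""
    for addr in range(mems[0].size):
        words = [m.read(addr) for m in mems]      # operation 510: read all three copies
        if len(set(words)) > 1:                   # operation 520: a difference means an error
            error_buffer.append(addr)             # operation 530: record the failing address
    for addr in error_buffer:
        arbiter.wait_for_scrub_slot()             # operation 550: yield to functional accesses
        words = [m.read(addr) for m in mems]      # re-read as part of Read-Modify-Write
        good = majority_vote(*words)              # operation 540: consensus value
        for m, w in zip(mems, words):
            # overwrite only the disagreeing copy; cancel if a functional write is in flight
            if w != good and not arbiter.functional_write_pending(addr):
                m.write(addr, good)
    error_buffer.clear()
```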
Example System

FIG. 6 is a block diagram of a processing platform 600 configured to provide TMR radiation hardened memory, in accordance with an embodiment of the present disclosure. In some embodiments, platform 600, or portions thereof, may be hosted on, or otherwise incorporated into, the electronic systems of a space-based platform, including data communications systems, radar systems, computing systems, or embedded systems of any sort, where radiation hardening is particularly useful. The disclosed techniques may also be used to improve memory reliability in other platforms including data communication devices, personal computers, workstations, laptop computers, tablets, touchpads, portable computers, handheld computers, cellular telephones, smartphones, or messaging devices. Any combination of different devices may be used in certain embodiments. In some embodiments, platform 600 may comprise any combination of a processor 110, memories 160a, 160b, 160c, a TMR system 120, a network interface 640, an input/output (I/O) system 650, a user interface 660, a display element 664, and a storage system 670. As can be further seen, a bus and/or interconnect 690 is also provided to allow for communication between the various components listed above and/or other components not shown. Platform 600 can be coupled to a network 694 through the network interface 640 to allow for communications with other computing devices, platforms, devices to be controlled, or other resources. Other componentry and functionality not reflected in the block diagram of FIG. 6 will be apparent in light of this disclosure, and it will be appreciated that other embodiments are not limited to any particular hardware configuration. Processor 110 can be any suitable processor, and may include one or more coprocessors or controllers, such as an audio processor, a graphics processing unit, or a hardware accelerator, to assist in the execution of mission software and/or any control and processing operations associated with platform 600. In some embodiments, the processor 110 may be implemented as any number of processor cores. The processor (or processor cores) may be any type of processor, such as, for example, a micro-processor, an embedded processor, a digital signal processor (DSP), a graphics processor (GPU), a tensor processing unit (TPU), a network processor, a field programmable gate array, or another device configured to execute code. The processors may be multithreaded cores in that they may include more than one hardware thread context (or "logical processor") per core. Processor 110 may be implemented as a complex instruction set computer (CISC) or a reduced instruction set computer (RISC) processor. In some embodiments, processor 110 may be configured as an x86 instruction set compatible processor. Memory 160 comprises three redundant memories 160a, 160b, and 160c, as previously described, and can be implemented using any suitable type of digital storage including, for example, DDR3, DDR4, and/or DDR5 SDRAMs. Storage system 670 may be implemented as a non-volatile storage device such as, but not limited to, one or more of a hard disk drive (HDD), a solid-state drive (SSD), a universal serial bus (USB) drive, an optical disk drive, a tape drive, an internal storage device, an attached storage device, flash memory, battery backed-up synchronous DRAM (SDRAM), and/or a network accessible storage device. In some embodiments, storage 670 may comprise technology to increase storage performance and enhanced protection for valuable digital media when multiple hard drives are included.
Processor 110 may be configured to execute an Operating System (OS) 680, which may comprise any suitable operating system, such as Google Android (Google Inc., Mountain View, CA), Microsoft Windows (Microsoft Corp., Redmond, WA), Apple OS X (Apple Inc., Cupertino, CA), Linux, or a real-time operating system (RTOS). As will be appreciated in light of this disclosure, the techniques provided herein can be implemented without regard to the particular operating system provided in conjunction with platform 600, and therefore may also be implemented using any suitable existing or subsequently-developed platform. Network interface circuit 640 can be any appropriate network chip or chipset which allows for wired and/or wireless connection between other components of platform 600 and/or network 694, thereby enabling platform 600 to communicate with other local and/or remote computing systems, and/or other resources. Wired communication may conform to existing (or yet to be developed) standards, such as, for example, Ethernet. Wireless communication may conform to existing (or yet to be developed) standards, such as, for example, cellular communications including LTE (Long Term Evolution) and 5G, Wireless Fidelity (Wi-Fi), Bluetooth, and/or Near Field Communication (NFC). Exemplary wireless networks include, but are not limited to, wireless local area networks, wireless personal area networks, wireless metropolitan area networks, cellular networks, and satellite networks. I/O system 650 may be configured to interface between various I/O devices and other components of platform 600. I/O devices may include, but not be limited to, a user interface 660 and a display element 664. User interface 660 may include devices (not shown) such as a touchpad, keyboard, and mouse, etc., for example, to allow the user to control the system. Display element 664 may be configured to display information to a user. I/O system 650 may include a graphics subsystem configured to perform processing of images for rendering on the display element 664. The graphics subsystem may be a graphics processing unit or a visual processing unit (VPU), for example. An analog or digital interface may be used to communicatively couple the graphics subsystem and the display element. For example, the interface may be any of a high definition multimedia interface (HDMI), DisplayPort, wireless HDMI, and/or any other suitable interface using wireless high definition compliant techniques. In some embodiments, the graphics subsystem could be integrated into processor 110 or any chipset of platform 600. It will be appreciated that in some embodiments, the various components of platform 600 may be combined or integrated in a system-on-a-chip (SoC) architecture. In some embodiments, the components may be hardware components, firmware components, software components or any suitable combination of hardware, firmware or software. TMR system 120 is configured to provide radiation hardened reliability for the memories 160, as described previously. TMR system 120 may include any or all of the circuits/components illustrated in FIGS. 2-4, as described above. These components can be implemented or otherwise used in conjunction with a variety of suitable software and/or hardware that is coupled to or that otherwise forms a part of platform 600. These components can additionally or alternatively be implemented or otherwise used in conjunction with user I/O devices that are capable of providing information to, and receiving information and commands from, a user.
In various embodiments, platform 600 may be implemented as a wireless system, a wired system, or a combination of both. When implemented as a wireless system, platform 600 may include components and interfaces suitable for communicating over a wireless shared media, such as one or more antennae, transmitters, receivers, transceivers, amplifiers, filters, control logic, and so forth. An example of wireless shared media may include portions of a wireless spectrum, such as the radio frequency spectrum and so forth. When implemented as a wired system, platform 600 may include components and interfaces suitable for communicating over wired communications media, such as input/output adapters, physical connectors to connect the input/output adaptor with a corresponding wired communications medium, a network interface card (NIC), disc controller, video controller, audio controller, and so forth. Examples of wired communications media may include a wire, cable, metal leads, printed circuit board (PCB), backplane, switch fabric, semiconductor material, twisted pair wire, coaxial cable, fiber optics, and so forth. Various embodiments may be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (for example, transistors, resistors, capacitors, inductors, and so forth), integrated circuits, ASICs, programmable logic devices, digital signal processors, FPGAs, logic gates, registers, semiconductor devices, chips, microchips, chipsets, and so forth. Examples of software may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power level, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints. Some embodiments may be described using the expressions "coupled" and "connected" along with their derivatives. These terms are not intended as synonyms for each other. For example, some embodiments may be described using the terms "connected" and/or "coupled" to indicate that two or more elements are in direct physical or electrical contact with each other. The term "coupled," however, may also mean that two or more elements are not in direct contact with each other, but yet still cooperate or interact with each other. The various embodiments disclosed herein can be implemented in various forms of hardware, software, firmware, and/or special purpose processors. For example, in one embodiment at least one non-transitory computer readable storage medium has instructions encoded thereon that, when executed by one or more processors, cause one or more of the methodologies disclosed herein to be implemented.
The instructions can be encoded using a suitable programming language, such as C, C++, object oriented C, Java, JavaScript, Visual Basic .NET, Beginner's All-Purpose Symbolic Instruction Code (BASIC), or alternatively, using custom or proprietary instruction sets. The instructions can be provided in the form of one or more computer software applications and/or applets that are tangibly embodied on a memory device, and that can be executed by a computer having any suitable architecture. In one embodiment, the system can be hosted on a given website and implemented, for example, using JavaScript or another suitable browser-based technology. For instance, in certain embodiments, the system may leverage processing resources provided by a remote computer system accessible via network 694. The computer software applications disclosed herein may include any number of different modules, sub-modules, or other components of distinct functionality, and can provide information to, or receive information from, still other components. These modules can be used, for example, to communicate with input and/or output devices such as a display screen, a touch sensitive surface, a printer, and/or any other suitable device. Other componentry and functionality not reflected in the illustrations will be apparent in light of this disclosure, and it will be appreciated that other embodiments are not limited to any particular hardware or software configuration. Thus, in other embodiments platform 600 may comprise additional, fewer, or alternative subcomponents as compared to those included in the example embodiment of FIG. 6. The aforementioned non-transitory computer readable medium may be any suitable medium for storing digital information, such as a hard drive, a server, a flash memory, and/or random-access memory (RAM), or a combination of memories. In alternative embodiments, the components and/or modules disclosed herein can be implemented with hardware, including gate level logic such as a field-programmable gate array (FPGA), or alternatively, a purpose-built semiconductor such as an application-specific integrated circuit (ASIC). Still other embodiments may be implemented with a microcontroller having a number of input/output ports for receiving and outputting data, and a number of embedded routines for carrying out the various functionalities disclosed herein. It will be apparent that any suitable combination of hardware, software, and firmware can be used, and that other embodiments are not limited to any particular system architecture. Some embodiments may be implemented, for example, using a machine readable medium or article which may store an instruction or a set of instructions that, if executed by a machine, may cause the machine to perform a method, process, and/or operations in accordance with the embodiments. Such a machine may include, for example, any suitable processing platform, computing platform, computing device, processing device, computing system, processing system, computer, process, or the like, and may be implemented using any suitable combination of hardware and/or software.
The machine readable medium or article may include, for example, any suitable type of memory unit, memory device, memory article, memory medium, storage device, storage article, storage medium, and/or storage unit, such as memory, removable or non-removable media, erasable or non-erasable media, writeable or rewriteable media, digital or analog media, hard disk, floppy disk, compact disk read only memory (CD-ROM), compact disk recordable (CD-R) memory, compact disk rewriteable (CD-RW) memory, optical disk, magnetic media, magneto-optical media, removable memory cards or disks, various types of digital versatile disk (DVD), a tape, a cassette, or the like. The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, encrypted code, and the like, implemented using any suitable high level, low level, object oriented, visual, compiled, and/or interpreted programming language. Unless specifically stated otherwise, it may be appreciated that terms such as "processing," "computing," "calculating," "determining," or the like refer to the action and/or process of a computer or computing system, or similar electronic computing device, that manipulates and/or transforms data represented as physical quantities (for example, electronic) within the registers and/or memory units of the computer system into other data similarly represented as physical entities within the registers, memory units, or other such information storage, transmission, or display devices of the computer system. The embodiments are not limited in this context. The terms "circuit" or "circuitry," as used in any embodiment herein, are functional and may comprise, for example, singly or in any combination, hardwired circuitry, programmable circuitry such as computer processors comprising one or more individual instruction processing cores, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. The circuitry may include a processor and/or controller configured to execute one or more instructions to perform one or more operations described herein. The instructions may be embodied as, for example, an application, software, firmware, etc. configured to cause the circuitry to perform any of the aforementioned operations. Software may be embodied as a software package, code, instructions, instruction sets and/or data recorded on a computer-readable storage device. Software may be embodied or implemented to include any number of processes, and processes, in turn, may be embodied or implemented to include any number of threads, etc., in a hierarchical fashion. Firmware may be embodied as code, instructions or instruction sets and/or data that are hard-coded (e.g., nonvolatile) in memory devices. The circuitry may, collectively or individually, be embodied as circuitry that forms part of a larger system, for example, an integrated circuit (IC), an application-specific integrated circuit (ASIC), a system-on-a-chip (SoC), desktop computers, laptop computers, tablet computers, servers, smartphones, etc. Other embodiments may be implemented as software executed by a programmable control device. In such cases, the terms "circuit" or "circuitry" are intended to include a combination of software and hardware such as a programmable control device or a processor capable of executing the software.
As described herein, various embodiments may be implemented using hardware elements, software elements, or any combination thereof. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate array (FPGA), logic gates, registers, semiconductor device, chips, microchips, chip sets, and so forth. Numerous specific details have been set forth herein to provide a thorough understanding of the embodiments. It will be understood, however, that other embodiments may be practiced without these specific details, or otherwise with a different set of details. It will be further appreciated that the specific structural and functional details disclosed herein are representative of example embodiments and are not necessarily intended to limit the scope of the present disclosure. In addition, although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described herein. Rather, the specific features and acts described herein are disclosed as example forms of implementing the claims. Further Example Embodiments The following examples pertain to further embodiments, from which numerous permutations and configurations will be apparent. Example 1 is a memory system comprising: a redundancy comparator configured to detect differences between data stored redundantly in a first memory, a second memory, and a third memory, the redundancy comparator further configured to identify a memory error based on the detected differences; an error collection buffer configured to store a memory address associated with the memory error; a memory scrubber circuit configured to overwrite, at the memory address associated with the memory error, erroneous data with corrected data, the corrected data generated based on a majority vote performed among the first memory, the second memory, and the third memory; and a priority arbitrator configured to arbitrate between the overwriting performed by the memory scrubber and a functional memory access associated with software execution performed by a processor configured to utilize the memory system. Example 2 includes the memory system of Example 1, further comprising a microcontroller configured to monitor the error collection buffer and trigger operation of the memory scrubber circuit in response to the memory error. Example 3 includes the memory system of Example 2, wherein the microcontroller is configured to set a priority for the priority arbitrator based on a tradeoff between the overwriting performed by the memory scrubber and the functional memory access. Example 4 includes the memory system of Examples 2 or 3, wherein the microcontroller is configured to determine that the memory error is associated with a Single Event Functional Interrupt (SEFI) condition, and to increase a rate of operation of the memory scrubber circuit in response to the determination. 
Example 5 includes the memory system of Example 4, wherein the microcontroller comprises a local memory and the microcontroller is configured to store copies of configuration registers of controllers associated with the first memory, the second memory, and the third memory in the local memory, and to restore the configuration registers based on the stored copies in response to the determination that the memory error is associated with a SEFI condition. Example 6 includes the memory system of Examples 4 or 5, wherein the microcontroller is configured to power cycle one or more of the first memory, the second memory, and the third memory in response to the determination that the memory error is associated with a SEFI condition. Example 7 includes the memory system of any of Examples 1-6, wherein the memory scrubber circuit is configured to cancel the overwrite in response to detection that a functional write is being performed at the memory address associated with the memory error. Example 8 is a space-based processing system comprising: a processor configured to execute mission software; and a memory system comprising: a redundancy comparator configured to detect differences between data stored redundantly in a first memory, a second memory, and a third memory, the redundancy comparator further configured to identify a memory error based on the detected differences, an error collection buffer configured to store a memory address associated with the memory error, a memory scrubber circuit configured to overwrite, at the memory address associated with the memory error, erroneous data with corrected data, the corrected data generated based on a majority vote performed among the first memory, the second memory, and the third memory, and a priority arbitrator configured to arbitrate between the overwriting performed by the memory scrubber and a functional memory access associated with the mission software execution. Example 9 includes the space-based processing system of Example 8, wherein the memory system comprises a microcontroller configured to monitor the error collection buffer and trigger operation of the memory scrubber circuit in response to the memory error. Example 10 includes the space-based processing system of Example 9, wherein the microcontroller is configured to set a priority for the priority arbitrator based on a tradeoff between the overwriting performed by the memory scrubber and the functional memory access. Example 11 includes the space-based processing system of Examples 9 or 10, wherein the microcontroller is configured to determine that the memory error is associated with a Single Event Functional Interrupt (SEFI) condition, and to increase a rate of operation of the memory scrubber circuit in response to the determination. Example 12 includes the space-based processing system of Example 11, wherein the microcontroller comprises a local memory and the microcontroller is configured to store copies of configuration registers of controllers associated with the first memory, the second memory, and the third memory in the local memory, and to restore the configuration registers based on the stored copies in response to the determination that the memory error is associated with a SEFI condition. 
Example 13 includes the space-based processing system of Examples 11 or 12, wherein the microcontroller is configured to power cycle one or more of the first memory, the second memory, and the third memory in response to the determination that the memory error is associated with a SEFI condition. Example 14 includes the space-based processing system of any of Examples 8-13, wherein the memory scrubber circuit is configured to cancel the overwrite in response to detection that a functional write is being performed at the memory address associated with the memory error. Example 15 is a method for providing radiation hardened memory, the method comprising: detecting, by a redundancy comparator, differences between data stored redundantly in a first memory, a second memory, and a third memory; identifying, by the redundancy comparator, a memory error based on the detected differences; storing, by an error collection buffer, a memory address associated with the memory error; generating, by a memory scrubber circuit, corrected data based on a majority vote performed among the first memory, the second memory, and the third memory; overwriting, by the memory scrubber circuit, erroneous data with the corrected data, at the memory address associated with the memory error; and arbitrating, by a priority arbitrator, between the overwriting performed by the memory scrubber and a functional memory access associated with software execution performed by a processor configured to utilize the memory system. Example 16 includes the method of Example 15, further comprising setting a priority for the priority arbitrator based on a tradeoff between the overwriting performed by the memory scrubber and the functional memory access. Example 17 includes the method of Examples 15 or 16, further comprising determining that the memory error is associated with a Single Event Functional Interrupt (SEFI) condition and increasing a rate of operation of the memory scrubber circuit in response to the determination. Example 18 includes the method of Example 17, further comprising storing copies of configuration registers of controllers associated with the first memory, the second memory, and the third memory in a local memory, and restoring the configuration registers based on the stored copies in response to the determination that the memory error is associated with a SEFI condition. Example 19 includes the method of Examples 17 or 18, further comprising power cycling one or more of the first memory, the second memory, and the third memory in response to the determination that the memory error is associated with a SEFI condition. Example 20 includes the method of any of Examples 15-19, further comprising canceling the overwrite in response to detection that a functional write is being performed at the memory address associated with the memory error. The terms and expressions which have been employed herein are used as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications are possible within the scope of the claims. Accordingly, the claims are intended to cover all such equivalents. Various features, aspects, and embodiments have been described herein. The features, aspects, and embodiments are susceptible to combination with one another as well as to variation and modification, as will be appreciated in light of this disclosure. 
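By way of illustration only, the majority-vote correction recited in Examples 1 and 15 may be sketched in C as follows. The 32-bit word width, the function names, and the flat array indexing are assumptions made for the sketch rather than features of the claimed subject matter, and the priority arbitration and the overwrite cancellation of Example 7 are omitted.

```c
#include <stdint.h>

/* Bitwise majority vote over three redundant copies of a word: for each
 * bit, the value held by at least two of the three memories wins. A
 * nonzero error mask means the copies disagreed, so the address should
 * be logged in the error collection buffer and scrubbed. */
typedef struct {
    uint32_t corrected;   /* majority-voted (corrected) data */
    uint32_t error_mask;  /* set bits: positions where a copy disagreed */
} vote_result_t;

static vote_result_t majority_vote(uint32_t a, uint32_t b, uint32_t c)
{
    vote_result_t r;
    r.corrected  = (a & b) | (a & c) | (b & c);
    r.error_mask = (a ^ b) | (a ^ c);   /* nonzero iff any copy differs */
    return r;
}

/* Scrub step: overwrite the erroneous copies with the voted value at a
 * failing address taken from the error collection buffer. */
static void scrub_address(volatile uint32_t *m0, volatile uint32_t *m1,
                          volatile uint32_t *m2, uint32_t index)
{
    vote_result_t v = majority_vote(m0[index], m1[index], m2[index]);
    if (v.error_mask != 0) {
        m0[index] = v.corrected;
        m1[index] = v.corrected;
        m2[index] = v.corrected;
    }
}
```

A scrubber loop would apply scrub_address to each address logged in the error collection buffer; such a sketch is itself merely one of the combinations, variations, and modifications noted above.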
The present disclosure should, therefore, be considered to encompass such combinations, variations, and modifications. It is intended that the scope of the present disclosure be limited not by this detailed description, but rather by the claims appended hereto. Future filed applications claiming priority to this application may claim the disclosed subject matter in a different manner and may generally include any set of one or more elements as variously disclosed or otherwise demonstrated herein. | 44,832 |
11861182 | DESCRIPTION OF EMBODIMENTS The present invention embodiments present an efficient way of securing IoT (Internet of Things) devices against most cyber-attacks while having a low cost overhead. Examples of attacks are network attacks, malware attacks, fault injection, etc. Currently, many different countermeasures are provided. However, they typically target a small set of the attack space and therefore cannot be efficiently used in resource-constrained IoT devices. IoT devices operate as integrated circuit devices in an untrusted environment (i.e. in the field), but rely on interaction with a central server (or gateway) in a trusted environment. The trusted environment may be a cloud-type of environment which is under specific control, e.g. implemented in the central server/gateway. The invention embodiments provide a much cheaper and more effective solution. The solution entails an additional integrated circuit device area; however, this overhead is very low as many of the components required are typically already available on a generic IoT device. The invention embodiments use a small IC design IP part that is inserted between the caches of a device and the main memory/flash memory, arranged to perform hashing and verification functionality. As data to the memory has to be either securely hashed or verified (see below), a time penalty also has to be paid. However, this time penalty is marginal as the present invention embodiments reside between the cache and main memory/flash memory of an IoT device, and do not affect the interaction of the processor and caches. A generic block diagram of an IoT device (integrated circuit device 1) according to an embodiment of the present invention is shown schematically in FIG. 1. An integrated circuit device 1 is shown comprising a processor module 2 in communication with a cache memory module 3, 4 and with one or more input/output units 11-14 for external communication. One or more memory control modules 6, 8, 10 are present, each arranged to interface with an associated storage memory unit 5, 7, 9. The storage memory units 5, 7, 9 can be an integral part of the integrated circuit device 1, or, as shown in the block diagram of FIG. 1, the storage memory units 5, 7, 9 form a device module 1A separate from the rest of the integrated circuit device 1. This architecture allows the integrated circuit device 1 to be operated in a wide variety of applications in a cloud environment, including but not limited to communications, application loading and application execution, secure booting, etc. In addition, the integrated circuit device 1 further comprises an authentication module 15 (designated with the abbreviation UA for Unique Authentication) in communication with the memory control modules 6, 8, 10 and the cache memory modules 3, 4. The authentication module 15 is arranged to provide specific functionality, i.e. to store a secure key (e.g. after generating a (unique) secure key), to read a predetermined set of data from the associated storage memory units 5, 7, 9 via the memory control modules 6, 8, 10, together with an associated stored hash value, to calculate a hash value of the predetermined set of data using the secure key, and to store the predetermined set of data in the cache memory module 3, 4 only if the calculated hash value corresponds to the associated stored hash value. Thus, the predetermined set of data read from memory is only copied to the cache memory (and thus made accessible to the processor 2) if the calculated and stored hash values match.
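A minimal sketch of this verify-before-cache behavior, written in C under assumed block and MAC sizes, is given below; the toy keyed hash merely stands in for the secure hash function of the authentication module 15 and is not cryptographically secure.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy keyed hash: a stand-in for the secure hash of the authentication
 * module 15. NOT cryptographically secure; a real device would use a
 * proper MAC such as HMAC or CMAC. */
static uint64_t toy_mac(const uint8_t *key, size_t key_len,
                        const uint8_t *data, size_t data_len)
{
    uint64_t h = 0x9E3779B97F4A7C15ULL;           /* arbitrary start value */
    for (size_t i = 0; i < key_len; i++)
        h = (h ^ key[i]) * 0x100000001B3ULL;      /* FNV-style mixing */
    for (size_t i = 0; i < data_len; i++)
        h = (h ^ data[i]) * 0x100000001B3ULL;
    return h;
}

/* Copy a block from main/flash memory into the cache only if the MAC
 * stored alongside it verifies under the device's secure key; on a
 * mismatch the cache is left untouched and false is returned. */
static bool fetch_verified(const uint8_t *secure_key, size_t key_len,
                           const uint8_t *mem_block, size_t block_len,
                           uint64_t stored_mac, uint8_t *cache_line)
{
    if (toy_mac(secure_key, key_len, mem_block, block_len) != stored_mac)
        return false;   /* unauthenticated data never reaches the cache */
    memcpy(cache_line, mem_block, block_len);
    return true;
}
```

A cache-fill request would call fetch_verified and, on a false return, refuse to expose the data to the processor rather than filling the cache.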
The authentication module 15 can be implemented as part of the integrated circuit design, i.e. as an additional integrated circuit device area. The authentication module (unit, component, part, etc.) 15 can be seen as a functional part of the integrated circuit design. The secure key can be any digital key associated with the integrated circuit device 1 allowing any type of hash calculations on the predetermined sets of data. In a further embodiment, the authentication module 15 is further arranged to read a further predetermined set of data from the cache memory module 3, 4, to calculate a hash value of the further predetermined set of data using the secure key, to store the further predetermined set of data in the associated storage memory units 5, 7, 9 via the memory control modules 6, 8, 10, and to store the calculated hash value. This ensures data can only be stored in memory with an associated hash value, which can be verified when retrieving data from the memory 5, 7, 9 for use by the processor 2 (via cache memory 3, 4). In an even further embodiment, the cache memory module comprises a program cache module 3 and a data cache module 4. By making this division, the hardware design of the integrated circuit device 1 can be optimized (e.g. the program cache module 3 can be optimized for read operations by the processor 2, and the data cache module 4 for read and write operations). Of course, the present invention embodiments are also usable in case the data and program caches 3, 4 are shared. FIG. 2 shows a schematic diagram of a central server 17 able to communicate with two integrated circuit devices 1, 1′ according to any one of the present invention embodiments. In addition to the (internal) authentication function of the present invention embodiments of the integrated circuit device 1, the present invention further relates to a communication protocol that further secures the integrated circuit devices 1. As shown by the various types of arrows in FIG. 2, the communication protocol has device to server/gateway communication and device to device communication. The communication protocol can be implemented by the processor 2 and e.g. may use standard crypto engines. In a further aspect, the present invention thus relates to a method for updating data in an integrated circuit device 1 according to any one of the present invention embodiments, the method comprising a first authentication by the integrated circuit device 1 of a central server (or gateway) 17, a second authentication of the integrated circuit device 1 by the central server 17, and receiving and storing data by the integrated circuit device 1 sent by the central server 17 only if the first and second authentication are successful. Communication with the gateway/server 17 can happen in a regular fashion (i.e. using any authentication protocol, e.g. using a public and private key). After authentication, the server 17 is then able to install applications on and/or update integrated circuit devices 1. As only the central server 17 can authenticate the integrated circuit device 1, no other device or system is able to install applications on that specific integrated circuit device 1. Secondly, communication between two integrated circuit devices 1, 1′ requires authentication of both devices. For example, when device A wants to communicate with device B, device A has to send this request to the gateway/server 17. Once device A is authorized as a legitimate device, the gateway/server 17 will notify device B that device A wants to communicate with it. If device B is also authenticated by the gateway/server 17, device A will proceed to communicate with device B. During such a communication, device A is not able to install anything on device B. However, device A is authorized to send data to device B in case this is needed. Once device B receives such data, it stores it locally using the secure hash function of the authentication unit 15.
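The brokered device-to-device handshake just described can be sketched as follows; the helper functions are hypothetical placeholders (stubbed so the sketch compiles) for the gateway's actual challenge-response and routing logic.

```c
#include <stdbool.h>

/* Hypothetical hooks standing in for the gateway's real challenge-response
 * authentication and message routing; stubbed here so the sketch compiles. */
static bool authenticate_device(int device_id) { (void)device_id; return true; }
static void notify_device(int target, int requester) { (void)target; (void)requester; }
static void allow_data_path(int a, int b) { (void)a; (void)b; }

/* Gateway-side brokering: device A may exchange data with device B only
 * after the gateway has authenticated both ends. The granted path carries
 * data only; A is never authorized to install software on B through it. */
static bool broker_connection(int device_a, int device_b)
{
    if (!authenticate_device(device_a))
        return false;                    /* A is not a legitimate device */
    notify_device(device_b, device_a);   /* tell B that A wants to talk */
    if (!authenticate_device(device_b))
        return false;                    /* B failed authentication */
    allow_data_path(device_a, device_b); /* data only, no installs */
    return true;
}
```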
In a further aspect, therefore, a method is provided for setting up a communication path between two integrated circuit devices 1, 1′ according to any one of the present invention embodiments, the method comprising authentication of the two integrated circuit devices 1, 1′ by a central server 17, and if authentication is successful, notifying the two integrated circuit devices 1, 1′ of their mutual authentication by the central server 17, establishing the communication path between the two integrated circuit devices 1, 1′, and exchanging data via the communication path. In a further embodiment, the method initially sets up the communication by having the central server 17 execute the following actions: receiving a communication request, the communication request comprising a device identification of a first one 1 of the two integrated circuit devices 1, 1′, and address identification of the two integrated circuit devices 1, 1′; sending a challenge message to a second one 1′ of the two integrated circuit devices 1, 1′; and receiving a device identification from the second one 1′ of the two integrated circuit devices 1, 1′. The present invention embodiments allow a very good authentication mechanism to be provided in machine to machine (M2M) applications, wherein a cloud is able to authenticate a device, and a device is able to authenticate the cloud. The authentication mechanisms applied to obtain this mutual authentication scheme are as such available in present day IoT standards, such as CoAP, XMPP and DDS. It is noted that various embodiments as described herein may be combined as indicated by the various claim references in the attached set of claims. Some or all of the resulting embodiments provide a combination of one or more of the specific advantages. In all, the present invention embodiments prevent execution of malware in an integrated circuit device 1, and also the spread of malware in a network of such integrated circuit devices 1. The relevant information cannot be obtained locally (e.g. using sniffing at printed circuit board level). Moreover, typical hardware attacks (fault injection, side channel) are prevented, especially if e.g. the optional encryption is applied. In network environments, the fact that each communication channel between two integrated circuit devices uses a different key helps in preventing attacks. The actual implementation allows re-use of existing hardware blocks in integrated circuit device 1 architectures for IoT applications, and the additional authentication unit 15 hardware is readily implemented. FIG. 3 shows a block diagram of an exemplary embodiment of the present invention integrated circuit device 1 in more detail, wherein elements corresponding to components in the FIG. 1 embodiment have the same reference numerals. As discussed above, in this embodiment the processor 2 interfaces with a program cache module 3 and a data cache module 4. Furthermore, input/output units 11-14 for external communication are present in the form of a wired IO unit 11, a wireless IO unit 12, a sensor interface 13 and a direct memory access unit 14.
As furthermore shown in the block diagram of FIG. 3, the one or more memory control modules comprise a boot loader 6, and the associated storage memory unit comprises a boot read-only memory module 5. These are standard component implementations in integrated circuit design, and allow executable data to be stored in the boot ROM module 5, which at boot-up is transferred to the program cache module 3 (after the authentication check in the authentication unit 15 as discussed above, in the FIG. 3 embodiment also shown with the reference UA for Unique Authentication). The present invention embodiments render many of the (cyber) attacks discussed above useless, as the integrated circuit devices 1 allow only execution of software that is authorized by the integrated circuit device 1 itself (and indirectly authorized by the central server 17 or gateway). I.e., an application binary (executable data) is stored in the boot ROM module 5 with a hash value (signature) generated by the integrated circuit device 1 itself. As a result, this executable data can only be run by the processor 2 on this specific integrated circuit device 1. Furthermore, possibly infected devices 1 cannot spread malware to other devices 1. Although attackers could still gain access remotely by e.g. cracking a password (hijacking), they will not be able to download malicious applications onto the hijacked IoT device 1. In a further embodiment, the one or more memory control modules comprise a static memory control module 8, and the associated storage memory unit comprises a static storage unit 7, e.g. in the form of a Flash memory module. In addition, or alternatively, the one or more memory control modules comprise a dynamic memory control module 10, and the associated storage memory unit comprises a dynamic storage unit 9, e.g. in the form of a DRAM module. Using the authentication checks and hash value generation by the authentication unit 15 ensures that only authenticated data can be stored and retrieved by the integrated circuit device 1. FIG. 4 shows a functional block diagram associated with the authentication unit 15 of the embodiment of the integrated circuit device 1 of FIG. 3. The authentication unit 15 communicates with the various memory control modules 6, 8, 10 via the memory ports part 15A, as indicated in FIG. 4 for the various memory variants (boot ROM module 5, static storage unit 7, dynamic storage unit 9). Data and hash values (or message authentication codes, MAC) are transferred via the memory ports part 15A. Furthermore, a control unit 20 is provided in order to provide the proper control signals to the various memory variants (storage memory units 5, 7, 9 via the memory control modules 6, 8, 10). Similarly, data can be transferred from/to the cache memory modules 3, 4, again under control of the control unit 20 (left side of the FIG. 4 functional block diagram). The predetermined sets of data are temporarily stored in shift registers (or buffers) 23, 24, 25, and where appropriate, hash values are calculated by respective MAC function blocks 21, 22. As shown in the lower part of the functional block diagram of FIG. 4, a predetermined set of data is received via the memory ports part 15A from the associated storage memory units 5, 7, 9 via the memory control modules 6, 8, 10, as well as an associated stored hash value (MAC). Using shift register 24 and MAC function block 22, a hash value of the predetermined set of data is calculated using the secure key, the secure key being obtained via control unit 20.
The calculated hash value (MAC) is compared to the associated and retrieved stored hash value in comparator unit 26. In cooperation with the control unit 20, the predetermined set of data is transferred to the cache memory module 3, 4 only if the calculated hash value corresponds to the associated stored hash value (using shift register 25 if necessary). A similar functionality is shown in the functional block diagram of FIG. 4 for transferring a further predetermined set of data from the cache memory module 3, 4 to the storage memory units 5, 7, 9, now using shift register 23 and MAC function block 21. In addition, the security concept of the above described embodiments is enhanced by encrypting/decrypting the data stored in the device 1 as well. As a result, both data and operations are further obfuscated at run-time, thereby preventing malware injection through data and fault injection attacks. A huge benefit of this embodiment is that it renders all network attacks useless as long as proper encryption is used during communication with the central server 17 and/or other devices 1. To this end, the authentication module 15 further comprises an encryption unit 27 for encrypting data before storage in the associated storage memory units 5, 7, 9 via the one or more memory control modules 6, 8, 10. Furthermore, the authentication module 15 may further comprise a decryption unit 28 for decrypting data before storage in the cache memory module 3, 4. In a further embodiment, the authentication module 15 further comprises a (unique) secure key generation unit 29. In the exemplary embodiment shown in FIG. 4, this (unique) secure key generation unit 29 is in communication with control unit 20, which ensures proper triggering of the secure key generation unit 29 and internal provision of the secure key to the MAC function blocks 21, 22. This makes it possible to secure the application using authenticated software (stored in the boot ROM module 5), i.e. by signing the software (program data) with secret information which is only known in the integrated circuit device 1 itself and in the central server 17 (as part of the device authentication process, see above). In the embodiment shown in FIG. 4, furthermore a derived key generation unit 31 is present, in communication with the control unit 20, the key generation unit 29, and (external to the authentication module 15) an address input from the cache memory module 3, 4. The derived key generation unit 31 is used to generate multiple keys from a primary key (provided by key generation unit 29). Key derivation techniques are typically used to generate unique keys per transaction/communication or for key encrypting keys. Key derivation functions are often based on MACs (message authentication codes) or hash functions. The derived keys (referred to as MAC keys) as output from the key derivation unit 31 can be used in this application as input keys to the MAC function blocks 21, 22 or encryption/decryption blocks 27, 28. A unique MAC key is generated per memory address (e.g., Flash, DRAM). As a result, each address in the memory (e.g., Flash, DRAM) will use a MAC key that is only valid for that address. This prevents attacks where an adversary tries to modify the location of data in the memory (with their corresponding hashes). This solution will be able to detect this, as a different key (i.e. based on the original address of that data) would be needed to validate the calculated MAC.
However, as the address has changed (the data having been moved to another location), the MAC key used will differ, and hence the validation of the hash will fail and the tampering can be detected. The derived key can be generated by the key derivation unit 31 in different ways, such as:
- A simple XOR operation between the unique key generated by the key generation unit 29 and the memory address of the read or write operation.
- A keyed hash function (i.e., MAC) where the unique key from the key generation unit 29 is used as key and the address as message.
- An encryption where the unique key from the key generation unit 29 is used as key and the address as message.
In a further embodiment, the (unique) secure key generation unit 29 may be arranged to generate a secure key which is determined based on hardware features of the integrated circuit device 1. This type of hardware based unique key generation is e.g. obtained through specialized circuits known as Physically Unclonable Functions (PUFs). Other unique key generation methods can be realized by storing unique codes in E-fuses (electrical fuses) or tamper-proof memory. In a further embodiment, the authentication module 15 comprises a locking circuit (e.g. inside control unit 20) which can be activated by the activation key unit 30. The control unit 20 is designed with extra logic to intentionally work only if a proper activation code is applied from the activation key unit 30. This technique is also described as IC metering, logic locking, logic encryption, logic obfuscation or hardware obfuscation. The activation process is applied after manufacturing of the integrated circuit device 1 by writing the activation code inside the activation key unit 30, e.g. by writing the correct activation code in a tamper-proof (secure) memory part. As the integrated circuit device 1 can then only be activated using this activation code (which may also be the secure key as described above), various types of possible attacks are prevented. An attack during design (e.g. by a company with (unauthorised) access to the chip design) cannot lead to sales of the integrated circuit devices 1 produced, as they will not work without the secure key. An attack during manufacture (e.g. a foundry producing more chips than agreed) will also be blocked, as the correct secure key is not known to the manufacturer. As an example, the activation code of logic encryption inside control unit 20 may be based on the unique key generated by key generation unit 29, e-fused keys at activation key unit 30, or a combination of both. For activation of the authentication unit 15, e.g. the key generation unit 29 is challenged and, based on the response, a (correct) partial secure key is generated, possibly in combination with a further partial secure key based on E-fused hardware at activation key unit 30. The combined activation code is then used to unlock the control unit 20, which consequently unlocks the authentication unit 15, after which the boot of the entire integrated circuit device 1 can be started. In all embodiments, security is furthermore maintained during all the usage phases of the integrated circuit device 1, i.e., installation of new program data (e.g. apps), updates thereof, and while running the application and during communication with the central server 17 and/or other IoT devices 1.
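As an illustration of the first derivation option listed above, the XOR of the device-unique key with the memory address can be sketched as follows; the 64-bit widths are assumptions of the sketch.

```c
#include <stdint.h>

/* Per-address MAC key: moving authenticated data (and its hash) to a
 * different address changes the derived key, so the stored MAC no longer
 * verifies and the relocation is detected. The keyed-hash and encryption
 * variants listed above are stronger alternatives to this plain XOR. */
static uint64_t derive_mac_key(uint64_t unique_key, uint64_t address)
{
    return unique_key ^ address;
}
```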
To authenticate data or applications from the storage memory units 5, 7, 9 (e.g. flash memory), the authentication unit 15 uses a secure key to calculate the hash value of the data before storage, or, when retrieving data from the storage memory units 5, 7, 9, a calculated hash value is matched against the stored and retrieved hashes. This makes sure that only authentic application data and other data are read. The same applies to the operating system. Data that is generated locally will be stored with a secure hash. This hash is also calculated by the authentication unit 15. As described above, the authentication unit 15 comes with an optional encryption unit 27 and decryption unit 28 for data storage in the DRAM and/or flash to increase the security. On top of that, to be able to disable the entire integrated circuit device 1, logic locking is used (e.g. using the key generation unit 29/key derivation unit 31 and the activation key unit 30 to generate a unique activation key based on e.g. PUFs that are E-fused (e.g. using the secure key generation unit 29 described above)). The generated key from key generation unit 29/key derivation unit 31 is then also used for the secure hashing. In the embodiments shown and described with reference to FIGS. 3 and 4, various security functions (hash and encryption) may be applied to the static storage unit 7 (Flash) and/or the dynamic storage unit 9 (DRAM), providing a total of sixteen possible combinations as shown in the below table of combinations:

Combination   FLASH              DRAM               BOOT ROM
              Hash   Encrypt.    Hash   Encrypt.    Hash   Encrypt.
 1            YES    NO          NO     NO          YES    YES
 2            NO     NO          NO     NO          YES    YES
 3            YES    YES         NO     NO          YES    YES
 4            NO     YES         NO     NO          YES    YES
 5            YES    NO          YES    NO          YES    YES
 6            NO     NO          YES    NO          YES    YES
 7            YES    YES         YES    NO          YES    YES
 8            NO     YES         YES    NO          YES    YES
 9            YES    NO          NO     YES         YES    YES
10            NO     NO          NO     YES         YES    YES
11            YES    YES         NO     YES         YES    YES
12            NO     YES         NO     YES         YES    YES
13            YES    NO          YES    YES         YES    YES
14            NO     NO          YES    YES         YES    YES
15            YES    YES         YES    YES         YES    YES
16            NO     YES         YES    YES         YES    YES

The various types of memory present in the present invention embodiments of the integrated circuit device 1 may be implemented as one of various alternatives. In one embodiment, the associated storage memory units 5, 7, 9 comprise a memory unit with no parity. The hash value can e.g. be appended to each N byte word (N is e.g. 64), or alternatively can be stored in a separate memory area (requiring an additional memory access). Alternatively (or additionally), the associated storage memory units 5, 7, 9 comprise a memory unit with parity bits. In this case, the regular parity bits field can be filled with the calculated hash value, thus requiring no or minimal additional resources for storing the hash values. In an even further alternative or additional embodiment, the associated storage memory units 5, 7, 9 comprise a memory unit with error correcting code (ECC). Similar to the first implementation (memory without parity), the hash value can be added to each data word, or in a separate memory area. The present invention has been described above with reference to a number of exemplary embodiments as shown in the drawings. Modifications and alternative implementations of some parts or elements are possible, and are included in the scope of protection as defined in the appended claims. | 23,569
11861183 | DETAILED DESCRIPTION Embodiments provide a disk device or a storage device that appropriately checks whether a key used for encrypting data and a key obtained from a host match each other when an encryption key is generated on the host side. In general, according to an embodiment, a disk device includes a volatile memory, a nonvolatile memory, and a controller. The controller is configured to receive, from a host, a key setting request that includes a cryptographic key, a key ID thereof, and tag information of the cryptographic key, and to generate generation information of the cryptographic key. The controller is also configured to store a first entry including the tag information, the cryptographic key, and the generation information associated with each other in the volatile memory, and store a second entry including the key ID and the generation information associated with each other in the nonvolatile memory. First Embodiment FIG. 1 is a schematic diagram showing an example of a configuration of a magnetic disk device 1 according to a first embodiment. The magnetic disk device 1 is connected to a host 2. The magnetic disk device 1 can receive, from the host 2, an access command such as an encryption key set command, a write command, and a read command. The magnetic disk device 1 includes a magnetic disk 11 having a recording surface formed on a surface thereof. The magnetic disk device 1 writes data and reads data into/from the magnetic disk 11 (more precisely, the recording surface of the magnetic disk 11) in response to the access command. The magnetic disk device 1 may include a plurality of magnetic disks 11, but in the present embodiment, the magnetic disk device 1 includes one magnetic disk 11 for simplification of description and illustration. Data is written and read via a magnetic head 22. Specifically, in addition to the magnetic disk 11, the magnetic disk device 1 includes a spindle motor 12, a motor driver integrated circuit (IC) 21, the magnetic head 22, an actuator arm 15, a voice coil motor (VCM) 16, a ramp 13, a head IC 24, a read/write channel (RWC) 25, a RAM 27, a flash read only memory (FROM) 28, a buffer memory 29, a hard disk controller (HDC) 23, and a processor 26. The magnetic disk 11 is rotated at a predetermined rotation speed by the spindle motor 12 attached to a rotation shaft of the magnetic disk 11. The spindle motor 12 is driven by the motor driver IC 21. The magnetic disk 11 is an example of the nonvolatile memory. The motor driver IC 21 controls rotation of the spindle motor 12 and rotation of the VCM 16. The magnetic head 22 writes data and reads data into/from the magnetic disk 11 by a write element 22w and a read element 22r provided in the magnetic head 22, respectively. In addition, the magnetic head 22 is attached to a tip end of the actuator arm 15. The magnetic head 22 is moved along a radial direction of the magnetic disk 11 by the VCM 16 driven by the motor driver IC 21. When rotation of the magnetic disk 11 is stopped, the magnetic head 22 is moved onto the ramp 13. The ramp 13 is configured to hold the magnetic head 22 at a position separated from the magnetic disk 11. During a read operation, the head IC 24 amplifies a signal read from the magnetic disk 11 by the magnetic head 22 and supplies the amplified signal to the RWC 25. In addition, the head IC 24 amplifies a signal corresponding to data to be written supplied from the RWC 25 and supplies the amplified signal to the magnetic head 22.
The HDC 23 controls transmission and reception of data to/from the host 2 via an I/F bus, controls the buffer memory 29, and performs an error correction process on the read data. The buffer memory 29 is used as a buffer for data transmitted to and received from the host 2. For example, the buffer memory 29 is used to temporarily store data to be written into the magnetic disk 11 or data read from the magnetic disk 11. The buffer memory 29 is implemented by, for example, a volatile memory capable of high-speed operation. The type of memory implementing the buffer memory 29 is not limited to a specific type. The buffer memory 29 may be implemented by, for example, a dynamic random access memory (DRAM), a static random access memory (SRAM), or a combination thereof. The RWC 25 modulates data to be written, which is supplied from the HDC 23, and supplies the data to the head IC 24. In addition, the RWC 25 demodulates a signal read from the magnetic disk 11 and supplied via the head IC 24 and outputs the demodulated signal to the HDC 23 as digital data. The processor 26 is, for example, a central processing unit (CPU). The RAM 27, the flash read only memory (FROM) 28, and the buffer memory 29 are connected to the processor 26. The FROM 28 is a nonvolatile memory. Firmware (program data) and various operation parameters are stored in the FROM 28. The firmware may be stored in the magnetic disk 11. The RAM 27 is implemented by, for example, a DRAM, an SRAM, or a combination thereof. The RAM 27 is used as a memory for operations by the processor 26. The RAM 27 is used as a region where the firmware is loaded and a region where various management data are stored. The processor 26 controls the magnetic disk device 1 according to the firmware stored in the FROM 28 or the magnetic disk 11. For example, the processor 26 loads the firmware from the FROM 28 or the magnetic disk 11 into the RAM 27, and controls the motor driver IC 21, the head IC 24, the RWC 25, the HDC 23, and the like according to the loaded firmware. A configuration including the RWC 25, the processor 26, and the HDC 23 can also be regarded as a controller 30. In addition to these elements, the controller 30 may include other elements (for example, the RAM 27, the FROM 28, the buffer memory 29, or the RWC 25). When a write command is received from the host 2, the magnetic disk device 1 encrypts user data, which is the data to be written, and stores the encrypted user data on the magnetic disk 11. There is a specification in which a host, instead of a magnetic disk device, generates an encryption key, and the magnetic disk device receives the encryption key from the host, encrypts the user data by using the encryption key, and does not store the encryption key on the magnetic disk. In the case of such a specification, since the encryption key is not stored on the magnetic disk, it is desirable to appropriately determine whether the encryption key received from the host is the encryption key used for encrypting the user data. To address such an issue, the magnetic disk device 1 according to the present embodiment determines whether the encryption key received from the host 2 is an appropriate encryption key by managing, using table information, generation information of the encryption key received from the host 2.
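Before turning to the figures, the two tables at the heart of this scheme can be sketched as C structures; all field widths below are illustrative assumptions rather than part of the embodiment.

```c
#include <stdint.h>

/* Entry (record information 71) of the Key table 70, held in volatile
 * memory only (buffer memory 29 or RAM 27): the Key itself never
 * reaches the magnetic disk 11. */
struct key_entry {
    uint32_t key_tag;  /* KeyTag: storage region the key applies to */
    uint8_t  key[32];  /* Key: the encryption/decryption key */
    uint64_t key_gen;  /* KeyGen: generation information of the key */
};

/* Entry (record information 61) of the KeyID table 60, stored in
 * nonvolatile memory (the magnetic disk 11); note the absence of the Key. */
struct key_id_entry {
    uint8_t  key_id[16];  /* KeyID supplied by the host 2 */
    uint64_t key_gen;     /* KeyGen shared with the volatile entry */
};
```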
When a Key (which is an encryption/decryption key), a KeyID (which is an encryption/decryption key identifier), and a KeyTag (which is tag information of the encryption/decryption key) are received from the host 2, and an encryption key set command is received from the host 2, the controller 30 of the magnetic disk device 1 generates Key table information and KeyID table information. The tag information of the encryption/decryption key is information indicating a storage region to which the encryption/decryption key is applied. FIG. 2 is a diagram showing a data structure of a KeyID table 60. The KeyID table 60 is stored on the magnetic disk 11. The KeyID table 60 is a table that stores record information (which may be referred to as an "entry") 61 including a KeyID and a KeyGen. The KeyID received from the host 2 is included in the record information 61. The generation information of the encryption/decryption key is included in the record information 61. The generation information of the encryption/decryption key is, for example, information indicating an order of KeyIDs registered to the KeyID table 60, and is number information for identifying the KeyIDs. The KeyID table 60 may be stored in a nonvolatile storage region other than the magnetic disk 11. When the encryption key set command is received from the host 2, the controller 30 searches the KeyID table 60. When there is no record information 61 including the KeyID received from the host 2, the controller 30 generates a new KeyGen and registers, in the KeyID table 60, the record information 61 including the KeyID received from the host 2 and the new KeyGen. FIG. 3 is a diagram showing a data structure of a Key table 70. The Key table 70 is stored in, for example, the buffer memory 29 or the RAM 27. The Key table 70 is a table that stores record information (which may be referred to as an "entry") 71 including a KeyTag, a Key, and a KeyGen. The KeyTag received from the host 2 is included in the record information 71. The Key received from the host 2 is included in the record information 71. The new KeyGen generated by the controller 30 is included in the record information 71. When the encryption key set command is received from the host 2, the controller 30 registers, in the Key table 70, the record information 71 including the KeyTag received from the host 2, the Key received from the host 2, and the new KeyGen. When record information 71 including a Key same as the Key received from the host 2 and a KeyTag same as the KeyTag received from the host 2 is already stored in the Key table 70, the controller 30 need not newly generate the record information 71 based on the information received from the host 2. In addition, when a KeyTag and user data to be written are received from the host 2 and a write command is received from the host 2, the controller 30 searches for the record information 71 corresponding to the KeyTag received from the host 2, encrypts the user data received from the host 2 by using the Key in the searched record information, and stores, on the magnetic disk 11, information including the KeyGen in the searched record information 71 and the encrypted user data, associated with each other. In addition, when a KeyTag of data to be read is received from the host 2 and a read command for the user data is received from the host 2, the controller 30 searches for the record information 71 corresponding to the KeyTag received from the host 2.
The controller 30 decrypts target user data by using the Key in the searched record information 71, and determines whether a KeyGen corresponding to the user data and the KeyGen in the searched record information 71 match each other. When the two KeyGens match each other, the controller 30 transmits the decrypted user data to the host 2. In addition, when the two KeyGens do not match each other, the controller 30 sends an error notification to the host 2. Subsequently, the update states of information stored in the KeyID table 60 and the Key table 70 when the controller 30 receives the encryption key set command, the write command, and the read command will be described with reference to FIG. 4, FIG. 5A and FIG. 5B, and FIG. 6A to FIG. 6C. FIG. 4 is a flowchart showing a procedure for executing a command received from the host 2. FIG. 5A and FIG. 5B are diagrams showing information stored in the KeyID table 60. FIG. 6A to FIG. 6C are diagrams showing information stored in the Key table 70. Here, it is assumed that no record information 61 is stored in the KeyID table 60 and no record information 71 is stored in the Key table 70. First, in process 1 in FIG. 4, the controller 30 receives, from the host 2, information indicating that the KeyTag is 3, the KeyID is KeyID 0, and the Key is Key 0, and receives an encryption key set command. The controller 30 searches the KeyID table 60 for record information 61 corresponding to KeyID 0. Since there is no record information 61 in the initial state, the controller 30 generates 1 as the value of the KeyGen. Then, the controller 30 registers, in the KeyID table 60, record information 61a including KeyID 0 and KeyGen 1, as shown in FIG. 5A. In addition, the controller 30 stores, in the Key table 70, record information 71a including KeyTag 3, Key 0, and KeyGen 1, as shown in FIG. 6A. Returning to FIG. 4, the controller 30 receives, from the host 2, information indicating KeyTag 3 and receives a write command (process 2). The controller 30 searches the Key table 70 for the record information 71a including KeyTag 3. Then, the controller 30 encrypts the user data by using the Key 0 in the record information 71a, and stores, on the magnetic disk 11, the encrypted user data and data indicating that the KeyGen is 1. Returning to FIG. 4, the controller 30 receives, from the host 2, information indicating that the KeyTag is 3 and receives a read command (process 3). The controller 30 searches the Key table 70 for the record information 71a including KeyTag 3. Then, the controller 30 decrypts the user data by using the Key 0 in the record information 71a, and determines that the encryption keys are different when the KeyGen read along with the user data does not match the KeyGen in the record information 71a. Accordingly, the controller 30 can determine that the read storage region is not yet written, or that the read storage region is a storage region encrypted with a different encryption key. Returning to FIG. 4, the controller 30 receives, from the host 2, information indicating that the KeyTag is 2, the KeyID is KeyID 1, and the Key is Key 1, and receives an encryption key set command (process 4). The controller 30 searches the KeyID table 60 for record information 61 corresponding to KeyID 1. As shown in FIG. 5A, since there is no such record information 61, the controller 30 generates 2 as the value of the KeyGen. Then, the controller 30 registers, in the KeyID table 60, record information 61b including KeyID 1 and KeyGen 2, as shown in FIG. 5B. In addition, the controller 30 stores, in the Key table 70, record information 71b including KeyTag 2, Key 1, and KeyGen 2, as shown in FIG. 6B.
Returning to FIG. 4, the controller 30 receives, from the host 2, information indicating that the KeyTag is 2 and receives a write command (process 5). The controller 30 searches the Key table 70 for the record information 71b including KeyTag 2. Then, the controller 30 encrypts the user data by using the Key 1 in the record information 71b, and stores, on the magnetic disk 11, the encrypted user data and KeyGen 2. Returning to FIG. 4, the controller 30 receives, from the host 2, information indicating that the KeyTag is 2 and receives a read command (process 6). The controller 30 searches the Key table 70 for the record information 71b including KeyTag 2. Then, the controller 30 decrypts the user data by using the Key 1 in the record information 71b, and determines that the encryption keys are different when the KeyGen read along with the user data does not match the KeyGen in the record information 71b. Returning to FIG. 4, the controller 30 receives, from the host 2, information indicating that the KeyTag is 1, the KeyID is KeyID 1, and the Key is Key 1, and receives an encryption key set command (process 7). The controller 30 searches the KeyID table 60 for record information 61 corresponding to KeyID 1. As shown in FIG. 5B, since there is the record information 61b including KeyID 1, the controller 30 does not newly register record information 61 in the KeyID table 60. Since there is no record information 71 including KeyTag 1 and Key 1, the controller 30 stores, in the Key table 70, record information 71c including KeyTag 1, Key 1, and KeyGen 2, as shown in FIG. 6C. Subsequently, control during the read will be described with reference to FIG. 7A to FIG. 7C. FIG. 7A shows an example of the Key table 70. The Key table 70 includes the record information 71c including KeyTag 1, Key 2, and KeyGen 3. In addition, the Key table 70 includes the record information 71b including KeyTag 2, Key 1, and KeyGen 2. Further, the Key table 70 includes the record information 71a including KeyTag 3, Key 0, and KeyGen 1. FIG. 7B is an example of a case when data is written during a write operation. The KeyGen is set to 1 and the data is encrypted with Key 0. FIG. 7C is an example of a case when read data is determined to be an error. As shown in FIG. 7C, the KeyGen is 1, whereas the data is encrypted with Key 1. An error is found in the encryption/decryption key since the Key corresponding to KeyGen 1 is Key 0. FIG. 8 is a flowchart of processes when an encryption key set command is received. The controller 30 receives the KeyTag, the KeyID, and the Key from the host 2 and receives an encryption key set command (step S1). The controller 30 searches the KeyID table 60 for the record information 61 corresponding to the KeyID received from the host 2 (step S2). When there is no record information 61 that includes the received KeyID (No in step S3), the controller 30 generates a value of the KeyGen, newly adds the record information 61 to the KeyID table 60 (step S4), and then causes the process to proceed to step S5. In addition, when there is record information 61 that includes the received KeyID in step S3 (Yes in step S3), the controller 30 causes the process to proceed to step S5. In step S5, the record information 71 including the applicable KeyGen (newly generated in step S4 or already registered) is registered in the Key table 70 (step S5). FIG. 9 is a flowchart of processes when a write command is received. The controller 30 receives the KeyTag from the host 2 and receives a write command (step S11). The controller 30 searches the Key table 70 for the record information 71 corresponding to the KeyTag received from the host 2 (step S12).
When there is no record information 71 that includes the received KeyTag (No in step S13), the controller 30 encrypts the user data by using a Key of a KeyTag other than the KeyTag received from the host 2 (step S15), and causes the process to proceed to step S16. In addition, when there is record information 71 that includes the received KeyTag in step S13 (Yes in step S13), the controller 30 encrypts the user data by using the Key in the record information 71 (step S14), and causes the process to proceed to step S16. In step S16, the encrypted user data and the KeyGen are stored on the magnetic disk 11 (step S16). Since the encrypted user data and a KeyGen that can be smaller in size than the KeyID are stored in association with each other, it is possible to prevent the controller 30 from unnecessarily using magnetic disk area. FIG. 10 is a flowchart of processes when a read command is received. The controller 30 receives the KeyTag from the host 2 and receives a read command (step S21). The controller 30 searches the Key table 70 for the record information 71 corresponding to the KeyTag received from the host 2 (step S22). When there is no record information 71 that includes the received KeyTag (No in step S23), the controller 30 decrypts the user data by using a Key of a KeyTag other than the KeyTag received from the host 2 (step S25), and causes the process to proceed to step S26. When there is record information 71 that includes the received KeyTag in step S23 (Yes in step S23), the controller 30 decrypts the user data by using the Key in the record information 71 (step S24), and causes the process to proceed to step S26. In step S26, the controller 30 reads information including the decrypted user data and the KeyGen (step S26). When the KeyGen read along with the user data and the KeyGen in the searched record information 71 match each other (Yes in step S27), the controller 30 outputs the read data to the host 2 (step S28). In addition, when the KeyGen read along with the user data and the KeyGen in the searched record information 71 do not match each other in step S27 (No in step S27), the controller 30 sends an error notification to the host 2 (step S29). In such a manner, the controller 30 determines whether the key used for encryption and the key used for decryption match each other by determining whether the KeyGen read along with the user data and the KeyGen in the searched record information 71 match each other. In the above description, when the encryption key set command is received from the host 2, the controller 30 generates a new KeyGen, and registers, in the KeyID table 60 on the magnetic disk 11, the record information 61 including the KeyID received from the host 2 and the new KeyGen. In addition, the controller 30 registers, in the Key table 70 in the buffer memory 29 or the RAM 27, the record information 71 including the KeyTag received from the host 2, the Key received from the host 2, and the new KeyGen. In such a manner, by storing the KeyGen on the magnetic disk 11 and in the buffer memory 29 or the RAM 27, and storing the Key only in the buffer memory 29 or the RAM 27, the magnetic disk device 1 can manage generation information of a key without storing the Key on the magnetic disk 11. Accordingly, it can be appropriately determined whether the encryption key received from the host 2 is an old encryption key.
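The flowcharts of FIG. 8 to FIG. 10 can be condensed into a sketch that reuses the structures assumed earlier; the fixed-size tables and helper names are illustrative, and bounds checks and persistence of the KeyID table to the magnetic disk 11 are omitted for brevity.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define MAX_ENTRIES 64  /* illustrative capacity */

/* Reuses struct key_entry / struct key_id_entry from the earlier sketch. */
static struct key_id_entry key_id_table[MAX_ENTRIES]; /* persisted on disk 11 */
static struct key_entry    key_table[MAX_ENTRIES];    /* volatile only */
static size_t n_key_ids, n_keys;
static uint64_t next_key_gen = 1;

/* Encryption key set command (FIG. 8, steps S1-S5): reuse the KeyGen of an
 * existing KeyID entry, otherwise mint a new generation (step S4). */
static void set_key(const uint8_t key_id[16], uint32_t key_tag,
                    const uint8_t key[32])
{
    size_t i;
    uint64_t gen = 0;
    for (i = 0; i < n_key_ids; i++)
        if (memcmp(key_id_table[i].key_id, key_id, 16) == 0) {
            gen = key_id_table[i].key_gen;   /* Yes in step S3 */
            break;
        }
    if (i == n_key_ids) {                    /* No in step S3 -> step S4 */
        gen = next_key_gen++;
        memcpy(key_id_table[i].key_id, key_id, 16);
        key_id_table[i].key_gen = gen;
        n_key_ids++;
    }
    key_table[n_keys].key_tag = key_tag;     /* step S5 */
    memcpy(key_table[n_keys].key, key, 32);
    key_table[n_keys].key_gen = gen;
    n_keys++;
}

/* Volatile lookup by KeyTag, as in step S12 and step S22. */
static const struct key_entry *find_by_tag(uint32_t key_tag)
{
    for (size_t i = 0; i < n_keys; i++)
        if (key_table[i].key_tag == key_tag)
            return &key_table[i];
    return NULL;
}

/* Read-side generation check (FIG. 10, steps S26-S29): the KeyGen stored on
 * the disk alongside the user data must match the entry's KeyGen, otherwise
 * the data was encrypted under a different (e.g. older) key. */
static bool check_read_generation(uint32_t key_tag, uint64_t stored_gen)
{
    const struct key_entry *e = find_by_tag(key_tag);
    return e != NULL && e->key_gen == stored_gen;  /* false => error (S29) */
}
```

Encryption and decryption themselves, as well as the fallback key selection of steps S15 and S25, are left out of the sketch.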
In addition, when there is no record information 61 corresponding to the KeyID received from the host 2, the controller 30 stores, on the magnetic disk 11, the record information 61 based on the information received from the host 2, and stores, in the buffer memory 29 or the RAM 27, the record information 71 based on the information received from the host 2. Thus, since the record information 61 and the record information 71 are newly stored only when a new KeyID is received, it is possible to prevent the controller 30 from storing unnecessary data. Second Embodiment In a second embodiment, the magnetic disk device 1 performs output control on a result of determining whether an encryption key is appropriate when an encryption key is made unrecoverable with respect to encrypted data recorded on the magnetic disk 11. A main purpose of the host 2 transmitting the Key and the KeyID to the magnetic disk device 1 is to enable the magnetic disk device 1 to detect whether an encryption key used for encryption and a decryption key used for decryption match each other. In the magnetic disk device 1 according to the first embodiment, when an encryption key used for encryption and a decryption key used for decryption do not match each other, the magnetic disk device 1 sends, to the host 2, a notification that the encryption keys do not match each other. On the other hand, in the second embodiment, the magnetic disk device 1 performs a secure instance erase function, which is a function of making an encryption key that has been used to encrypt user data written on the magnetic disk 11 unrecoverable. In this case, it is not necessary to actually erase the data on the magnetic disk 11. When data written before performing the secure instance erase function on the magnetic disk 11 is read after performing the secure instance erase function, it is desirable to transfer the read data to the host 2 without detecting a mismatch between the encryption key and the decryption key. Here, the update states of information stored in the KeyID table 60 or the Key table 70 before and after performing the secure instance erase function will be described with reference to FIG. 11A to FIG. 11D. First, the KeyID table 60 before performing the secure instance erase function is shown in FIG. 11A. As shown in FIG. 11A, record information 61 (the record information 61a, the record information 61b, etc.) including KeyID-0, KeyID-1000, etc., is stored in the KeyID table 60. In addition, the Key table 70 stores record information 71 including a KeyGen common to the KeyGen in the record information 61. When the controller 30 performs the secure instance erase function, the encryption key is invalidated, as shown in FIG. 11B. As a result, all the record information 61 in the KeyID table 60 is deleted. In addition, the controller 30 deletes all the record information 71 in the Key table 70. When an encryption key set command is received from the host 2 after performing the secure instance erase function and deleting the record information 61 in the KeyID table 60, new record information 61 is added to the KeyID table 60 in response to the encryption key set command. An example in which the record information 61 is newly added is shown in FIG. 11C. As shown in FIG. 11C, the KeyID table 60 includes record information 61d including KeyID-1001 and record information 61e including KeyID-1002. In addition, the Key table 70 in this state includes the record information 71 including KeyTag 00, Key-1001, and KeyGen 1002, as shown in FIG. 11D.
In such a state, it is determined whether read data is data encrypted before performing the secure instance erase function, and the read data is transmitted to the host 2 based on the determination result. Subsequently, a process performed when a read command is received according to the second embodiment will be described with reference to FIG. 12. FIG. 12 is a flowchart showing a process procedure when a read command is received according to the second embodiment. Here, it is assumed that data in the KeyID table 60 and the Key table 70 are updated as shown in FIG. 11A to FIG. 11D. That is, the controller 30 adds the record information 61 to the KeyID table 60 and the record information 71 to the Key table 70 before performing the secure instance erase function. Then, the controller 30 performs the secure instance erase function, and then adds record information to the KeyID table 60 and the Key table 70. The controller 30 receives the KeyTag from the host 2 and receives a read command (step S51). The controller 30 searches the Key table 70 for the record information 71 corresponding to the KeyTag received from the host 2. The controller 30 reads information including the decrypted user data and the KeyGen (step S52). The controller 30 determines whether the KeyGen corresponding to the user data is included in any record information 61 in the KeyID table 60 by determining whether the KeyGen corresponding to the user data matches the KeyGen of any record information 61 in the KeyID table 60 (step S53). When the KeyGen corresponding to the user data is not included in any record information 61 in the KeyID table 60 (No in step S53), which indicates that the read user data is user data encrypted before performing the secure instance erase function, the matching determination of the KeyGen is skipped (step S54), and the process proceeds to step S57. When the KeyGen corresponding to the user data is included in any record information 61 in the KeyID table 60 (Yes in step S53) and the KeyGen corresponding to the user data and the KeyGen in the searched record information 71 match each other (Yes in step S55), the controller 30 outputs the read data to the host 2 (step S57). In addition, when the KeyGen corresponding to the user data and the KeyGen in the searched record information 71 do not match each other in step S55 (No in step S55), the controller 30 sends an error notification to the host 2 (step S56). The magnetic disk device 1 according to the second embodiment does not send an error notification when there is no record information 61 including the KeyGen read along with the user data after the secure instance erase function is performed and a read command is then received. Thus, when it is not necessary to determine whether an encryption key and a decryption key for user data written before performing the secure instance erase function match each other, it is possible to prevent the magnetic disk device 1 from determining whether the encryption key and the decryption key match each other. (Modification) Although the above-described embodiments demonstrate a case where a KeyID is received from the host 2 and the received KeyID is stored in the KeyID table 60, a KeyID that is not used for a long period of time may be deleted. For example, the controller 30 transmits the record information 61 to the host 2 in response to a request from the host 2. When a KeyID is received from the host 2 thereafter, the controller 30 deletes the record information 61 corresponding to the KeyID.
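The skip logic of FIG. 12 can be sketched in the same style. Here key_id_gens stands in for the set of KeyGen values currently present in the KeyID table 60, and KeyMismatchError and read_sector are reused from the earlier sketch; all names are assumptions for illustration only.

    def handle_read_after_erase(key_id_gens, key_table, key_tag, read_sector):
        record = key_table[key_tag]                     # steps S51/S52
        user_data, stored_gen = read_sector(record["key"])
        if stored_gen not in key_id_gens:               # No in step S53:
            return user_data                            # data predates the secure
                                                        # instance erase; skip the
                                                        # match check (S54, S57)
        if record["key_gen"] != stored_gen:             # No in step S55
            raise KeyMismatchError("KeyGen mismatch")   # step S56
        return user_data                                # Yes in step S55 -> S57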
In addition, the controller 30 deletes the record information 71 including the KeyGen in the record information 61 to be deleted. In such a manner, it is possible to prevent the magnetic disk device 1 from maintaining information such as an unnecessary KeyID and Key. In addition, the controller 30 may store a KeyGen corresponding to the KeyID to be deleted from the host 2, and may read the entire magnetic disk 11 at a predetermined timing and update a KeyGen of user data corresponding to the KeyGen to a value indicating that there is no KeyID. Further, the magnetic disk device 1 may be virtually managed as a plurality of drives, similar to a namespace or logical unit number (LUN). In this case, the magnetic disk device 1 may store the KeyID table 60 and the Key table 70 for each drive (virtual region). In this case, the magnetic disk device 1 can narrow the range of searching the KeyID table 60 and the Key table 70 by storing the KeyID table 60 and the Key table 70 together in the virtual region, and a processing load can be reduced. In addition, although a case of an application to the magnetic disk device 1 is described in the above-described embodiments, applications to various other storage devices such as a solid state drive (SSD) may be possible. While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the disclosure. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the disclosure. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the disclosure.
11861184 | DETAILED DESCRIPTION It is proposed to read out digital values from memory cells of a memory with the highest possible reliability by using at least two reference values. The digital values may be binary values (also referred to as bits) or multi-value digital values. The reliability of the memory cells may vary; in particular, the memory cells may have different states with regard to their reliability (also referred to as reliability states). For example, there is a low probability of a memory cell that is in a first reliability state outputting an erroneous value. Correspondingly, there is a high probability of the memory cell in a second reliability state providing an erroneous value when reading out. Thus, in the second reliability state, the memory cell may output the value 0 or 1 with a probability of 50% respectively and, in the first reliability state, the memory cell may output an erroneous value with a much lower probability. When reading out n memory cells Sp1, ..., Spn, a first data word W1 = x11, ..., x1n with n components x11, ..., x1n (also referred to as bits) and a second data word W2 = x21, ..., x2n with n components x21, ..., x2n may be determined. In another example, more than two data words may also be determined when reading out the memory cells. The memory cells Sp1, ..., Spn may have been written with the bits x1, ..., xn. It is an option that a codeword of an error code C comprises the bits x1, ..., xn. For example, the bits x1, ..., xn may form a codeword together with address bits and/or bits derived from address bits. In particular, the bits x1, ..., xn may form a codeword of the error code C. The error code C may be an error-correcting code and/or error-detecting code. It is assumed below by way of example that the bits x1, ..., xn form a codeword of the error code C. The first data word W1 and the second data word W2 produce a resultant data word Wr as follows: Wr = xr1, ..., xrn. For i = 1, ..., n, two reference values Ri− and Ri+ with Ri− < Ri+ and Ri+ − Ri− = Δi > 0 are used when reading out a memory cell Spi. The memory cell Spi has a physical value Vi, which is for example an electrical resistance. A reading current determined by the resistance of the memory cell may for example be compared both with the first reference value Ri− and with the second reference value Ri+. A first digital value αi can be determined by the reading current being smaller or greater than the first reference value Ri−. A second digital value βi may be determined by the reading current being smaller or greater than the second reference value Ri+. For example, the one reading current determined when reading out the memory cell Spi is compared both with the reference value Ri− for determining the first digital value αi and with the second reference value Ri+ for determining the second digital value βi. In this example, αi and βi are binary values. It is an option that more than two reference values, for example three reference values Ri−, Ri, Ri+, are used, with which a physical value determined when reading out the memory cell Spi is compared. The physical value may be for example a reading current, a charge or a voltage (for example associated with the reading current). Optionally, a value derived from the state of the memory cell may be compared with at least two reference values, in order in this way to determine at least two digital values.
Different approaches for determining a binary value when reading out a memory cell by using reference values are known. It is known from the document U.S. Pat. No. 9,805,771 that a binary value can be determined in a time domain when reading out memory cells. For example, a reading current of a memory cell may be integrated by way of a capacitance. Similarly, a current that corresponds to a reference value may be integrated by way of a capacitance, and it can be determined whether the integral of the reading current reaches a threshold value before the integral of the current of the reference value or whether the integral of the reading current reaches the threshold value after the integral of the current of the reference value. If the integral of the current of the read-out memory cell reaches the threshold value at a time before the integral of the current of the reference value, a first digital value can be determined for the memory cell when reading. If the integral of the current of the read-out memory cell reaches the threshold value at a time after the integral of the current of a reference value, a second digital value can be determined for the memory cell when reading. A comparison of a state stored in a memory cell with multiple reference values can be carried out correspondingly. When reading out a memory cell Spi, the read-out physical value, or a value derived from the physical value, can be compared both with the first reference value and with the second reference value without a further reading out of this physical value taking place. The read-out value may also be an indirectly read-out value, for example a value that was determined from the read-out value. A location or a position of the read-out value in relation to the first reference value and in relation to the second reference value can be determined. It is consequently established whether the read-out value is greater or smaller than the first and/or second reference value. If it is a comparison in the time domain, it can be determined whether and, if so, which reference value is reached earlier or later. A read-out value of the memory cell Spi is compared with the first and second reference values. In this case, values x1i and x2i of an ith component of the first data word W1 and of the second data word W2 are determined as follows: If the read-out value is greater than the first reference value and greater than the second reference value, the components x1i and x2i of the two data words W1 and W2 are equal and have a first value. If the read-out value is smaller than the first reference value and smaller than the second reference value, the components x1i and x2i of the two data words W1 and W2 are equal and have a second value, which is different from the first value. If the read-out value is greater than the first reference value and smaller than the second reference value, or the read-out value is smaller than the first reference value and greater than the second reference value, the components x1i and x2i are unequal, or inverse to one another. For example, one of the values may be 0 and the other 1. If the memory cell Spi has been written with a 1, for example, and the value read out from the memory cell is positioned between the first and second reference values, then one of the values x1i or x2i is equal to 1. Irrespective of whether a correct or an erroneous physical value was read out when reading out the memory cell Spi, in one of the two data words W1 and W2 the ith component is correct.
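A minimal Python sketch of this component-wise rule follows; the concrete 0/1 assignment is one possible convention (value 1 below both references, 0 above both, the pair (0, 1) in between), and the names are illustrative.

    def classify_two_refs(read_value, r_minus, r_plus):
        """Derive the i-th components (x1i, x2i) of the data words W1 and W2
        from a single read-out value and two reference values R- < R+."""
        if read_value <= r_minus and read_value <= r_plus:
            return 1, 1          # smaller than both references: equal bits
        if read_value > r_minus and read_value > r_plus:
            return 0, 0          # greater than both references: equal bits
        return 0, 1              # between the references: inverse bits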
For further explanation, reference should be made to FIG. 1. The bits x1, ..., xn to be stored, which are bits of a codeword of an error code C, are written to the memory cells Sp1, ..., Spn. It is an option that the bits x1, ..., xn together with further bits are a codeword of the error code C. The further bits may be address bits or bits derived from address bits. The address is preferably the address at which the bits x1, ..., xn are/have been stored in the memory. For example, the address may be provided by an address generator as a write address when writing and as a read address when reading. It is also an option that the further bits comprise a password, which for example is checked when writing and/or when reading. The error code C may be a Hsiao code. For example, the error code C may be a 1-bit error-correcting and 2-bit error-detecting code or a 2-bit error-correcting and 3-bit error-detecting BCH code. As mentioned above, when reading out the memory cells Sp1, ..., Spn, the first data word W1 and the second data word W2 may be determined. The first data word W1 is either a codeword of the error code C or not a codeword of the error code C. Correspondingly, the second data word W2 is either a codeword of the error code C or not a codeword of the error code C. If the first data word W1 is a codeword of the error code C, it can be used in the resultant data word Wr. If the first data word W1 is not a codeword of the error code C and differs from a codeword of the error code C in erroneous bits, so that there is a correctable error of the code C, then the data word W1 is corrected to a corrected data word W1,cor. The corrected first data word W1,cor can be used in the resultant data word Wr. If, for example, the first data word W1 differs from a codeword of the error code C in precisely one bit, then there is a 1-bit error. If the error code is a Hsiao code, this 1-bit error in the first data word W1 can be corrected to the corrected first data word W1,cor. If the first data word W1 is not a codeword of the error code C and differs from each codeword of the error code C in bits that form an error of the error code C that cannot be corrected, the first data word W1 is marked as uncorrectable. If the second data word W2 is not a codeword of the error code C and differs from a codeword of the error code C in erroneous bits, so that there is a correctable error of the code C, then the data word W2 is corrected to a corrected second data word W2,cor. The corrected second data word W2,cor can then be used in the resultant data word Wr. If the second data word W2 is not a codeword of the error code C and differs from each codeword of the error code C in bits that form an error of the error code C that cannot be corrected, the second data word W2 is marked as uncorrectable. An error may occur in the bits of the first data word, for example, if an ith bit x1i for which the two digital values αi and βi are equal and which is determined as x1i = αi = βi is erroneous. If such an error occurs with a low probability, the memory cell Spi is in a reliable state. An error may also occur in the bits of the first data word if a jth bit x1j, for which the two digital values αj and βj are unequal and which is determined as x1j = 0, is erroneous. The binary value 0 of the first data word is, for example, determined for all memory cells for which the corresponding first digital value is unequal to the second digital value. The component x2j of the second data word W2 is then equal to 1.
The memory cell Spj is in an unreliable state. In its unreliable state, the memory cell Spj outputs the binary value 0 or the binary value 1, for example, with a probability of 50% respectively. One of the output values is correct, the other is erroneous. Since the component x1j of the first data word is equal to 0 and the component x2j of the second data word W2 is equal to 1, either the corresponding component of the first data word or the corresponding component of the second data word is error-free. If in an example only a few memory cells are in an unreliable state, there is a low probability pu that a memory cell is in an unreliable state. The memory cell in the unreliable state in this case outputs an erroneous value with a probability of ½·pu. FIG. 1 shows distributions of physical values V for memory cells of a memory. A distribution 11 is assigned to a binary value 1 and a distribution 12 is assigned to a binary value 0. The distributions 11 and 12 overlap in an overlapping region 13. Also shown are a reference value R+ 17 and a reference value R− 18, where R+ − R− > 0. For a memory cell Spi with i = 1, ..., n, a first digital value αi is obtained as αi = 1 for Vi ≤ R− and αi = 0 for Vi > R−, and a second digital value βi is obtained as βi = 1 for Vi ≤ R+ and βi = 0 for Vi > R+. It should be assumed below that the reference values for the three memory cells Spj 14, Spk 15 and Spl 16 (j ≠ k ≠ l) under consideration are equal. By way of example, the three physical values Vj, Vk and Vl are respectively compared with the reference values 17 and 18. The physical value Vj of the memory cell Spj 14 is smaller than each of the reference values 17 and 18. The digital values are consequently obtained as αj = βj = 1. It follows from this that x1j = x2j = 1 for the jth bit x1j of the first data word W1 and for the jth bit x2j of the second data word W2. The physical value Vj does not lie in the overlapping region 13. The digital values αj and βj of the memory cell Spj 14 are both equal to 1. The memory cell Spj 14 may be regarded as reliable; there is only a low probability that a reading error should be expected when reading out the memory cell Spj 14. The physical value Vk of the memory cell Spk 15 is greater than each of the reference values 17 and 18. The digital values are consequently obtained as αk = βk = 0. It follows from this that x1k = x2k = 0 for the kth bit x1k of the first data word W1 and for the kth bit x2k of the second data word W2. The physical value Vk does not lie in the overlapping region 13. The digital values αk and βk of the memory cell Spk 15 are both equal to 0. The memory cell Spk 15 may be regarded as reliable; there is only a low probability that a reading error should be expected when reading out the memory cell Spk 15. The physical value Vl of the memory cell Spl 16 is greater than the reference value 18 and smaller than the reference value 17. The digital values are consequently obtained as αl = 0 ≠ βl = 1. It follows from this that x1l = 0 for the lth bit x1l of the first data word W1 and x2l = 1 for the lth bit x2l of the second data word W2. The physical value Vl lies in the overlapping region 13. There is a greater probability that a reading error should be expected when reading out the memory cell Spl 16 than when reading out the memory cells Spj 14 and Spk 15. The memory cell Spl may be regarded as unreliable. If there is no error in the n−1 components x11, ..., x1l−1, x1l+1, ..., x1n of the first data word W1 and in the n−1 components x21, ..., x2l−1, x2l+1, ..., x2n of the second data word W2, then either the first data word W1 or the second data word W2 is a codeword of the error code C.
The first data word W1 and the second data word W2 differ in the bits x1l of W1 and x2l of W2. Since αl ≠ βl, x1l = 0 and x2l = 1 were set. Either xl = 0 or xl = 1 was written to the memory cell Spl. If xl = 0 was written to the memory cell Spl 16, then W1 is error-free and a codeword of the error code C. If xl = 1 was written to the memory cell Spl 16, then W2 is error-free and a codeword of the error code C. If W1 is error-free and a codeword of the error code C, then the resultant data word Wr is determined as W1. If W2 is error-free and a codeword of the error code C, then the resultant data word Wr is determined as W2. By way of example, it is assumed that xl = 0 was written to the memory cell Spl 16 and that consequently W1 is error-free and a codeword of the error code C. If then, in addition, a further bit, for example the first bit with 1 ≠ l, is erroneous both in W1 and in W2, although x11 = x21 and α1 = β1, then the first data word W1 has a 1-bit error in the first bit. The second data word W2 has a 2-bit error, with the first bit and the lth bit being erroneous. If then, for example, the error code C is a known 1-bit error-correcting and 2-bit error-detecting Hsiao code, then the first data word W1 can be corrected by using the code C. Correspondingly, the erroneous first data word W1 can be corrected to the first corrected data word W1,cor by using the Hsiao code. The second data word W2 cannot be corrected by using the error code C assumed here by way of example. On account of the 2-bit error present in W2, the erroneous second data word is detected as not correctable by the error code C, which is by way of example a 1-bit error-correcting and 2-bit error-detecting Hsiao code. The resultant data word Wr is then equal to W1,cor. If, for example, the error code C is a 2-bit error-correcting and 3-bit error-detecting BCH code, the first data word W1, which has a 1-bit error in the first bit, is corrected to the corrected first data word W1,cor. Also the second data word W2, which has a 2-bit error in the first bit and in the lth bit, is corrected by the error code C to the second corrected data word W2,cor. The resultant data word Wr is in this case determined as Wr = W1,cor = W2,cor. If, however, for example a 2-bit error additionally occurs in the first and second bits both in the first data word W1 and in the second data word W2, and if l ≠ 1, 2, this 2-bit error in the first data word W1 can be corrected to the corrected first data word W1,cor. By contrast, the second data word has a 3-bit error in the first, second and lth bits, which cannot be corrected by the error code C. The erroneous second data word W2 is detected as not correctable. The resultant data word Wr is obtained as Wr = W1,cor. In another example, it is assumed that the fifth bit position and the seventh bit position of the corresponding digital values are unequal, and therefore α5 ≠ β5 and α7 ≠ β7. For the first data word W1, x15 = x17 = 0 is then determined, and for the second data word W2, x25 = x27 = 1 applies. The bits x5 and x7 were written to the memory cells Sp5 and Sp7.
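The selection of the resultant data word Wr can be illustrated with any SEC-DED code in place of the codes named above. The sketch below uses an extended Hamming (8,4) code purely as a stand-in; the patent's Hsiao and BCH codes differ in construction but support the same correctable/uncorrectable distinction.

    def secded8_decode(word):
        """SEC-DED decoder for an extended Hamming (8,4) code, a stand-in
        for the error code C. word: list of 8 bits; index 0 is the overall
        parity bit, indices 1..7 carry a Hamming(7,4) codeword.
        Returns (corrected_word, correctable)."""
        syndrome = 0
        for p in (1, 2, 4):
            parity = 0
            for i in range(1, 8):
                if i & p:
                    parity ^= word[i]
            if parity:
                syndrome |= p
        overall = 0
        for bit in word:
            overall ^= bit
        w = list(word)
        if syndrome == 0 and overall == 0:
            return w, True            # error-free codeword
        if overall == 1:              # odd error count: treat as a 1-bit error
            w[syndrome] ^= 1          # syndrome 0 means the parity bit itself
            return w, True
        return w, False               # nonzero syndrome, even parity: 2-bit error

    def resultant_word(w1, w2, decode=secded8_decode):
        """Pick Wr: use whichever read-out data word is a codeword or can be
        corrected to one; raise if both are detected as uncorrectable."""
        c1, ok1 = decode(w1)
        c2, ok2 = decode(w2)
        if ok1:
            return c1                 # W1 or W1,cor
        if ok2:
            return c2                 # W2 or W2,cor
        raise ValueError("both data words detected as uncorrectable")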
There are four possible assignments of the bits x5 and x7, which can be distinguished as follows:
x5 = x7 = 0: the first data word W1 is error-free in the bits x15 and x17.
x5 = x7 = 1: the second data word W2 is error-free in the bits x25 and x27.
x5 = 0 and x7 = 1: both the first data word W1 has a 1-bit error in the bits x15 and x17 and the second data word W2 has a 1-bit error in the bits x25 and x27.
x5 = 1 and x7 = 0: both the first data word W1 has a 1-bit error in the bits x15 and x17 and the second data word W2 has a 1-bit error in the bits x25 and x27.
Dependent on which values have been written to the memory cells Sp5 and Sp7, and irrespective of whether the bits read out from these memory cells are erroneous or correct, it applies that either one of the data words W1 or W2 is error-free in the corresponding bits x15 and x17 or in the bits x25 and x27, or both data words have a 1-bit error in these bits. Either no error or at most a 1-bit error occurs in one of the data words, so that in at least one of the data words the number of errors can be reduced by at least one error. FIG. 2 shows distributions of physical values V for memory cells of a memory. One distribution 21 is assigned to a binary value 1 and one distribution 22 is assigned to a binary value 0. The distributions 21 and 22 overlap in an overlapping region 23. Also shown are a reference value R+ 27, a reference value R− 28 and a reference value R 29, where R+ > R > R− > 0. The reference value R− 28 may be referred to as the left reference value, the reference value R 29 may be referred to as the middle reference value and the reference value R+ 27 may be referred to as the right reference value. Three memory cells Spj 24, Spk 25 and Spl 26, with their corresponding physical values Vj, Vk and Vl, are considered. These may be physical states of the memory cells or states derived from the physical states of the memory cells. For the sake of simplification, it is assumed below that the reference values 27 to 29 for all three memory cells 24 to 26 are equal. However, examples that provide at least partially different reference values are also possible. The comparison of each physical value with three reference values allows three binary digital values to be determined for each of the memory cells as follows: For the memory cell Spj 24 with the physical value Vj, the digital values αj, βj and γj are determined by αj = 1 for Vj ≤ R− and αj = 0 for Vj > R−; βj = 1 for Vj ≤ R+ and βj = 0 for Vj > R+; γj = 1 for Vj ≤ R and γj = 0 for Vj > R. For the memory cell Spk 25 with the physical value Vk, the digital values αk, βk and γk are determined by αk = 1 for Vk ≤ R− and αk = 0 for Vk > R−; βk = 1 for Vk ≤ R+ and βk = 0 for Vk > R+; γk = 1 for Vk ≤ R and γk = 0 for Vk > R. For the memory cell Spl 26 with the physical value Vl, the digital values αl, βl and γl are determined by αl = 1 for Vl ≤ R− and αl = 0 for Vl > R−; βl = 1 for Vl ≤ R+ and βl = 0 for Vl > R+; γl = 1 for Vl ≤ R and γl = 0 for Vl > R. It is evident from FIG. 2 that: αj = βj = γj = 1, αk = βk = γk = 0, αl = 0, βl = γl = 1. If a first data word W1 = x11, ..., x1n and a second data word W2 = x21, ..., x2n are provided, then one option for i = 1, ..., n is to determine the bits x1i of the first data word and the bits x2i of the second data word such that the following applies: x1i = x2i = αi for αi = βi, and x1i = γi and x2i = γi ⊕ 1 for αi ≠ βi. For the memory cell Spj 24, αj = βj = 1, and the memory cell is in a reliable state. The jth component x1j of the first data word W1 and the jth component x2j of the second data word W2 are equal and equal to αj = βj. For the memory cell Spk 25, αk = βk = 0, and the memory cell is in a reliable state. The kth component x1k of the first data word W1 and the kth component x2k of the second data word W2 are equal and equal to αk = βk.
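The three-reference-value rule just stated can be sketched as follows, with the middle reference R breaking the tie for an unreliable cell; the names and helper form are illustrative.

    def classify_three_refs(read_value, r_minus, r_mid, r_plus):
        """Return (x1i, x2i) using three reference values R- < R < R+.
        Reliable cell (alpha == beta): both words get the same bit.
        Unreliable cell: x1i = gamma, x2i = gamma XOR 1, per the rule above."""
        alpha = 1 if read_value <= r_minus else 0
        beta = 1 if read_value <= r_plus else 0
        gamma = 1 if read_value <= r_mid else 0
        if alpha == beta:
            return alpha, alpha
        return gamma, gamma ^ 1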
For the memory cell Spl 26, αl ≠ βl, and the memory cell is in an unreliable state. Its corresponding physical value Vl lies in the overlapping region 23 of the distributions 21 and 22. The lth component x1l of the first data word W1 is equal to γl and the lth component x2l of the second data word W2 is equal to γl ⊕ 1. The lth components of the first data word and of the second data word are unequal and inverse to one another. It is an option to use a first data word W1 = x11, ..., x1n, a second data word W2 = x21, ..., x2n, a third data word W3 = x31, ..., x3n and a fourth data word W4 = x41, ..., x4n, and to determine the bits x1i, x2i, x3i, x4i for the data words W1, W2, W3 and W4 and for i = 1, ..., n, for example, in the following way: x1i = x2i = x3i = x4i = αi for αi = βi, and x1i = 0, x2i = 1, x3i = γi, x4i = γi ⊕ 1 for αi ≠ βi. It then applies for i = 1, ..., n that x4i = x1i ⊕ x2i ⊕ x3i, so that the components of the fourth data word W4 are equal to an exclusive-OR sum (also referred to as an XOR sum) of the corresponding components of the data words W1, W2 and W3. If, for example, a memory cell SpK with 1 ≤ K ≤ n is a first memory cell that is in an unreliable state, so that αK ≠ βK applies, and if a memory cell SpL with 1 ≤ K < L ≤ n is a second memory cell that is in an unreliable state, so that αL ≠ βL applies, then it is advantageous if the tuples [x1K, x1L], [x2K, x2L], [x3K, x3L], [x4K, x4L] of the bits of the data words W1, W2, W3 and W4 are determined such that these tuples form all four possible binary tuples. For the first memory cell SpK in an unreliable state, for example, x1K, x2K, x3K, x4K are determined as x1K = 0, x2K = 1, x3K = 0, x4K = 1. For the second memory cell SpL in an unreliable state, for example, x1L, x2L, x3L, x4L are determined as x1L = 0, x2L = 1, x3L = 1, x4L = 0. It then applies that [x1K, x1L] = [0, 0], [x2K, x2L] = [1, 1], [x3K, x3L] = [0, 1], [x4K, x4L] = [1, 0], and each of the four possible tuples [0, 0], [1, 1], [0, 1] and [1, 0] occurs in one of the four data words. If one of the data words W1, W2, W3 and W4 is a codeword of the error code C used, then this codeword can be determined as the resultant data word Wr. If one of the data words W1, W2, W3 and W4, for example W2, is an erroneous data word that can be corrected to a codeword W2,cor of the code C by using the error code C, then the codeword W2,cor can be used as the resultant data word Wr = W2,cor. The case where the memory cells SpK and SpL are the only two memory cells of the memory cells Sp1, ..., Spn that are unreliable is considered by way of example. The digital values γK and γL form the tuple [γK, γL]. In one of the four data words W1, W2, W3 and W4, one of the tuples [x1K, x1L] = [0, 0], [x2K, x2L] = [1, 1], [x3K, x3L] = [0, 1], [x4K, x4L] = [1, 0] is equal to the tuple [xK, xL] of the bits xK, xL which have been written to the memory cells SpK and SpL, irrespective of which binary values have been erroneously or correctly output by the memory cells SpK and SpL in an unreliable state. For example, for xK = 1, xL = 1, the second data word W2 is a codeword of the error code C if no error has occurred in the n−2 bits x21, ..., x2K−1, x2K+1, ..., x2L−1, x2L+1, ..., x2n. An error in one of these n−2 bits of the second data word W2 can then be corrected as a 1-bit error, irrespective of the fact that two memory cells are in an unreliable state and independently of the erroneous or correct binary values output by these two memory cells. Example: Reading Out Memory Cells, Evaluation in the Time Domain Reading out of memory cells in the time domain by using multiple reference values is described below by way of example.
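The four-data-word construction can be sketched directly from the per-cell digital values; the assert documents the XOR relation x4i = x1i ⊕ x2i ⊕ x3i stated above. The input lists of alpha, beta and gamma values are assumed to have been obtained as in the earlier sketches.

    def four_words(alphas, betas, gammas):
        """Build W1..W4: reliable positions carry alpha in all four words;
        unreliable positions get 0, 1, gamma and gamma XOR 1 respectively."""
        w1, w2, w3, w4 = [], [], [], []
        for a, b, g in zip(alphas, betas, gammas):
            if a == b:                      # reliable cell
                for w in (w1, w2, w3, w4):
                    w.append(a)
            else:                           # unreliable cell
                w1.append(0)
                w2.append(1)
                w3.append(g)
                w4.append(g ^ 1)
        # XOR relation: x4i = x1i ^ x2i ^ x3i holds for every component.
        assert all(d == x ^ y ^ z for x, y, z, d in zip(w1, w2, w3, w4))
        return w1, w2, w3, w4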
By way of example, three memory cells Sp1, Sp2 and Sp3 and two different reference values R− and R+ are considered. Dependent on the state of the memory cell Sp1, a derived value v1(t) is determined. The derived value v1(t) is a binary, time-dependent value. Up to a point in time τ1, it assumes the value 0 and as from the point in time τ1 it assumes the value 1. For example, the state of the memory cell Sp1 may be determined by an electrical resistance value R1. In dependence on the resistance value R1, a reading current I1 is determined when reading out the memory cell Sp1. The reading current I1 can be integrated by using a capacitance Ca over time in relation to a voltage V1(t), which is compared by means of a comparator with a threshold value of a voltage V. At the output of the comparator, the binary time-dependent value v1(t) is provided. As long as the voltage V1(t) is ≤ V, then v1(t) = 0. If the voltage V1(t) is > V, then v1(t) = 1. Correspondingly, a binary derived value v2(t), which up to a point in time τ2 assumes the value 0 and as from the point in time τ2 assumes the binary value 1, is determined when reading out the memory cell Sp2. Also, a binary derived value v3(t), which up to a point in time τ3 assumes the value 0 and as from the point in time τ3 assumes the binary value 1, is determined when reading out the memory cell Sp3. Furthermore, a binary derived value r−(t), which up to a point in time τ− assumes the value 0 and as from the point in time τ− assumes the binary value 1, is determined for the reference value R−. In addition, a binary derived value r+(t), which up to a point in time τ+ assumes the value 0 and as from the point in time τ+ assumes the binary value 1, is determined for the reference value R+. FIG. 3 shows a circuit arrangement with 6 latches 311, 312, 321, 322, 331 and 332. Each of the latches comprises a data input, an input for the input of a hold signal and a data output. Also represented in FIG. 3 are five inputs 31 to 35, wherein at the input 31 there is a binary signal v1(t), at the input 32 there is a binary signal v2(t), at the input 33 there is a binary signal v3(t), at the input 34 there is a binary signal r−(t) and at the input 35 there is a binary signal r+(t). The input 31 is connected to the data inputs of the latches 311 and 312, the input 32 is connected to the data inputs of the latches 321 and 322, and the input 33 is connected to the data inputs of the latches 331 and 332. The input 34 is connected to the hold-signal inputs of the latches 311, 321 and 331, and the input 35 is connected to the hold-signal inputs of the latches 312, 322 and 332. The value α1 is provided at the output of the latch 311, the value β1 is provided at the output of the latch 312, the value α2 is provided at the output of the latch 321, the value β2 is provided at the output of the latch 322, the value α3 is provided at the output of the latch 331 and the value β3 is provided at the output of the latch 332. If the hold signal of a latch assumes the value 1, the value at the data input of the respective latch is stored in the latch. If, for example, τ− < τ1, then the value 0 is stored in the latch 311. The hold signal r−(t) is equal to 1 before the signal v1(t) at the data input assumes the value 1. If τ+ < τ1, then the value 0 is likewise stored in the latch 312 and it is the case that α1 = β1. If τ− < τ1, then the value 0 is stored in the latch 311. If τ+ > τ1, then the value 1 is stored in the latch 312 and it is the case that α1 ≠ β1. The further values stored in the latches are determined analogously.
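The latch behavior can be modeled by comparing rise times: a latch captures the value its data input has at the moment its hold signal rises. The sketch below follows the pairing used in the example above (α held by r−(t), β held by r+(t)); the function name and the plain floats standing in for rise times are illustrative assumptions.

    def latch_pair(tau_cell, tau_minus, tau_plus):
        """Model of one cell's two latches in the time domain. tau_cell is
        the time at which the cell signal v(t) rises from 0 to 1; tau_minus
        and tau_plus are the rise times of the hold signals r-(t) and r+(t).
        Each latch stores v(hold time): 1 if the cell signal rose first."""
        alpha = 1 if tau_cell < tau_minus else 0
        beta = 1 if tau_cell < tau_plus else 0
        return alpha, beta

    # A cell whose signal rises between the two reference times is flagged
    # as unreliable: latch_pair(5.0, 4.0, 6.0) -> (0, 1), i.e. alpha != beta.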
The content of the latches is determined on the basis of the time sequence in which the binary signals v1(t), v2(t), v3(t), r−(t) and r+(t) assume the value 1. The content of the latches also determines the values α1, β1, α2, β2, α3 and β3. If n memory cells are read out, then, with two reference values, 2·n latches can be used. If N > 2 reference values are used, then N·n latches can be used. Determination of a Reference Value from a Set of Reference Values How at least one reference value can be determined from a set of reference values is described below by way of example. By way of example, five reference values R−−, R−, R, R+, R++ with R−− > R− > R > R+ > R++ form a set of reference values. It is an option when reading out from memory cells to compare a physical value determined when reading out, or a value derived from the physical value, with the reference values R− and R+, to determine digital values and to form a first data word W1(R−, R+) and a second data word W2(R−, R+). If the first data word W1(R−, R+) is error-free or can be corrected by using the error code considered, then the first data word, or if applicable the corrected first data word, can be used as the resultant data word Wr(R−, R+). If the second data word W2(R−, R+) is error-free or can be corrected by using the error code considered, then the second data word, or if applicable the corrected second data word, can be used as the resultant data word Wr(R−, R+). The memory cells may be part of an addressable memory. For example, memory cells of a first memory area, which is determined by a first address area, may be read out by using the reference values R− and R+. In this case it may be determined how many errors have occurred when reading out the memory cells by using the reference values R− and R+. For example, it may be determined how many reading operations at a reading address of the chosen memory area lead to an error that could not be corrected by using the error code. It is an option to read out memory cells of a second memory area, which can be determined by a second address area, by using the two reference values R−− and R, to form a first data word W1(R−−, R), a second data word W2(R−−, R) and a resultant data word Wr(R−−, R), as described above for the reference values R− and R+. In this case it can be determined how many errors have occurred when reading out the memory cells of the second memory area by using the reference values R−− and R. For example, it may be determined how many reading operations at the reading address of the chosen memory area lead to an error that could not be corrected by using the error code. Further reference values or combinations of reference values may respectively be used for further memory areas when reading out, and it may be determined how many errors cannot be corrected by the error code. It is a further option to use, when reading out from memory cells at a subsequent time, those reference values or those combinations of reference values which have, for example, led to a minimum number of errors that cannot be corrected. It is also an option to determine for the various memory areas the number of unreliable cells occurring when reading out. If an addressable memory is used, for example in a motor vehicle, the reading out of various memory areas and the determining of suitable reference values or combinations of reference values may take place when switching on or initializing (powering up).
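The area-by-area calibration described above can be sketched as a simple search over candidate reference-value combinations; read_area is a hypothetical callback that reads a test memory area with the given combination and returns the number of reads the error code could not correct.

    def pick_reference_combination(candidates, read_area):
        """Return the candidate combination of reference values that produced
        the fewest uncorrectable reads on its test memory area, e.g. at
        power-up; subsequent reads then use the winning combination."""
        best, best_errors = None, None
        for combination in candidates:
            errors = read_area(combination)
            if best_errors is None or errors < best_errors:
                best, best_errors = combination, errors
        return best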
The determined reference value or the determined combination of reference values can be used during subsequent operation. Such an approach to determining the reference values may be advantageous if the distributions of the physical values that correspond to the stored values 1 and 0 change due to temperature influences or due to a loss of charge over time. The reference values used may also be dynamically adapted to the changing physical state of the memory cells. Although the disclosure has been more specifically illustrated and described in detail by means of the at least one example embodiment shown, the disclosure is not restricted thereto and other variations can be derived therefrom by a person skilled in the art without departing from the scope of protection of the disclosure. | 32,029 |
11861185 | DESCRIPTION OF EMBODIMENTS Example methods, apparatus, and products for content masking in a storage system in accordance with embodiments of the present disclosure are described with reference to the accompanying drawings, beginning with FIG. 1A. FIG. 1A illustrates an example system for data storage, in accordance with some implementations. System 100 (also referred to as “storage system” herein) includes numerous elements for purposes of illustration rather than limitation. It may be noted that system 100 may include the same, more, or fewer elements configured in the same or different manner in other implementations. System 100 includes a number of computing devices 164A-B. Computing devices (also referred to as “client devices” herein) may be embodied, for example, as a server in a data center, a workstation, a personal computer, a notebook, or the like. Computing devices 164A-B may be coupled for data communications to one or more storage arrays 102A-B through a storage area network (‘SAN’) 158 or a local area network (‘LAN’) 160. The SAN 158 may be implemented with a variety of data communications fabrics, devices, and protocols. For example, the fabrics for SAN 158 may include Fibre Channel, Ethernet, Infiniband, Serial Attached Small Computer System Interface (‘SAS’), or the like. Data communications protocols for use with SAN 158 may include Advanced Technology Attachment (‘ATA’), Fibre Channel Protocol, Small Computer System Interface (‘SCSI’), Internet Small Computer System Interface (‘iSCSI’), HyperSCSI, Non-Volatile Memory Express (‘NVMe’) over Fabrics, or the like. It may be noted that SAN 158 is provided for illustration, rather than limitation. Other data communication couplings may be implemented between computing devices 164A-B and storage arrays 102A-B. The LAN 160 may also be implemented with a variety of fabrics, devices, and protocols. For example, the fabrics for LAN 160 may include Ethernet (802.3), wireless (802.11), or the like. Data communication protocols for use in LAN 160 may include Transmission Control Protocol (‘TCP’), User Datagram Protocol (‘UDP’), Internet Protocol (‘IP’), HyperText Transfer Protocol (‘HTTP’), Wireless Access Protocol (‘WAP’), Handheld Device Transport Protocol (‘HDTP’), Session Initiation Protocol (‘SIP’), Real Time Protocol (‘RTP’), or the like. Storage arrays 102A-B may provide persistent data storage for the computing devices 164A-B. Storage array 102A may be contained in a chassis (not shown), and storage array 102B may be contained in another chassis (not shown), in implementations. Storage arrays 102A and 102B may include one or more storage array controllers 110A-D (also referred to as “controller” herein). A storage array controller 110A-D may be embodied as a module of automated computing machinery comprising computer hardware, computer software, or a combination of computer hardware and software. In some implementations, the storage array controllers 110A-D may be configured to carry out various storage tasks. Storage tasks may include writing data received from the computing devices 164A-B to storage array 102A-B, erasing data from storage array 102A-B, retrieving data from storage array 102A-B and providing data to computing devices 164A-B, monitoring and reporting of disk utilization and performance, performing redundancy operations, such as Redundant Array of Independent Drives (‘RAID’) or RAID-like data redundancy operations, compressing data, encrypting data, and so forth.
Storage array controller 110A-D may be implemented in a variety of ways, including as a Field Programmable Gate Array (‘FPGA’), a Programmable Logic Chip (‘PLC’), an Application Specific Integrated Circuit (‘ASIC’), System-on-Chip (‘SOC’), or any computing device that includes discrete components such as a processing device, central processing unit, computer memory, or various adapters. Storage array controller 110A-D may include, for example, a data communications adapter configured to support communications via the SAN 158 or LAN 160. In some implementations, storage array controller 110A-D may be independently coupled to the LAN 160. In implementations, storage array controller 110A-D may include an I/O controller or the like that couples the storage array controller 110A-D for data communications, through a midplane (not shown), to a persistent storage resource 170A-B (also referred to as a “storage resource” herein). The persistent storage resource 170A-B may include any number of storage drives 171A-F (also referred to as “storage devices” herein) and any number of non-volatile Random Access Memory (‘NVRAM’) devices (not shown). In some implementations, the NVRAM devices of a persistent storage resource 170A-B may be configured to receive, from the storage array controller 110A-D, data to be stored in the storage drives 171A-F. In some examples, the data may originate from computing devices 164A-B. In some examples, writing data to the NVRAM device may be carried out more quickly than directly writing data to the storage drive 171A-F. In implementations, the storage array controller 110A-D may be configured to utilize the NVRAM devices as a quickly accessible buffer for data destined to be written to the storage drives 171A-F. Latency for write requests using NVRAM devices as a buffer may be improved relative to a system in which a storage array controller 110A-D writes data directly to the storage drives 171A-F. In some implementations, the NVRAM devices may be implemented with computer memory in the form of high bandwidth, low latency RAM. The NVRAM device is referred to as “non-volatile” because the NVRAM device may receive or include a unique power source that maintains the state of the RAM after main power loss to the NVRAM device. Such a power source may be a battery, one or more capacitors, or the like. In response to a power loss, the NVRAM device may be configured to write the contents of the RAM to a persistent storage, such as the storage drives 171A-F. In implementations, storage drive 171A-F may refer to any device configured to record data persistently, where “persistently” or “persistent” refers to a device's ability to maintain recorded data after loss of power. In some implementations, storage drive 171A-F may correspond to non-disk storage media. For example, the storage drive 171A-F may be one or more solid-state drives (‘SSDs’), flash memory based storage, any type of solid-state non-volatile memory, or any other type of non-mechanical storage device. In other implementations, storage drive 171A-F may include mechanical or spinning hard disks, such as hard-disk drives (‘HDD’). In some implementations, the storage array controllers 110A-D may be configured for offloading device management responsibilities from storage drive 171A-F in storage array 102A-B. For example, storage array controllers 110A-D may manage control information that may describe the state of one or more memory blocks in the storage drives 171A-F.
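Returning to the NVRAM write buffering described earlier in this passage, the following sketch illustrates the acknowledge-from-NVRAM, destage-later pattern. The class name, the deque standing in for the NVRAM region and the flush_to_drive callback are illustrative assumptions, not the controller's actual interfaces.

    import threading
    from collections import deque

    class NvramWriteBuffer:
        """A write is acknowledged once it sits in (battery-backed) NVRAM;
        a background flusher later destages it to the storage drives."""

        def __init__(self, flush_to_drive):
            self.nvram = deque()
            self.flush_to_drive = flush_to_drive
            self.lock = threading.Lock()

        def write(self, data):
            with self.lock:
                self.nvram.append(data)   # fast, low-latency buffer insert
            return "ack"                  # host sees low write latency

        def destage(self):
            with self.lock:
                pending, self.nvram = list(self.nvram), deque()
            for data in pending:
                self.flush_to_drive(data)  # slower persistent write to drives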
The control information may indicate, for example, that a particular memory block has failed and should no longer be written to, that a particular memory block contains boot code for a storage array controller 110A-D, the number of program-erase (‘P/E’) cycles that have been performed on a particular memory block, the age of data stored in a particular memory block, the type of data that is stored in a particular memory block, and so forth. In some implementations, the control information may be stored with an associated memory block as metadata. In other implementations, the control information for the storage drives 171A-F may be stored in one or more particular memory blocks of the storage drives 171A-F that are selected by the storage array controller 110A-D. The selected memory blocks may be tagged with an identifier indicating that the selected memory block contains control information. The identifier may be utilized by the storage array controllers 110A-D in conjunction with storage drives 171A-F to quickly identify the memory blocks that contain control information. For example, the storage controllers 110A-D may issue a command to locate memory blocks that contain control information. It may be noted that control information may be so large that parts of the control information may be stored in multiple locations, that the control information may be stored in multiple locations for purposes of redundancy, for example, or that the control information may otherwise be distributed across multiple memory blocks in the storage drive 171A-F. In implementations, storage array controllers 110A-D may offload device management responsibilities from storage drives 171A-F of storage array 102A-B by retrieving, from the storage drives 171A-F, control information describing the state of one or more memory blocks in the storage drives 171A-F. Retrieving the control information from the storage drives 171A-F may be carried out, for example, by the storage array controller 110A-D querying the storage drives 171A-F for the location of control information for a particular storage drive 171A-F. The storage drives 171A-F may be configured to execute instructions that enable the storage drive 171A-F to identify the location of the control information. The instructions may be executed by a controller (not shown) associated with or otherwise located on the storage drive 171A-F and may cause the storage drive 171A-F to scan a portion of each memory block to identify the memory blocks that store control information for the storage drives 171A-F. The storage drives 171A-F may respond by sending a response message to the storage array controller 110A-D that includes the location of control information for the storage drive 171A-F. Responsive to receiving the response message, storage array controllers 110A-D may issue a request to read data stored at the address associated with the location of control information for the storage drives 171A-F. In other implementations, the storage array controllers 110A-D may further offload device management responsibilities from storage drives 171A-F by performing, in response to receiving the control information, a storage drive management operation. A storage drive management operation may include, for example, an operation that is typically performed by the storage drive 171A-F (e.g., the controller (not shown) associated with a particular storage drive 171A-F).
A storage drive management operation may include, for example, ensuring that data is not written to failed memory blocks within the storage drive 171A-F, ensuring that data is written to memory blocks within the storage drive 171A-F in such a way that adequate wear leveling is achieved, and so forth. In implementations, storage array 102A-B may implement two or more storage array controllers 110A-D. For example, storage array 102A may include storage array controllers 110A and storage array controllers 110B. At a given instance, a single storage array controller 110A-D (e.g., storage array controller 110A) of a storage system 100 may be designated with primary status (also referred to as “primary controller” herein), and other storage array controllers 110A-D (e.g., storage array controller 110B) may be designated with secondary status (also referred to as “secondary controller” herein). The primary controller may have particular rights, such as permission to alter data in persistent storage resource 170A-B (e.g., writing data to persistent storage resource 170A-B). At least some of the rights of the primary controller may supersede the rights of the secondary controller. For instance, the secondary controller may not have permission to alter data in persistent storage resource 170A-B when the primary controller has the right. The status of storage array controllers 110A-D may change. For example, storage array controller 110A may be designated with secondary status, and storage array controller 110B may be designated with primary status. In some implementations, a primary controller, such as storage array controller 110A, may serve as the primary controller for one or more storage arrays 102A-B, and a second controller, such as storage array controller 110B, may serve as the secondary controller for the one or more storage arrays 102A-B. For example, storage array controller 110A may be the primary controller for storage array 102A and storage array 102B, and storage array controller 110B may be the secondary controller for storage arrays 102A and 102B. In some implementations, storage array controllers 110C and 110D (also referred to as “storage processing modules”) may have neither primary nor secondary status. Storage array controllers 110C and 110D, implemented as storage processing modules, may act as a communication interface between the primary and secondary controllers (e.g., storage array controllers 110A and 110B, respectively) and storage array 102B. For example, storage array controller 110A of storage array 102A may send a write request, via SAN 158, to storage array 102B. The write request may be received by both storage array controllers 110C and 110D of storage array 102B. Storage array controllers 110C and 110D facilitate the communication, e.g., send the write request to the appropriate storage drive 171A-F. It may be noted that in some implementations storage processing modules may be used to increase the number of storage drives controlled by the primary and secondary controllers. In implementations, storage array controllers 110A-D are communicatively coupled, via a midplane (not shown), to one or more storage drives 171A-F and to one or more NVRAM devices (not shown) that are included as part of a storage array 102A-B. The storage array controllers 110A-D may be coupled to the midplane via one or more data communication links, and the midplane may be coupled to the storage drives 171A-F and the NVRAM devices via one or more data communications links.
The data communications links described herein are collectively illustrated by data communications links 108A-D and may include a Peripheral Component Interconnect Express (‘PCIe’) bus, for example. FIG. 1B illustrates an example system for data storage, in accordance with some implementations. Storage array controller 101 illustrated in FIG. 1B may be similar to the storage array controllers 110A-D described with respect to FIG. 1A. In one example, storage array controller 101 may be similar to storage array controller 110A or storage array controller 110B. Storage array controller 101 includes numerous elements for purposes of illustration rather than limitation. It may be noted that storage array controller 101 may include the same, more, or fewer elements configured in the same or different manner in other implementations. It may be noted that elements of FIG. 1A may be included below to help illustrate features of storage array controller 101. Storage array controller 101 may include one or more processing devices 104 and random access memory (‘RAM’) 111. Processing device 104 (or controller 101) represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device 104 (or controller 101) may be a complex instruction set computing (‘CISC’) microprocessor, reduced instruction set computing (‘RISC’) microprocessor, very long instruction word (‘VLIW’) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processing device 104 (or controller 101) may also be one or more special-purpose processing devices such as an ASIC, an FPGA, a digital signal processor (‘DSP’), network processor, or the like. The processing device 104 may be connected to the RAM 111 via a data communications link 106, which may be embodied as a high speed memory bus such as a Double-Data Rate 4 (‘DDR4’) bus. Stored in RAM 111 is an operating system 112. In some implementations, instructions 113 are stored in RAM 111. Instructions 113 may include computer program instructions for performing operations in a direct-mapped flash storage system. In one embodiment, a direct-mapped flash storage system is one that addresses data blocks within flash drives directly and without an address translation performed by the storage controllers of the flash drives. In implementations, storage array controller 101 includes one or more host bus adapters 103A-C that are coupled to the processing device 104 via a data communications link 105A-C. In implementations, host bus adapters 103A-C may be computer hardware that connects a host system (e.g., the storage array controller) to other networks and storage arrays. In some examples, host bus adapters 103A-C may be a Fibre Channel adapter that enables the storage array controller 101 to connect to a SAN, an Ethernet adapter that enables the storage array controller 101 to connect to a LAN, or the like. Host bus adapters 103A-C may be coupled to the processing device 104 via a data communications link 105A-C such as, for example, a PCIe bus. In implementations, storage array controller 101 may include a host bus adapter 114 that is coupled to an expander 115. The expander 115 may be used to attach a host system to a larger number of storage drives. The expander 115 may, for example, be a SAS expander utilized to enable the host bus adapter 114 to attach to storage drives in an implementation where the host bus adapter 114 is embodied as a SAS controller.
In implementations, storage array controller 101 may include a switch 116 coupled to the processing device 104 via a data communications link 109. The switch 116 may be a computer hardware device that can create multiple endpoints out of a single endpoint, thereby enabling multiple devices to share a single endpoint. The switch 116 may, for example, be a PCIe switch that is coupled to a PCIe bus (e.g., data communications link 109) and presents multiple PCIe connection points to the midplane. In implementations, storage array controller 101 includes a data communications link 107 for coupling the storage array controller 101 to other storage array controllers. In some examples, data communications link 107 may be a QuickPath Interconnect (QPI) interconnect. A traditional storage system that uses traditional flash drives may implement a process across the flash drives that are part of the traditional storage system. For example, a higher level process of the storage system may initiate and control a process across the flash drives. However, a flash drive of the traditional storage system may include its own storage controller that also performs the process. Thus, for the traditional storage system, a higher level process (e.g., initiated by the storage system) and a lower level process (e.g., initiated by a storage controller of the storage system) may both be performed. To resolve various deficiencies of a traditional storage system, operations may be performed by higher level processes and not by the lower level processes. For example, the flash storage system may include flash drives that do not include storage controllers that provide the process. Thus, the operating system of the flash storage system itself may initiate and control the process. This may be accomplished by a direct-mapped flash storage system that addresses data blocks within the flash drives directly and without an address translation performed by the storage controllers of the flash drives. The operating system of the flash storage system may identify and maintain a list of allocation units across multiple flash drives of the flash storage system. The allocation units may be entire erase blocks or multiple erase blocks. The operating system may maintain a map or address range that directly maps addresses to erase blocks of the flash drives of the flash storage system. Direct mapping to the erase blocks of the flash drives may be used to rewrite data and erase data. For example, the operations may be performed on one or more allocation units that include a first data and a second data, where the first data is to be retained and the second data is no longer being used by the flash storage system. The operating system may initiate the process to write the first data to new locations within other allocation units, erase the second data, and mark the allocation units as being available for use for subsequent data. Thus, the process may be performed only by the higher level operating system of the flash storage system, without an additional lower level process being performed by controllers of the flash drives. Advantages of the process being performed only by the operating system of the flash storage system include increased reliability of the flash drives of the flash storage system, as unnecessary or redundant write operations are not being performed during the process. One possible point of novelty here is the concept of initiating and controlling the process at the operating system of the flash storage system.
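A sketch of that operating-system-driven process over allocation units follows; live and write_live are hypothetical callbacks for deciding which data is still referenced and for rewriting retained data elsewhere, and the dict shape of an allocation unit is an assumption for illustration.

    def relocate_and_reclaim(allocation_units, live, write_live):
        """For each allocation unit (e.g. one or more erase blocks), rewrite
        live data to a new location, then mark the unit available for reuse."""
        reclaimed = []
        for unit in allocation_units:
            retained = [blk for blk in unit["blocks"] if live(blk)]
            for blk in retained:
                write_live(blk)           # first data: rewritten elsewhere
            unit["blocks"].clear()        # second data: no longer referenced
            unit["available"] = True      # unit may be erased and reused
            reclaimed.append(unit)
        return reclaimed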
In addition, the process can be controlled by the operating system across multiple flash drives. This is in contrast to the process being performed by a storage controller of a flash drive. A storage system can consist of two storage array controllers that share a set of drives for failover purposes, or it could consist of a single storage array controller that provides a storage service that utilizes multiple drives, or it could consist of a distributed network of storage array controllers each with some number of drives or some amount of Flash storage, where the storage array controllers in the network collaborate to provide a complete storage service and collaborate on various aspects of a storage service including storage allocation and garbage collection. FIG.1Cillustrates a third example system117for data storage in accordance with some implementations. System117(also referred to as “storage system” herein) includes numerous elements for purposes of illustration rather than limitation. It may be noted that system117may include the same, more, or fewer elements configured in the same or different manner in other implementations. In one embodiment, system117includes a dual Peripheral Component Interconnect (‘PCI’) flash storage device118with separately addressable fast write storage. System117may include a storage device controller119. In one embodiment, storage device controller119A-D may be a CPU, ASIC, FPGA, or any other circuitry that may implement control structures necessary according to the present disclosure. In one embodiment, system117includes flash memory devices (e.g., including flash memory devices120a-n), operatively coupled to various channels of the storage device controller119. Flash memory devices120a-nmay be presented to the controller119A-D as an addressable collection of Flash pages, erase blocks, and/or control elements sufficient to allow the storage device controller119A-D to program and retrieve various aspects of the Flash. In one embodiment, storage device controller119A-D may perform operations on flash memory devices120a-nincluding storing and retrieving data content of pages, arranging and erasing erase blocks, tracking statistics related to the use and reuse of Flash memory pages, erase blocks, and cells, tracking and predicting error codes and faults within the Flash memory, controlling voltage levels associated with programming and retrieving contents of Flash cells, etc. In one embodiment, system117may include RAM121to store separately addressable fast-write data. In one embodiment, RAM121may be one or more separate discrete devices. In another embodiment, RAM121may be integrated into storage device controller119A-D or multiple storage device controllers. The RAM121may be utilized for other purposes as well, such as temporary program memory for a processing device (e.g., a CPU) in the storage device controller119. In one embodiment, system117may include a stored energy device122, such as a rechargeable battery or a capacitor. Stored energy device122may store energy sufficient to power the storage device controller119, some amount of the RAM (e.g., RAM121), and some amount of Flash memory (e.g., Flash memory120a-120n) for sufficient time to write the contents of RAM to Flash memory. In one embodiment, storage device controller119A-D may write the contents of RAM to Flash Memory if the storage device controller detects loss of external power. In one embodiment, system117includes two data communications links123a,123b.
In one embodiment, data communications links123a,123bmay be PCI interfaces. In another embodiment, data communications links123a,123bmay be based on other communications standards (e.g., HyperTransport, InfiniBand, etc.). Data communications links123a,123bmay be based on non-volatile memory express (‘NVMe’) or NVMe over fabrics (‘NVMf’) specifications that allow external connection to the storage device controller119A-D from other components in the storage system117. It should be noted that data communications links may be interchangeably referred to herein as PCI buses for convenience. System117may also include an external power source (not shown), which may be provided over one or both data communications links123a,123b, or which may be provided separately. An alternative embodiment includes a separate Flash memory (not shown) dedicated for use in storing the content of RAM121. The storage device controller119A-D may present a logical device over a PCI bus which may include an addressable fast-write logical device, or a distinct part of the logical address space of the storage device118, which may be presented as PCI memory or as persistent storage. In one embodiment, operations to store into the device are directed into the RAM121. On power failure, the storage device controller119A-D may write stored content associated with the addressable fast-write logical storage to Flash memory (e.g., Flash memory120a-n) for long-term persistent storage. In one embodiment, the logical device may include some presentation of some or all of the content of the Flash memory devices120a-n, where that presentation allows a storage system including a storage device118(e.g., storage system117) to directly address Flash memory pages and directly reprogram erase blocks from storage system components that are external to the storage device through the PCI bus. The presentation may also allow one or more of the external components to control and retrieve other aspects of the Flash memory including some or all of: tracking statistics related to use and reuse of Flash memory pages, erase blocks, and cells across all the Flash memory devices; tracking and predicting error codes and faults within and across the Flash memory devices; controlling voltage levels associated with programming and retrieving contents of Flash cells; etc. In one embodiment, the stored energy device122may be sufficient to ensure completion of in-progress operations to the Flash memory devices120a-120n; the stored energy device122may power storage device controller119A-D and associated Flash memory devices (e.g.,120a-n) for those operations, as well as for the storing of fast-write RAM to Flash memory. Stored energy device122may be used to store accumulated statistics and other parameters kept and tracked by the Flash memory devices120a-nand/or the storage device controller119. Separate capacitors or stored energy devices (such as smaller capacitors near or embedded within the Flash memory devices themselves) may be used for some or all of the operations described herein. Various schemes may be used to track and optimize the life span of the stored energy component, such as adjusting voltage levels over time, partially discharging the stored energy device122to measure corresponding discharge characteristics, etc. If the available energy decreases over time, the effective available capacity of the addressable fast-write storage may be decreased to ensure that it can be written safely based on the currently available stored energy.
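The interplay between stored energy and fast-write capacity can be sketched as follows. This is a minimal sketch, assuming a fixed per-byte flush cost; the constant, class, and method names are hypothetical rather than taken from the disclosure.

JOULES_PER_BYTE = 2e-7     # assumed energy cost to copy one RAM byte to flash

class FastWriteStore:
    def __init__(self, ram_bytes, stored_energy_joules):
        self.ram = bytearray()
        # Derate the advertised fast-write capacity to what the stored
        # energy device can guarantee to flush after external power loss.
        self.capacity = min(ram_bytes,
                            int(stored_energy_joules / JOULES_PER_BYTE))

    def fast_write(self, data: bytes) -> bool:
        if len(self.ram) + len(data) > self.capacity:
            return False               # persistence could not be guaranteed
        self.ram.extend(data)          # safe to acknowledge as durable
        return True

    def on_external_power_loss(self, flash_append):
        # Running on the stored energy device: commit RAM content to
        # flash for long-term persistence, then release the RAM.
        flash_append(bytes(self.ram))
        self.ram.clear()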
FIG.1Dillustrates a fourth example system124for data storage in accordance with some implementations. In one embodiment, system124includes storage controllers125a,125b. In one embodiment, storage controllers125a,125bare operatively coupled to Dual PCI storage devices119a,119band119c,119d, respectively. Storage controllers125a,125bmay be operatively coupled (e.g., via a storage network130) to some number of host computers127a-n. In one embodiment, two storage controllers (e.g.,125aand125b) provide storage services, such as a SCSI block storage array, a file server, an object server, a database or data analytics service, etc. The storage controllers125a,125bmay provide services through some number of network interfaces (e.g.,126a-d) to host computers127a-noutside of the storage system124. Storage controllers125a,125bmay provide integrated services or an application entirely within the storage system124, forming a converged storage and compute system. The storage controllers125a,125bmay utilize the fast write memory within or across storage devices119a-dto journal in-progress operations to ensure the operations are not lost on a power failure, storage controller removal, storage controller or storage system shutdown, or some fault of one or more software or hardware components within the storage system124. In one embodiment, controllers125a,125boperate as PCI masters to one or the other PCI buses128a,128b. In another embodiment,128aand128bmay be based on other communications standards (e.g., HyperTransport, InfiniBand, etc.). Other storage system embodiments may operate storage controllers125a,125bas multi-masters for both PCI buses128a,128b. Alternately, a PCI/NVMe/NVMf switching infrastructure or fabric may connect multiple storage controllers. Some storage system embodiments may allow storage devices to communicate with each other directly rather than communicating only with storage controllers. In one embodiment, a storage device controller119amay be operable under direction from a storage controller125ato synthesize and transfer data to be stored into Flash memory devices from data that has been stored in RAM (e.g., RAM121ofFIG.1C). For example, a recalculated version of RAM content may be transferred after a storage controller has determined that an operation has fully committed across the storage system, or when fast-write memory on the device has reached a certain used capacity, or after a certain amount of time, to improve the safety of the data or to release addressable fast-write capacity for reuse. This mechanism may be used, for example, to avoid a second transfer over a bus (e.g.,128a,128b) from the storage controllers125a,125b. In one embodiment, a recalculation may include compressing data, attaching indexing or other metadata, combining multiple data segments together, performing erasure code calculations, etc. In one embodiment, under direction from a storage controller125a,125b, a storage device controller119a,119bmay be operable to calculate and transfer data to other storage devices from data stored in RAM (e.g., RAM121ofFIG.1C) without involvement of the storage controllers125a,125b. This operation may be used to mirror data stored in one controller125ato another controller125b, or it could be used to offload compression, data aggregation, and/or erasure coding calculations and transfers to storage devices to reduce load on storage controllers or the storage controller interface129a,129bto the PCI bus128a,128b.
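A rough sketch of this journaling and offload arrangement, in Python, is shown below. The StorageDeviceController class and its methods are hypothetical stand-ins, and the identity function stands in for a real recalculation such as compression or erasure coding.

class StorageDeviceController:
    # Hypothetical device controller with separately addressable
    # fast-write RAM and bulk flash.
    def __init__(self):
        self.fast_write_ram = []     # journaled, power-protected entries
        self.flash = []

    def journal(self, entry):
        # A storage controller journals an in-progress operation here so
        # it survives power failure or a controller fault.
        self.fast_write_ram.append(entry)

    def destage(self, recalculate):
        # Under direction from a storage controller: synthesize the final
        # form of the data (compress, index, erasure-code, ...) from RAM
        # content already on the device, then persist it locally.
        self.flash.extend(recalculate(e) for e in self.fast_write_ram)
        self.fast_write_ram.clear()

    def mirror_to(self, peer):
        # Device-to-device transfer, sparing the storage controller a
        # second copy of the data over the PCI bus.
        for entry in self.fast_write_ram:
            peer.journal(entry)

primary, secondary = StorageDeviceController(), StorageDeviceController()
primary.journal(b"write vol0 lba42")
primary.mirror_to(secondary)              # offloaded mirroring
primary.destage(lambda e: e)              # identity stands in for compression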
A storage device controller119A-D may include mechanisms for implementing high availability primitives for use by other parts of a storage system external to the Dual PCI storage device118. For example, reservation or exclusion primitives may be provided so that, in a storage system with two storage controllers providing a highly available storage service, one storage controller may prevent the other storage controller from accessing or continuing to access the storage device. This could be used, for example, in cases where one controller detects that the other controller is not functioning properly or where the interconnect between the two storage controllers may itself not be functioning properly. In one embodiment, a storage system for use with Dual PCI direct mapped storage devices with separately addressable fast write storage includes systems that manage erase blocks or groups of erase blocks as allocation units for storing data on behalf of the storage service, or for storing metadata (e.g., indexes, logs, etc.) associated with the storage service, or for proper management of the storage system itself. Flash pages, which may be a few kilobytes in size, may be written as data arrives or as the storage system is to persist data for long intervals of time (e.g., above a defined threshold of time). To commit data more quickly, or to reduce the number of writes to the Flash memory devices, the storage controllers may first write data into the separately addressable fast write storage on one or more storage devices. In one embodiment, the storage controllers125a,125bmay initiate the use of erase blocks within and across storage devices (e.g.,118) in accordance with an age and expected remaining lifespan of the storage devices, or based on other statistics. The storage controllers125a,125bmay initiate garbage collection and data migration between storage devices in accordance with pages that are no longer needed as well as to manage Flash page and erase block lifespans and to manage overall system performance. In one embodiment, the storage system124may utilize mirroring and/or erasure coding schemes as part of storing data into addressable fast write storage and/or as part of writing data into allocation units associated with erase blocks. Erasure codes may be used across storage devices, as well as within erase blocks or allocation units, or within and across Flash memory devices on a single storage device, to provide redundancy against single or multiple storage device failures or to protect against internal corruptions of Flash memory pages resulting from Flash memory operations or from degradation of Flash memory cells. Mirroring and erasure coding at various levels may be used to recover from multiple types of failures that occur separately or in combination. The embodiments depicted with reference toFIGS.2A-Gillustrate a storage cluster that stores user data, such as user data originating from one or more user or client systems or other sources external to the storage cluster. The storage cluster distributes user data across storage nodes housed within a chassis, or across multiple chassis, using erasure coding and redundant copies of metadata. Erasure coding refers to a method of data protection or reconstruction in which data is stored across a set of different locations, such as disks, storage nodes or geographic locations. 
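As a toy illustration of erasure coding, the sketch below splits data into shards plus a single XOR parity shard, so any one lost shard can be rebuilt from the survivors. Production systems would use stronger codes, such as the Reed-Solomon encoding mentioned later in this description, but the placement idea is the same; the function names here are hypothetical.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int):
    # Split data into k equal shards plus one XOR parity shard; each
    # shard would be placed on a different device, node, or location.
    shard_len = -(-len(data) // k)                   # ceiling division
    padded = data.ljust(shard_len * k, b"\x00")
    shards = [padded[i*shard_len:(i+1)*shard_len] for i in range(k)]
    parity = shards[0]
    for s in shards[1:]:
        parity = xor_bytes(parity, s)
    return shards + [parity]

def rebuild(shards, lost: int):
    # Recover the single lost shard by XOR-ing all survivors.
    survivors = [s for i, s in enumerate(shards) if i != lost]
    out = survivors[0]
    for s in survivors[1:]:
        out = xor_bytes(out, s)
    return out

shards = encode(b"user data striped across devices", k=4)
assert rebuild(shards, lost=2) == shards[2]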
Flash memory is one type of solid-state memory that may be integrated with the embodiments, although the embodiments may be extended to other types of solid-state memory or other storage medium, including non-solid state memory. Control of storage locations and workloads is distributed across the storage locations in a clustered peer-to-peer system. Tasks such as mediating communications between the various storage nodes, detecting when a storage node has become unavailable, and balancing I/Os (inputs and outputs) across the various storage nodes, are all handled on a distributed basis. Data is laid out or distributed across multiple storage nodes in data fragments or stripes that support data recovery in some embodiments. Ownership of data can be reassigned within a cluster, independent of input and output patterns. This architecture, described in more detail below, allows a storage node in the cluster to fail, with the system remaining operational, since the data can be reconstructed from other storage nodes and thus remain available for input and output operations. In various embodiments, a storage node may be referred to as a cluster node, a blade, or a server. The storage cluster may be contained within a chassis, i.e., an enclosure housing one or more storage nodes. A mechanism to provide power to each storage node, such as a power distribution bus, and a communication mechanism, such as a communication bus that enables communication between the storage nodes, are included within the chassis. The storage cluster can run as an independent system in one location according to some embodiments. In one embodiment, a chassis contains at least two instances of both the power distribution and the communication bus which may be enabled or disabled independently. The internal communication bus may be an Ethernet bus; however, other technologies such as PCIe, InfiniBand, and others are equally suitable. The chassis provides a port for an external communication bus for enabling communication between multiple chassis, directly or through a switch, and with client systems. The external communication may use a technology such as Ethernet, InfiniBand, Fibre Channel, etc. In some embodiments, the external communication bus uses different communication bus technologies for inter-chassis and client communication. If a switch is deployed within or between chassis, the switch may act as a translation between multiple protocols or technologies. When multiple chassis are connected to define a storage cluster, the storage cluster may be accessed by a client using either proprietary interfaces or standard interfaces such as network file system (‘NFS’), common internet file system (‘CIFS’), small computer system interface (‘SCSI’) or hypertext transfer protocol (‘HTTP’). Translation from the client protocol may occur at the switch, chassis external communication bus or within each storage node. In some embodiments, multiple chassis may be coupled or connected to each other through an aggregator switch. A portion and/or all of the coupled or connected chassis may be designated as a storage cluster. As discussed above, each chassis can have multiple blades, each with a media access control (‘MAC’) address, but the storage cluster is presented to an external network as having a single cluster IP address and a single MAC address in some embodiments.
Each storage node may be one or more storage servers and each storage server is connected to one or more non-volatile solid state memory units, which may be referred to as storage units or storage devices. One embodiment includes a single storage server in each storage node and between one and eight non-volatile solid state memory units; however, this one example is not meant to be limiting. The storage server may include a processor, DRAM and interfaces for the internal communication bus and power distribution for each of the power buses. Inside the storage node, the interfaces and storage unit share a communication bus, e.g., PCI Express, in some embodiments. The non-volatile solid state memory units may directly access the internal communication bus interface through a storage node communication bus, or request the storage node to access the bus interface. The non-volatile solid state memory unit contains an embedded CPU, solid state storage controller, and a quantity of solid state mass storage, e.g., between 2 and 32 terabytes (‘TB’) in some embodiments. An embedded volatile storage medium, such as DRAM, and an energy reserve apparatus are included in the non-volatile solid state memory unit. In some embodiments, the energy reserve apparatus is a capacitor, super-capacitor, or battery that enables transferring a subset of DRAM contents to a stable storage medium in the case of power loss. In some embodiments, the non-volatile solid state memory unit is constructed with a storage class memory, such as phase change or magnetoresistive random access memory (‘MRAM’) that substitutes for DRAM and enables a reduced power hold-up apparatus. One of many features of the storage nodes and non-volatile solid state storage is the ability to proactively rebuild data in a storage cluster. The storage nodes and non-volatile solid state storage can determine when a storage node or non-volatile solid state storage in the storage cluster is unreachable, independent of whether there is an attempt to read data involving that storage node or non-volatile solid state storage. The storage nodes and non-volatile solid state storage then cooperate to recover and rebuild the data in at least partially new locations. This constitutes a proactive rebuild, in that the system rebuilds data without waiting until the data is needed for a read access initiated from a client system employing the storage cluster. These and further details of the storage memory and operation thereof are discussed below. FIG.2Ais a perspective view of a storage cluster161, with multiple storage nodes150and internal solid-state memory coupled to each storage node to provide network attached storage or storage area network, in accordance with some embodiments. A network attached storage, storage area network, or a storage cluster, or other storage memory, could include one or more storage clusters161, each having one or more storage nodes150, in a flexible and reconfigurable arrangement of both the physical components and the amount of storage memory provided thereby. The storage cluster161is designed to fit in a rack, and one or more racks can be set up and populated as desired for the storage memory. The storage cluster161has a chassis138having multiple slots142. It should be appreciated that chassis138may be referred to as a housing, enclosure, or rack unit. In one embodiment, the chassis138has fourteen slots142, although other numbers of slots are readily devised.
For example, some embodiments have four slots, eight slots, sixteen slots, thirty-two slots, or other suitable number of slots. Each slot142can accommodate one storage node150in some embodiments. Chassis138includes flaps148that can be utilized to mount the chassis138on a rack. Fans144provide air circulation for cooling of the storage nodes150and components thereof, although other cooling components could be used, or an embodiment could be devised without cooling components. A switch fabric146couples storage nodes150within chassis138together and to a network for communication to the memory. In the embodiment depicted herein, the slots142to the left of the switch fabric146and fans144are shown occupied by storage nodes150, while the slots142to the right of the switch fabric146and fans144are empty and available for insertion of storage node150for illustrative purposes. This configuration is one example, and one or more storage nodes150could occupy the slots142in various further arrangements. The storage node arrangements need not be sequential or adjacent in some embodiments. Storage nodes150are hot pluggable, meaning that a storage node150can be inserted into a slot142in the chassis138, or removed from a slot142, without stopping or powering down the system. Upon insertion or removal of storage node150from slot142, the system automatically reconfigures in order to recognize and adapt to the change. Reconfiguration, in some embodiments, includes restoring redundancy and/or rebalancing data or load. Each storage node150can have multiple components. In the embodiment shown here, the storage node150includes a printed circuit board159populated by a CPU156, i.e., processor, a memory154coupled to the CPU156, and a non-volatile solid state storage152coupled to the CPU156, although other mountings and/or components could be used in further embodiments. The memory154has instructions which are executed by the CPU156and/or data operated on by the CPU156. As further explained below, the non-volatile solid state storage152includes flash or, in further embodiments, other types of solid-state memory. Referring toFIG.2A, storage cluster161is scalable, meaning that storage capacity with non-uniform storage sizes is readily added, as described above. One or more storage nodes150can be plugged into or removed from each chassis and the storage cluster self-configures in some embodiments. Plug-in storage nodes150, whether installed in a chassis as delivered or later added, can have different sizes. For example, in one embodiment a storage node150can have any multiple of 4 TB, e.g., 8 TB, 12 TB, 16 TB, 32 TB, etc. In further embodiments, a storage node150could have any multiple of other storage amounts or capacities. Storage capacity of each storage node150is broadcast, and influences decisions of how to stripe the data. For maximum storage efficiency, an embodiment can self-configure as wide as possible in the stripe, subject to a predetermined requirement of continued operation with loss of up to one, or up to two, non-volatile solid state storage units152or storage nodes150within the chassis. FIG.2Bis a block diagram showing a communications interconnect173and power distribution bus172coupling multiple storage nodes150. Referring back toFIG.2A, the communications interconnect173can be included in or implemented with the switch fabric146in some embodiments.
Where multiple storage clusters161occupy a rack, the communications interconnect173can be included in or implemented with a top of rack switch, in some embodiments. As illustrated inFIG.2B, storage cluster161is enclosed within a single chassis138. External port176is coupled to storage nodes150through communications interconnect173, while external port174is coupled directly to a storage node. External power port178is coupled to power distribution bus172. Storage nodes150may include varying amounts and differing capacities of non-volatile solid state storage152as described with reference toFIG.2A. In addition, one or more storage nodes150may be a compute only storage node as illustrated inFIG.2B. Authorities168are implemented on the non-volatile solid state storages152, for example as lists or other data structures stored in memory. In some embodiments the authorities are stored within the non-volatile solid state storage152and supported by software executing on a controller or other processor of the non-volatile solid state storage152. In a further embodiment, authorities168are implemented on the storage nodes150, for example as lists or other data structures stored in the memory154and supported by software executing on the CPU156of the storage node150. Authorities168control how and where data is stored in the non-volatile solid state storages152in some embodiments. This control assists in determining which type of erasure coding scheme is applied to the data, and which storage nodes150have which portions of the data. Each authority168may be assigned to a non-volatile solid state storage152. Each authority may control a range of inode numbers, segment numbers, or other data identifiers which are assigned to data by a file system, by the storage nodes150, or by the non-volatile solid state storage152, in various embodiments. Every piece of data, and every piece of metadata, has redundancy in the system in some embodiments. In addition, every piece of data and every piece of metadata has an owner, which may be referred to as an authority. If that authority is unreachable, for example through failure of a storage node, there is a plan of succession for how to find that data or that metadata. In various embodiments, there are redundant copies of authorities168. Authorities168have a relationship to storage nodes150and non-volatile solid state storage152in some embodiments. Each authority168, covering a range of data segment numbers or other identifiers of the data, may be assigned to a specific non-volatile solid state storage152. In some embodiments the authorities168for all of such ranges are distributed over the non-volatile solid state storages152of a storage cluster. Each storage node150has a network port that provides access to the non-volatile solid state storage(s)152of that storage node150. Data can be stored in a segment, which is associated with a segment number and that segment number is an indirection for a configuration of a RAID (redundant array of independent disks) stripe in some embodiments. The assignment and use of the authorities168thus establishes an indirection to data. Indirection may be referred to as the ability to reference data indirectly, in this case via an authority168, in accordance with some embodiments. A segment identifies a set of non-volatile solid state storage152and a local identifier into the set of non-volatile solid state storage152that may contain data. 
In some embodiments, the local identifier is an offset into the device and may be reused sequentially by multiple segments. In other embodiments the local identifier is unique for a specific segment and never reused. The offsets in the non-volatile solid state storage152are applied to locating data for writing to or reading from the non-volatile solid state storage152(in the form of a RAID stripe). Data is striped across multiple units of non-volatile solid state storage152, which may include or be different from the non-volatile solid state storage152having the authority168for a particular data segment. If there is a change in where a particular segment of data is located, e.g., during a data move or a data reconstruction, the authority168for that data segment should be consulted, at that non-volatile solid state storage152or storage node150having that authority168. In order to locate a particular piece of data, embodiments calculate a hash value for a data segment or apply an inode number or a data segment number. The output of this operation points to a non-volatile solid state storage152having the authority168for that particular piece of data. In some embodiments there are two stages to this operation. The first stage maps an entity identifier (ID), e.g., a segment number, inode number, or directory number to an authority identifier. This mapping may include a calculation such as a hash or a bit mask. The second stage is mapping the authority identifier to a particular non-volatile solid state storage152, which may be done through an explicit mapping. The operation is repeatable, so that when the calculation is performed, the result of the calculation repeatably and reliably points to a particular non-volatile solid state storage152having that authority168. The operation may include the set of reachable storage nodes as input. If the set of reachable non-volatile solid state storage units changes, the optimal set changes. In some embodiments, the persisted value is the current assignment (which is always true) and the calculated value is the target assignment the cluster will attempt to reconfigure towards. This calculation may be used to determine the optimal non-volatile solid state storage152for an authority in the presence of a set of non-volatile solid state storage152that are reachable and constitute the same cluster. The calculation also determines an ordered set of peer non-volatile solid state storage152that will also record the authority to non-volatile solid state storage mapping so that the authority may be determined even if the assigned non-volatile solid state storage is unreachable. A duplicate or substitute authority168may be consulted if a specific authority168is unavailable in some embodiments. With reference toFIGS.2A and2B, two of the many tasks of the CPU156on a storage node150are to break up write data, and reassemble read data. When the system has determined that data is to be written, the authority168for that data is located as above. When the segment ID for data is already determined, the request to write is forwarded to the non-volatile solid state storage152currently determined to be the host of the authority168determined from the segment. The host CPU156of the storage node150, on which the non-volatile solid state storage152and corresponding authority168reside, then breaks up or shards the data and transmits the data out to various non-volatile solid state storage152. The transmitted data is written as a data stripe in accordance with an erasure coding scheme.
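The two-stage location operation described above can be sketched as follows; the hash function choice, the authority count, and the explicit mapping table are hypothetical values for illustration, not the disclosed implementation.

import hashlib

NUM_AUTHORITIES = 2          # illustrative only; real systems use many more

def stage_one(entity_id: str) -> int:
    # Stage one: map an entity identifier (segment number, inode number,
    # or directory number) to an authority identifier; a hash here,
    # though a bit mask would serve equally well.
    digest = hashlib.sha256(entity_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_AUTHORITIES

# Stage two: an explicit mapping from authority identifier to the
# non-volatile solid state storage recorded as its owner, followed by
# the ordered set of peers that also record the mapping.
authority_map = {
    0: ["nvss-2", "nvss-5", "nvss-1"],
    1: ["nvss-4", "nvss-2", "nvss-3"],
}

def locate(entity_id: str, reachable: set) -> str:
    # Repeatable: the same inputs always point to the same storage unit,
    # falling back to substitutes when the owner is unreachable.
    candidates = authority_map[stage_one(entity_id)]
    return next(unit for unit in candidates if unit in reachable)

print(locate("segment:12345", reachable={"nvss-1", "nvss-2", "nvss-3"}))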
In some embodiments, data is requested to be pulled, and in other embodiments, data is pushed. In reverse, when data is read, the authority168for the segment ID containing the data is located as described above. The host CPU156of the storage node150on which the non-volatile solid state storage152and corresponding authority168reside requests the data from the non-volatile solid state storage and corresponding storage nodes pointed to by the authority. In some embodiments the data is read from flash storage as a data stripe. The host CPU156of storage node150then reassembles the read data, correcting any errors (if present) according to the appropriate erasure coding scheme, and forwards the reassembled data to the network. In further embodiments, some or all of these tasks can be handled in the non-volatile solid state storage152. In some embodiments, the segment host requests the data be sent to storage node150by requesting pages from storage and then sending the data to the storage node making the original request. In some systems, for example in UNIX-style file systems, data is handled with an index node or inode, which specifies a data structure that represents an object in a file system. The object could be a file or a directory, for example. Metadata may accompany the object, as attributes such as permission data and a creation timestamp, among other attributes. A segment number could be assigned to all or a portion of such an object in a file system. In other systems, data segments are handled with a segment number assigned elsewhere. For purposes of discussion, the unit of distribution is an entity, and an entity can be a file, a directory or a segment. That is, entities are units of data or metadata stored by a storage system. Entities are grouped into sets called authorities. Each authority has an authority owner, which is a storage node that has the exclusive right to update the entities in the authority. In other words, a storage node contains the authority, and the authority, in turn, contains entities. A segment is a logical container of data in accordance with some embodiments. A segment is an address space between medium address space and physical flash locations, i.e., data segment numbers are in this address space. Segments may also contain metadata, which enables data redundancy to be restored (rewritten to different flash locations or devices) without the involvement of higher level software. In one embodiment, an internal format of a segment contains client data and medium mappings to determine the position of that data. Each data segment is protected, e.g., from memory and other failures, by breaking the segment into a number of data and parity shards, where applicable. The data and parity shards are distributed, i.e., striped, across non-volatile solid state storage152coupled to the host CPUs156(SeeFIGS.2E and2G) in accordance with an erasure coding scheme. Usage of the term segments refers to the container and its place in the address space of segments in some embodiments. Usage of the term stripe refers to the same set of shards as a segment and includes how the shards are distributed along with redundancy or parity information in accordance with some embodiments. A series of address-space transformations takes place across an entire storage system. At the top are the directory entries (file names) which link to an inode. Inodes point into medium address space, where data is logically stored.
Medium addresses may be mapped through a series of indirect mediums to spread the load of large files, or to implement data services like deduplication or snapshots. Segment addresses are then translated into physical flash locations. Physical flash locations have an address range bounded by the amount of flash in the system in accordance with some embodiments. Medium addresses and segment addresses are logical containers, and in some embodiments use a 128-bit or larger identifier so as to be practically infinite, with a likelihood of reuse calculated as longer than the expected life of the system. Addresses from logical containers are allocated in a hierarchical fashion in some embodiments. Initially, each non-volatile solid state storage unit152may be assigned a range of address space. Within this assigned range, the non-volatile solid state storage152is able to allocate addresses without synchronization with other non-volatile solid state storage152. Data and metadata are stored by a set of underlying storage layouts that are optimized for varying workload patterns and storage devices. These layouts incorporate multiple redundancy schemes, compression formats and index algorithms. Some of these layouts store information about authorities and authority masters, while others store file metadata and file data. The redundancy schemes include error correction codes that tolerate corrupted bits within a single storage device (such as a NAND flash chip), erasure codes that tolerate the failure of multiple storage nodes, and replication schemes that tolerate data center or regional failures. In some embodiments, low density parity check (‘LDPC’) code is used within a single storage unit. Reed-Solomon encoding is used within a storage cluster, and mirroring is used within a storage grid in some embodiments. Metadata may be stored using an ordered log structured index (such as a Log Structured Merge Tree), and large data may not be stored in a log structured layout. In order to maintain consistency across multiple copies of an entity, the storage nodes agree implicitly on two things through calculations: (1) the authority that contains the entity, and (2) the storage node that contains the authority. The assignment of entities to authorities can be done by pseudo randomly assigning entities to authorities, by splitting entities into ranges based upon an externally produced key, or by placing a single entity into each authority. Examples of pseudorandom schemes are linear hashing and the Replication Under Scalable Hashing (‘RUSH’) family of hashes, including Controlled Replication Under Scalable Hashing (‘CRUSH’). In some embodiments, pseudo-random assignment is utilized only for assigning authorities to nodes because the set of nodes can change. The set of authorities cannot change, so any subjective function may be applied in these embodiments. Some placement schemes automatically place authorities on storage nodes, while other placement schemes rely on an explicit mapping of authorities to storage nodes. In some embodiments, a pseudorandom scheme is utilized to map from each authority to a set of candidate authority owners. A pseudorandom data distribution function related to CRUSH may assign authorities to storage nodes and create a list of where the authorities are assigned. Each storage node has a copy of the pseudorandom data distribution function, and can arrive at the same calculation for distributing, and later finding or locating an authority.
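A sketch of such a distribution function is given below, using rendezvous (highest-random-weight) hashing as a simple stand-in for the RUSH/CRUSH family; the node names and copy count are illustrative assumptions, not the disclosed scheme.

import hashlib

def candidate_owners(authority_id, reachable_nodes, copies=3):
    # Stand-in for a CRUSH-like pseudorandom data distribution function:
    # every node holds a copy of this function, so every node derives
    # the same placement list from the same reachable set.
    def weight(node):
        h = hashlib.sha256(f"{authority_id}/{node}".encode()).digest()
        return int.from_bytes(h[:8], "big")
    ranked = sorted(reachable_nodes, key=weight, reverse=True)
    return ranked[:copies]       # owner first, then peers recording it

nodes = ["node-a", "node-b", "node-c", "node-d", "node-e"]
print(candidate_owners(authority_id=17, reachable_nodes=nodes))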
Each of the pseudorandom schemes requires the reachable set of storage nodes as input in some embodiments in order to arrive at the same target nodes. Once an entity has been placed in an authority, the entity may be stored on physical devices so that no expected failure will lead to unexpected data loss. In some embodiments, rebalancing algorithms attempt to store the copies of all entities within an authority in the same layout and on the same set of machines. Examples of expected failures include device failures, stolen machines, datacenter fires, and regional disasters, such as nuclear or geological events. Different failures lead to different levels of acceptable data loss. In some embodiments, a stolen storage node impacts neither the security nor the reliability of the system, while depending on system configuration, a regional event could lead to no loss of data, a few seconds or minutes of lost updates, or even complete data loss. In the embodiments, the placement of data for storage redundancy is independent of the placement of authorities for data consistency. In some embodiments, storage nodes that contain authorities do not contain any persistent storage. Instead, the storage nodes are connected to non-volatile solid state storage units that do not contain authorities. The communications interconnect between storage nodes and non-volatile solid state storage units consists of multiple communication technologies and has non-uniform performance and fault tolerance characteristics. In some embodiments, as mentioned above, non-volatile solid state storage units are connected to storage nodes via PCI express, storage nodes are connected together within a single chassis using Ethernet backplane, and chassis are connected together to form a storage cluster. Storage clusters are connected to clients using Ethernet or Fibre Channel in some embodiments. If multiple storage clusters are configured into a storage grid, the multiple storage clusters are connected using the Internet or other long-distance networking links, such as a “metro scale” link or private link that does not traverse the Internet. Authority owners have the exclusive right to modify entities, to migrate entities from one non-volatile solid state storage unit to another non-volatile solid state storage unit, and to add and remove copies of entities. This allows for maintaining the redundancy of the underlying data. When an authority owner fails, is going to be decommissioned, or is overloaded, the authority is transferred to a new storage node. Transient failures make it non-trivial to ensure that all non-faulty machines agree upon the new authority location. The ambiguity that arises due to transient failures can be resolved automatically by a consensus protocol such as Paxos, by hot-warm failover schemes, via manual intervention by a remote system administrator, or by a local hardware administrator (such as by physically removing the failed machine from the cluster, or pressing a button on the failed machine). In some embodiments, a consensus protocol is used, and failover is automatic. If too many failures or replication events occur in too short a time period, the system goes into a self-preservation mode and halts replication and data movement activities until an administrator intervenes in accordance with some embodiments.
As authorities are transferred between storage nodes and authority owners update entities in their authorities, the system transfers messages between the storage nodes and non-volatile solid state storage units. With regard to persistent messages, messages that have different purposes are of different types. Depending on the type of the message, the system maintains different ordering and durability guarantees. As the persistent messages are being processed, the messages are temporarily stored in multiple durable and non-durable storage hardware technologies. In some embodiments, messages are stored in RAM, NVRAM and on NAND flash devices, and a variety of protocols are used in order to make efficient use of each storage medium. Latency-sensitive client requests may be persisted in replicated NVRAM, and then later NAND, while background rebalancing operations are persisted directly to NAND. Persistent messages are persistently stored prior to being transmitted. This allows the system to continue to serve client requests despite failures and component replacement. Although many hardware components contain unique identifiers that are visible to system administrators, manufacturers, the hardware supply chain, and ongoing monitoring quality control infrastructure, applications running on top of the infrastructure address virtualized addresses. These virtualized addresses do not change over the lifetime of the storage system, regardless of component failures and replacements. This allows each component of the storage system to be replaced over time without reconfiguration or disruptions of client request processing, i.e., the system supports non-disruptive upgrades. In some embodiments, the virtualized addresses are stored with sufficient redundancy. A continuous monitoring system correlates hardware and software status and the hardware identifiers. This allows detection and prediction of failures due to faulty components and manufacturing details. The monitoring system also enables the proactive transfer of authorities and entities away from impacted devices before failure occurs by removing the component from the critical path in some embodiments. FIG.2Cis a multiple level block diagram, showing contents of a storage node150and contents of a non-volatile solid state storage152of the storage node150. Data is communicated to and from the storage node150by a network interface controller (‘NIC’)202in some embodiments. Each storage node150has a CPU156, and one or more non-volatile solid state storage152, as discussed above. Moving down one level inFIG.2C, each non-volatile solid state storage152has a relatively fast non-volatile solid state memory, such as nonvolatile random access memory (‘NVRAM’)204, and flash memory206. In some embodiments, NVRAM204may be a component that does not require program/erase cycles (DRAM, MRAM, PCM), and can be a memory that can support being written vastly more often than the memory is read from. Moving down another level inFIG.2C, the NVRAM204is implemented in one embodiment as high speed volatile memory, such as dynamic random access memory (DRAM)216, backed up by energy reserve218. Energy reserve218provides sufficient electrical power to keep the DRAM216powered long enough for contents to be transferred to the flash memory206in the event of power failure.
In some embodiments, energy reserve218is a capacitor, super-capacitor, battery, or other device that supplies sufficient energy to enable the transfer of the contents of DRAM216to a stable storage medium in the case of power loss. The flash memory206is implemented as multiple flash dies222, which may be referred to as packages of flash dies222or an array of flash dies222. It should be appreciated that the flash dies222could be packaged in any number of ways, with a single die per package, multiple dies per package (i.e., multichip packages), in hybrid packages, as bare dies on a printed circuit board or other substrate, as encapsulated dies, etc. In the embodiment shown, the non-volatile solid state storage152has a controller212or other processor, and an input output (I/O) port210coupled to the controller212. I/O port210is coupled to the CPU156and/or the network interface controller202of the flash storage node150. Flash input output (I/O) port220is coupled to the flash dies222, and a direct memory access unit (DMA)214is coupled to the controller212, the DRAM216and the flash dies222. In the embodiment shown, the I/O port210, controller212, DMA unit214and flash I/O port220are implemented on a programmable logic device (‘PLD’)208, e.g., an FPGA. In this embodiment, each flash die222has pages, organized as sixteen kB (kilobyte) pages224, and a register226through which data can be written to or read from the flash die222. In further embodiments, other types of solid-state memory are used in place of, or in addition to flash memory illustrated within flash die222. Storage clusters161, in various embodiments as disclosed herein, can be contrasted with storage arrays in general. The storage nodes150are part of a collection that creates the storage cluster161. Each storage node150owns a slice of data and computing required to provide the data. Multiple storage nodes150cooperate to store and retrieve the data. Storage memory or storage devices, as used in storage arrays in general, are less involved with processing and manipulating the data. Storage memory or storage devices in a storage array receive commands to read, write, or erase data. The storage memory or storage devices in a storage array are not aware of a larger system in which they are embedded, or what the data means. Storage memory or storage devices in storage arrays can include various types of storage memory, such as RAM, solid state drives, hard disk drives, etc. The storage units152described herein have multiple interfaces active simultaneously and serving multiple purposes. In some embodiments, some of the functionality of a storage node150is shifted into a storage unit152, transforming the storage unit152into a combination of storage unit152and storage node150. Placing computing (relative to storage data) into the storage unit152places this computing closer to the data itself. The various system embodiments have a hierarchy of storage node layers with different capabilities. By contrast, in a storage array, a controller owns and knows everything about all of the data that the controller manages in a shelf or storage devices. In a storage cluster161, as described herein, multiple controllers in multiple storage units152and/or storage nodes150cooperate in various ways (e.g., for erasure coding, data sharding, metadata communication and redundancy, storage capacity expansion or contraction, data recovery, and so on).
FIG.2Dshows a storage server environment, which uses embodiments of the storage nodes150and storage units152ofFIGS.2A-C. In this version, each storage unit152has a processor such as controller212(seeFIG.2C), an FPGA, flash memory206, and NVRAM204(which is super-capacitor-backed DRAM216, seeFIGS.2B and2C) on a PCIe (peripheral component interconnect express) board in a chassis138(seeFIG.2A). The storage unit152may be implemented as a single board containing storage, and may be the largest tolerable failure domain inside the chassis. In some embodiments, up to two storage units152may fail and the device will continue with no data loss. The physical storage is divided into named regions based on application usage in some embodiments. The NVRAM204is a contiguous block of reserved memory in the storage unit152DRAM216, and is backed by NAND flash. NVRAM204is logically divided into multiple memory regions written for two as spool (e.g., spool region). Space within the NVRAM204spools is managed by each authority168independently. Each device provides an amount of storage space to each authority168. That authority168further manages lifetimes and allocations within that space. Examples of a spool include distributed transactions or notions. When the primary power to a storage unit152fails, onboard super-capacitors provide a short duration of power hold up. During this holdup interval, the contents of the NVRAM204are flushed to flash memory206. On the next power-on, the contents of the NVRAM204are recovered from the flash memory206. As for the storage unit controller, the responsibility of the logical “controller” is distributed across each of the blades containing authorities168. This distribution of logical control is shown inFIG.2Das a host controller242, mid-tier controller244and storage unit controller(s)246. The control plane and the storage plane are managed independently, although parts may be physically co-located on the same blade. Each authority168effectively serves as an independent controller. Each authority168provides its own data and metadata structures, its own background workers, and maintains its own lifecycle. FIG.2Eis a blade252hardware block diagram, showing a control plane254, compute and storage planes256,258, and authorities168interacting with underlying physical resources, using embodiments of the storage nodes150and storage units152ofFIGS.2A-Cin the storage server environment ofFIG.2D. The control plane254is partitioned into a number of authorities168which can use the compute resources in the compute plane256to run on any of the blades252. The storage plane258is partitioned into a set of devices, each of which provides access to flash206and NVRAM204resources. In one embodiment, the compute plane256may perform the operations of a storage array controller, as described herein, on one or more devices of the storage plane258(e.g., a storage array). In the compute and storage planes256,258ofFIG.2E, the authorities168interact with the underlying physical resources (i.e., devices). From the point of view of an authority168, its resources are striped over all of the physical devices. From the point of view of a device, it provides resources to all authorities168, irrespective of where the authorities happen to run. Each authority168has allocated or has been allocated one or more partitions260of storage memory in the storage units152, e.g. partitions260in flash memory206and NVRAM204.
Each authority168uses those allocated partitions260that belong to it, for writing or reading user data. Authorities can be associated with differing amounts of physical storage of the system. For example, one authority168could have a larger number of partitions260or larger sized partitions260in one or more storage units152than one or more other authorities168. FIG.2Fdepicts elasticity software layers in blades252of a storage cluster, in accordance with some embodiments. In the elasticity structure, elasticity software is symmetric, i.e., each blade's compute module270runs the three identical layers of processes depicted inFIG.2F. Storage managers274execute read and write requests from other blades252for data and metadata stored in local storage unit152NVRAM204and flash206. Authorities168fulfill client requests by issuing the necessary reads and writes to the blades252on whose storage units152the corresponding data or metadata resides. Endpoints272parse client connection requests received from switch fabric146supervisory software, relay the client connection requests to the authorities168responsible for fulfillment, and relay the authorities'168responses to clients. The symmetric three-layer structure enables the storage system's high degree of concurrency. Elasticity scales out efficiently and reliably in these embodiments. In addition, elasticity implements a unique scale-out technique that balances work evenly across all resources regardless of client access pattern, and maximizes concurrency by eliminating much of the need for inter-blade coordination that typically occurs with conventional distributed locking. Still referring toFIG.2F, authorities168running in the compute modules270of a blade252perform the internal operations required to fulfill client requests. One feature of elasticity is that authorities168are stateless, i.e., they cache active data and metadata in their own blades'252DRAMs for fast access, but the authorities store every update in their NVRAM204partitions on three separate blades252until the update has been written to flash206. All the storage system writes to NVRAM204are in triplicate to partitions on three separate blades252in some embodiments. With triple-mirrored NVRAM204and persistent storage protected by parity and Reed-Solomon RAID checksums, the storage system can survive concurrent failure of two blades252with no loss of data, metadata, or access to either. Because authorities168are stateless, they can migrate between blades252. Each authority168has a unique identifier. NVRAM204and flash206partitions are associated with authorities'168identifiers, not with the blades252on which they are running in some embodiments. Thus, when an authority168migrates, the authority168continues to manage the same storage partitions from its new location. When a new blade252is installed in an embodiment of the storage cluster, the system automatically rebalances load by: partitioning the new blade's252storage for use by the system's authorities168, migrating selected authorities168to the new blade252, starting endpoints272on the new blade252and including them in the switch fabric's146client connection distribution algorithm. From their new locations, migrated authorities168persist the contents of their NVRAM204partitions on flash206, process read and write requests from other authorities168, and fulfill the client requests that endpoints272direct to them. 
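The triple-mirrored NVRAM write path described above might be sketched as follows; the Blade class and the three-copy constant are hypothetical simplifications of the elasticity design, not its actual interfaces.

class Blade:
    def __init__(self, name):
        self.name = name
        self.nvram = []       # this authority's NVRAM partition
        self.flash = []

def commit_update(update, blades, copies=3):
    # Store the update in NVRAM partitions on three separate blades
    # before acknowledging; only later is it persisted to flash.
    targets = blades[:copies]
    for blade in targets:
        blade.nvram.append(update)    # written in triplicate
    return targets                    # durable enough to acknowledge

def flush_to_flash(blade):
    # Parity/RAID protection applies once data reaches flash, after
    # which the mirrored NVRAM copies can be retired.
    blade.flash.extend(blade.nvram)
    blade.nvram.clear()

blades = [Blade("blade-1"), Blade("blade-2"), Blade("blade-3")]
commit_update(b"client write", blades)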
Similarly, if a blade252fails or is removed, the system redistributes its authorities168among the system's remaining blades252. The redistributed authorities168continue to perform their original functions from their new locations. FIG.2Gdepicts authorities168and storage resources in blades252of a storage cluster, in accordance with some embodiments. Each authority168is exclusively responsible for a partition of the flash206and NVRAM204on each blade252. The authority168manages the content and integrity of its partitions independently of other authorities168. Authorities168compress incoming data and preserve it temporarily in their NVRAM204partitions, and then consolidate, RAID-protect, and persist the data in segments of the storage in their flash206partitions. As the authorities168write data to flash206, storage managers274perform the necessary flash translation to optimize write performance and maximize media longevity. In the background, authorities168“garbage collect,” or reclaim space occupied by data that clients have made obsolete by overwriting the data. It should be appreciated that since authorities'168partitions are disjoint, there is no need for distributed locking to execute client reads and writes or to perform background functions. The embodiments described herein may utilize various software, communication and/or networking protocols. In addition, the configuration of the hardware and/or software may be adjusted to accommodate various protocols. For example, the embodiments may utilize Active Directory, which is a database-based system that provides authentication, directory, policy, and other services in a WINDOWS™ environment. In these embodiments, LDAP (Lightweight Directory Access Protocol) is one example application protocol for querying and modifying items in directory service providers such as Active Directory. In some embodiments, a network lock manager (‘NLM’) is utilized as a facility that works in cooperation with the Network File System (‘NFS’) to provide a System V style of advisory file and record locking over a network. The Server Message Block (‘SMB’) protocol, one version of which is also known as Common Internet File System (‘CIFS’), may be integrated with the storage systems discussed herein. SMB operates as an application-layer network protocol typically used for providing shared access to files, printers, and serial ports and miscellaneous communications between nodes on a network. SMB also provides an authenticated inter-process communication mechanism. AMAZON™ S3 (Simple Storage Service) is a web service offered by Amazon Web Services, and the systems described herein may interface with Amazon S3 through web services interfaces (REST (representational state transfer), SOAP (simple object access protocol), and BitTorrent). A RESTful API (application programming interface) breaks down a transaction to create a series of small modules. Each module addresses a particular underlying part of the transaction. The control or permissions provided with these embodiments, especially for object data, may include utilization of an access control list (‘ACL’). The ACL is a list of permissions attached to an object and the ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects.
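A minimal sketch of such an ACL check, with hypothetical object names and principals:

# Each object carries a list of (principal, allowed operations) pairs.
acl = {
    "bucket/report.txt": [("alice", {"read", "write"}),
                          ("backup-daemon", {"read"})],
}

def allowed(obj: str, principal: str, op: str) -> bool:
    # Grant access only if some ACL entry names this principal and op.
    return any(principal == who and op in ops
               for who, ops in acl.get(obj, []))

assert allowed("bucket/report.txt", "alice", "write")
assert not allowed("bucket/report.txt", "backup-daemon", "write")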
The systems may utilize Internet Protocol version 6 (‘IPv6’), as well as IPv4, for the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. The routing of packets between networked systems may include Equal-cost multi-path routing (‘ECMP’), which is a routing strategy where next-hop packet forwarding to a single destination can occur over multiple “best paths” which tie for top place in routing metric calculations. Multi-path routing can be used in conjunction with most routing protocols, because it is a per-hop decision limited to a single router. The software may support Multi-tenancy, which is an architecture in which a single instance of a software application serves multiple customers. Each customer may be referred to as a tenant. Tenants may be given the ability to customize some parts of the application, but may not customize the application's code, in some embodiments. The embodiments may maintain audit logs. An audit log is a document that records an event in a computing system. In addition to documenting what resources were accessed, audit log entries typically include destination and source addresses, a timestamp, and user login information for compliance with various regulations. The embodiments may support various key management policies, such as encryption key rotation. In addition, the system may support dynamic root passwords or some variation of dynamically changing passwords. FIG.3Asets forth a diagram of a storage system306that is coupled for data communications with a cloud services provider302in accordance with some embodiments of the present disclosure. Although depicted in less detail, the storage system306depicted inFIG.3Amay be similar to the storage systems described above with reference toFIGS.1A-1D andFIGS.2A-2G. In some embodiments, the storage system306depicted inFIG.3Amay be embodied as a storage system that includes imbalanced active/active controllers, as a storage system that includes balanced active/active controllers, as a storage system that includes active/active controllers where less than all of each controller's resources are utilized such that each controller has reserve resources that may be used to support failover, as a storage system that includes fully active/active controllers, as a storage system that includes dataset-segregated controllers, as a storage system that includes dual-layer architectures with front-end controllers and back-end integrated storage controllers, as a storage system that includes scale-out clusters of dual-controller arrays, as well as combinations of such embodiments. In the example depicted inFIG.3A, the storage system306is coupled to the cloud services provider302via a data communications link304. The data communications link304may be embodied as a dedicated data communications link, as a data communications pathway that is provided through the use of one or more data communications networks such as a wide area network (‘WAN’) or LAN, or as some other mechanism capable of transporting digital information between the storage system306and the cloud services provider302. Such a data communications link304may be fully wired, fully wireless, or some aggregation of wired and wireless data communications pathways. In such an example, digital information may be exchanged between the storage system306and the cloud services provider302via the data communications link304using one or more data communications protocols.
For example, digital information may be exchanged between the storage system306and the cloud services provider302via the data communications link304using the handheld device transfer protocol (‘HDTP’), hypertext transfer protocol (‘HTTP’), internet protocol (‘IP’), real-time transfer protocol (‘RTP’), transmission control protocol (‘TCP’), user datagram protocol (‘UDP’), wireless application protocol (‘WAP’), or other protocol. The cloud services provider302depicted inFIG.3Amay be embodied, for example, as a system and computing environment that provides a vast array of services to users of the cloud services provider302through the sharing of computing resources via the data communications link304. The cloud services provider302may provide on-demand access to a shared pool of configurable computing resources such as computer networks, servers, storage, applications and services, and so on. The shared pool of configurable resources may be rapidly provisioned and released to a user of the cloud services provider302with minimal management effort. Generally, the user of the cloud services provider302is unaware of the exact computing resources utilized by the cloud services provider302to provide the services. Although in many cases such a cloud services provider302may be accessible via the Internet, readers of skill in the art will recognize that any system that abstracts the use of shared resources to provide services to a user through any data communications link may be considered a cloud services provider302. In the example depicted inFIG.3A, the cloud services provider302may be configured to provide a variety of services to the storage system306and users of the storage system306through the implementation of various service models. For example, the cloud services provider302may be configured to provide services through the implementation of an infrastructure as a service (‘IaaS’) service model, through the implementation of a platform as a service (‘PaaS’) service model, through the implementation of a software as a service (‘SaaS’) service model, through the implementation of an authentication as a service (‘AaaS’) service model, through the implementation of a storage as a service model where the cloud services provider302offers access to its storage infrastructure for use by the storage system306and users of the storage system306, and so on. Readers will appreciate that the cloud services provider302may be configured to provide additional services to the storage system306and users of the storage system306through the implementation of additional service models, as the service models described above are included only for explanatory purposes and in no way represent a limitation of the services that may be offered by the cloud services provider302or a limitation as to the service models that may be implemented by the cloud services provider302. In the example depicted inFIG.3A, the cloud services provider302may be embodied, for example, as a private cloud, as a public cloud, or as a combination of a private cloud and public cloud. In an embodiment in which the cloud services provider302is embodied as a private cloud, the cloud services provider302may be dedicated to providing services to a single organization rather than providing services to multiple organizations. In an embodiment where the cloud services provider302is embodied as a public cloud, the cloud services provider302may provide services to multiple organizations. 
In still alternative embodiments, the cloud services provider302may be embodied as a mix of private and public cloud services with a hybrid cloud deployment. Although not explicitly depicted inFIG.3A, readers will appreciate that a vast amount of additional hardware components and additional software components may be necessary to facilitate the delivery of cloud services to the storage system306and users of the storage system306. For example, the storage system306may be coupled to (or even include) a cloud storage gateway. Such a cloud storage gateway may be embodied, for example, as a hardware-based or software-based appliance that is located on premises with the storage system306. Such a cloud storage gateway may operate as a bridge between local applications that are executing on the storage system306and remote, cloud-based storage that is utilized by the storage system306. Through the use of a cloud storage gateway, organizations may move primary iSCSI or NAS to the cloud services provider302, thereby enabling the organization to save space on their on-premises storage systems. Such a cloud storage gateway may be configured to emulate a disk array, a block-based device, a file server, or other storage system that can translate the SCSI commands, file server commands, or other appropriate command into REST-space protocols that facilitate communications with the cloud services provider302. In order to enable the storage system306and users of the storage system306to make use of the services provided by the cloud services provider302, a cloud migration process may take place during which data, applications, or other elements from an organization's local systems (or even from another cloud environment) are moved to the cloud services provider302. In order to successfully migrate data, applications, or other elements to the cloud services provider's302environment, middleware such as a cloud migration tool may be utilized to bridge gaps between the cloud services provider's302environment and an organization's environment. Such cloud migration tools may also be configured to address potentially high network costs and long transfer times associated with migrating large volumes of data to the cloud services provider302, as well as addressing security concerns associated with transferring sensitive data to the cloud services provider302over data communications networks. In order to further enable the storage system306and users of the storage system306to make use of the services provided by the cloud services provider302, a cloud orchestrator may also be used to arrange and coordinate automated tasks in pursuit of creating a consolidated process or workflow. Such a cloud orchestrator may perform tasks such as configuring various components, whether those components are cloud components or on-premises components, as well as managing the interconnections between such components. The cloud orchestrator can simplify the inter-component communication and connections to ensure that links are correctly configured and maintained. In the example depicted inFIG.3A, and as described briefly above, the cloud services provider302may be configured to provide services to the storage system306and users of the storage system306through the usage of a SaaS service model, eliminating the need to install and run the application on local computers, which may simplify maintenance and support of the application. Such applications may take many forms in accordance with various embodiments of the present disclosure.
For example, the cloud services provider302may be configured to provide access to data analytics applications to the storage system306and users of the storage system306. Such data analytics applications may be configured, for example, to receive vast amounts of telemetry data phoned home by the storage system306. Such telemetry data may describe various operating characteristics of the storage system306and may be analyzed for a vast array of purposes including, for example, to determine the health of the storage system306, to identify workloads that are executing on the storage system306, to predict when the storage system306will run out of various resources, to recommend configuration changes, hardware or software upgrades, workflow migrations, or other actions that may improve the operation of the storage system306. The cloud services provider302may also be configured to provide access to virtualized computing environments to the storage system306and users of the storage system306. Such virtualized computing environments may be embodied, for example, as a virtual machine or other virtualized computer hardware platforms, virtual storage devices, virtualized computer network resources, and so on. Examples of such virtualized environments can include virtual machines that are created to emulate an actual computer, virtualized desktop environments that separate a logical desktop from a physical machine, virtualized file systems that allow uniform access to different types of concrete file systems, and many others. For further explanation,FIG.3Bsets forth a diagram of a storage system306in accordance with some embodiments of the present disclosure. Although depicted in less detail, the storage system306depicted inFIG.3Bmay be similar to the storage systems described above with reference toFIGS.1A-1D andFIGS.2A-2Gas the storage system may include many of the components described above. The storage system306depicted inFIG.3Bmay include a vast amount of storage resources308, which may be embodied in many forms. For example, the storage resources308can include nano-RAM or another form of nonvolatile random access memory that utilizes carbon nanotubes deposited on a substrate, 3D crosspoint non-volatile memory, flash memory including single-level cell (‘SLC’) NAND flash, multi-level cell (‘MLC’) NAND flash, triple-level cell (‘TLC’) NAND flash, quad-level cell (‘QLC’) NAND flash, or others. Likewise, the storage resources308may include non-volatile magnetoresistive random-access memory (‘MRAM’), including spin transfer torque (‘STT’) MRAM. The example storage resources308may alternatively include non-volatile phase-change memory (‘PCM’), quantum memory that allows for the storage and retrieval of photonic quantum information, resistive random-access memory (‘ReRAM’), storage class memory (‘SCM’), or other form of storage resources, including any combination of resources described herein. Readers will appreciate that other forms of computer memories and storage devices may be utilized by the storage systems described above, including DRAM, SRAM, EEPROM, universal memory, and many others. The storage resources308depicted inFIG.3Bmay be embodied in a variety of form factors, including but not limited to, dual in-line memory modules (‘DIMMs’), non-volatile dual in-line memory modules (‘NVDIMMs’), M.2, U.2, and others. The storage resources308depicted inFIG.3Bmay include various forms of SCM.
SCM may effectively treat fast, non-volatile memory (e.g., NAND flash) as an extension of DRAM such that an entire dataset may be treated as an in-memory dataset that resides entirely in DRAM. SCM may include non-volatile media such as, for example, NAND flash. Such NAND flash may be accessed utilizing NVMe that can use the PCIe bus as its transport, providing for relatively low access latencies compared to older protocols. In fact, the network protocols used for SSDs in all-flash arrays can include NVMe using Ethernet (ROCE, NVME TCP), Fibre Channel (NVMe FC), InfiniBand (iWARP), and others that make it possible to treat fast, non-volatile memory as an extension of DRAM. In view of the fact that DRAM is often byte-addressable, and fast, non-volatile memory such as NAND flash is block-addressable, a controller software/hardware stack may be needed to convert the block data to the bytes that are stored in the media. Examples of media and software that may be used as SCM can include, for example, 3D XPoint, Intel Memory Drive Technology, Samsung's Z-SSD, and others. The example storage system306depicted inFIG.3Bmay implement a variety of storage architectures. For example, storage systems in accordance with some embodiments of the present disclosure may utilize block storage where data is stored in blocks, and each block essentially acts as an individual hard drive. Storage systems in accordance with some embodiments of the present disclosure may utilize object storage, where data is managed as objects. Each object may include the data itself, a variable amount of metadata, and a globally unique identifier, where object storage can be implemented at multiple levels (e.g., device level, system level, interface level). Storage systems in accordance with some embodiments of the present disclosure may utilize file storage in which data is stored in a hierarchical structure. Such data may be saved in files and folders, and presented to both the system storing it and the system retrieving it in the same format. The example storage system306depicted inFIG.3Bmay be embodied as a storage system in which additional storage resources can be added through the use of a scale-up model, additional storage resources can be added through the use of a scale-out model, or through some combination thereof. In a scale-up model, additional storage may be added by adding additional storage devices. In a scale-out model, however, additional storage nodes may be added to a cluster of storage nodes, where such storage nodes can include additional processing resources, additional networking resources, and so on. The storage system306depicted inFIG.3Balso includes communications resources310that may be useful in facilitating data communications between components within the storage system306, as well as data communications between the storage system306and computing devices that are outside of the storage system306, including embodiments where those resources are separated by a relatively vast expanse. The communications resources310may be configured to utilize a variety of different protocols and data communication fabrics to facilitate data communications between components within the storage systems as well as computing devices that are outside of the storage system.
For example, the communications resources310can include fibre channel (‘FC’) technologies such as FC fabrics and FC protocols that can transport SCSI commands over FC networks, FC over ethernet (‘FCoE’) technologies through which FC frames are encapsulated and transmitted over Ethernet networks, InfiniBand (‘IB’) technologies in which a switched fabric topology is utilized to facilitate transmissions between channel adapters, NVM Express (‘NVMe’) technologies and NVMe over fabrics (‘NVMeoF’) technologies through which non-volatile storage media attached via a PCI express (‘PCIe’) bus may be accessed, and others. In fact, the storage systems described above may, directly or indirectly, make use of neutrino communication technologies and devices through which information (including binary information) is transmitted using a beam of neutrinos. The communications resources310can also include mechanisms for accessing storage resources308within the storage system306utilizing serial attached SCSI (‘SAS’), serial ATA (‘SATA’) bus interfaces for connecting storage resources308within the storage system306to host bus adapters within the storage system306, internet small computer systems interface (‘iSCSI’) technologies to provide block-level access to storage resources308within the storage system306, and other communications resources that may be useful in facilitating data communications between components within the storage system306, as well as data communications between the storage system306and computing devices that are outside of the storage system306. The storage system306depicted inFIG.3Balso includes processing resources312that may be useful in executing computer program instructions and performing other computational tasks within the storage system306. The processing resources312may include one or more ASICs that are customized for some particular purpose as well as one or more CPUs. The processing resources312may also include one or more DSPs, one or more FPGAs, one or more systems on a chip (‘SoCs’), or other form of processing resources312. The storage system306may utilize the processing resources312to perform a variety of tasks including, but not limited to, supporting the execution of software resources314that will be described in greater detail below. The storage system306depicted inFIG.3Balso includes software resources314that, when executed by processing resources312within the storage system306, may perform a vast array of tasks. The software resources314may include, for example, one or more modules of computer program instructions that when executed by processing resources312within the storage system306are useful in carrying out various data protection techniques to preserve the integrity of data that is stored within the storage systems. Readers will appreciate that such data protection techniques may be carried out, for example, by system software executing on computer hardware within the storage system, by a cloud services provider, or in other ways.
Such data protection techniques can include, for example, data archiving techniques that cause data that is no longer actively used to be moved to a separate storage device or separate storage system for long-term retention, data backup techniques through which data stored in the storage system may be copied and stored in a distinct location to avoid data loss in the event of equipment failure or some other form of catastrophe with the storage system, data replication techniques through which data stored in the storage system is replicated to another storage system such that the data may be accessible via multiple storage systems, data snapshotting techniques through which the state of data within the storage system is captured at various points in time, data and database cloning techniques through which duplicate copies of data and databases may be created, and other data protection techniques. The software resources314may also include software that is useful in implementing software-defined storage (‘SDS’). In such an example, the software resources314may include one or more modules of computer program instructions that, when executed, are useful in policy-based provisioning and management of data storage that is independent of the underlying hardware. Such software resources314may be useful in implementing storage virtualization to separate the storage hardware from the software that manages the storage hardware. The software resources314may also include software that is useful in facilitating and optimizing I/O operations that are directed to the storage resources308in the storage system306. For example, the software resources314may include software modules that carry out various data reduction techniques such as, for example, data compression, data deduplication, and others. The software resources314may include software modules that intelligently group together I/O operations to facilitate better usage of the underlying storage resource308, software modules that perform data migration operations to migrate data from within a storage system, as well as software modules that perform other functions. Such software resources314may be embodied as one or more software containers or in many other ways. For further explanation,FIG.3Csets forth an example of a cloud-based storage system318in accordance with some embodiments of the present disclosure. In the example depicted inFIG.3C, the cloud-based storage system318is created entirely in a cloud computing environment316such as, for example, Amazon Web Services (‘AWS’), Microsoft Azure, Google Cloud Platform, IBM Cloud, Oracle Cloud, and others. The cloud-based storage system318may be used to provide services similar to the services that may be provided by the storage systems described above. For example, the cloud-based storage system318may be used to provide block storage services to users of the cloud-based storage system318, the cloud-based storage system318may be used to provide storage services to users of the cloud-based storage system318through the use of solid-state storage, and so on. The cloud-based storage system318depicted inFIG.3Cincludes two cloud computing instances320,322that each are used to support the execution of a storage controller application324,326.
The cloud computing instances320,322may be embodied, for example, as instances of cloud computing resources (e.g., virtual machines) that may be provided by the cloud computing environment316to support the execution of software applications such as the storage controller application324,326. In one embodiment, the cloud computing instances320,322may be embodied as Amazon Elastic Compute Cloud (‘EC2’) instances. In such an example, an Amazon Machine Image (‘AMI’) that includes the storage controller application324,326may be booted to create and configure a virtual machine that may execute the storage controller application324,326. In the example method depicted inFIG.3C, the storage controller application324,326may be embodied as a module of computer program instructions that, when executed, carries out various storage tasks. For example, the storage controller application324,326may be embodied as a module of computer program instructions that, when executed, carries out the same tasks as the controllers110A,110B inFIG.1Adescribed above such as writing data received from the users of the cloud-based storage system318to the cloud-based storage system318, erasing data from the cloud-based storage system318, retrieving data from the cloud-based storage system318and providing such data to users of the cloud-based storage system318, monitoring and reporting of disk utilization and performance, performing redundancy operations, such as RAID or RAID-like data redundancy operations, compressing data, encrypting data, deduplicating data, and so forth. Readers will appreciate that because there are two cloud computing instances320,322that each include the storage controller application324,326, in some embodiments one cloud computing instance320may operate as the primary controller as described above while the other cloud computing instance322may operate as the secondary controller as described above. Readers will appreciate that the storage controller application324,326depicted inFIG.3Cmay include identical source code that is executed within different cloud computing instances320,322. Consider an example in which the cloud computing environment316is embodied as AWS and the cloud computing instances are embodied as EC2 instances. In such an example, the cloud computing instance320that operates as the primary controller may be deployed on one of the instance types that has a relatively large amount of memory and processing power while the cloud computing instance322that operates as the secondary controller may be deployed on one of the instance types that has a relatively small amount of memory and processing power. In such an example, upon the occurrence of a failover event where the roles of primary and secondary are switched, a double failover may actually be carried out such that: 1) a first failover event where the cloud computing instance322that formerly operated as the secondary controller begins to operate as the primary controller, and 2) a second failover event where a third cloud computing instance (not shown) that is of an instance type that has a relatively large amount of memory and processing power is spun up with a copy of the storage controller application, where the third cloud computing instance begins operating as the primary controller while the cloud computing instance322that originally operated as the secondary controller begins operating as the secondary controller again. In such an example, the cloud computing instance320that formerly operated as the primary controller may be terminated.
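The double failover described above may be sketched, purely for illustration, as follows; the spin_up helper is a hypothetical stand-in for whatever instance-launching API the cloud provider offers, not a real call:

```python
# Illustrative model of the double-failover sequence described above.
def spin_up(instance_type: str) -> dict:
    # Hypothetical stand-in for launching an instance from a machine image.
    return {"type": instance_type, "app": "storage-controller"}

def double_failover(cluster: dict) -> dict:
    # 1) First failover: the small former secondary takes over as primary.
    cluster["primary"], cluster["secondary"] = cluster["secondary"], None
    # 2) Second failover: a large replacement is spun up and assumes the
    #    primary role; the small instance returns to the secondary role.
    replacement = spin_up("large")
    cluster["secondary"] = cluster["primary"]
    cluster["primary"] = replacement
    return cluster

cluster = {"primary": spin_up("large"), "secondary": spin_up("small")}
cluster["primary"] = None            # the original large primary fails
print(double_failover(cluster))      # large primary again, small secondary
```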
Readers will appreciate that in alternative embodiments, the cloud computing instance320that is operating as the secondary controller after the failover event may continue to operate as the secondary controller and the cloud computing instance322that operated as the primary controller after the occurrence of the failover event may be terminated once the primary role has been assumed by the third cloud computing instance (not shown). Readers will appreciate that while the embodiments described above relate to embodiments where one cloud computing instance320operates as the primary controller and the second cloud computing instance322operates as the secondary controller, other embodiments are within the scope of the present disclosure. For example, each cloud computing instance320,322may operate as a primary controller for some portion of the address space supported by the cloud-based storage system318, each cloud computing instance320,322may operate as a primary controller where the servicing of I/O operations directed to the cloud-based storage system318is divided in some other way, and so on. In fact, in other embodiments where cost savings may be prioritized over performance demands, only a single cloud computing instance may exist that contains the storage controller application. The cloud-based storage system318depicted inFIG.3Cincludes cloud computing instances340a,340b,340nwith local storage330,334,338. The cloud computing instances340a,340b,340ndepicted inFIG.3Cmay be embodied, for example, as instances of cloud computing resources that may be provided by the cloud computing environment316to support the execution of software applications. The cloud computing instances340a,340b,340nofFIG.3Cmay differ from the cloud computing instances320,322described above as the cloud computing instances340a,340b,340nofFIG.3Chave local storage330,334,338resources whereas the cloud computing instances320,322that support the execution of the storage controller application324,326need not have local storage resources. The cloud computing instances340a,340b,340nwith local storage330,334,338may be embodied, for example, as EC2 M5 instances that include one or more SSDs, as EC2 R5 instances that include one or more SSDs, as EC2 I3 instances that include one or more SSDs, and so on. In some embodiments, the local storage330,334,338must be embodied as solid-state storage (e.g., SSDs) rather than storage that makes use of hard disk drives. In the example depicted inFIG.3C, each of the cloud computing instances340a,340b,340nwith local storage330,334,338can include a software daemon328,332,336that, when executed by a cloud computing instance340a,340b,340ncan present itself to the storage controller applications324,326as if the cloud computing instance340a,340b,340nwere a physical storage device (e.g., one or more SSDs). In such an example, the software daemon328,332,336may include computer program instructions similar to those that would normally be contained on a storage device such that the storage controller applications324,326can send and receive the same commands that a storage controller would send to storage devices. In such a way, the storage controller applications324,326may include code that is identical to (or substantially identical to) the code that would be executed by the controllers in the storage systems described above.
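A toy sketch of such a daemon follows; the command framing is invented for illustration and is not the command set of any real storage device or of the systems described above:

```python
# Hypothetical drive-instance daemon: presents local SSD-backed storage to a
# storage controller as if it were a physical storage device.
class DriveDaemon:
    def __init__(self):
        self.blocks = {}   # logical block address -> bytes (stands in for local SSDs)

    def handle(self, command: dict) -> dict:
        # Accept the same style of commands a controller would send to a drive.
        if command["op"] == "WRITE":
            self.blocks[command["lba"]] = command["data"]
            return {"status": "ok"}
        if command["op"] == "READ":
            return {"status": "ok", "data": self.blocks.get(command["lba"])}
        return {"status": "unsupported"}

daemon = DriveDaemon()
daemon.handle({"op": "WRITE", "lba": 0, "data": b"\x00" * 512})
print(daemon.handle({"op": "READ", "lba": 0})["status"])
```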
In these and similar embodiments, communications between the storage controller applications324,326and the cloud computing instances340a,340b,340nwith local storage330,334,338may utilize iSCSI, NVMe over TCP, messaging, a custom protocol, or some other mechanism. In the example depicted inFIG.3C, each of the cloud computing instances340a,340b,340nwith local storage330,334,338may also be coupled to block-storage342,344,346that is offered by the cloud computing environment316. The block-storage342,344,346that is offered by the cloud computing environment316may be embodied, for example, as Amazon Elastic Block Store (‘EBS’) volumes. For example, a first EBS volume may be coupled to a first cloud computing instance340a, a second EBS volume may be coupled to a second cloud computing instance340b, and a third EBS volume may be coupled to a third cloud computing instance340n. In such an example, the block-storage342,344,346that is offered by the cloud computing environment316may be utilized in a manner that is similar to how the NVRAM devices described above are utilized, as the software daemon328,332,336(or some other module) that is executing within a particular cloud computing instance340a,340b,340nmay, upon receiving a request to write data, initiate a write of the data to its attached EBS volume as well as a write of the data to its local storage330,334,338resources. In some alternative embodiments, data may only be written to the local storage330,334,338resources within a particular cloud computing instance340a,340b,340n. In an alternative embodiment, rather than using the block-storage342,344,346that is offered by the cloud computing environment316as NVRAM, actual RAM on each of the cloud computing instances340a,340b,340nwith local storage330,334,338may be used as NVRAM, thereby decreasing network utilization costs that would be associated with using an EBS volume as the NVRAM. In the example depicted inFIG.3C, the cloud computing instances340a,340b,340nwith local storage330,334,338may be utilized by cloud computing instances320,322that support the execution of the storage controller application324,326to service I/O operations that are directed to the cloud-based storage system318. Consider an example in which a first cloud computing instance320that is executing the storage controller application324is operating as the primary controller. In such an example, the first cloud computing instance320that is executing the storage controller application324may receive (directly or indirectly via the secondary controller) requests to write data to the cloud-based storage system318from users of the cloud-based storage system318. In such an example, the first cloud computing instance320that is executing the storage controller application324may perform various tasks such as, for example, deduplicating the data contained in the request, compressing the data contained in the request, determining where to write the data contained in the request, and so on, before ultimately sending a request to write a deduplicated, encrypted, or otherwise possibly updated version of the data to one or more of the cloud computing instances340a,340b,340nwith local storage330,334,338. Either cloud computing instance320,322, in some embodiments, may receive a request to read data from the cloud-based storage system318and may ultimately send a request to read data to one or more of the cloud computing instances340a,340b,340nwith local storage330,334,338.
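For illustration only, and with in-memory stand-ins for the NVRAM-like block storage and the local storage, the controller-side write path just described might be sketched as:

```python
# Toy write path: deduplicate, compress, stage durably (the NVRAM role played
# by an attached block-storage volume), then write to local storage.
import hashlib
import zlib

seen_fingerprints = set()          # toy deduplication table
nvram_log = []                     # stands in for an attached block-storage volume
local_store = {}                   # stands in for local SSDs on a drive instance

def write(data: bytes, lba: int) -> str:
    fingerprint = hashlib.sha256(data).hexdigest()
    if fingerprint in seen_fingerprints:
        return "deduplicated"      # an identical block is already stored
    seen_fingerprints.add(fingerprint)
    compressed = zlib.compress(data)
    nvram_log.append((lba, compressed))   # staged durably first
    local_store[lba] = compressed         # then written to local storage
    return "written"

print(write(b"A" * 4096, 0))   # written
print(write(b"A" * 4096, 7))   # deduplicated
```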
Readers will appreciate that when a request to write data is received by a particular cloud computing instance340a,340b,340nwith local storage330,334,338, the software daemon328,332,336or some other module of computer program instructions that is executing on the particular cloud computing instance340a,340b,340nmay be configured to not only write the data to its own local storage330,334,338resources and any appropriate block-storage342,344,346that are offered by the cloud computing environment316, but the software daemon328,332,336or some other module of computer program instructions that is executing on the particular cloud computing instance340a,340b,340nmay also be configured to write the data to cloud-based object storage348that is attached to the particular cloud computing instance340a,340b,340n. The cloud-based object storage348that is attached to the particular cloud computing instance340a,340b,340nmay be embodied, for example, as Amazon Simple Storage Service (‘S3’) storage that is accessible by the particular cloud computing instance340a,340b,340n. In other embodiments, the cloud computing instances320,322that each include the storage controller application324,326may initiate the storage of the data in the local storage330,334,338of the cloud computing instances340a,340b,340nand the cloud-based object storage348. Readers will appreciate that, as described above, the cloud-based storage system318may be used to provide block storage services to users of the cloud-based storage system318. While the local storage330,334,338resources and the block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340nmay support block-level access, the cloud-based object storage348that is attached to the particular cloud computing instance340a,340b,340nsupports only object-based access. In order to address this, the software daemon328,332,336or some other module of computer program instructions that is executing on the particular cloud computing instance340a,340b,340nmay be configured to take blocks of data, package those blocks into objects, and write the objects to the cloud-based object storage348that is attached to the particular cloud computing instance340a,340b,340n. Consider an example in which data is written to the local storage330,334,338resources and the block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340nin 1 MB blocks. In such an example, assume that a user of the cloud-based storage system318issues a request to write data that, after being compressed and deduplicated by the storage controller application324,326results in the need to write 5 MB of data. In such an example, writing the data to the local storage330,334,338resources and the block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340nis relatively straightforward as 5 blocks that are 1 MB in size are written to the local storage330,334,338resources and the block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n. 
In such an example, the software daemon328,332,336or some other module of computer program instructions that is executing on the particular cloud computing instance340a,340b,340nmay be configured to: 1) create a first object that includes the first 1 MB of data and write the first object to the cloud-based object storage348, 2) create a second object that includes the second 1 MB of data and write the second object to the cloud-based object storage348, 3) create a third object that includes the third 1 MB of data and write the third object to the cloud-based object storage348, and so on. As such, in some embodiments, each object that is written to the cloud-based object storage348may be identical (or nearly identical) in size. Readers will appreciate that in such an example, metadata that is associated with the data itself may be included in each object (e.g., the first 1 MB of the object is data and the remaining portion is metadata associated with the data). Readers will appreciate that the cloud-based object storage348may be incorporated into the cloud-based storage system318to increase the durability of the cloud-based storage system318. Continuing with the example described above where the cloud computing instances340a,340b,340nare EC2 instances, readers will understand that EC2 instances are only guaranteed to have a monthly uptime of 99.9% and data stored in the local instance store only persists during the lifetime of the EC2 instance. As such, relying on the cloud computing instances340a,340b,340nwith local storage330,334,338as the only source of persistent data storage in the cloud-based storage system318may result in a relatively unreliable storage system. Likewise, EBS volumes are designed for 99.999% availability. As such, even relying on EBS as the persistent data store in the cloud-based storage system318may result in a storage system that is not sufficiently durable. Amazon S3, however, is designed to provide 99.999999999% durability, meaning that a cloud-based storage system318that can incorporate S3 into its pool of storage is substantially more durable than various other options. Readers will appreciate that while a cloud-based storage system318that can incorporate S3 into its pool of storage is substantially more durable than various other options, utilizing S3 as the primary pool of storage may result in a storage system that has relatively slow response times and relatively long I/O latencies. As such, the cloud-based storage system318depicted inFIG.3Cnot only stores data in S3 but the cloud-based storage system318also stores data in local storage330,334,338resources and block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n, such that read operations can be serviced from local storage330,334,338resources and the block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n, thereby reducing read latency when users of the cloud-based storage system318attempt to read data from the cloud-based storage system318. In some embodiments, all data that is stored by the cloud-based storage system318may be stored in both: 1) the cloud-based object storage348, and 2) at least one of the local storage330,334,338resources or block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n.
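A minimal sketch of the block-to-object packaging used in this dual-persistence arrangement follows, assuming boto3 is installed, credentials are configured, and a hypothetical bucket name; put_object is a standard S3 client call:

```python
# Illustrative packaging of fixed-size blocks into equally sized objects.
import boto3

BLOCK = 1024 * 1024                  # 1 MB blocks, as in the example above
s3 = boto3.client("s3")
BUCKET = "example-cloud-storage"     # hypothetical bucket name

def persist_blocks(volume_id: str, first_block: int, data: bytes) -> None:
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        key = f"{volume_id}/block-{first_block + i // BLOCK:012d}"
        # One object per block keeps object sizes identical; metadata that
        # describes the block travels with the object itself.
        s3.put_object(Bucket=BUCKET, Key=key, Body=block,
                      Metadata={"volume": volume_id})
```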
In such embodiments, the local storage330,334,338resources and block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340nmay effectively operate as cache that generally includes all data that is also stored in S3, such that all reads of data may be serviced by the cloud computing instances340a,340b,340nwithout requiring the cloud computing instances340a,340b,340nto access the cloud-based object storage348. Readers will appreciate that in other embodiments, however, all data that is stored by the cloud-based storage system318may be stored in the cloud-based object storage348, but less than all data that is stored by the cloud-based storage system318may be stored in at least one of the local storage330,334,338resources or block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n. In such an example, various policies may be utilized to determine which subset of the data that is stored by the cloud-based storage system318should reside in both: 1) the cloud-based object storage348, and 2) at least one of the local storage330,334,338resources or block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n. As described above, when the cloud computing instances340a,340b,340nwith local storage330,334,338are embodied as EC2 instances, the cloud computing instances340a,340b,340nwith local storage330,334,338are only guaranteed to have a monthly uptime of 99.9% and data stored in the local instance store only persists during the lifetime of each cloud computing instance340a,340b,340nwith local storage330,334,338. As such, one or more modules of computer program instructions that are executing within the cloud-based storage system318(e.g., a monitoring module that is executing on its own EC2 instance) may be designed to handle the failure of one or more of the cloud computing instances340a,340b,340nwith local storage330,334,338. In such an example, the monitoring module may handle the failure of one or more of the cloud computing instances340a,340b,340nwith local storage330,334,338by creating one or more new cloud computing instances with local storage, retrieving data that was stored on the failed cloud computing instances340a,340b,340nfrom the cloud-based object storage348, and storing the data retrieved from the cloud-based object storage348in local storage on the newly created cloud computing instances. Readers will appreciate that many variants of this process may be implemented. Consider an example in which all cloud computing instances340a,340b,340nwith local storage330,334,338failed. In such an example, the monitoring module may create new cloud computing instances with local storage, where high-bandwidth instance types are selected that allow for the maximum data transfer rates between the newly created high-bandwidth cloud computing instances with local storage and the cloud-based object storage348. Readers will appreciate that instance types are selected that allow for the maximum data transfer rates between the new cloud computing instances and the cloud-based object storage348such that the new high-bandwidth cloud computing instances can be rehydrated with data from the cloud-based object storage348as quickly as possible.
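Purely for illustration, the parallel pull just described might be sketched as follows, with the object-store reads replaced by placeholders and the worker count kept small:

```python
# Toy parallel rehydration: each of N workers restores a distinct 1/Nth of
# the keys held in object storage.
from concurrent.futures import ThreadPoolExecutor

def rehydrate_chunk(keys, worker_id, workers):
    # Each replacement instance is responsible only for its assigned keys.
    mine = [k for i, k in enumerate(keys) if i % workers == worker_id]
    return {k: f"restored-{k}" for k in mine}   # stands in for object-store reads

keys = [f"block-{i:06d}" for i in range(1000)]
restored = {}
with ThreadPoolExecutor(max_workers=10) as pool:
    for part in pool.map(lambda w: rehydrate_chunk(keys, w, 10), range(10)):
        restored.update(part)
print(len(restored))   # 1000: the full dataset, pulled ten ways in parallel
```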
Once the new high-bandwidth cloud computing instances are rehydrated with data from the cloud-based object storage348, less expensive lower-bandwidth cloud computing instances may be created, data may be migrated to the less expensive lower-bandwidth cloud computing instances, and the high-bandwidth cloud computing instances may be terminated. Readers will appreciate that in some embodiments, the number of new cloud computing instances that are created may substantially exceed the number of cloud computing instances that are needed to locally store all of the data stored by the cloud-based storage system318. The number of new cloud computing instances that are created may substantially exceed the number of cloud computing instances that are needed to locally store all of the data stored by the cloud-based storage system318in order to more rapidly pull data from the cloud-based object storage348and into the new cloud computing instances, as each new cloud computing instance can (in parallel) retrieve some portion of the data stored by the cloud-based storage system318. In such embodiments, once the data stored by the cloud-based storage system318has been pulled into the newly created cloud computing instances, the data may be consolidated within a subset of the newly created cloud computing instances and those newly created cloud computing instances that are excessive may be terminated. Consider an example in which 1,000 cloud computing instances are needed in order to locally store all valid data that users of the cloud-based storage system318have written to the cloud-based storage system318. In such an example, assume that all 1,000 cloud computing instances fail. In such an example, the monitoring module may cause 100,000 cloud computing instances to be created, where each cloud computing instance is responsible for retrieving, from the cloud-based object storage348, a distinct 1/100,000th chunk of the valid data that users of the cloud-based storage system318have written to the cloud-based storage system318and locally storing the distinct chunk of the dataset that it retrieved. In such an example, because each of the 100,000 cloud computing instances can retrieve data from the cloud-based object storage348in parallel, the caching layer may be restored 100 times faster as compared to an embodiment where the monitoring module only created 1,000 replacement cloud computing instances. In such an example, over time the data that is stored locally in the 100,000 cloud computing instances could be consolidated into 1,000 cloud computing instances and the remaining 99,000 cloud computing instances could be terminated. Readers will appreciate that various performance aspects of the cloud-based storage system318may be monitored (e.g., by a monitoring module that is executing in an EC2 instance) such that the cloud-based storage system318can be scaled-up or scaled-out as needed. Consider an example in which the monitoring module monitors the performance of the cloud-based storage system318via communications with one or more of the cloud computing instances320,322that each are used to support the execution of a storage controller application324,326, via monitoring communications between cloud computing instances320,322,340a,340b,340n, via monitoring communications between cloud computing instances320,322,340a,340b,340nand the cloud-based object storage348, or in some other way.
In such an example, assume that the monitoring module determines that the cloud computing instances320,322that are used to support the execution of a storage controller application324,326are undersized and not sufficiently servicing the I/O requests that are issued by users of the cloud-based storage system318. In such an example, the monitoring module may create a new, more powerful cloud computing instance (e.g., a cloud computing instance of a type that includes more processing power, more memory, etc.) that includes the storage controller application such that the new, more powerful cloud computing instance can begin operating as the primary controller. Likewise, if the monitoring module determines that the cloud computing instances320,322that are used to support the execution of a storage controller application324,326are oversized and that cost savings could be gained by switching to a smaller, less powerful cloud computing instance, the monitoring module may create a new, less powerful (and less expensive) cloud computing instance that includes the storage controller application such that the new, less powerful cloud computing instance can begin operating as the primary controller. Consider, as an additional example of dynamically sizing the cloud-based storage system318, an example in which the monitoring module determines that the utilization of the local storage that is collectively provided by the cloud computing instances340a,340b,340nhas reached a predetermined utilization threshold (e.g., 95%). In such an example, the monitoring module may create additional cloud computing instances with local storage to expand the pool of local storage that is offered by the cloud computing instances. Alternatively, the monitoring module may create one or more new cloud computing instances that have larger amounts of local storage than the already existing cloud computing instances340a,340b,340n, such that data stored in an already existing cloud computing instance340a,340b,340ncan be migrated to the one or more new cloud computing instances and the already existing cloud computing instance340a,340b,340ncan be terminated, thereby expanding the pool of local storage that is offered by the cloud computing instances. Likewise, if the pool of local storage that is offered by the cloud computing instances is unnecessarily large, data can be consolidated and some cloud computing instances can be terminated. Readers will appreciate that the cloud-based storage system318may be sized up and down automatically by a monitoring module applying a predetermined set of rules that may be relatively simple or relatively complicated. In fact, the monitoring module may not only take into account the current state of the cloud-based storage system318, but the monitoring module may also apply predictive policies that are based on, for example, observed behavior (e.g., every night from 10 PM until 6 AM usage of the storage system is relatively light), predetermined fingerprints (e.g., every time a virtual desktop infrastructure adds 100 virtual desktops, the number of IOPS directed to the storage system increases by X), and so on. In such an example, the dynamic scaling of the cloud-based storage system318may be based on current performance metrics, predicted workloads, and many other factors, including combinations thereof. Readers will further appreciate that because the cloud-based storage system318may be dynamically scaled, the cloud-based storage system318may even operate in a way that is more dynamic.
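Such a rule set might be sketched, with illustrative thresholds and placeholder actions that are not drawn from any actual implementation, as:

```python
# Toy sizing policy: simple thresholds plus one predictive rule based on an
# observed nightly lull in usage.
def plan_scaling(metrics: dict, hour_of_day: int) -> list:
    actions = []
    if metrics["controller_cpu"] > 0.85:
        actions.append("replace controller with a larger instance type")
    elif metrics["controller_cpu"] < 0.20:
        actions.append("replace controller with a smaller instance type")
    if metrics["local_storage_utilization"] >= 0.95:
        actions.append("add cloud computing instances with local storage")
    if hour_of_day >= 22 or hour_of_day < 6:
        # Observed behavior: usage is relatively light overnight.
        actions.append("schedule background work for the overnight window")
    return actions

print(plan_scaling({"controller_cpu": 0.90,
                    "local_storage_utilization": 0.96}, hour_of_day=23))
```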
Consider the example of garbage collection. In a traditional storage system, the amount of storage is fixed. As such, at some point the storage system may be forced to perform garbage collection as the amount of available storage has become so constrained that the storage system is on the verge of running out of storage. In contrast, the cloud-based storage system318described here can always ‘add’ additional storage (e.g., by adding more cloud computing instances with local storage). Because the cloud-based storage system318described here can always ‘add’ additional storage, the cloud-based storage system318can make more intelligent decisions regarding when to perform garbage collection. For example, the cloud-based storage system318may implement a policy that garbage collection only be performed when the number of IOPS being serviced by the cloud-based storage system318falls below a certain level. In some embodiments, other system-level functions (e.g., deduplication, compression) may also be turned off and on in response to system load, given that the size of the cloud-based storage system318is not constrained in the same way that traditional storage systems are constrained. Readers will appreciate that embodiments of the present disclosure resolve an issue with block-storage services offered by some cloud computing environments as some cloud computing environments only allow for one cloud computing instance to connect to a block-storage volume at a single time. For example, in Amazon AWS, only a single EC2 instance may be connected to an EBS volume. Through the use of EC2 instances with local storage, embodiments of the present disclosure can offer multi-connect capabilities where multiple EC2 instances can connect to another EC2 instance with local storage (‘a drive instance’). In such embodiments, the drive instances may include software executing within the drive instance that allows the drive instance to support I/O directed to a particular volume from each connected EC2 instance. As such, some embodiments of the present disclosure may be embodied as multi-connect block storage services that may not include all of the components depicted inFIG.3C. In some embodiments, especially in embodiments where the cloud-based object storage348resources are embodied as Amazon S3, the cloud-based storage system318may include one or more modules (e.g., a module of computer program instructions executing on an EC2 instance) that are configured to ensure that when the local storage of a particular cloud computing instance is rehydrated with data from S3, the appropriate data is actually in S3. This issue arises largely because S3 implements an eventual consistency model where, when overwriting an existing object, reads of the object will eventually (but not necessarily immediately) become consistent and will eventually (but not necessarily immediately) return the overwritten version of the object. To address this issue, in some embodiments of the present disclosure, objects in S3 are never overwritten. Instead, a traditional ‘overwrite’ would result in the creation of a new object (that includes the updated version of the data) and the eventual deletion of the old object (that includes the previous version of the data). In some embodiments of the present disclosure, as part of an attempt to never (or almost never) overwrite an object, when data is written to S3 the resultant object may be tagged with a sequence number.
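A minimal sketch of this never-overwrite, sequence-numbered write follows; the bucket and key scheme are hypothetical, while put_object and delete_object are standard boto3 S3 calls (credentials assumed):

```python
# Illustrative never-overwrite scheme: each logical 'overwrite' creates a new
# object whose key carries an increasing sequence number; the prior object is
# deleted afterwards rather than being overwritten in place.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-cloud-storage"     # hypothetical bucket name

def write_versioned(logical_key: str, data: bytes, last_seq: int) -> int:
    seq = last_seq + 1
    s3.put_object(Bucket=BUCKET,
                  Key=f"{logical_key}.seq-{seq:016d}",
                  Body=data)
    if last_seq >= 0:
        # Eventual deletion of the previous version; no in-place overwrite.
        s3.delete_object(Bucket=BUCKET, Key=f"{logical_key}.seq-{last_seq:016d}")
    return seq    # the caller records the latest sequence number
```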
In some embodiments, these sequence numbers may be persisted elsewhere (e.g., in a database) such that at any point in time, the sequence number associated with the most up-to-date version of some piece of data can be known. In such a way, a determination can be made as to whether S3 has the most recent version of some piece of data by merely reading the sequence number associated with an object, and without actually reading the data from S3. The ability to make this determination may be particularly important when a cloud computing instance with local storage crashes, as it would be undesirable to rehydrate the local storage of a replacement cloud computing instance with out-of-date data. In fact, because the cloud-based storage system318does not need to access the data to verify its validity, the data can stay encrypted and access charges can be avoided. The storage systems described above may carry out intelligent data backup techniques through which data stored in the storage system may be copied and stored in a distinct location to avoid data loss in the event of equipment failure or some other form of catastrophe. For example, the storage systems described above may be configured to examine each backup to avoid restoring the storage system to an undesirable state. Consider an example in which malware infects the storage system. In such an example, the storage system may include software resources314that can scan each backup to identify backups that were captured before the malware infected the storage system and those backups that were captured after the malware infected the storage system. In such an example, the storage system may restore itself from a backup that does not include the malware—or at least not restore the portions of a backup that contained the malware. In such an example, the storage system may include software resources314that can scan each backup to identify the presence of malware (or a virus, or some other undesirable element), for example, by identifying write operations that were serviced by the storage system and originated from a network subnet that is suspected to have delivered the malware, by identifying write operations that were serviced by the storage system and originated from a user that is suspected to have delivered the malware, by identifying write operations that were serviced by the storage system and examining the content of the write operation against fingerprints of the malware, and in many other ways. Readers will further appreciate that the backups (often in the form of one or more snapshots) may also be utilized to perform rapid recovery of the storage system. Consider an example in which the storage system is infected with ransomware that locks users out of the storage system. In such an example, software resources314within the storage system may be configured to detect the presence of ransomware and may be further configured to restore the storage system to a point-in-time, using the retained backups, prior to the point-in-time at which the ransomware infected the storage system. In such an example, the presence of ransomware may be explicitly detected through the use of software tools utilized by the system, through the use of a key (e.g., a USB drive) that is inserted into the storage system, or in a similar way. Likewise, the presence of ransomware may be inferred in response to system activity meeting a predetermined fingerprint such as, for example, no reads or writes coming into the system for a predetermined period of time.
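Such a fingerprint-based inference might be sketched, with an illustrative quiet-period threshold and a placeholder response, as:

```python
# Toy inference of ransomware from an activity fingerprint: a total absence
# of reads and writes for a predetermined period.
import time

QUIET_LIMIT_SECONDS = 15 * 60        # illustrative predetermined period

def ransomware_suspected(last_io_timestamp: float) -> bool:
    return (time.time() - last_io_timestamp) > QUIET_LIMIT_SECONDS

if ransomware_suspected(last_io_timestamp=time.time() - 3600):
    print("restore from the last backup captured before the suspected infection")
```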
Readers will appreciate that the various components described above may be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage, and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software-driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways. Readers will appreciate that the storage systems described above may be useful for supporting various types of software applications. For example, the storage system306may be useful in supporting artificial intelligence (‘AI’) applications, database applications, DevOps projects, electronic design automation tools, event-driven software applications, high performance computing applications, simulation applications, high-speed data capture and analysis applications, machine learning applications, media production applications, media serving applications, picture archiving and communication systems (‘PACS’) applications, software development applications, virtual reality applications, augmented reality applications, and many other types of applications by providing storage resources to such applications. The storage systems described above may operate to support a wide variety of applications. In view of the fact that the storage systems include compute resources, storage resources, and a wide variety of other resources, the storage systems may be well suited to support applications that are resource intensive such as, for example, AI applications. AI applications may be deployed in a variety of fields, including: predictive maintenance in manufacturing and related fields, healthcare applications such as patient data & risk analytics, retail and marketing deployments (e.g., search advertising, social media advertising), supply chain solutions, fintech solutions such as business analytics & reporting tools, operational deployments such as real-time analytics tools, application performance management tools, IT infrastructure management tools, and many others. Such AI applications may enable devices to perceive their environment and take actions that maximize their chance of success at some goal. Examples of such AI applications can include IBM Watson, Microsoft Oxford, Google DeepMind, Baidu Minwa, and others. The storage systems described above may also be well suited to support other types of applications that are resource intensive such as, for example, machine learning applications. Machine learning applications may perform various types of data analysis to automate analytical model building. Using algorithms that iteratively learn from data, machine learning applications can enable computers to learn without being explicitly programmed. One particular area of machine learning is referred to as reinforcement learning, which involves taking suitable actions to maximize reward in a particular situation. Reinforcement learning may be employed to find the best possible behavior or path that a particular software application or machine should take in a specific situation.
Reinforcement learning differs from other areas of machine learning (e.g., supervised learning, unsupervised learning) in that correct input/output pairs need not be presented for reinforcement learning and sub-optimal actions need not be explicitly corrected. In addition to the resources already described, the storage systems described above may also include graphics processing units (‘GPUs’), occasionally referred to as visual processing units (‘VPUs’). Such GPUs may be embodied as specialized electronic circuits that rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. Such GPUs may be included within any of the computing devices that are part of the storage systems described above, including as one of many individually scalable components of a storage system, where other examples of individually scalable components of such a storage system can include storage components, memory components, compute components (e.g., CPUs, FPGAs, ASICs), networking components, software components, and others. In addition to GPUs, the storage systems described above may also include neural network processors (‘NNPs’) for use in various aspects of neural network processing. Such NNPs may be used in place of (or in addition to) GPUs and may also be independently scalable. As described above, the storage systems described herein may be configured to support artificial intelligence applications, machine learning applications, big data analytics applications, and many other types of applications. The rapid growth in these sorts of applications is being driven by three technologies: deep learning (DL), GPU processors, and Big Data. Deep learning is a computing model that makes use of massively parallel neural networks inspired by the human brain. Instead of experts handcrafting software, a deep learning model writes its own software by learning from lots of examples. Such GPUs may include thousands of cores that are well-suited to run algorithms that loosely represent the parallel nature of the human brain. Advances in deep neural networks have ignited a new wave of algorithms and tools for data scientists to tap into their data with artificial intelligence (AI). With improved algorithms, larger data sets, and various frameworks (including open-source software libraries for machine learning across a range of tasks), data scientists are tackling new use cases like autonomous driving vehicles, natural language processing and understanding, computer vision, machine reasoning, strong AI, and many others. Applications of such techniques may include: machine and vehicular object detection, identification and avoidance; visual recognition, classification and tagging; algorithmic financial trading strategy performance management; simultaneous localization and mapping; predictive maintenance of high-value machinery; prevention against cyber security threats; expertise automation; image recognition and classification; question answering; robotics; text analytics (extraction, classification) and text generation and translation; and many others.
Applications of AI techniques have materialized in a wide array of products including, for example, Amazon Echo's speech recognition technology that allows users to talk to their machines, Google Translate™, which allows for machine-based language translation, Spotify's Discover Weekly, which provides recommendations on new songs and artists that a user may like based on the user's usage and traffic analysis, Quill's text generation offering that takes structured data and turns it into narrative stories, chatbots that provide real-time, contextually specific answers to questions in a dialog format, and many others. Data is the heart of modern AI and deep learning algorithms. Before training can begin, one problem that must be addressed revolves around collecting the labeled data that is crucial for training an accurate AI model. A full-scale AI deployment may be required to continuously collect, clean, transform, label, and store large amounts of data. Adding additional high-quality data points directly translates to more accurate models and better insights. Data samples may undergo a series of processing steps including, but not limited to: 1) ingesting the data from an external source into the training system and storing the data in raw form, 2) cleaning and transforming the data into a format convenient for training, including linking data samples to the appropriate label, 3) exploring parameters and models, quickly testing with a smaller dataset, and iterating to converge on the most promising models to push into the production cluster, 4) executing training phases to select random batches of input data, including both new and older samples, and feeding those into production GPU servers for computation to update model parameters, and 5) evaluating, including using a holdback portion of the data that was not used in training in order to evaluate model accuracy on the holdout data. This lifecycle may apply for any type of parallelized machine learning, not just neural networks or deep learning. For example, standard machine learning frameworks may rely on CPUs instead of GPUs but the data ingest and training workflows may be the same. Readers will appreciate that a single shared storage data hub creates a coordination point throughout the lifecycle without the need for extra data copies among the ingest, preprocessing, and training stages. Rarely is the ingested data used for only one purpose, and shared storage gives the flexibility to train multiple different models or apply traditional analytics to the data. Readers will appreciate that each stage in the AI data pipeline may have varying requirements from the data hub (e.g., the storage system or collection of storage systems). Scale-out storage systems must deliver uncompromising performance for all manner of access types and patterns—from small, metadata-heavy files to large files, from random to sequential access patterns, and from low to high concurrency. The storage systems described above may serve as an ideal AI data hub as the systems may service unstructured workloads. In the first stage, data is ideally ingested and stored onto the same data hub that following stages will use, in order to avoid excess data copying. The next two steps can be done on a standard compute server that optionally includes a GPU, and then in the fourth and last stage, full training production jobs are run on powerful GPU-accelerated servers. Often, there is a production pipeline alongside an experimental pipeline operating on the same dataset.
Further, the GPU-accelerated servers can be used independently for different models or joined together to train on one larger model, even spanning multiple systems for distributed training. If the shared storage tier is slow, then data must be copied to local storage for each phase, resulting in wasted time staging data onto different servers. The ideal data hub for the AI training pipeline delivers performance similar to data stored locally on the server node while also having the simplicity and performance to enable all pipeline stages to operate concurrently. Although the preceding paragraphs discuss deep learning applications, readers will appreciate that the storage systems described herein may also be part of a distributed deep learning (‘DDL’) platform to support the execution of DDL algorithms. The storage systems described above may also be paired with other technologies such as TensorFlow, an open-source software library for dataflow programming across a range of tasks that may be used for machine learning applications such as neural networks, to facilitate the development of such machine learning models, applications, and so on. The storage systems described above may also be used in a neuromorphic computing environment. Neuromorphic computing is a form of computing that mimics brain cells. To support neuromorphic computing, an architecture of interconnected “neurons” replaces traditional computing models with low-powered signals that go directly between neurons for more efficient computation. Neuromorphic computing may make use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system, as well as analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems for perception, motor control, or multisensory integration. Readers will appreciate that the storage systems described above may be configured to support the storage or use of (among other types of data) blockchains. In addition to supporting the storage and use of blockchain technologies, the storage systems described above may also support the storage and use of derivative items such as, for example, open source blockchains and related tools that are part of the IBM™ Hyperledger project, permissioned blockchains in which a certain number of trusted parties are allowed to access the blockchain, blockchain products that enable developers to build their own distributed ledger projects, and others. Blockchains and the storage systems described herein may be leveraged to support on-chain storage of data as well as off-chain storage of data. Off-chain storage of data can be implemented in a variety of ways and can occur when the data itself is not stored within the blockchain. For example, in one embodiment, a hash function may be utilized and the data itself may be fed into the hash function to generate a hash value. In such an example, the hashes of large pieces of data may be embedded within transactions, instead of the data itself. Readers will appreciate that, in other embodiments, alternatives to blockchains may be used to facilitate the decentralized storage of information. For example, one alternative to a blockchain that may be used is a blockweave. While conventional blockchains store every transaction to achieve validation, a blockweave permits secure decentralization without the usage of the entire chain, thereby enabling low-cost on-chain storage of data.
Such blockweaves may utilize a consensus mechanism that is based on proof of access (PoA) and proof of work (PoW). The storage systems described above may, either alone or in combination with other computing devices, be used to support in-memory computing applications. In-memory computing involves the storage of information in RAM that is distributed across a cluster of computers. Readers will appreciate that the storage systems described above, especially those that are configurable with customizable amounts of processing resources, storage resources, and memory resources (e.g., those systems in which blades contain configurable amounts of each type of resource), may be configured so as to provide an infrastructure that can support in-memory computing. Likewise, the storage systems described above may include component parts (e.g., NVDIMMs, 3D crosspoint storage that provides fast, persistent random access memory) that can actually provide for an improved in-memory computing environment as compared to in-memory computing environments that rely on RAM distributed across dedicated servers. In some embodiments, the storage systems described above may be configured to operate as a hybrid in-memory computing environment that includes a universal interface to all storage media (e.g., RAM, flash storage, 3D crosspoint storage). In such embodiments, users may have no knowledge regarding the details of where their data is stored but they can still use the same full, unified API to address data. In such embodiments, the storage system may (in the background) move data to the fastest layer available—including intelligently placing the data in dependence upon various characteristics of the data or in dependence upon some other heuristic. In such an example, the storage systems may even make use of existing products such as Apache Ignite and GridGain to move data between the various storage layers, or the storage systems may make use of custom software to move data between the various storage layers. The storage systems described herein may implement various optimizations to improve the performance of in-memory computing such as, for example, having computations occur as close to the data as possible. Readers will further appreciate that in some embodiments, the storage systems described above may be paired with other resources to support the applications described above. For example, one infrastructure could include primary compute in the form of servers and workstations which specialize in using general-purpose computing on graphics processing units (‘GPGPU’) to accelerate deep learning applications that are interconnected into a computation engine to train parameters for deep neural networks. Each system may have Ethernet external connectivity, InfiniBand external connectivity, some other form of external connectivity, or some combination thereof. In such an example, the GPUs can be grouped for a single large training job or used independently to train multiple models. The infrastructure could also include a storage system such as those described above to provide, for example, a scale-out all-flash file or object store through which data can be accessed via high-performance protocols such as NFS, S3, and so on. The infrastructure can also include, for example, redundant top-of-rack Ethernet switches connected to storage and compute via ports in MLAG port channels for redundancy.
The infrastructure could also include additional compute in the form of whitebox servers, optionally with GPUs, for data ingestion, pre-processing, and model debugging. Readers will appreciate that additional infrastructures are also possible. Readers will appreciate that the storage systems described above, either alone or in coordination with other computing machinery, may be configured to support other AI-related tools. For example, the storage systems may make use of tools like ONNX or other open neural network exchange formats that make it easier to transfer models written in different AI frameworks. Likewise, the storage systems may be configured to support tools like Amazon's Gluon that allow developers to prototype, build, and train deep learning models. In fact, the storage systems described above may be part of a larger platform, such as IBM™ Cloud Private for Data, that includes integrated data science, data engineering and application building services. Readers will further appreciate that the storage systems described above may also be deployed as an edge solution. Such an edge solution may be in place to optimize cloud computing systems by performing data processing at the edge of the network, near the source of the data. Edge computing can push applications, data and computing power (i.e., services) away from centralized points to the logical extremes of a network. Through the use of edge solutions such as the storage systems described above, computational tasks may be performed using the compute resources provided by such storage systems, data may be stored using the storage resources of the storage system, and cloud-based services may be accessed through the use of various resources of the storage system (including networking resources). By performing computational tasks on the edge solution, storing data on the edge solution, and generally making use of the edge solution, the consumption of expensive cloud-based resources may be avoided and, in fact, performance improvements may be experienced relative to a heavier reliance on cloud-based resources. While many tasks may benefit from the utilization of an edge solution, some particular uses may be especially suited for deployment in such an environment. For example, devices like drones, autonomous cars, robots, and others may require extremely rapid processing—so fast, in fact, that sending data up to a cloud environment and back to receive data processing support may simply be too slow. As an additional example, some IoT devices such as connected video cameras may not be well-suited for the utilization of cloud-based resources as it may be impractical (whether from a privacy perspective, a security perspective, or a financial perspective) to send the data to the cloud simply because of the pure volume of data that is involved. As such, many tasks that rely on data processing, storage, or communications may be better served by platforms that include edge solutions such as the storage systems described above. The storage systems described above may, alone or in combination with other computing resources, serve as a network edge platform that combines compute resources, storage resources, networking resources, cloud technologies and network virtualization technologies, and so on. As part of the network, the edge may take on characteristics similar to other network facilities, from the customer premise and backhaul aggregation facilities to Points of Presence (PoPs) and regional data centers.
Readers will appreciate that network workloads, such as Virtual Network Functions (VNFs) and others, will reside on the network edge platform. Enabled by a combination of containers and virtual machines, the network edge platform may rely on controllers and schedulers that are no longer geographically co-located with the data processing resources. The functions, as microservices, may split into control planes, user and data planes, or even state machines, allowing for independent optimization and scaling techniques to be applied. Such user and data planes may be enabled through increased accelerators, both those residing in server platforms, such as FPGAs and Smart NICs, and through SDN-enabled merchant silicon and programmable ASICs. The storage systems described above may also be optimized for use in big data analytics. Big data analytics may be generally described as the process of examining large and varied data sets to uncover hidden patterns, unknown correlations, market trends, customer preferences and other useful information that can help organizations make more-informed business decisions. As part of that process, semi-structured and unstructured data such as, for example, internet clickstream data, web server logs, social media content, text from customer emails and survey responses, mobile-phone call-detail records, IoT sensor data, and other data may be converted to a structured form. The storage systems described above may also support (including implementing as a system interface) applications that perform tasks in response to human speech. For example, the storage systems may support the execution of intelligent personal assistant applications such as, for example, Amazon's Alexa, Apple Siri, Google Voice, Samsung Bixby, Microsoft Cortana, and others. While the examples described in the previous sentence make use of voice as input, the storage systems described above may also support chatbots, talkbots, chatterbots, or artificial conversational entities, or other applications that are configured to conduct a conversation via auditory or textual methods. Likewise, the storage system may actually execute such an application to enable a user such as a system administrator to interact with the storage system via speech. Such applications are generally capable of voice interaction, music playback, making to-do lists, setting alarms, streaming podcasts, playing audiobooks, and providing weather, traffic, and other real-time information, such as news, although in embodiments in accordance with the present disclosure, such applications may be utilized as interfaces to various system management operations. The storage systems described above may also implement AI platforms for delivering on the vision of self-driving storage. Such AI platforms may be configured to deliver global predictive intelligence by collecting and analyzing large numbers of storage system telemetry data points to enable effortless management, analytics and support. In fact, such storage systems may be capable of predicting both capacity and performance, as well as generating intelligent advice on workload deployment, interaction and optimization. Such AI platforms may be configured to scan all incoming storage system telemetry data against a library of issue fingerprints to predict and resolve incidents in real time, before they impact customer environments, and to capture hundreds of variables related to performance that are used to forecast performance load.
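As a toy illustration of the fingerprint-scanning idea just described, and not a description of any disclosed AI platform, incoming telemetry samples can be compared against a small library of issue "fingerprints" expressed as predicates over named metrics; the metric names, thresholds, and fingerprint names below are hypothetical.

```python
# A toy sketch of telemetry fingerprint scanning: each fingerprint is a
# predicate over a telemetry sample (a dict of metric name -> value).
# All names and thresholds here are hypothetical.
FINGERPRINTS = {
    "write-latency-spike": lambda m: m.get("write_latency_ms", 0) > 50,
    "capacity-exhaustion": lambda m: m.get("capacity_used_pct", 0) > 95,
    "queue-depth-saturation": lambda m: m.get("avg_queue_depth", 0) > 128,
}

def scan_telemetry(sample: dict) -> list:
    """Return the name of every fingerprint the telemetry sample matches."""
    return [name for name, matches in FINGERPRINTS.items() if matches(sample)]

# Example: this sample would trigger only the write-latency fingerprint.
print(scan_telemetry({"write_latency_ms": 73, "capacity_used_pct": 61}))
```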
The storage systems described above may support the serialized or simultaneous execution of artificial intelligence applications, machine learning applications, data analytics applications, data transformations, and other tasks that collectively may form an AI ladder. Such an AI ladder may effectively be formed by combining such elements to form a complete data science pipeline, where dependencies exist between elements of the AI ladder. For example, AI may require that some form of machine learning has taken place, machine learning may require that some form of analytics has taken place, analytics may require that some form of data and information architecting has taken place, and so on. As such, each element may be viewed as a rung in an AI ladder that collectively can form a complete and sophisticated AI solution. The storage systems described above may also, either alone or in combination with other computing environments, be used to deliver an AI everywhere experience where AI permeates wide and expansive aspects of business and life. For example, AI may play an important role in the delivery of deep learning solutions, deep reinforcement learning solutions, artificial general intelligence solutions, autonomous vehicles, cognitive computing solutions, commercial UAVs or drones, conversational user interfaces, enterprise taxonomies, ontology management solutions, machine learning solutions, smart dust, smart robots, smart workplaces, and many others. The storage systems described above may also, either alone or in combination with other computing environments, be used to deliver a wide range of transparently immersive experiences (including those that use digital twins of various “things” such as people, places, processes, systems, and so on) where technology can introduce transparency between people, businesses, and things. Such transparently immersive experiences may be delivered as augmented reality technologies, connected homes, virtual reality technologies, brain-computer interfaces, human augmentation technologies, nanotube electronics, volumetric displays, 4D printing technologies, or others. The storage systems described above may also, either alone or in combination with other computing environments, be used to support a wide variety of digital platforms. Such digital platforms can include, for example, 5G wireless systems and platforms, digital twin platforms, edge computing platforms, IoT platforms, quantum computing platforms, serverless PaaS, software-defined security, neuromorphic computing platforms, and so on. The storage systems described above may also be part of a multi-cloud environment in which multiple cloud computing and storage services are deployed in a single heterogeneous architecture. In order to facilitate the operation of such a multi-cloud environment, DevOps tools may be deployed to enable orchestration across clouds. Likewise, continuous development and continuous integration tools may be deployed to standardize processes around continuous integration and delivery, new feature rollout, and the provisioning of cloud workloads. By standardizing these processes, a multi-cloud strategy may be implemented that enables the utilization of the best provider for each workload. The storage systems described above may be used as a part of a platform to enable the use of crypto-anchors that may be used to authenticate a product's origins and contents to ensure that it matches a blockchain record associated with the product.
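Purely as an illustrative sketch, and not as a description of any disclosed embodiment, the following Python fragment shows the basic shape of such a crypto-anchor check; it also mirrors the off-chain hashing approach discussed earlier, in which only a hash of the data is embedded in a ledger transaction. The ledger itself is stubbed out as a plain variable here.

```python
# A minimal sketch of the crypto-anchor / off-chain storage idea: the bulk
# data lives in the storage system, while only its hash is recorded on a
# blockchain (or other ledger). Authenticity is checked by re-hashing the
# stored data and comparing against the recorded value.
import hashlib

def anchor(data: bytes) -> str:
    """Hash value suitable for embedding in a ledger transaction."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, recorded_hash: str) -> bool:
    """True if the stored data still matches its on-chain anchor."""
    return anchor(data) == recorded_hash

ledger_record = anchor(b"product 1234: contents manifest ...")  # written once
print(verify(b"product 1234: contents manifest ...", ledger_record))  # True
print(verify(b"tampered manifest", ledger_record))                    # False
```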
Similarly, as part of a suite of tools to secure data stored on the storage system, the storage systems described above may implement various encryption technologies and schemes, including lattice cryptography. Lattice cryptography can involve constructions of cryptographic primitives that involve lattices, either in the construction itself or in the security proof. Unlike public-key schemes such as the RSA, Diffie-Hellman or Elliptic-Curve cryptosystems, which are easily attacked by a quantum computer, some lattice-based constructions appear to be resistant to attack by both classical and quantum computers. A quantum computer is a device that performs quantum computing. Quantum computing is computing using quantum-mechanical phenomena, such as superposition and entanglement. Quantum computers differ from traditional computers that are based on transistors, as such traditional computers require that data be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1). In contrast to traditional computers, quantum computers use quantum bits, which can be in superpositions of states. A quantum computer maintains a sequence of qubits, where a single qubit can represent a one, a zero, or any quantum superposition of those two qubit states. A pair of qubits can be in any quantum superposition of 4 states, and three qubits in any superposition of 8 states. A quantum computer with n qubits can generally be in an arbitrary superposition of up to 2^n different states simultaneously, whereas a traditional computer can only be in one of these states at any one time. A quantum Turing machine is a theoretical model of such a computer. The storage systems described above may also be paired with FPGA-accelerated servers as part of a larger AI or ML infrastructure. Such FPGA-accelerated servers may reside near the storage systems described above (e.g., in the same data center) or even be incorporated into an appliance that includes one or more storage systems, one or more FPGA-accelerated servers, networking infrastructure that supports communications between the one or more storage systems and the one or more FPGA-accelerated servers, as well as other hardware and software components. Alternatively, FPGA-accelerated servers may reside within a cloud computing environment that may be used to perform compute-related tasks for AI and ML jobs. Any of the embodiments described above may be used to collectively serve as an FPGA-based AI or ML platform. Readers will appreciate that, in some embodiments of the FPGA-based AI or ML platform, the FPGAs that are contained within the FPGA-accelerated servers may be reconfigured for different types of ML models (e.g., LSTMs, CNNs, GRUs). The ability to reconfigure the FPGAs that are contained within the FPGA-accelerated servers may enable the acceleration of a ML or AI application based on the optimal numerical precision and memory model being used. Readers will appreciate that by treating the collection of FPGA-accelerated servers as a pool of FPGAs, any CPU in the data center may utilize the pool of FPGAs as a shared hardware microservice, rather than limiting a server to dedicated accelerators plugged into it.
The FPGA-accelerated servers and the GPU-accelerated servers described above may implement a model of computing where, rather than keeping a small amount of data in a CPU and running a long stream of instructions over it as occurred in more traditional computing models, the machine learning model and parameters are pinned into the high-bandwidth on-chip memory with lots of data streaming through the high-bandwidth on-chip memory. FPGAs may even be more efficient than GPUs for this computing model, as the FPGAs can be programmed with only the instructions needed to run this kind of computing model. The storage systems described above may be configured to provide parallel storage, for example, through the use of a parallel file system such as BeeGFS. Such parallel file systems may include a distributed metadata architecture. For example, the parallel file system may include a plurality of metadata servers across which metadata is distributed, as well as components that include services for clients and storage servers. The systems described above can support the execution of a wide array of software applications. Such software applications can be deployed in a variety of ways, including container-based deployment models. Containerized applications may be managed using a variety of tools. For example, containerized applications may be managed using Docker Swarm, Kubernetes, and others. Containerized applications may be used to facilitate a serverless, cloud native computing deployment and management model for software applications. In support of a serverless, cloud native computing deployment and management model for software applications, containers may be used as part of an event handling mechanism (e.g., AWS Lambdas) such that various events cause a containerized application to be spun up to operate as an event handler. The systems described above may be deployed in a variety of ways, including being deployed in ways that support fifth generation (‘5G’) networks. 5G networks may support substantially faster data communications than previous generations of mobile communications networks and, as a consequence, may lead to the disaggregation of data and computing resources as modern massive data centers may become less prominent and may be replaced, for example, by more-local, micro data centers that are close to the mobile-network towers. The systems described above may be included in such local, micro data centers and may be part of, or paired with, multi-access edge computing (‘MEC’) systems. Such MEC systems may enable cloud computing capabilities and an IT service environment at the edge of the cellular network. By running applications and performing related processing tasks closer to the cellular customer, network congestion may be reduced and applications may perform better. For further explanation,FIG.3Dillustrates an exemplary computing device350that may be specifically configured to perform one or more of the processes described herein. As shown inFIG.3D, computing device350may include a communication interface352, a processor354, a storage device356, and an input/output (“I/O”) module358communicatively connected to one another via a communication infrastructure360. While an exemplary computing device350is shown inFIG.3D, the components illustrated inFIG.3Dare not intended to be limiting. Additional or alternative components may be used in other embodiments. Components of computing device350shown inFIG.3Dwill now be described in additional detail.
Communication interface352may be configured to communicate with one or more computing devices. Examples of communication interface352include, without limitation, a wired network interface (such as a network interface card), a wireless network interface (such as a wireless network interface card), a modem, an audio/video connection, and any other suitable interface. Processor354generally represents any type or form of processing unit capable of processing data and/or interpreting, executing, and/or directing execution of one or more of the instructions, processes, and/or operations described herein. Processor354may perform operations by executing computer-executable instructions362(e.g., an application, software, code, and/or other executable data instance) stored in storage device356. Storage device356may include one or more data storage media, devices, or configurations and may employ any type, form, and combination of data storage media and/or devices. For example, storage device356may include, but is not limited to, any combination of the non-volatile media and/or volatile media described herein. Electronic data, including data described herein, may be temporarily and/or permanently stored in storage device356. For example, data representative of computer-executable instructions362configured to direct processor354to perform any of the operations described herein may be stored within storage device356. In some examples, data may be arranged in one or more databases residing within storage device356. I/O module358may include one or more I/O modules configured to receive user input and provide user output. I/O module358may include any hardware, firmware, software, or combination thereof supportive of input and output capabilities. For example, I/O module358may include hardware and/or software for capturing user input, including, but not limited to, a keyboard or keypad, a touchscreen component (e.g., touchscreen display), a receiver (e.g., an RF or infrared receiver), motion sensors, and/or one or more input buttons. I/O module358may include one or more devices for presenting output to a user, including, but not limited to, a graphics engine, a display (e.g., a display screen), one or more output drivers (e.g., display drivers), one or more audio speakers, and one or more audio drivers. In certain embodiments, I/O module358is configured to provide graphical data to a display for presentation to a user. The graphical data may be representative of one or more graphical user interfaces and/or any other graphical content as may serve a particular implementation. In some examples, any of the systems, computing devices, and/or other components described herein may be implemented by computing device350. For further explanation,FIG.4sets forth a flow chart illustrating an example method for on-demand content filtering of snapshots within a storage system (400) stored on a storage device (412) according to some embodiments of the present disclosure. As described above, on-demand content filtering of snapshots within a storage system (400) stored on a storage device (412) may be carried out by one or more modules of computer program instructions executing on computer hardware such as a CPU, where the CPU is housed within a storage array controller as described above. Readers will appreciate that in other embodiments, on-demand content filtering of snapshots within a storage system (400) stored on a storage device (412) may be carried out by firmware within the storage device (412) itself.
Such firmware may be executing on a computing device within the storage device (412) such as, for example, a memory controller, an ASIC, and so on. The example method depicted inFIG.4includes associating (402) an access policy (414) with a snapshot (416). The snapshot (416) depicted inFIG.4can represent the state of the storage system (400) (or at least a portion of the storage system) at a particular point in time, as the snapshot may include a copy of all data stored in a particular volume at a particular point in time, a copy of all data stored in a particular LUN at a particular point in time, a copy of all data stored in a particular range of addresses at a particular point in time, and so on. Readers will appreciate that such snapshots (416) may be immutable, although in alternative embodiments the snapshots (416) may be non-immutable. In the example method depicted inFIG.4, associating (402) an access policy (414) with a snapshot (416) may be carried out, for example, by assigning the access policy (414) to the snapshot (416) and retaining such an assignment through the use of one or more tables or other data structures that associate identifiers for particular snapshots with identifiers for particular access policies. Such assignments may be made, for example, through the use of one or more rules that cause particular access policies to be associated with data written to the storage device (412) by particular applications or types of applications, through the use of one or more rules that cause particular access policies to be associated with data written to the storage device (412) by particular users, or in many other ways. In the example method depicted inFIG.4, the access policy (414) can specify a transformation to apply to a predefined data object. The predefined data object specified in the access policy (414) may be embodied, for example, as one or more particular data fields within data that is stored on the storage device (412), as data stored on the storage device (412) that adheres to a predefined format, as data stored on the storage device (412) that is of a particular data type, and so on. The access policy (414) that specifies a transformation to apply to a predefined data object may be embodied, for example, as a set of predefined rules that specify changes to be made to a predefined data object when such a predefined data object is identified as being contained within data stored on the storage device (412). Readers will appreciate that in some embodiments, the access policy (414) can specify a transformation to apply to multiple predefined data objects, the access policy (414) can specify multiple transformations to apply to a predefined data object, the access policy (414) can specify multiple transformations to apply to multiple predefined data objects, or any combination thereof. Consider an example in which a particular access policy (414) specifies a transformation to apply to predefined data objects that adhere to the following predefined formats: ###-##-####, ##/##/####, and (###)###-####, where ‘#’ represents a wildcard character. In such an example, the predefined formats may represent standard formats for confidential information in the form of a social security number, a date of birth, and a telephone number. In such an example, the access policy (414) may include a rule specifying that, when data in one of the predefined formats is discovered within a snapshot (416), each wildcard character should be replaced by a value of ‘1’.
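One straightforward way to encode this wildcard-format rule is sketched below. This is an illustration only: the regular expressions are an assumed encoding of the three formats, not a description of the disclosed implementation.

```python
# A compact sketch of the wildcard-format rule described above: data objects
# matching the predefined formats ###-##-####, ##/##/####, and (###)###-####
# are rewritten with every digit replaced by '1'.
import re

PREDEFINED_FORMATS = [
    re.compile(r"\d{3}-\d{2}-\d{4}"),     # ###-##-####   (e.g., SSN)
    re.compile(r"\d{2}/\d{2}/\d{4}"),     # ##/##/####    (e.g., DOB)
    re.compile(r"\(\d{3}\)\d{3}-\d{4}"),  # (###)###-#### (e.g., phone)
]

def apply_policy(text: str) -> str:
    """Replace each digit of every matching data object with '1'."""
    for pattern in PREDEFINED_FORMATS:
        text = pattern.sub(lambda m: re.sub(r"\d", "1", m.group()), text)
    return text

print(apply_policy("ssn 123-45-6789 dob 01/23/1984 tel (123)456-7890"))
# -> ssn 111-11-1111 dob 11/11/1111 tel (111)111-1111
```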
Consider an additional example in which a particular access policy (414) specifies a transformation to apply to a predefined data object that is of a particular data type such as integer data. In such an example, integer data may be viewed as a data type whose contents are more likely to represent standard formats for confidential information such as credit card numbers, social security numbers, and so on. In such an example, the access policy (414) may include a rule specifying that, when integer data is discovered within a snapshot (416), each integer should be replaced by a value of ‘1’. Readers will appreciate that an access policy (414) may also specify a transformation to apply to a predefined data object that is of a particular subject matter type. For example, the access policy (414) may include a rule specifying that, when financial data is discovered within a snapshot (416), each integer should be replaced by a value of ‘1’, each letter should be replaced by a value of ‘X’, and so on. Likewise, another access policy (414) may include a rule specifying that, when healthcare data is discovered within a snapshot (416), each integer should be replaced by a value of ‘1’, each letter should be replaced by a value of ‘X’, and so on. In such a way, general categories of data may be masked regardless of the particular format that the data takes. The example method depicted inFIG.4also includes receiving (404) a first request (418) to access a portion of the snapshot (416). In the example method depicted inFIG.4, the first request (418) to access a portion of the snapshot (416) is received from a host (420), although a host (420) is not required according to some embodiments of the present disclosure. If present, such a host (420) may be embodied, for example, as a host of the storage system such as a computing device that is coupled to the storage system via a SAN or other network, as a system-level entity such as the operating system or the array operating environment described above, as a support technician that is remotely logged into the storage array, or as some other form of host. In the example method depicted inFIG.4, receiving (404) the first request (418) to access a portion of the snapshot (416) may be carried out, for example, by a host (420) issuing a request to read the portion of the snapshot, by a host (420) issuing a request to copy the portion of the snapshot from a first location within the storage system (400) to a second location within the storage system, by a host (420) issuing a request to copy the portion of the snapshot from a first location within the storage system (400) to a location within another storage system, by a host (420) of the storage system issuing a request to replicate the portion of the snapshot to a backup location that is external to the storage system (400), by a host (420) of the storage system issuing a request to mount the portion of the snapshot, or by some other entity issuing similar requests. In the example method depicted inFIG.4, the first request (418) to access the portion of the snapshot (416) may be received (404), for example, via a message that is received over a SAN or in some other way. The example method depicted inFIG.4also includes responsive to receiving the first request (418), creating (406) a transformed snapshot portion (408) by applying a transformation specified in the access policy (414) to a data object contained within the portion of the snapshot (416). 
In the example method depicted inFIG.4, creating (406) a transformed snapshot portion (408) by applying a transformation specified in the access policy (414) to a data object contained within the portion of the snapshot (416) may be carried out, for example, by searching the portion of the snapshot (416) to identify data that matches a predefined data object that is specified in the access policy (414). Upon discovering data contained in the portion of the snapshot (416) that matches a particular predefined data object that is specified in the access policy (414), a transformed snapshot portion (408) may be created (406) by applying the transformation that the access policy (414) specifies to apply to the particular predefined data object. Consider the example described above in which a particular access policy (414) specifies that for predefined data objects that adhere to the following predefined formats: ###-##-####, ##/##/####, and (###)###-####, each wildcard should be replaced by a value of ‘1’. In such an example, creating (406) a transformed snapshot portion (408) may be carried out by searching the portion of the snapshot (416) to identify data that adheres to the predefined formats described above. Upon discovering data that adheres to the predefined formats described above in the portion of the snapshot (416), a transformed snapshot portion (408) may be created (406) by replacing each wildcard character with a value of ‘1’. For example, the discovery of data such as ‘123-45-6789’ would result in a transformed snapshot portion (408) where such data would be transformed to ‘111-11-1111’. The example method depicted inFIG.4further includes, responsive to receiving the first request (418), presenting (410) the transformed snapshot portion (408). Presenting (410) the transformed snapshot portion (408) may be carried out, for example, by sending the transformed snapshot portion (408) to the host (420) via one or more messages that are sent over a SAN or some other form of data communications. Such a transformed snapshot portion (408) may include portions of the snapshot (416) that were not transformed as well as portions of the snapshot (416) that were transformed by applying a transformation specified in the access policy (414) to one or more data objects contained within the snapshot (416). Readers will appreciate that presenting (410) the transformed snapshot portion (408) can coincide with providing the host (420) with access to an unmodified portion of the snapshot (416) that the host (420) requested access to by issuing the first request (418), such that the host (420) is given access to the entire snapshot portion, with some information effectively masked by applying the transformations specified in the access policy (414). For further explanation,FIG.5sets forth a flow chart illustrating an additional example method for on-demand content filtering of snapshots within a storage system (400) according to some embodiments of the present disclosure.
The example method depicted inFIG.5is similar to the example method depicted inFIG.4, as the example method depicted inFIG.5also includes associating (402) an access policy (414) with a snapshot (416), receiving (404) a first request (418) to access a portion of the snapshot (416), creating (406) a transformed snapshot portion (408) by applying a transformation specified in the access policy (414) to one or more data objects contained within the portion of the snapshot (416), and presenting (410) the transformed snapshot portion (408). The example method depicted inFIG.5also includes identifying (502) one or more data objects within the portion of the snapshot (416) that match a predefined data object specified in the access policy (414). In the example method depicted inFIG.5, identifying one or more data objects within the portion of the snapshot (416) that match a predefined data object specified in the access policy (414) may be carried out, for example, by identifying one or more data objects within the portion of the snapshot (416) that correspond to the one or more particular data fields specified in the access policy (414), by identifying one or more data objects within the portion of the snapshot (416) that are of the same data type as the one or more data types specified in the access policy (414), by identifying one or more data objects within the portion of the snapshot (416) that adhere to a particular data format as specified in the access policy (414), and in other ways. As such, identifying one or more data objects within the portion of the snapshot (416) that match a predefined data object specified in the access policy (414) can generally be carried out by examining the data (or even the metadata) contained in a particular snapshot to determine whether some portion of the data contained in a particular snapshot matches one or more of the predefined data objects that are specified in the access policy (414). The example method depicted inFIG.5also includes creating (504) modified versions of the one or more data objects within the portion of the snapshot (416) that match the predefined data object specified in the access policy (414) by applying the transformation to apply to the predefined data object. Creating (504) modified versions of the one or more data objects within the portion of the snapshot (416) that match the predefined data object specified in the access policy (414) may be carried out, for example, by modifying the one or more data objects in accordance with the access policy (414) to generate modified versions of the one or more data objects within the portion of the snapshot (416).
Continuing with the example described above, if some portion of the snapshot (416) includes data that takes the format of “(123)456-7890,” that portion of the snapshot (416) may be overwritten in place to include data that takes the format of “(111)111-1111.” Readers will appreciate that in alternative embodiments where the underlying storage devices do not support the in-place overwrite of data, the modified versions of the one or more data objects within the portion of the snapshot (416) that match the predefined data object specified in the access policy (414) may be stored at a location within the storage device (412) or at some other location within the read path that is distinct from the location at which the unmodified version of the one or more data objects within the portion of the snapshot (416) that match the predefined data object specified in the access policy (414) are stored. Consider an example in which the portion of the snapshot (416) resides at addresses 9000-10000 of a first volume. In such an example, assume that data that takes the format of “(123)456-7890” is discovered at addresses 9101-9112 of the first volume. In such an example, by applying the transformations described above, a modified version of such a data object can be created that takes the format of “(111)111-1111” and is stored at addresses 10001-10012 of the first volume. In this example, mapping tables or other internal data structures may be updated such that the portion of the snapshot (416) is specified as including the content of addresses 9000-9100 of the first volume, followed by the contents of addresses 10001-10012 of the first volume, followed by the content of addresses 9113-10000 of the first volume. For further explanation,FIG.6sets forth a flow chart illustrating an additional example method for on-demand content filtering of snapshots within a storage system (400) according to some embodiments of the present disclosure. The example method depicted inFIG.6is similar to the example method depicted inFIG.4, as the example method depicted inFIG.6also includes associating (402) an access policy (414) with a snapshot (416), receiving (404) a first request (418) to access a portion of the snapshot (416), creating (406) a transformed snapshot portion (408) by applying a transformation specified in the access policy (414) to one or more data objects contained within the portion of the snapshot (416), and presenting (410) the transformed snapshot portion (408). The example method depicted inFIG.6also includes storing (602), within the storage system (400), the transformed snapshot portion (408). In the example method depicted inFIG.6, the portion of the snapshot (416) may be stored at a first location of the storage device (412) and the transformed snapshot portion (408) may be stored at a second location of the storage device (412), or even at some other location within the storage system. Readers will appreciate that the portion of the snapshot (416) may be stored at a first location of the storage device (412) and the transformed snapshot portion (408) may be stored at a second location of the storage device (412), for example, because the underlying storage device (412) may not support in-place overwrites of data. For example, the storage device (412) may be embodied as NAND-based flash memory that does not support in-place overwrites of data. In NAND-based flash memory, data cannot be directly overwritten as it can be in a hard disk drive.
As such, the transformed snapshot portion (408) must be stored at a second location of the storage device (412), rather than simply overwriting, at the first location, the portion of the snapshot (416). Readers will appreciate that the transformed snapshot portion (408) may include portions of the snapshot (416) that were not transformed as well as portions of the snapshot (416) that were transformed by applying a transformation specified in the access policy (414) to one or more data objects contained within the snapshot (416). In such an example, storing (602) the transformed snapshot portion (408) can include modifying mapping tables or other data structures that are used to map logical or physical addresses of data to a particular snapshot, thereby defining the transformed snapshot portion (408) as a collection of data stored at the logical or physical addresses specified in the mapping tables or other data structure utilized by a particular storage system. For further explanation,FIG.7sets forth an additional example method for on-demand content filtering of snapshots within a storage system (400) stored on a storage device (412) according to some embodiments of the present disclosure. The example method depicted inFIG.7is similar to the example method depicted inFIG.4andFIG.6, as the example method depicted inFIG.7also includes associating (402) an access policy (414) with a snapshot (416), receiving (404) a first request (418) to access a portion of the snapshot (416), creating (406) a transformed snapshot portion (408) by applying a transformation specified in the access policy (414) to one or more data objects contained within the portion of the snapshot (416), presenting (410) the transformed snapshot portion (408), and storing (602) the transformed snapshot portion (408) within the storage system (400). In the example method depicted inFIG.7, storing (602) the transformed snapshot portion (408) within the storage system (400) can include storing (702) modified versions of the one or more data objects within the portion of the snapshot (416) that match the predefined data object specified in the access policy (414), without storing, within the storage system (400), an additional copy of data objects within the portion of the snapshot (416) that do not match a predefined data object specified in the access policy (414). Readers will appreciate that, as described above, the transformed snapshot portion (408) may include portions of the snapshot (416) that were not transformed as well as portions of the snapshot (416) that were transformed by applying a transformation specified in the access policy (414) to one or more data objects contained within the snapshot (416). 
In such an example, rather than storing an additional copy of data objects within the portion of the snapshot (416) that do not match a predefined data object specified in the access policy (414), storing (602) the transformed snapshot portion (408) within the storage system (400) may be carried out by only storing (702) modified versions of the one or more data objects within the portion of the snapshot (416) that match the predefined data object specified in the access policy (414) and modifying mapping tables or other data structures that are used to map logical or physical addresses of data to a particular snapshot, thereby defining the transformed snapshot portion (408) as a collection of data stored at the logical or physical addresses specified in the mapping tables or other data structure utilized by a particular storage system. For further explanation,FIG.8sets forth an additional example method for on-demand content filtering of snapshots within a storage system (400) according to some embodiments of the present disclosure. The example method depicted inFIG.8is similar to the example method depicted inFIG.4andFIG.6, as the example method depicted inFIG.8also includes associating (402) an access policy (414) with a snapshot (416), receiving (404) a first request (418) to access a portion of the snapshot (416), creating (406) a transformed snapshot portion (408) by applying a transformation specified in the access policy (414) to one or more data objects contained within the portion of the snapshot (416), presenting (410) the transformed snapshot portion (408), and storing (602) the transformed snapshot portion (408) within the storage system (400). The example method depicted inFIG.8also includes receiving (804) a second request (802) to access the portion of the snapshot (416). In the example method depicted inFIG.8, receiving (804) a second request (802) to access the portion of the snapshot (416) may be carried out, for example, by a host (808) issuing a request to read the portion of the snapshot, by a host (808) issuing a request to copy the portion of the snapshot from a first location within the storage system (400) to a second location within the storage system, by a host (808) issuing a request to copy the portion of the snapshot from a first location within the storage system (400) to a location within another storage system, by a host (808) of the storage system issuing a request to replicate the portion of the snapshot to a backup location that is external to the storage system (400), by a host (808) of the storage system issuing a request to mount the portion of the snapshot, and so on. In the example method depicted inFIG.8, the second request (802) to access the portion of the snapshot (416) may be received (804), for example, via a message that is received over a SAN or in some other way. Readers will appreciate that although the example method depicted inFIG.8depicts an embodiment where the second request (802) is received from a different host (808) than the host (420) that issued the first request (418), the requests (418,802) may also be issued by the same host.
Readers will appreciate that when a first request (418) to access a portion of the snapshot is received (404), the storage system (400) may need to create (406) a transformed snapshot portion (408) by applying one or more transformations specified in the access policy (414), and the storage system (400) may subsequently store (602) the transformed snapshot portion (408) within a storage device (412) that is included within the storage system (400), or at some other location within the storage system (400). When a second request (802) to access the portion of the snapshot (416) is received (804), regardless of which host initiated the second request (802), the storage system (400) need not perform the tasks of creating (406) a transformed snapshot portion (408) by applying one or more transformations specified in the access policy (414) or storing (602) the transformed snapshot portion (408) within the storage system (400), as the storage system has already retained a transformed snapshot portion (408) within the storage system (400). The example method depicted in FIG. 8 also includes, in response to receiving (804) the second request (802) to access the portion of the snapshot, presenting (806) the transformed snapshot portion (408) stored within the storage system (400). In the example method depicted in FIG. 8, presenting (806) the transformed snapshot portion (408) stored within the storage system (400) may be carried out, for example, by sending to the host (808) that initiated the second request (802) the transformed snapshot portion (408) stored within the storage system (400) via one or more messages that are sent over a SAN or another form of data communications message. Such a transformed snapshot portion (408) may include portions of the snapshot (416) that were not transformed as well as portions of the snapshot (416) that were transformed by applying a transformation specified in the access policy (414) to one or more data objects contained within the snapshot (416). Readers will appreciate that sending the transformed snapshot portion (408) to the host (808) can coincide with providing the host (808) with access to an unmodified portion of the snapshot (416) that the host (808) requested access to by issuing the first request (418), such that the host (808) is given access to the entire snapshot portion, with some information effectively masked by applying the transformations specified in the access policy (414). Readers will appreciate that because the transformed snapshot portion (408) was stored after the first request (418) to access a portion of the snapshot (416) was received (404), the transformed snapshot portion (408) can be sent to the host without performing the steps required to recreate the transformed snapshot portion (408). By servicing the second request (802) without performing the steps required to recreate the transformed snapshot portion (408), system resources may be preserved for other tasks, thereby improving the storage system's ability to perform other tasks.
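As a purely illustrative sketch, the request-servicing behavior described above might be modeled as follows: the first request triggers creation (406) and storage (602) of the transformed portion, and any later request for the same portion is presented from the retained copy without repeating the transformation. The class and helper names are assumptions, not the disclosed implementation.

```python
# Illustrative sketch only: the first request creates (406) and stores
# (602) the transformed portion; later requests for the same portion are
# presented (806) from the retained copy without re-transformation.

class SnapshotService:
    def __init__(self, storage, access_policy):
        self.storage = storage
        self.access_policy = access_policy
        self.retained = {}  # (snapshot_id, portion_key) -> transformed data

    def access_portion(self, snapshot_id, portion_key):
        key = (snapshot_id, portion_key)
        if key in self.retained:
            # Second or later request: no create/store work is repeated,
            # preserving system resources for other tasks.
            return self.retained[key]
        raw = self.storage.read_snapshot_portion(snapshot_id, portion_key)
        transformed = self.access_policy.apply(raw)  # create (406)
        self.retained[key] = transformed             # store (602)
        return transformed                           # present (410)
```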
Readers will further appreciate that although the preceding paragraphs describe an embodiment where a first request (418) to access a portion of the snapshot (416) is received (404) and a second request (802) to access the same portion of the snapshot (416) is subsequently received (804), requests to access partially overlapping portions of the snapshot (416) may also be handled in a way where providing access to the overlapping portions of the snapshot (416) may be carried out without performing the steps required to recreate the overlapping portions of the transformed snapshot portion (408). Consider an example in which the first request (418) includes a request to access a portion of the snapshot (416) that is stored at addresses 0-1000 within a particular volume. Assume, in such an example, that a second request (802) includes a request to access a portion of the snapshot (416) that is stored at addresses 500-1500 within the same particular volume. In such an example, the portion of the snapshot (416) that is stored at addresses 1001-1500 may undergo the transformation processes in response to the second request (802), but the previously generated transformed snapshot portion (408) that represents the portion of the snapshot (416) that is stored at addresses 500-1000 may be retrieved from the storage system (400) without needing to again perform the steps required to recreate the overlapping portion of the transformed snapshot portion (408) that represents the portion of the snapshot (416) that is stored at addresses 500-1000. For further explanation, FIG. 9 sets forth an additional example method for on-demand content filtering of snapshots within a storage system (400) stored on a storage device (412) according to some embodiments of the present disclosure. The example method depicted in FIG. 9 is similar to the example method depicted in FIG. 4, as the example method depicted in FIG. 9 also includes associating (402) an access policy (414) with a snapshot (416), receiving (404) a first request (418) to access a portion of the snapshot (416), creating (406) a transformed snapshot portion (408) by applying a transformation specified in the access policy (414) to one or more data objects contained within the portion of the snapshot (416), and presenting (410) the transformed snapshot portion (408). In the example method depicted in FIG. 9, creating (406) a transformed snapshot portion (408) by applying a transformation specified in the access policy (414) to one or more data objects contained within the portion of the snapshot (416) can include masking (902) types of information (904) in the data that are to be made inaccessible. The particular types of information (904) that are to be made inaccessible may be specified in the access policy (414) depicted in FIG. 9. The particular types of information (904) that are to be made inaccessible may be embodied, for example, as predetermined categories of data such as credit card numbers, social security numbers, names, system configuration information, encryption keys or other security information, or other types of information that have been determined to be of a confidential nature such that no host should have the privileges to access such data. Alternatively, each host may have certain access privileges such that one host can access data that another host is not able to access.
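Before turning to how the masking may be carried out, the following hedged sketch illustrates one plausible reading of masking (902): predetermined categories such as credit card numbers and social security numbers are located and nulled out in place. The regular expressions and the length-preserving null-out strategy are assumptions chosen for the example, not the disclosed transformation.

```python
# Illustrative sketch only: masking (902) predetermined categories of
# information by nulling them out. The regular expressions below are
# assumptions chosen for the example, not the disclosed transformation.
import re

MASK_PATTERNS = {
    "credit_card": re.compile(rb"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(rb"\b\d{3}-\d{2}-\d{4}\b"),
}


def mask_inaccessible(data: bytes, types_to_mask) -> bytes:
    """Replace matching spans with zero bytes of equal length."""
    for info_type in types_to_mask:
        pattern = MASK_PATTERNS[info_type]
        # Length-preserving null-out keeps the offsets of the untouched
        # parts of the snapshot portion unchanged.
        data = pattern.sub(lambda m: b"\x00" * len(m.group(0)), data)
    return data


# Example: mask_inaccessible(portion_bytes, ["credit_card", "ssn"])
```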
In the example method depicted in FIG. 9, masking (902) types of information (904) in the data that are to be made inaccessible may be carried out, for example, by nulling out any data that is to be made inaccessible, by encrypting any data that is to be made inaccessible, or by otherwise preventing access to the data that is to be made inaccessible. One or more embodiments may be described herein with the aid of method steps illustrating the performance of specified functions and relationships thereof. The boundaries and sequence of these functional building blocks and method steps have been arbitrarily defined herein for convenience of description. Alternate boundaries and sequences can be defined so long as the specified functions and relationships are appropriately performed. Any such alternate boundaries or sequences are thus within the scope and spirit of the claims. Further, the boundaries of these functional building blocks have been arbitrarily defined for convenience of description. Alternate boundaries could be defined as long as the certain significant functions are appropriately performed. Similarly, flow diagram blocks may also have been arbitrarily defined herein to illustrate certain significant functionality. To the extent used, the flow diagram block boundaries and sequence could have been defined otherwise and still perform the certain significant functionality. Such alternate definitions of both functional building blocks and flow diagram blocks and sequences are thus within the scope and spirit of the claims. One of average skill in the art will also recognize that the functional building blocks, and other illustrative blocks, modules and components herein, can be implemented as illustrated or by discrete components, application specific integrated circuits, processors executing appropriate software and the like or any combination thereof. While particular combinations of various functions and features of the one or more embodiments are expressly described herein, other combinations of these features and functions are likewise possible. The present disclosure is not limited by the particular examples disclosed herein and expressly incorporates these other combinations.
11861186 | DETAILED DESCRIPTION Specific embodiments according to the present disclosure will now be described in detail with reference to the accompanying figures. Like elements in the various figures are denoted by like reference numerals for consistency. The present disclosure provides systems and methods for a non-volatile storage system adapted to work in low temperatures. FIG. 1 schematically shows an electronic system 100 in accordance with an embodiment of the present disclosure. The electronic system 100 may comprise an electronic control unit 102 and a solid state storage device 104. The solid state storage device 104 may comprise a temperature sensor 106, a timer 108, a backup battery 110, a storage controller 112, a voltage regulator 114 and one or more non-volatile memory (NVM) devices 116. The electronic system 100 may be referred to as a host electronic system for the solid state storage device 104. In some embodiments, the electronic system 100 may be an electronic system for a vehicle (e.g., engine management, ignition, radio, carputers, telematics, and/or in-car entertainment) and may be turned on when the ignition of the vehicle is turned on and turned off when the ignition of the vehicle is turned off. It should be noted that ignition on or off may refer to whether a vehicle's main power system is on or off, regardless of whether the vehicle is a combustion engine vehicle, an electric vehicle or a hybrid vehicle. The NVM devices 116 may provide non-volatile data storage for the solid state storage device 104. In some embodiments, the NVM devices 116 may be one or more NAND flash devices. In some other embodiments, the NVM devices 116 may be one or more other types of non-volatile storage devices, for example, NOR flash memories, Magnetoresistive Random Access Memory (MRAM), Resistive Random Access Memory (RRAM), Phase Change Random Access Memory (PCRAM), Nano-RAM, etc. The temperature sensor 106 may measure the environment temperature and periodically send out temperature reading data to the storage controller 112. It should be noted that although FIG. 1 shows the temperature sensor 106 as part of the solid state storage device 104, in some embodiments, the temperature sensor 106 may be an existing temperature sensor of an electronic system of a vehicle. For example, it is common for a modern vehicle to have a temperature sensor to measure the environment temperature and display the measured environment temperature to drivers. In these embodiments, the temperature sensor 106 may be coupled to the electronic control unit 102. In one embodiment of these embodiments, the electronic control unit 102 may set the time interval for the timer 108 based on the measured environment temperature. In another embodiment of these embodiments, the electronic control unit 102 may pass the measured environment temperature to the storage controller 112 and let the storage controller 112 set the time interval for the timer 108 based on the measured environment temperature. The voltage regulator 114 may be configured to adapt the power provided by a power source (e.g., the backup battery 110) for circuits of the solid state storage device 104. In some embodiments, the solid state storage device 104 may have more than one voltage regulator. The storage controller 112 may be configured to control and manage the NVM devices 116 and other components of the solid state storage device 104.
In some embodiments, the storage controller 112 may implement the functionalities of a solid state drive (SSD) controller, and also low-temperature management operations (e.g., refresh, data back-up and user notifications). In at least one embodiment, the solid state storage device 104 may include firmware that includes executable instructions to manage the solid state storage device 104. For example, the storage controller 112 may comprise a computer processor (e.g., a microprocessor or a microcontroller) configured to execute the executable instructions of the firmware to perform various operations for managing data in the NVM devices 116 (e.g., regular operations of an SSD controller and low-temperature management operations). The timer 108 may be set with a time interval that may be configurable based on the environment temperature. In various embodiments, the timer 108 may be coupled to the electronic control unit 102. In an implementation example, the timer 108 may stay off while the electronic system 100 is on (e.g., vehicle ignition is on), and may be turned on (e.g., start counting time) after the electronic system 100 is turned off. When the counted time reaches the time interval (or the end of the time interval, depending on whether the timer counts from zero to the time interval or from the time interval to zero), the timer 108 may send an interrupt to the storage controller 112 to wake up the storage controller 112 to perform low-temperature management operations. While the interrupt is asserted to the storage controller 112, the timer 108 may be reset and re-start counting time again. Another interrupt may be sent to the storage controller 112 after the counted time reaches the time interval again. The operations of re-starting to count time and sending an interrupt may be repeated as long as the ignition is off. In some embodiments, the time interval may be set based on the environment temperature. For example, to make the low-temperature management effective and efficient, the time interval between operations may be a function of the temperature: the time interval is small when the temperature reading is low and the time interval is large when the temperature reading is high. That is, the time interval may be increased for a higher environment temperature. As an example, the time interval may be on the order of hours (e.g., 6 hours) when the environment temperature lowers to −40° C. but on the order of days when the environment temperature rises above −10° C. In one embodiment, at room temperature, the low-temperature management operations may be suspended. For example, at room temperature, the time interval may be set to infinite or the timer 108 may be deactivated (e.g., turned off). In addition to the environment temperature, the time interval may also depend on a variety of factors that represent the durability and lifetime of the solid state storage device 104. In some embodiments, these factors may include the program/erase (P/E) cycle counts, the page error count and the program time. For example, a large P/E cycle count or page error count may indicate that the solid state storage device 104 may have entered the late stage of its lifetime, and thus a smaller time interval may be needed to ensure data retention.
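As an illustration of the temperature-dependent interval described above, the following sketch computes a candidate time interval from the example values in this paragraph (on the order of hours near −40° C., days above −10° C., suspended at room temperature), with an assumed wear-based reduction for a heavily cycled device. The numeric thresholds and scaling are illustrative assumptions, not specified values.

```python
# Illustrative sketch only: choose the timer interval from the ambient
# temperature, following the example values above, with an assumed
# reduction for a device late in its lifetime. All numbers are
# assumptions for illustration.

ROOM_TEMP_C = 20.0


def refresh_interval_hours(temp_c, pe_cycles=0, pe_rated=3000):
    if temp_c >= ROOM_TEMP_C:
        return float("inf")      # low-temperature management suspended
    if temp_c <= -40.0:
        hours = 6.0              # e.g., 6 hours at -40 C
    elif temp_c >= -10.0:
        hours = 72.0             # on the order of days above -10 C
    else:
        # Assumed linear ramp between the two example points.
        hours = 6.0 + (temp_c + 40.0) * (72.0 - 6.0) / 30.0
    # A large P/E cycle count suggests late-life NAND, so shorten the
    # interval to help ensure data retention (assumed scaling).
    wear = min(pe_cycles / pe_rated, 1.0)
    return hours * (1.0 - 0.5 * wear)
```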
In some embodiments, the temperature reading by the temperature sensor 106 may be monitored continuously (e.g., by the storage controller 112, the electronic control unit 102, or both) and the time interval may be set or adjusted (e.g., by the storage controller 112, the electronic control unit 102, or both) based on a detected temperature change that reaches or exceeds a temperature change threshold. For example, if there is no significant change in temperature, the present time interval may be maintained. However, if a significant change of temperature is detected, the time interval may be set to a new value based on the new temperature reading and the low-temperature management operations may be performed according to the new time interval. As an example, a significant change in temperature may be defined as a temperature change that equals or exceeds the temperature change threshold (e.g., 5° C.). In some embodiments, the backup battery 110 may provide a dedicated power supply to the solid state storage device 104 for the low-temperature management operations. The dedicated power supply for the solid state storage device 104 may help relieve a main battery for the electronic system 100 (e.g., the main battery of the vehicle) from the heavy duties of the refresh and back-up operations, and thus may prevent the main battery from being drained quickly during ignition off time. The backup battery 110 may be recharged in place, or recharged after being removed. Therefore, the user has the convenient option of recharging the backup battery 110 instead of replacing the bulky main battery. In some embodiments, the battery level of the removable backup battery 110 may be reported to the storage controller 112, the electronic control unit 102, or both. In one of such embodiments, the battery level may be reported to a user by a user notification, such that the user may be notified in a timely manner of further necessary actions or pending risks. In various embodiments, the low-temperature management operations may include refresh and back-up. In some embodiments, the electronic system 100 may also be configured to send notifications to a user (e.g., by wireless connections such as Bluetooth, 3G/4G/5G or other wireless technology) using Short Message Service (SMS), e-mail or mobile application in-app messages. In such embodiments, in addition to refresh and back-up, the low-temperature management operations may also include user notifications. In a refresh operation, the storage controller 112 may read data from the one or more NVM devices 116, correct errors through the controller's ECC schemes and re-program the data into the NVM devices 116. In a back-up operation, in contrast, the storage controller 112 may read the data, correct errors and program the data to a second location of the NVM devices 116. As a result, a second copy of the data may be retained at a different physical block after a back-up operation. In some embodiments, for the low-temperature management operations, the storage controller 112 may be configured (e.g., by firmware) to prioritize and selectively back up data for the operating system, critical programs and important user data. In embodiments that provide user notifications, the user notification operations may be performed at the same or different intervals as the refresh and back-up operations.
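The distinction between the refresh and back-up operations described above may be sketched as follows. This is an illustration only; read_with_ecc, program and allocate_block are hypothetical controller helpers standing in for the ECC and programming machinery.

```python
# Illustrative sketch only: refresh re-programs corrected data in place,
# while back-up programs a second copy to a different physical block.
# read_with_ecc, program and allocate_block are hypothetical helpers.

def refresh(controller, block):
    """Read, ECC-correct and re-program the data into the same location."""
    data = controller.read_with_ecc(block)   # errors corrected via ECC
    controller.program(block, data)


def back_up(controller, block):
    """Read, ECC-correct and program the data to a second location."""
    data = controller.read_with_ecc(block)
    spare = controller.allocate_block()      # different physical block
    controller.program(spare, data)          # second copy retained
    return spare
```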
Exemplary notifications may include the battery level of the backup battery 110, and one or more suggested action items, such as, but not limited to, charging the backup battery 110, turning on the vehicle for a short time and moving the vehicle to an indoor garage to prevent potential data loss. FIG. 2 is a flowchart of a process 200 for conducting low temperature management of a solid state storage device in accordance with an embodiment of the present disclosure. At block 202, a standby mode for low temperature management may be maintained until a host electronic system has been turned off. For example, the electronic system 100 may be an electronic system on a vehicle. When the vehicle is in the ignition on status, the electronic system 100 may be on and the solid state storage device 104 may perform its regular operations. The low temperature management features of the solid state storage device 104 may be in a standby mode (e.g., the timer 108 may be off and the temperature sensor 106 may be off or the temperature reading ignored). It should be noted that after the ignition is turned off, the solid state storage device 104 may enter the low temperature management mode, in which the timer 108 may be turned on and the temperature sensor 106 may be turned on. However, regular operations may be suspended to save power in the low temperature management mode. Therefore, the low temperature management mode may be a low power mode or a standby mode for regular operations. At block 204, a temperature reading from a temperature sensor may be checked. At block 206, it may be determined that the temperature reading is below a temperature threshold. In some embodiments, from the moment that the vehicle ignition is turned off, the temperature reading from the temperature sensor 106 may start to be monitored and compared to a temperature threshold level as part of the operations of the low temperature management mode. For example, 0° C. may be used as a temperature threshold. If the temperature is found to be lower than the threshold level, the low-temperature management features may kick in (e.g., setting the time interval and starting the timer). In some embodiments, monitoring of the temperature reading may be performed by the storage controller 112. For example, after the host electronic system is turned off, the solid state storage device 104 may be configured into a low power mode in which a reduced number of power domains of the solid state storage device 104 may be kept on. The solid state storage device 104 may be configured to perform the temperature check with a low sampling frequency to minimize power consumption. Alternatively, the temperature monitoring may be done by the electronic control unit 102. At block 208, a time interval may be set on a timer based on the temperature reading. In some embodiments, the time interval may be set on the timer 108 based on the temperature reading by the storage controller 112 or the electronic control unit 102. At block 210, a timer may be used to count how long the host electronic system has been turned off. In some embodiments, the timer 108 may count up (e.g., from zero counting towards the time interval). In some other embodiments, the timer 108 may count down (e.g., from the time interval counting towards zero). At block 212, an interrupt may be sent to a storage controller from the timer when the timer counts to a time interval. For example, when the timer 108 counts to the value of the time interval, an interrupt may be generated by the timer 108 and sent to the storage controller 112.
At block 214, low-temperature management operations may be performed using power supplied by a backup battery. In some embodiments, the low-temperature management operations may include refresh and back-up operations to be performed by the storage controller 112 for data stored in the NVM devices 116. In at least one embodiment, the low-temperature management operations may further include issuing user notifications (e.g., SMS, email, and/or mobile app messages). In some embodiments, the process 200 may further include resetting the timer 108, restarting the time count and repeating the low-temperature management operations after another time interval. In one embodiment, while the timer 108 is counting, the temperature reading from the temperature sensor 106 may be continuously monitored. If there is no significant change in temperature, the present time interval is maintained and the low-temperature management operations may be performed at the present time interval. If a significant change of temperature is recorded, the time interval value may be set or adjusted based on the new temperature reading and the low-temperature management operations may be performed at the new time interval. As an example, a significant change in temperature may be defined as a temperature change of 5° C. or more. FIG. 3 is a flowchart of another process 300 for conducting low temperature management of a solid state storage device in accordance with another embodiment of the present disclosure. At block 302, a time interval may be set on a timer based on a known environment temperature. In some embodiments, the temperature sensor 106 may be an existing temperature sensor in the electronic system 100. For example, it is common for a modern vehicle to have a temperature sensor to measure the environment temperature and display the measured environment temperature to drivers. This measured environment temperature may be used for setting the time interval for the timer 108. At block 304, a standby mode for low temperature management may be maintained until a host electronic system has been turned off. At block 306, the timer may be used to count how long the host electronic system has been turned off. At block 308, an interrupt may be sent to a storage controller from the timer when the timer counts to a time interval. At block 310, low-temperature management operations may be performed using power supplied by a backup battery. Operations in blocks 304, 306, 308 and 310 of the process 300 may be identical or similar to the operations in blocks 202, 210, 212 and 214 of the process 200. That is, in some embodiments where there is existing knowledge of the environment temperature, the time interval of the timer 108 may be set according to the existing knowledge of the environment temperature before the low-temperature management features are started. Once the vehicle ignition is turned off, the timer 108 starts to count. When the end of the time interval is reached, the refresh and back-up operations will be performed, and the user notifications are issued. In some embodiments of the process 200 and the process 300, the timer 108 may start counting again for the next operation cycle after the interrupt has been sent or the low-temperature management operations have been performed.
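For illustration, the control flow of the process 200 (blocks 202-214), including the re-arming of the timer 108 and the interval adjustment on a significant temperature change, could be sketched as below. Real firmware would be interrupt-driven rather than polled, and all hardware-access helpers (sensor, timer, controller, battery, ignition) are assumptions made to show the sequence of blocks.

```python
# Illustrative sketch only of the process 200 control flow (blocks
# 202-214). The polling loop stands in for an interrupt-driven firmware;
# interval_for is any mapping from temperature to a time interval.

TEMP_THRESHOLD_C = 0.0         # example threshold from the text
TEMP_CHANGE_THRESHOLD_C = 5.0  # "significant change" example


def low_temperature_management(sensor, timer, controller, battery,
                               ignition, interval_for):
    while ignition.is_on():
        pass                                     # block 202: standby mode
    temp = sensor.read()                         # block 204: check reading
    if temp >= TEMP_THRESHOLD_C:                 # block 206: below threshold?
        return
    timer.set_interval(interval_for(temp))       # block 208: set interval
    timer.start()                                # block 210: count off-time
    while not ignition.is_on():
        new_temp = sensor.read()
        if abs(new_temp - temp) >= TEMP_CHANGE_THRESHOLD_C:
            temp = new_temp                      # significant change:
            timer.set_interval(interval_for(temp))  # adjust the interval
        if timer.expired():                      # block 212: interrupt
            with battery.power():                # block 214: backup power
                controller.refresh_and_back_up()
                controller.notify_user()
            timer.restart()                      # next operation cycle
```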
If the electronic system 100 is turned back on (e.g., the vehicle's ignition is turned on) in the middle of any time interval, the low temperature management features of the solid state storage device 104 may be put back into the standby mode for low temperature management (e.g., the timer is turned off). In one exemplary embodiment, there is provided an apparatus that may comprise: a temperature sensor to generate a temperature reading, a timer configured with a time interval, a backup battery, one or more non-volatile memory devices and a storage controller. The storage controller may be configured to: maintain a standby mode for low temperature management until a host electronic system has been turned off and start the timer when the host electronic system is turned off, check the temperature reading from the temperature sensor when the host electronic system is turned off, determine that the temperature reading is below a temperature threshold, set the time interval on the timer based on the temperature reading, receive an interrupt from the timer when the timer counts to the time interval, and perform low-temperature management operations for data stored in the one or more non-volatile memory devices using power supplied by the backup battery. In one embodiment, the low-temperature management operations may include refresh and back-up of data stored in the one or more non-volatile memory devices. In one embodiment, the low-temperature management operations may further include sending one or more user notifications. In one embodiment, for the low-temperature management operations, the storage controller may be further configured to prioritize and selectively back up data for an operating system, critical programs and important user data. In one embodiment, the storage controller may be further configured to use the timer to count how much time has passed since the low-temperature management operations have been performed and use another interrupt from the timer to activate the storage controller to repeat the low-temperature management operations. In one embodiment, the storage controller may be further configured to determine that there is a significant temperature change when the temperature reading indicates that a temperature change has reached a temperature change threshold, and set a new value for the time interval based on the temperature change. In one embodiment, the one or more non-volatile memory devices and the storage controller may be part of a solid state storage device and the time interval may be further determined based on a variety of factors that represent a durability and lifetime of the solid state storage device. In another exemplary embodiment, there is provided a method for managing a solid state storage device in a low temperature environment. The method may comprise: maintaining a standby mode for low temperature management until a host electronic system has been turned off, checking a temperature reading from a temperature sensor when the host electronic system is turned off, determining that the temperature reading is below a temperature threshold, setting a time interval on a timer based on the temperature reading, using the timer to count how long the host electronic system has been turned off, sending an interrupt to a storage controller of the solid state storage device from the timer when the timer counts to the time interval and performing low-temperature management operations using power supplied by a backup battery.
In one embodiment, the low-temperature management operations may include refresh and back-up of data stored in one or more non-volatile memory devices of the solid state storage device. In one embodiment, the low-temperature management operations may further include sending one or more user notifications. In one embodiment, for the low-temperature management operations, the storage controller may be configured to prioritize and selectively back up data for an operating system, critical programs and important user data. In one embodiment, the method may further comprise using the timer to count how much time has passed since the low-temperature management operations have been performed and sending another interrupt to activate the storage controller to repeat the low-temperature management operations. In one embodiment, the method may further comprise determining that there is a significant temperature change when the temperature reading indicates that a temperature change has reached a temperature change threshold, and setting a new value for the time interval based on the temperature change. In one embodiment, the time interval may be further determined based on a variety of factors that represent a durability and lifetime of the solid state storage device. In yet another exemplary embodiment, there is provided a method for managing a solid state storage device in a low temperature environment. The method may comprise: setting a time interval on a timer based on a known environment temperature reading, maintaining a standby mode for low temperature management until a host electronic system has been turned off, using the timer to count how long the host electronic system has been turned off, sending an interrupt to a storage controller of the solid state storage device from the timer when the timer counts to the time interval, and performing low-temperature management operations using power supplied by a backup battery. In one embodiment, the low-temperature management operations may include refresh and back-up of data stored in one or more non-volatile memory devices of the solid state storage device. In one embodiment, the low-temperature management operations may further include sending one or more user notifications. In one embodiment, for the low-temperature management operations, the storage controller may be configured to prioritize and selectively back up data for an operating system, critical programs and important user data. In one embodiment, the method may further comprise using the timer to count how much time has passed since the low-temperature management operations have been performed and sending another interrupt to activate the storage controller to repeat the low-temperature management operations. In one embodiment, the time interval may be further determined based on a variety of factors that represent a durability and lifetime of the solid state storage device. Any of the disclosed methods and operations may be implemented as computer-executable instructions (e.g., software code for the operations described herein) stored on one or more computer-readable storage media (e.g., non-transitory computer-readable media, such as one or more optical media discs, volatile memory components (such as DRAM or SRAM), or nonvolatile memory components (such as hard drives)) and executed on a device controller (e.g., firmware executed by ASIC).
Any of the computer-executable instructions for implementing the disclosed techniques as well as any data created and used during implementation of the disclosed embodiments can be stored on one or more computer-readable media (e.g., non-transitory computer-readable media). While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
11861187 | DETAILED DESCRIPTION Exemplary embodiments will now be described more comprehensively with reference to the drawings. However, the exemplary embodiments may be implemented in various forms, and should not be understood to be limited to the embodiments elaborated herein. Instead, these embodiments are provided to make the disclosure more comprehensive and complete and to comprehensively communicate the concept of the exemplary embodiments to those skilled in the art. The same reference numerals in the drawings represent the same or similar structures, and thus their detailed description will be omitted. Although relative terms, such as “upper” and “lower”, are used in the specification to describe the relative relationship of one component to another component, these terms are used herein for convenience only, for example, according to the direction of the examples as illustrated in the drawings. It can be understood that if the device in the drawings is turned upside down, the components described as “upper” will become the “lower” components. When one structure is “on” the other structure, it is possible to indicate that the structure is integrally formed on the other structure, or the structure is “directly” disposed on the other structure, or the structure is “indirectly” disposed on the other structure by means of another structure. The terms “a”, “an”, “the”, and “said” are used to express the presence of one or more elements/parts/or the like. The terms “include” and “have” are used to be inclusive, and mean there may be additional elements/parts/or the like in addition to the listed elements/parts/or the like. A semiconductor memory is used in computers, servers, handheld devices such as mobile phones, printers and many other electronic devices and applications. The semiconductor memory may include a plurality of storage units in a memory array, and each storage unit stores at least one bit of information. A Dynamic Random Access Memory (DRAM) is an example of the semiconductor memory. The present scheme is preferably used in a DRAM. Therefore, the following embodiment descriptions are made with reference to the DRAM as a non-restrictive example. In a DRAM integrated circuit device, the array of the storage units is typically arranged in rows and columns, so that a specific storage unit may be addressed by specifying the rows and columns of the array. A Word Line (WL) connects a row to the Bit Line (BL) Sense Amplifiers (SA), which detect the data of a set of storage units. Then, in the read operation, a data subset in the sense amplifiers is selected, or “column selected”, for output. Referring to FIG. 1, each storage unit 100 in the DRAM may usually include a capacitor 110, a transistor 120, a WL 130 and a BL 140. The gate of the transistor 120 is connected to the WL 130, the drain of the transistor 120 is connected to the BL 140, and the source of the transistor 120 is connected to the capacitor 110. A voltage signal on the WL 130 may control the transistor 120 to be turned on or off, so that the data information stored in the capacitor 110 is read through the BL 140, or the data information is written into the capacitor 110 through the BL 140 for storage. A bank is composed of multiple storage units. The bank generally occupies 50-65% of the area of the whole DRAM device, and the rest of the area of the DRAM device is mainly composed of a peripheral circuit. FIG. 2 illustrates a schematic structural diagram of the peripheral circuit.
As illustrated in FIG. 2, the peripheral circuit of the DRAM device may include a Command Decoder 210, an Address Latch 220, a Refresh Address Counter (RAC) 230, an Address MUX (AM) 240 and a Pre-Decoder (Pre-D) 250. The Command Decoder 210 is configured to decode RESET_n, CKE, CK_t/CK_c, PAR, TEN, CS_n, ACT_N and other commands CMD issued by the system, and the Address Latch 220 is configured to temporarily store the address code A<16:0>. In addition, the peripheral circuit of the DRAM device may also include: an activation window signal generation module 260, a refresh window signal generation module 270 and a control signal generation module 280. The activation window signal generation module 260 is configured to generate a bank activation window signal BANK ACT Window Signal, and the refresh window signal generation module 270 is configured to generate a refresh window signal Refresh Window Signal. Referring to FIG. 3, a schematic structural diagram of a bank is illustrated. The bank 300 may include a BL, a complementary BL BL_B, a plurality of WLs and a plurality of storage units 100. The plurality of storage units 100 share the above BL or complementary BL BL_B. In addition, the BL and the complementary BL BL_B are also configured to access the input write drivers INPUT Write Driver and INPUT_B Write Driver, and to output the output signals OUTPUT and OUTPUT_B. In the exemplary implementation mode of the disclosure, the bank 300 may also include a sensing module 310 and a BL balancing module 320. The BL balancing module 320 is configured to conduct the BL and the complementary BL BL_B under the action of a BL balancing control signal BLEQ to close the read-write operation to the storage units 100. Referring to FIG. 3, the sensing module 310 may mainly include the SA, which may address the plurality of storage units 100 through the BL or BL_B. Specifically, a conventional sense amplifier is a differential amplifier, and the differential amplifier operates with the BL and the complementary BL BL_B serving as a reference line to detect and amplify a voltage difference on the pair of BL and BL_B. For a memory, there are usually many different working stages, such as a read operation stage, a write operation stage and a refresh operation stage. A complete read operation stage may include four different sub-stages: Precharge, Access, Sense and Restore. The write operation stage may include five different sub-stages: Precharge, Access, Sense, Restore and Write Recovery. The exemplary implementation mode of the disclosure does not describe the specific working stages of the memory in detail, which may refer to an existing memory. Referring to FIG. 4, a method for determining memory power consumption provided by the exemplary implementation mode of the disclosure may include the following operations. At S410, a memory control command is received and an analog memory is controlled to enter different working stages according to the memory control command. At S420, an original current change curve of the analog memory in different working stages is acquired. At S430, a target time period corresponding to a target working stage is determined according to a time sequence of the memory control command. At S440, a stage current change curve corresponding to the target working stage is intercepted from the original current change curve according to the target time period to obtain a target current change curve.
At S450, target performance parameters are selected from a memory performance parameter table according to the target working stage. At S460, the memory power consumption is determined according to the target performance parameters and the target current change curve. In the method for determining memory power consumption provided by an exemplary implementation mode of the disclosure, on the one hand, the analog memory is controlled to enter different working stages, so that the original current change curve of the analog memory in the different working stages may be easily acquired as the original current change curve of the memory for subsequent power consumption determination. On the other hand, after the target working stage is determined, the stage current change curve corresponding to the target working stage may be intercepted according to the corresponding target time period, the target current change curve may be obtained based on the stage current change curve, and finally, the memory power consumption may be determined in combination with the target current change curve and the target performance parameters. The determined memory power consumption may be used as the actual power consumption of the memory for performance analysis of the memory and may also provide a basis for system failure analysis. At S410, the memory control command is received and the analog memory is controlled to enter different working stages according to the memory control command. In the exemplary implementation mode of the disclosure, the analog memory is a memory model simulated according to components of the memory and the corresponding connection relationship of the components; for example, the memory structure may be established in simulation software based on the memory actually used, and various operations of the memory in the actual operation process may also be performed according to the memory control command. That is, the above analog memory may simulate the real memory to enter different working stages, so that various required data, such as current data, voltage data, etc., may be acquired. In actual application, the memory has multiple control commands, such as a read operation command, a write operation command, a refresh operation command, etc. According to different control commands, the memory may enter different working stages to complete the corresponding operations. In the exemplary implementation mode of the disclosure, the analog memory simulates the real memory to enter different working stages, so that the problem that it is difficult to record and acquire the current data or voltage data of the real memory in the working process may be solved. At S420, the original current change curve of the analog memory in different working stages is acquired. According to the memory control command, the analog memory is controlled to enter different working stages, such as the read operation stage, the write operation stage and the refresh operation stage. Generally, the above different working stages may be executed at intervals, so that the time at which the analog memory enters the different working stages needs to be marked. At the same time, in the marking process, the time lag between the issuance of the memory control command and the real execution start also needs to be considered, and the collected data is marked with the time of the real execution start as a mark point, for example, so as to obtain the original current change curve.
The original current change curve may include change data of the current corresponding to different working stages (including the read operation stage, the write operation stage and the refresh operation stage) over time. Based on these data, the subsequent determination of the memory power consumption may be performed. At S430, the target time period corresponding to the target working stage is determined according to the time sequence of the memory control command. In actual application, the memory control command is usually issued according to the time sequence of the memory control command. According to the specific issuance time of the control command and the delay from issuing to executing the control command, the starting time point corresponding to the target working stage that needs to be acquired may be determined, and then the target time period of the target working stage may be determined according to the execution duration of the target working stage. In the exemplary implementation mode of the disclosure, the target working stage may be at least one of the different working stages; that is, the target working stage may be only one of the read operation stage, the write operation stage and the refresh operation stage. The target working stage may also be a read-write operation stage or a complete operation stage including read-write refresh. Alternatively, the target working stage may be at least one of the four different sub-stages, i.e., Precharge, Access, Sense and Restore, in the read operation stage. The specific target working stage is not specially limited by the exemplary implementation mode of the disclosure, and may be flexibly determined according to actual needs. At S440, the stage current change curve corresponding to the target working stage is intercepted from the original current change curve according to the target time period to obtain the target current change curve. After the target time period is determined according to the operation at S430, the stage current change curve corresponding to the target time period, that is, the stage current change curve corresponding to the target working stage, may be intercepted from the original current change curve. For example, the stage current change curve corresponding to the target working stage may be the stage current change curve corresponding to the target read operation stage, the stage current change curve corresponding to the target write operation stage, etc. In the exemplary implementation mode of the disclosure, after the stage current change curve corresponding to the target working stage is intercepted, the stage current change curve also needs to be processed to obtain the target current change curve. The specific processing on the stage current change curve may be determined according to the required memory power consumption. If the required memory power consumption is the total power consumption, the total power consumption may be determined directly according to the current change curve in combination with the memory performance parameters. In the exemplary implementation mode of the disclosure, taking the memory power consumption required to be determined as the average overrun power consumption of the memory as an example, the processing procedure of the intercepted stage current change curve is described in detail.
Referring to FIG. 5, after the stage current change curve corresponding to the target working stage is intercepted, the intercepted stage current change curve may be discretized according to a preset step length to obtain discrete data. Then, a stage peak value and a stage valley value are determined from the discrete data. The above stage peak value and stage valley value are linearly interpolated to obtain an interpolation line. Then, the preset error range, for example, the two preset error range boundary lines in FIG. 5, may be determined based on the interpolation line; one part of the stage current change curve falls within the preset error range and the other part falls outside the preset error range. In actual application, the above preset error range may be determined according to the actual situation, for example, the preset error range is any value between 5% and 15%. If the preset error range is 5%, as illustrated in FIG. 5, based on the interpolation line, the preset error range boundary lines may be determined at the positions +/−5% from the interpolation line on both sides of the interpolation line as the preset error range. It is to be noted that the above process of determining the stage peak value and the stage valley value may also be performed before the stage current change curve is discretized, that is, the highest point is directly determined from the stage current change curve as the stage peak value and the lowest point is determined from the stage current change curve as the stage valley value. In this way, the accuracy of determining the stage peak value and stage valley value is improved, and the loss of the stage peak value and stage valley value in the discretization process is avoided. In the exemplary implementation mode of the disclosure, the discrete data beyond the preset error range may be acquired and recorded as overrun data, and the overrun data, the stage peak value and the stage valley value may be fitted to obtain the target current change curve. The obtained target current change curve is mainly composed of data beyond the preset error range, so that the target current change curve may be configured to calculate the average overrun power consumption of the memory, and the average overrun power consumption of the memory may be used for performance analysis and failure analysis on the memory. It is to be noted that in the process of determining the average overrun power consumption, the stage valley value may not be selected, and the overrun data and the stage peak value may be directly fitted to obtain the target current change curve. The exemplary implementation mode of the disclosure does not specifically limit the determination method of the target current change curve. In actual application, the above preset step length may be determined according to the actual situation, for example, the preset step length may be any value between 8 ps and 12 ps, such as 10 ps. It is to be understood that the smaller the step length is, the higher the accuracy is and the longer the simulation time is; that is, the preset step length may be adjusted according to the required accuracy and time cost. The specific value of the preset step length is not specially limited by the exemplary implementation mode of the disclosure. In some embodiments, preliminary data discretization may be performed first according to a larger preset step length; for example, the intercepted stage current change curve is discretized according to a first preset step length to obtain first discrete data.
The first discrete data beyond the preset error range is acquired and recorded as first overrun data. Then, data discretization is further performed on the stage current change curve corresponding to the time period where the first overrun data is located according to a smaller second preset step length to obtain second discrete data, and the second discrete data beyond the preset error range is acquired and recorded as second overrun data. Subsequently, the second overrun data, the stage peak value and the stage valley value are fitted to obtain the target current change curve. In this way, it is beneficial to dynamically balance the simulation time and simulation accuracy according to the actual waveform of the stage current change curve, and to shorten the simulation time and improve the simulation accuracy at the same time. In actual application, the first preset step length is greater than the second preset step length. The values of the first preset step length and the second preset step length may be determined according to the actual situation; for example, the first preset step length may be greater than 12 ps, and the second preset step length may be less than or equal to 12 ps. The specific values of the first preset step length and the second preset step length are not specially limited by the exemplary implementation mode of the disclosure. At S450 and S460, the target performance parameters are selected from the memory performance parameter table according to the target working stage; and the memory power consumption is determined according to the target performance parameters and the target current change curve. In the exemplary implementation mode of the disclosure, after the target current change curve is determined, a target performance parameter needs to be selected from the memory performance parameter table according to the target working stage. If the target working stage is the read operation stage, the components in the memory involved in the read operation stage need to be acquired, and the target performance parameter may be determined from the performance parameters of these components. In actual application, the target performance parameters corresponding to different working stages are generally determined in advance according to the situation of the simulated real memory, and are stored in the memory performance parameter table for subsequent calling. The target performance parameters may include a resistance value, a capacitance value, an inductance value or the like corresponding to the target working stage. After the target performance parameters are selected, the memory power consumption may be determined according to the target performance parameters in combination with the target current change curve. Specifically, in the process of determining the memory power consumption, the determined memory power consumption differs according to the different target current change curves. If the target current change curve is the stage current change curve initially intercepted, the total power consumption of the memory in the target working stage may be calculated. If the target current change curve is composed of the above overrun data, the stage peak value and the stage valley value, the determined memory power consumption is the memory overrun power consumption and the average overrun power consumption.
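A hedged sketch of the overrun-data extraction described above is given below: the intercepted stage curve is discretized at the preset step length, an interpolation line is drawn between the stage peak value and the stage valley value, and only samples outside the error band are kept as overrun data. NumPy is assumed, and the single-pass version is shown for brevity; the coarse-to-fine two-step variant would simply repeat the sampling at the smaller second preset step length over the flagged time periods.

```python
# Illustrative sketch only: single-pass overrun extraction from the
# intercepted stage current change curve (t, i). The step length and the
# +/- band follow the examples in the text (10 ps, 5%).
import numpy as np


def extract_overrun(t, i, step=10e-12, band=0.05):
    t = np.asarray(t, dtype=float)
    i = np.asarray(i, dtype=float)

    # Peak and valley taken before discretization so they are not lost.
    k_peak, k_valley = int(np.argmax(i)), int(np.argmin(i))
    pts = sorted([(t[k_peak], i[k_peak]), (t[k_valley], i[k_valley])])

    # Discretize the intercepted stage curve at the preset step length.
    ts = np.arange(t[0], t[-1], step)
    cur = np.interp(ts, t, i)

    # Interpolation line through the stage peak and stage valley.
    line = np.interp(ts, [pts[0][0], pts[1][0]], [pts[0][1], pts[1][1]])

    # Overrun data: discrete samples beyond the preset error range.
    mask = np.abs(cur - line) > band * np.abs(line)
    return (ts[mask], cur[mask],
            (t[k_peak], i[k_peak]), (t[k_valley], i[k_valley]))
```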
Taking the determination of the average overrun power consumption of the memory as an example, the average overrun current in the target working stage may be determined according to the target current change curve determined in the above operation at S440. In the process of determining the average overrun current, the average overrun current may be obtained by dividing the area covered by the target current change curve by the covered time length. Finally, the average overrun power consumption of the memory may be determined according to the average overrun current, and the resistance value, the capacitance value and the inductance value corresponding to the target working stage. The specific determination process is not described here. In the exemplary implementation mode of the disclosure, after the average overrun power consumption of the memory is determined, the memory may be adjusted according to the magnitude of the average overrun power consumption of the memory. For example, when the average overrun power consumption of the memory is greater than or equal to the preset power consumption, the power consumption analysis of the memory is performed to adjust the target performance parameters. For example, the resistance value, the capacitance value or the inductance value, etc. may be reduced; specifically, the resistance value may be reduced by selecting a component with a smaller resistance value, etc., and the operation frequency of the memory may be reduced, so as to ensure that the memory operates within a safe range. When the average overrun power consumption of the memory is less than the preset power consumption, an operating parameter of the target working stage in the memory may be adjusted, for example, increasing the operation frequency of the memory, so that the operation performance of the memory is improved under the condition of ensuring the normal operation of the memory. In actual application, the preset power consumption may be the limit power consumption of the memory at the target working stage specified by the relevant standard of the memory, and may also be manually set according to the actual situation or determined by the stage peak value and the stage valley value, which are not specially limited by the exemplary implementation mode of the disclosure. It is to be noted that after the target current change curve is determined, in addition to the above method for directly determining the memory power consumption, the data corresponding to the target current change curve may also be processed into a data format recognized by an analysis model. The processed data corresponding to the target current change curve may be input into the analysis model for system power consumption analysis to find out the possible power consumption problems of the memory, for example, whether the power consumption is caused by the memory itself or by noise, etc. The data analysis mode will not be elaborated in the exemplary implementation mode of the disclosure. The method for determining the memory power consumption provided by the exemplary implementation modes of the disclosure may select the target current change curve corresponding to the target working stage according to different working stages, and then determine the possible average overrun power consumption of the memory according to the target current change curve.
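For illustration, the average overrun current and a simple adjustment decision might be computed as in the sketch below. The purely resistive power model is an assumption made only so the example is concrete, since the disclosure leaves the exact combination of the resistance, capacitance and inductance values unspecified.

```python
# Illustrative sketch only: average overrun current as area under the
# target current change curve divided by the covered time length, and an
# adjustment decision against the preset power consumption. The
# I^2 * R_eff power model is an assumption.
import numpy as np


def average_overrun_power(ts, cur, r_eff_ohm, preset_power_w):
    ts = np.asarray(ts, dtype=float)
    cur = np.asarray(cur, dtype=float)

    # Trapezoidal area under the curve (ampere-seconds) over its span.
    area = float(np.sum((cur[1:] + cur[:-1]) * np.diff(ts)) / 2.0)
    i_avg = area / (ts[-1] - ts[0])          # average overrun current
    p_avg = i_avg ** 2 * r_eff_ohm           # assumed resistive model

    if p_avg >= preset_power_w:
        action = "reduce R/C/L values or lower the operation frequency"
    else:
        action = "operation frequency may be raised for performance"
    return i_avg, p_avg, action
```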
The target performance parameter or operating parameter of the memory may be adjusted based on the average overrun power consumption, so that the memory is adjusted to the optimal working state and the utilization of the memory is improved on the premise of meeting the actual needs. It is to be noted that various operations of the method in the disclosure are described in the accompanying drawings in a specific sequence. However, this does not require or imply that these operations must be executed in the particular order, or that all the operations illustrated must be executed to achieve desired results. Additionally or alternatively, certain operations may be omitted, a plurality of operations may be combined into one operation for execution, and/or one operation may be decomposed into a plurality of operations for execution, etc. In addition, in the exemplary embodiment, a device for determining memory power consumption is also provided. Referring to FIG. 6, the device for determining memory power consumption 600 may include a memory analog module 610, an original current acquisition module 620, a target time period determination module 630, a target current determination module 640, a target parameter determination module 650, a power consumption determination module 660 and a memory adjustment module 670. The memory analog module 610 may be configured to receive a memory control command and control an analog memory to enter different working stages according to the memory control command. The original current acquisition module 620 may be configured to acquire an original current change curve of the analog memory in different working stages. The target time period determination module 630 may be configured to determine a target time period corresponding to a target working stage according to a time sequence of the memory control command. The target current determination module 640 may be configured to intercept a stage current change curve corresponding to the target working stage from the original current change curve according to the target time period to obtain a target current change curve. The target parameter determination module 650 may be configured to select target performance parameters from a memory performance parameter table according to the target working stage. The power consumption determination module 660 may be configured to determine the memory power consumption according to the target performance parameters and the target current change curve. In an exemplary implementation mode of the disclosure, the target current determination module 640 may be configured to discretize the stage current change curve according to a preset step length to obtain discrete data, determine a stage peak value and a stage valley value from the discrete data, perform a linear interpolation on the stage peak value and the stage valley value to obtain an interpolation line, acquire and record the discrete data beyond a preset error range based on the interpolation line as the overrun data, and fit the overrun data, the stage peak value and the stage valley value to obtain the target current change curve.
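The interpolation-line test performed by the target current determination module can be illustrated with the following non-limiting Python sketch; the function names and the 10% default error range are assumptions for illustration.

```python
# Sketch of the interpolation-line test: linearly interpolate between the
# stage peak value and the stage valley value, then flag discrete data that
# strays beyond the preset error range of that line.

def interpolation_line(peak, valley):
    """peak, valley: (time, current) pairs; returns i(t) along the chord."""
    (tp, ip), (tv, iv) = peak, valley
    slope = (iv - ip) / (tv - tp)
    return lambda t: ip + slope * (t - tp)

def detect_overrun(discrete, peak, valley, error_range=0.10):
    line = interpolation_line(peak, valley)
    flagged = [(t, i) for t, i in discrete
               if abs(i - line(t)) > error_range * max(abs(line(t)), 1e-12)]
    # The target current change curve is then fitted from the overrun data
    # together with the stage peak and stage valley themselves.
    return sorted(set(flagged + [peak, valley]))
```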
In an exemplary implementation mode of the disclosure, the target current determination module 640 may also be configured to discretize the intercepted stage current change curve according to a first preset step length to obtain first discrete data, acquire and record the first discrete data beyond the preset error range as first overrun data, discretize the stage current change curve corresponding to the time period where the first overrun data is located according to a second preset step length to obtain second discrete data, acquire and record the second discrete data beyond the preset error range as second overrun data, and fit the second overrun data, the stage peak value and the stage valley value to obtain the target current change curve. In an exemplary implementation mode of the disclosure, the first preset step length is greater than the second preset step length. In an exemplary implementation mode of the disclosure, the power consumption determination module 660 may be configured to determine the average overrun current in the target working stage according to the target current change curve. The target performance parameters may include a resistance value, a capacitance value and an inductance value corresponding to the target working stage. The power consumption determination module 660 may be configured to determine an average overrun power consumption of the memory according to the average overrun current, the resistance value, the capacitance value and the inductance value. In an exemplary implementation mode of the disclosure, the device for determining the memory power consumption may further include the memory adjustment module 670. The memory adjustment module 670 may be configured to analyze the power consumption of the memory when the average overrun power consumption of the memory is greater than or equal to a preset power consumption to adjust the target performance parameters. In an exemplary implementation mode of the disclosure, the memory adjustment module 670 may also be configured to adjust the operating parameter of the target working stage in the memory when the average overrun power consumption of the memory is less than the preset power consumption. In an exemplary implementation mode of the disclosure, different working stages may include a read operation stage, a write operation stage and a refresh operation stage. In an exemplary implementation mode of the disclosure, the target working stage may be at least one of the different working stages. In an exemplary implementation mode of the disclosure, the preset error range is any value between 5% and 15%. In an exemplary implementation mode of the disclosure, the analog memory is a memory model simulated according to components of the memory and the corresponding connection relationship of the components. The specific details of a virtual module of each device for determining memory power consumption have been described in detail in the corresponding method for determining memory power consumption, so they will not be elaborated here. It is to be noted that, although a plurality of modules or units of the device for determining the memory power consumption are mentioned in the foregoing detailed descriptions, this division is not mandatory. Actually, according to the implementation modes of the disclosure, the features and functions of two or more modules or units described above may be embodied in one module or unit.
Conversely, the features and functions of one module or unit described above may further be embodied by a plurality of modules or units. In the exemplary embodiment of the disclosure, an electronic device capable of implementing the above method is also provided. Those skilled in the art may understand that various aspects of the disclosure may be implemented as systems, methods or program products. Therefore, various aspects of the disclosure may be specifically implemented in the following forms: a complete hardware implementation mode, a complete software implementation mode (including firmware, microcode, etc.), or a combination of hardware and software, which may be collectively referred to as a “circuit”, “module” or “system”. An electronic device 700 according to such an implementation mode of the disclosure is described below with reference to FIG. 7. The electronic device 700 illustrated in FIG. 7 is only an example and should not form any limit to the functions and scope of application of the embodiments of the disclosure. As illustrated in FIG. 7, the electronic device 700 is represented in the form of a general computing device. The components of the electronic device 700 may include, but are not limited to, at least one processing unit 710, at least one storage unit 720, a bus 730 connecting different system components (including the storage unit 720 and the processing unit 710), and a display unit 740. The storage unit 720 stores a program code that may be executed by the processing unit 710 to enable the processing unit 710 to execute the operations according to various exemplary implementation modes of the disclosure described in the above “exemplary methods” section of the description. For example, as illustrated in FIG. 4, the processing unit 710 may execute the following operations at S410 to S460. At S410, a memory control command is received and an analog memory is controlled to enter different working stages according to the memory control command. At S420, an original current change curve of the analog memory in different working stages is acquired. At S430, a target time period corresponding to a target working stage is determined according to the time sequence of the memory control command. At S440, a stage current change curve corresponding to the target working stage is intercepted from the original current change curve according to the target time period to obtain a target current change curve. At S450, target performance parameters are selected from a memory performance parameter table according to the target working stage. At S460, the memory power consumption is determined according to the target performance parameters and the target current change curve. The storage unit 720 may include a readable medium in the form of a volatile storage unit, such as a Random Access Memory (RAM) 7201 and/or a cache storage unit 7202, and may further include a Read-Only Memory (ROM) 7203. The storage unit 720 may also include a program/utility 7204 having a set (at least one) of program modules 7205. Such program modules 7205 include, but are not limited to, an operating system, one or more application programs, other program modules and program data. Each or a certain combination of these examples may include an implementation of a network environment. The bus 730 may represent one or more of several types of bus structures, including a storage unit bus or a storage unit controller, a peripheral bus, a graphics acceleration port, a processing unit, or a local bus using any of a variety of bus structures.
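For illustration only, the sequence S410 to S460 can be rendered as a compact, end-to-end Python sketch. The toy analog memory, the command dictionary and the placeholder power formula are assumptions; only the control flow tracks the operations listed above.

```python
# Non-limiting sketch of operations S410-S460 as plain Python.

class ToyAnalogMemory:
    def __init__(self, trace):
        self.trace = trace              # list of (time, current) samples
        self.stage = None
    def enter_stage(self, stage):       # S410: react to the control command
        self.stage = stage
    def sample_current(self):           # S420: original current change curve
        return self.trace

def determine_memory_power(command, analog_memory, perf_table):
    analog_memory.enter_stage(command["stage"])                  # S410
    curve = analog_memory.sample_current()                       # S420
    t0, t1 = command["time_sequence"][command["stage"]]          # S430
    target = [(t, i) for t, i in curve if t0 <= t <= t1]         # S440
    resistance = perf_table[command["stage"]]["resistance"]      # S450
    i_avg = sum(i for _, i in target) / len(target)
    return i_avg ** 2 * resistance                               # S460 (placeholder)

# Example usage with toy values:
# mem = ToyAnalogMemory([(t * 1e-9, 0.1 * t) for t in range(10)])
# cmd = {"stage": "read", "time_sequence": {"read": (2e-9, 6e-9)}}
# determine_memory_power(cmd, mem, {"read": {"resistance": 50.0}})  # -> 8.0
```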
The electronic device 700 may also communicate with one or more external devices 770 (a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 700, and/or with any device that enables the electronic device 700 to communicate with one or more other computing devices (a router, a modem, etc.). This communication may be performed through an input/output (I/O) interface 750. Moreover, the electronic device 700 may also communicate with one or more networks, such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through a network adapter 760. As illustrated in FIG. 7, the network adapter 760 communicates with other modules of the electronic device 700 through the bus 730. It is to be understood that, although not illustrated in FIG. 7, other hardware and/or software modules may be used in combination with the electronic device 700, including, but not limited to, microcode, a device driver, a redundant processing unit, an external disk drive array, a Redundant Array of Independent Disks (RAID) system, a tape drive, a data backup storage system, etc. Through the above descriptions about the implementation modes, it is easily understood by those skilled in the art that the exemplary implementation modes described herein may be implemented by software, or may be implemented by combining the software and necessary hardware. Therefore, the technical solution according to the implementation modes of the disclosure may be embodied in the form of a software product, and the software product may be stored in a non-volatile storage medium (which may be a CD-ROM, a U disk, a mobile hard disk, etc.) or on a network, including a plurality of instructions enabling a computing device (which may be a personal computer, a server, a terminal device, a network device, etc.) to execute the method according to the implementation modes of the disclosure. In the exemplary embodiment of the disclosure, a computer readable storage medium is also provided, on which a program product capable of achieving the above method in the description is stored. In some possible implementation modes, various aspects of the disclosure may also be implemented in the form of a program product including a program code. The program code, when being run on a terminal device, causes the terminal device to perform the operations according to various exemplary implementation modes of the disclosure described in the above “exemplary methods” section of the specification. A program product for achieving the above method according to an implementation mode of the disclosure may adopt a portable Compact Disk Read Only Memory (CD-ROM), include a program code, and run on a terminal device, such as a personal computer. However, the program product of the disclosure is not limited to this. In the disclosure, the readable storage medium may be any tangible medium including or storing a program, and the program may be used by or in combination with an instruction execution system, device, or apparatus. The program product may adopt any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, but is not limited to, for example, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or apparatus, or any combination thereof.
More specific examples (a non-exhaustive list) of the readable storage medium may include an electrical connector with one or more wires, a portable disk, a hard disk, a RAM, a ROM, an Erasable Programmable ROM (EPROM) or a flash memory, an optical fiber, a CD-ROM, an optical storage device, a magnetic storage device, or any proper combination thereof. The computer readable signal medium may include a data signal in a baseband or propagated as part of a carrier, with a readable program code borne therein. A plurality of forms may be adopted for the propagated data signal, including, but not limited to, an electromagnetic signal, an optical signal, or any proper combination thereof. The readable signal medium may also be any readable medium except the readable storage medium, and the readable medium may send, propagate, or transmit a program used by or in combination with an instruction execution system, device, or apparatus. The program code in the readable medium may be transmitted with any proper medium, including, but not limited to, wireless, a wire, an optical cable, Radio Frequency (RF), or any proper combination thereof. The program code for executing the operations of the disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, C++, etc., and conventional procedural programming languages such as the “C” language or similar programming languages. The program code may be executed completely on a user computing device, partially on a user device as a separate software package, partially on a user computing device and partially on a remote computing device, or completely on a remote computing device or server. In the case of the remote computing device, the remote computing device may be connected to a user computing device through any kind of network, including a LAN or a WAN, or may be connected to an external computing device (such as through the Internet using an Internet service provider). Moreover, the drawings are merely schematic descriptions of processes included in the methods in the exemplary embodiments of the disclosure, and are not for limitation. It should be easily understood that the processes illustrated in the above drawings do not indicate or limit the time sequence of these processes. Moreover, it is also to be easily understood that these processes may be executed synchronously or asynchronously in a plurality of modules. After considering the specification and practicing the disclosure, those skilled in the art may easily conceive of other embodiments of this disclosure. This disclosure is intended to cover any variations, uses, or adaptive changes of this disclosure. These variations, uses, or adaptive changes follow the general principles of this disclosure and include common general knowledge or common technical means in the art that are not disclosed in this disclosure. The specification and the embodiments are only considered as examples, and the practical scope and spirit of the disclosure are subject to the claims. It should be understood that the disclosure is not limited to the precise structures described above and illustrated in the drawings, and various modifications and variations may be made without departing from the scope thereof. The scope of the disclosure is only subject to the appended claims.
DESCRIPTION OF EMBODIMENTS Example methods, apparatus, and products for storage systems with removable modules in accordance with embodiments of the present disclosure are described with reference to the accompanying drawings, beginning with FIG. 1A. FIG. 1A illustrates an example system for data storage, in accordance with some implementations. System 100 (also referred to as “storage system” herein) includes numerous elements for purposes of illustration rather than limitation. It may be noted that system 100 may include the same, more, or fewer elements configured in the same or different manner in other implementations. System 100 includes a number of computing devices 164A-B. Computing devices (also referred to as “client devices” herein) may be embodied as, for example, a server in a data center, a workstation, a personal computer, a notebook, or the like. Computing devices 164A-B may be coupled for data communications to one or more storage arrays 102A-B through a storage area network (‘SAN’) 158 or a local area network (‘LAN’) 160. The SAN 158 may be implemented with a variety of data communications fabrics, devices, and protocols. For example, the fabrics for SAN 158 may include Fibre Channel, Ethernet, Infiniband, Serial Attached Small Computer System Interface (‘SAS’), or the like. Data communications protocols for use with SAN 158 may include Advanced Technology Attachment (‘ATA’), Fibre Channel Protocol, Small Computer System Interface (‘SCSI’), Internet Small Computer System Interface (‘iSCSI’), HyperSCSI, Non-Volatile Memory Express (‘NVMe’) over Fabrics, or the like. It may be noted that SAN 158 is provided for illustration, rather than limitation. Other data communication couplings may be implemented between computing devices 164A-B and storage arrays 102A-B. The LAN 160 may also be implemented with a variety of fabrics, devices, and protocols. For example, the fabrics for LAN 160 may include Ethernet (802.3), wireless (802.11), or the like. Data communication protocols for use in LAN 160 may include Transmission Control Protocol (‘TCP’), User Datagram Protocol (‘UDP’), Internet Protocol (‘IP’), HyperText Transfer Protocol (‘HTTP’), Wireless Access Protocol (‘WAP’), Handheld Device Transport Protocol (‘HDTP’), Session Initiation Protocol (‘SIP’), Real Time Protocol (‘RTP’), or the like. Storage arrays 102A-B may provide persistent data storage for the computing devices 164A-B. Storage array 102A may be contained in a chassis (not shown), and storage array 102B may be contained in another chassis (not shown), in implementations. Storage arrays 102A and 102B may include one or more storage array controllers 110A-D (also referred to as “controller” herein). A storage array controller 110A-D may be embodied as a module of automated computing machinery comprising computer hardware, computer software, or a combination of computer hardware and software. In some implementations, the storage array controllers 110A-D may be configured to carry out various storage tasks. Storage tasks may include writing data received from the computing devices 164A-B to storage array 102A-B, erasing data from storage array 102A-B, retrieving data from storage array 102A-B and providing data to computing devices 164A-B, monitoring and reporting of disk utilization and performance, performing redundancy operations, such as Redundant Array of Independent Drives (‘RAID’) or RAID-like data redundancy operations, compressing data, encrypting data, and so forth.
Storage array controller 110A-D may be implemented in a variety of ways, including as a Field Programmable Gate Array (‘FPGA’), a Programmable Logic Chip (‘PLC’), an Application Specific Integrated Circuit (‘ASIC’), a System-on-Chip (‘SOC’), or any computing device that includes discrete components such as a processing device, central processing unit, computer memory, or various adapters. Storage array controller 110A-D may include, for example, a data communications adapter configured to support communications via the SAN 158 or LAN 160. In some implementations, storage array controller 110A-D may be independently coupled to the LAN 160. In implementations, storage array controller 110A-D may include an I/O controller or the like that couples the storage array controller 110A-D for data communications, through a midplane (not shown), to a persistent storage resource 170A-B (also referred to as a “storage resource” herein). The persistent storage resource 170A-B may include any number of storage drives 171A-F (also referred to as “storage devices” herein) and any number of non-volatile Random Access Memory (‘NVRAM’) devices (not shown). In some implementations, the NVRAM devices of a persistent storage resource 170A-B may be configured to receive, from the storage array controller 110A-D, data to be stored in the storage drives 171A-F. In some examples, the data may originate from computing devices 164A-B. In some examples, writing data to the NVRAM device may be carried out more quickly than directly writing data to the storage drive 171A-F. In implementations, the storage array controller 110A-D may be configured to utilize the NVRAM devices as a quickly accessible buffer for data destined to be written to the storage drives 171A-F. Latency for write requests using NVRAM devices as a buffer may be improved relative to a system in which a storage array controller 110A-D writes data directly to the storage drives 171A-F. In some implementations, the NVRAM devices may be implemented with computer memory in the form of high bandwidth, low latency RAM. The NVRAM device is referred to as “non-volatile” because the NVRAM device may receive or include a unique power source that maintains the state of the RAM after main power loss to the NVRAM device. Such a power source may be a battery, one or more capacitors, or the like. In response to a power loss, the NVRAM device may be configured to write the contents of the RAM to a persistent storage, such as the storage drives 171A-F. In implementations, storage drive 171A-F may refer to any device configured to record data persistently, where “persistently” or “persistent” refers to a device's ability to maintain recorded data after loss of power. In some implementations, storage drive 171A-F may correspond to non-disk storage media. For example, the storage drive 171A-F may be one or more solid-state drives (‘SSDs’), flash memory based storage, any type of solid-state non-volatile memory, or any other type of non-mechanical storage device. In other implementations, storage drive 171A-F may include mechanical or spinning hard disks, such as hard-disk drives (‘HDD’). In some implementations, the storage array controllers 110A-D may be configured for offloading device management responsibilities from storage drive 171A-F in storage array 102A-B. For example, storage array controllers 110A-D may manage control information that may describe the state of one or more memory blocks in the storage drives 171A-F.
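The NVRAM-as-buffer write path described above can be illustrated with the following non-limiting Python sketch. The class and method names are assumptions, not the controller's actual interface; the point is only that writes are acknowledged once they land in battery-backed NVRAM and are destaged to the slower drives afterwards.

```python
# Toy model of buffered writes: fast acknowledgment from NVRAM, asynchronous
# destage to persistent drives, and a flush path for loss of main power.

import collections
import threading

class BufferedWritePath:
    def __init__(self, drives):
        self.nvram = collections.deque()   # stands in for battery-backed RAM
        self.drives = drives               # any object with a .write(block) method
        self.lock = threading.Lock()

    def write(self, block):
        with self.lock:
            self.nvram.append(block)       # fast path: ack after NVRAM landing
        return "acknowledged"

    def destage(self):
        # Background flush of buffered data to the slower persistent drives.
        with self.lock:
            pending, self.nvram = list(self.nvram), collections.deque()
        for block in pending:
            self.drives.write(block)

    def on_power_loss(self):
        # The NVRAM's reserve power source lets buffered contents reach
        # persistent storage before the state of the RAM is lost.
        self.destage()
```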
The control information may indicate, for example, that a particular memory block has failed and should no longer be written to, that a particular memory block contains boot code for a storage array controller 110A-D, the number of program-erase (‘P/E’) cycles that have been performed on a particular memory block, the age of data stored in a particular memory block, the type of data that is stored in a particular memory block, and so forth. In some implementations, the control information may be stored with an associated memory block as metadata. In other implementations, the control information for the storage drives 171A-F may be stored in one or more particular memory blocks of the storage drives 171A-F that are selected by the storage array controller 110A-D. The selected memory blocks may be tagged with an identifier indicating that the selected memory block contains control information. The identifier may be utilized by the storage array controllers 110A-D in conjunction with storage drives 171A-F to quickly identify the memory blocks that contain control information. For example, the storage controllers 110A-D may issue a command to locate memory blocks that contain control information. It may be noted that control information may be so large that parts of the control information may be stored in multiple locations, that the control information may be stored in multiple locations for purposes of redundancy, for example, or that the control information may otherwise be distributed across multiple memory blocks in the storage drive 171A-F. In implementations, storage array controllers 110A-D may offload device management responsibilities from storage drives 171A-F of storage array 102A-B by retrieving, from the storage drives 171A-F, control information describing the state of one or more memory blocks in the storage drives 171A-F. Retrieving the control information from the storage drives 171A-F may be carried out, for example, by the storage array controller 110A-D querying the storage drives 171A-F for the location of control information for a particular storage drive 171A-F. The storage drives 171A-F may be configured to execute instructions that enable the storage drive 171A-F to identify the location of the control information. The instructions may be executed by a controller (not shown) associated with or otherwise located on the storage drive 171A-F and may cause the storage drive 171A-F to scan a portion of each memory block to identify the memory blocks that store control information for the storage drives 171A-F. The storage drives 171A-F may respond by sending a response message to the storage array controller 110A-D that includes the location of control information for the storage drive 171A-F. Responsive to receiving the response message, storage array controllers 110A-D may issue a request to read data stored at the address associated with the location of control information for the storage drives 171A-F. In other implementations, the storage array controllers 110A-D may further offload device management responsibilities from storage drives 171A-F by performing, in response to receiving the control information, a storage drive management operation. A storage drive management operation may include, for example, an operation that is typically performed by the storage drive 171A-F (e.g., the controller (not shown) associated with a particular storage drive 171A-F).
A storage drive management operation may include, for example, ensuring that data is not written to failed memory blocks within the storage drive 171A-F, ensuring that data is written to memory blocks within the storage drive 171A-F in such a way that adequate wear leveling is achieved, and so forth. In implementations, storage array 102A-B may implement two or more storage array controllers 110A-D. For example, storage array 102A may include storage array controller 110A and storage array controller 110B. At a given instance, a single storage array controller 110A-D (e.g., storage array controller 110A) of a storage system 100 may be designated with primary status (also referred to as “primary controller” herein), and other storage array controllers 110A-D (e.g., storage array controller 110B) may be designated with secondary status (also referred to as “secondary controller” herein). The primary controller may have particular rights, such as permission to alter data in persistent storage resource 170A-B (e.g., writing data to persistent storage resource 170A-B). At least some of the rights of the primary controller may supersede the rights of the secondary controller. For instance, the secondary controller may not have permission to alter data in persistent storage resource 170A-B when the primary controller has the right. The status of storage array controllers 110A-D may change. For example, storage array controller 110A may be designated with secondary status, and storage array controller 110B may be designated with primary status. In some implementations, a primary controller, such as storage array controller 110A, may serve as the primary controller for one or more storage arrays 102A-B, and a second controller, such as storage array controller 110B, may serve as the secondary controller for the one or more storage arrays 102A-B. For example, storage array controller 110A may be the primary controller for storage array 102A and storage array 102B, and storage array controller 110B may be the secondary controller for storage arrays 102A and 102B. In some implementations, storage array controllers 110C and 110D (also referred to as “storage processing modules”) may have neither primary nor secondary status. Storage array controllers 110C and 110D, implemented as storage processing modules, may act as a communication interface between the primary and secondary controllers (e.g., storage array controllers 110A and 110B, respectively) and storage array 102B. For example, storage array controller 110A of storage array 102A may send a write request, via SAN 158, to storage array 102B. The write request may be received by both storage array controllers 110C and 110D of storage array 102B. Storage array controllers 110C and 110D facilitate the communication, e.g., send the write request to the appropriate storage drive 171A-F. It may be noted that in some implementations storage processing modules may be used to increase the number of storage drives controlled by the primary and secondary controllers. In implementations, storage array controllers 110A-D are communicatively coupled, via a midplane (not shown), to one or more storage drives 171A-F and to one or more NVRAM devices (not shown) that are included as part of a storage array 102A-B. The storage array controllers 110A-D may be coupled to the midplane via one or more data communication links and the midplane may be coupled to the storage drives 171A-F and the NVRAM devices via one or more data communications links.
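The primary/secondary rights model above admits a very small illustrative sketch; the class, method and status names are assumptions and not the controllers' actual interface.

```python
# Sketch: only the controller currently holding primary status may alter data
# in the persistent storage resource, and status can migrate between peers.

class ArrayController:
    def __init__(self, name, status="secondary"):
        self.name, self.status = name, status

    def write(self, storage, data):
        if self.status != "primary":
            raise PermissionError(
                f"{self.name}: secondary controllers may not alter data")
        storage.append(data)

def fail_over(old_primary, new_primary):
    # Statuses can change, e.g. 110A demoted while 110B is promoted.
    old_primary.status, new_primary.status = "secondary", "primary"
```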
The data communications links described herein are collectively illustrated by data communications links 108A-D and may include a Peripheral Component Interconnect Express (‘PCIe’) bus, for example. FIG. 1B illustrates an example system for data storage, in accordance with some implementations. Storage array controller 101 illustrated in FIG. 1B may be similar to the storage array controllers 110A-D described with respect to FIG. 1A. In one example, storage array controller 101 may be similar to storage array controller 110A or storage array controller 110B. Storage array controller 101 includes numerous elements for purposes of illustration rather than limitation. It may be noted that storage array controller 101 may include the same, more, or fewer elements configured in the same or different manner in other implementations. It may be noted that elements of FIG. 1A may be included below to help illustrate features of storage array controller 101. Storage array controller 101 may include one or more processing devices 104 and random access memory (‘RAM’) 111. Processing device 104 (or controller 101) represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device 104 (or controller 101) may be a complex instruction set computing (‘CISC’) microprocessor, reduced instruction set computing (‘RISC’) microprocessor, very long instruction word (‘VLIW’) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processing device 104 (or controller 101) may also be one or more special-purpose processing devices such as an ASIC, an FPGA, a digital signal processor (‘DSP’), a network processor, or the like. The processing device 104 may be connected to the RAM 111 via a data communications link 106, which may be embodied as a high speed memory bus such as a Double-Data Rate 4 (‘DDR4’) bus. Stored in RAM 111 is an operating system 112. In some implementations, instructions 113 are stored in RAM 111. Instructions 113 may include computer program instructions for performing operations in a direct-mapped flash storage system. In one embodiment, a direct-mapped flash storage system is one that addresses data blocks within flash drives directly and without an address translation performed by the storage controllers of the flash drives. In implementations, storage array controller 101 includes one or more host bus adapters 103A-C that are coupled to the processing device 104 via a data communications link 105A-C. In implementations, host bus adapters 103A-C may be computer hardware that connects a host system (e.g., the storage array controller) to other networks and storage arrays. In some examples, host bus adapters 103A-C may be a Fibre Channel adapter that enables the storage array controller 101 to connect to a SAN, an Ethernet adapter that enables the storage array controller 101 to connect to a LAN, or the like. Host bus adapters 103A-C may be coupled to the processing device 104 via a data communications link 105A-C such as, for example, a PCIe bus. In implementations, storage array controller 101 may include a host bus adapter 114 that is coupled to an expander 115. The expander 115 may be used to attach a host system to a larger number of storage drives. The expander 115 may, for example, be a SAS expander utilized to enable the host bus adapter 114 to attach to storage drives in an implementation where the host bus adapter 114 is embodied as a SAS controller.
In implementations, storage array controller 101 may include a switch 116 coupled to the processing device 104 via a data communications link 109. The switch 116 may be a computer hardware device that can create multiple endpoints out of a single endpoint, thereby enabling multiple devices to share a single endpoint. The switch 116 may, for example, be a PCIe switch that is coupled to a PCIe bus (e.g., data communications link 109) and presents multiple PCIe connection points to the midplane. In implementations, storage array controller 101 includes a data communications link 107 for coupling the storage array controller 101 to other storage array controllers. In some examples, data communications link 107 may be a QuickPath Interconnect (QPI) interconnect. A traditional storage system that uses traditional flash drives may implement a process across the flash drives that are part of the traditional storage system. For example, a higher level process of the storage system may initiate and control a process across the flash drives. However, a flash drive of the traditional storage system may include its own storage controller that also performs the process. Thus, for the traditional storage system, a higher level process (e.g., initiated by the storage system) and a lower level process (e.g., initiated by a storage controller of the storage system) may both be performed. To resolve various deficiencies of a traditional storage system, operations may be performed by higher level processes and not by the lower level processes. For example, the flash storage system may include flash drives that do not include storage controllers that provide the process. Thus, the operating system of the flash storage system itself may initiate and control the process. This may be accomplished by a direct-mapped flash storage system that addresses data blocks within the flash drives directly and without an address translation performed by the storage controllers of the flash drives. In implementations, storage drive 171A-F may be one or more zoned storage devices. In some implementations, the one or more zoned storage devices may be a shingled HDD. In implementations, the one or more storage devices may be a flash-based SSD. In a zoned storage device, a zoned namespace on the zoned storage device can be addressed by groups of blocks that are grouped and aligned by a natural size, forming a number of addressable zones. In implementations utilizing an SSD, the natural size may be based on the erase block size of the SSD. The mapping from a zone to an erase block (or to a shingled track in an HDD) may be arbitrary, dynamic, and hidden from view. The process of opening a zone may be an operation that allows a new zone to be dynamically mapped to underlying storage of the zoned storage device, and then allows data to be written through appending writes into the zone until the zone reaches capacity. The zone can be finished at any point, after which further data may not be written into the zone. When the data stored at the zone is no longer needed, the zone can be reset, which effectively deletes the zone's content from the zoned storage device, making the physical storage held by that zone available for the subsequent storage of data. Once a zone has been written and finished, the zoned storage device ensures that the data stored at the zone is not lost until the zone is reset.
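The zone lifecycle just described (open, append-only writes until capacity, finished, reset) can be rendered as a small, non-limiting state-machine sketch in Python; the class shape, the `max_open` cap (a limit on concurrently open zones, noted below), and all names are illustrative assumptions.

```python
# Sketch of a zoned device: zones are opened, written by appending only,
# finished (immutable), and reset (content discarded, space reusable).

class ZonedDevice:
    def __init__(self, zone_capacity, max_open):
        self.zone_capacity, self.max_open = zone_capacity, max_open
        self.zones = {}                      # zone_id -> {"state", "data"}

    def open_zone(self, zone_id):
        open_count = sum(z["state"] == "open" for z in self.zones.values())
        if open_count >= self.max_open:
            raise RuntimeError("open-zone limit reached")
        self.zones[zone_id] = {"state": "open", "data": []}

    def append(self, zone_id, block):
        zone = self.zones[zone_id]
        assert zone["state"] == "open", "writes are appends into open zones only"
        zone["data"].append(block)
        if len(zone["data"]) >= self.zone_capacity:
            zone["state"] = "finished"       # zone reached capacity

    def finish(self, zone_id):
        self.zones[zone_id]["state"] = "finished"   # may be finished at any point

    def reset(self, zone_id):
        # Content is deleted and the physical storage becomes reusable.
        self.zones[zone_id] = {"state": "empty", "data": []}
```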
In the time between writing the data to the zone and the resetting of the zone, the zone may be moved around between shingle tracks or erase blocks as part of maintenance operations within the zoned storage device, such as by copying data to keep the data refreshed or to handle memory cell aging in an SSD. In implementations utilizing an HDD, the resetting of the zone may allow the shingle tracks to be allocated to a new, opened zone that may be opened at some point in the future. In implementations utilizing an SSD, the resetting of the zone may cause the associated physical erase block(s) of the zone to be erased and subsequently reused for the storage of data. In some implementations, the zoned storage device may have a limit on the number of open zones at a point in time to reduce the amount of overhead dedicated to keeping zones open. The operating system of the flash storage system may identify and maintain a list of allocation units across multiple flash drives of the flash storage system. The allocation units may be entire erase blocks or multiple erase blocks. The operating system may maintain a map or address range that directly maps addresses to erase blocks of the flash drives of the flash storage system. Direct mapping to the erase blocks of the flash drives may be used to rewrite data and erase data. For example, the operations may be performed on one or more allocation units that include a first data and a second data, where the first data is to be retained and the second data is no longer being used by the flash storage system. The operating system may initiate the process to write the first data to new locations within other allocation units, erase the second data, and mark the allocation units as being available for use for subsequent data. Thus, the process may only be performed by the higher level operating system of the flash storage system without an additional lower level process being performed by controllers of the flash drives. Advantages of the process being performed only by the operating system of the flash storage system include increased reliability of the flash drives of the flash storage system, as unnecessary or redundant write operations are not being performed during the process. One possible point of novelty here is the concept of initiating and controlling the process at the operating system of the flash storage system. In addition, the process can be controlled by the operating system across multiple flash drives. This is in contrast to the process being performed by a storage controller of a flash drive. A storage system can consist of two storage array controllers that share a set of drives for failover purposes, or it could consist of a single storage array controller that provides a storage service that utilizes multiple drives, or it could consist of a distributed network of storage array controllers, each with some number of drives or some amount of Flash storage, where the storage array controllers in the network collaborate to provide a complete storage service and collaborate on various aspects of a storage service including storage allocation and garbage collection. FIG. 1C illustrates a third example system 117 for data storage in accordance with some implementations. System 117 (also referred to as “storage system” herein) includes numerous elements for purposes of illustration rather than limitation. It may be noted that system 117 may include the same, more, or fewer elements configured in the same or different manner in other implementations.
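The operating-system-driven rewrite pass over allocation units described above (retain the first data, erase the second data, mark units available) admits a compact, non-limiting sketch. All names are assumptions, and a real system would respect per-unit capacity rather than packing all survivors into one unit as this toy does.

```python
# Sketch of the OS-level rewrite over allocation units: live ("first") data is
# copied into another allocation unit, stale ("second") data is erased with
# its unit, and freed units are marked available -- driven by the operating
# system's direct map rather than a drive-level controller.

def reclaim(allocation_units, is_live):
    """allocation_units: dict unit_id -> list of blocks; is_live: predicate."""
    free_units, survivors = [], []
    for unit_id, blocks in list(allocation_units.items()):
        survivors.extend(b for b in blocks if is_live(b))  # first data: retain
        allocation_units[unit_id] = []                     # second data: erased
        free_units.append(unit_id)                         # unit now available
    if survivors and free_units:
        target = free_units.pop()
        allocation_units[target] = survivors               # rewritten elsewhere
    return free_units
```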
In one embodiment, system 117 includes a dual Peripheral Component Interconnect (‘PCI’) flash storage device 118 with separately addressable fast write storage. System 117 may include a storage controller 119. In one embodiment, storage controller 119A-D may be a CPU, ASIC, FPGA, or any other circuitry that may implement control structures necessary according to the present disclosure. In one embodiment, system 117 includes flash memory devices (e.g., including flash memory devices 120a-n), operatively coupled to various channels of the storage device controller 119. Flash memory devices 120a-n may be presented to the controller 119A-D as an addressable collection of Flash pages, erase blocks, and/or control elements sufficient to allow the storage device controller 119A-D to program and retrieve various aspects of the Flash. In one embodiment, storage device controller 119A-D may perform operations on flash memory devices 120a-n including storing and retrieving data content of pages, arranging and erasing any blocks, tracking statistics related to the use and reuse of Flash memory pages, erase blocks, and cells, tracking and predicting error codes and faults within the Flash memory, controlling voltage levels associated with programming and retrieving contents of Flash cells, etc. In one embodiment, system 117 may include RAM 121 to store separately addressable fast-write data. In one embodiment, RAM 121 may be one or more separate discrete devices. In another embodiment, RAM 121 may be integrated into storage device controller 119A-D or multiple storage device controllers. The RAM 121 may be utilized for other purposes as well, such as temporary program memory for a processing device (e.g., a CPU) in the storage device controller 119. In one embodiment, system 117 may include a stored energy device 122, such as a rechargeable battery or a capacitor. Stored energy device 122 may store energy sufficient to power the storage device controller 119, some amount of the RAM (e.g., RAM 121), and some amount of Flash memory (e.g., Flash memory 120a-120n) for sufficient time to write the contents of RAM to Flash memory. In one embodiment, storage device controller 119A-D may write the contents of RAM to Flash Memory if the storage device controller detects loss of external power. In one embodiment, system 117 includes two data communications links 123a, 123b. In one embodiment, data communications links 123a, 123b may be PCI interfaces. In another embodiment, data communications links 123a, 123b may be based on other communications standards (e.g., HyperTransport, InfiniBand, etc.). Data communications links 123a, 123b may be based on non-volatile memory express (‘NVMe’) or NVMe over fabrics (‘NVMf’) specifications that allow external connection to the storage device controller 119A-D from other components in the storage system 117. It should be noted that data communications links may be interchangeably referred to herein as PCI buses for convenience. System 117 may also include an external power source (not shown), which may be provided over one or both data communications links 123a, 123b, or which may be provided separately. An alternative embodiment includes a separate Flash memory (not shown) dedicated for use in storing the content of RAM 121. The storage device controller 119A-D may present a logical device over a PCI bus which may include an addressable fast-write logical device, or a distinct part of the logical address space of the storage device 118, which may be presented as PCI memory or as persistent storage.
In one embodiment, operations to store into the device are directed into the RAM 121. On power failure, the storage device controller 119A-D may write stored content associated with the addressable fast-write logical storage to Flash memory (e.g., Flash memory 120a-n) for long-term persistent storage. In one embodiment, the logical device may include some presentation of some or all of the content of the Flash memory devices 120a-n, where that presentation allows a storage system including a storage device 118 (e.g., storage system 117) to directly address Flash memory pages and directly reprogram erase blocks from storage system components that are external to the storage device through the PCI bus. The presentation may also allow one or more of the external components to control and retrieve other aspects of the Flash memory including some or all of: tracking statistics related to use and reuse of Flash memory pages, erase blocks, and cells across all the Flash memory devices; tracking and predicting error codes and faults within and across the Flash memory devices; controlling voltage levels associated with programming and retrieving contents of Flash cells; etc. In one embodiment, the stored energy device 122 may be sufficient to ensure completion of in-progress operations to the Flash memory devices 120a-120n; the stored energy device 122 may power the storage device controller 119A-D and associated Flash memory devices (e.g., 120a-n) for those operations, as well as for the storing of fast-write RAM to Flash memory. Stored energy device 122 may be used to store accumulated statistics and other parameters kept and tracked by the Flash memory devices 120a-n and/or the storage device controller 119. Separate capacitors or stored energy devices (such as smaller capacitors near or embedded within the Flash memory devices themselves) may be used for some or all of the operations described herein. Various schemes may be used to track and optimize the life span of the stored energy component, such as adjusting voltage levels over time, partially discharging the stored energy device 122 to measure corresponding discharge characteristics, etc. If the available energy decreases over time, the effective available capacity of the addressable fast-write storage may be decreased to ensure that it can be written safely based on the currently available stored energy. FIG. 1D illustrates a third example system 124 for data storage in accordance with some implementations. In one embodiment, system 124 includes storage controllers 125a, 125b. In one embodiment, storage controllers 125a, 125b are operatively coupled to Dual PCI storage devices 119a, 119b and 119c, 119d, respectively. Storage controllers 125a, 125b may be operatively coupled (e.g., via a storage network 130) to some number of host computers 127a-n. In one embodiment, two storage controllers (e.g., 125a and 125b) provide storage services, such as a SCSI block storage array, a file server, an object server, a database or data analytics service, etc. The storage controllers 125a, 125b may provide services through some number of network interfaces (e.g., 126a-d) to host computers 127a-n outside of the storage system 124. Storage controllers 125a, 125b may provide integrated services or an application entirely within the storage system 124, forming a converged storage and compute system.
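The energy-derating idea above (shrink the advertised fast-write capacity as the stored energy device ages so buffered contents can always be flushed safely) can be illustrated with the following non-limiting sketch; the constants, the linear energy-per-MiB model, and the reserve margin are all assumptions.

```python
# Sketch: derive a safe addressable fast-write capacity from the energy the
# stored energy device can still deliver, keeping a safety margin.

def safe_fast_write_capacity(available_energy_j,
                             energy_per_mib_j=0.5,
                             reserve_fraction=0.2):
    usable = available_energy_j * (1.0 - reserve_fraction)  # keep a margin
    return max(0.0, usable / energy_per_mib_j)              # MiB flushable

# e.g. a partially discharged capacitor bank measured at 40 J:
# safe_fast_write_capacity(40.0) -> 64.0 MiB of addressable fast-write storage
```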
The storage controllers 125a, 125b may utilize the fast write memory within or across storage devices 119a-d to journal in-progress operations to ensure the operations are not lost on a power failure, storage controller removal, storage controller or storage system shutdown, or some fault of one or more software or hardware components within the storage system 124. In one embodiment, controllers 125a, 125b operate as PCI masters to one or the other PCI buses 128a, 128b. In another embodiment, 128a and 128b may be based on other communications standards (e.g., HyperTransport, InfiniBand, etc.). Other storage system embodiments may operate storage controllers 125a, 125b as multi-masters for both PCI buses 128a, 128b. Alternately, a PCI/NVMe/NVMf switching infrastructure or fabric may connect multiple storage controllers. Some storage system embodiments may allow storage devices to communicate with each other directly rather than communicating only with storage controllers. In one embodiment, a storage device controller 119a may be operable under direction from a storage controller 125a to synthesize and transfer data to be stored into Flash memory devices from data that has been stored in RAM (e.g., RAM 121 of FIG. 1C). For example, a recalculated version of RAM content may be transferred after a storage controller has determined that an operation has fully committed across the storage system, or when fast-write memory on the device has reached a certain used capacity, or after a certain amount of time, to improve safety of the data or to release addressable fast-write capacity for reuse. This mechanism may be used, for example, to avoid a second transfer over a bus (e.g., 128a, 128b) from the storage controllers 125a, 125b. In one embodiment, a recalculation may include compressing data, attaching indexing or other metadata, combining multiple data segments together, performing erasure code calculations, etc. In one embodiment, under direction from a storage controller 125a, 125b, a storage device controller 119a, 119b may be operable to calculate and transfer data to other storage devices from data stored in RAM (e.g., RAM 121 of FIG. 1C) without involvement of the storage controllers 125a, 125b. This operation may be used to mirror data stored in one controller 125a to another controller 125b, or it could be used to offload compression, data aggregation, and/or erasure coding calculations and transfers to storage devices to reduce load on the storage controllers or on the storage controller interfaces 129a, 129b to the PCI buses 128a, 128b. A storage device controller 119A-D may include mechanisms for implementing high availability primitives for use by other parts of a storage system external to the Dual PCI storage device 118. For example, reservation or exclusion primitives may be provided so that, in a storage system with two storage controllers providing a highly available storage service, one storage controller may prevent the other storage controller from accessing or continuing to access the storage device. This could be used, for example, in cases where one controller detects that the other controller is not functioning properly or where the interconnect between the two storage controllers may itself not be functioning properly.
In one embodiment, a storage system for use with Dual PCI direct mapped storage devices with separately addressable fast write storage includes systems that manage erase blocks or groups of erase blocks as allocation units for storing data on behalf of the storage service, or for storing metadata (e.g., indexes, logs, etc.) associated with the storage service, or for proper management of the storage system itself. Flash pages, which may be a few kilobytes in size, may be written as data arrives or as the storage system is to persist data for long intervals of time (e.g., above a defined threshold of time). To commit data more quickly, or to reduce the number of writes to the Flash memory devices, the storage controllers may first write data into the separately addressable fast write storage on one or more storage devices. In one embodiment, the storage controllers 125a, 125b may initiate the use of erase blocks within and across storage devices (e.g., 118) in accordance with an age and expected remaining lifespan of the storage devices, or based on other statistics. The storage controllers 125a, 125b may initiate garbage collection and data migration between storage devices in accordance with pages that are no longer needed, as well as to manage Flash page and erase block lifespans and to manage overall system performance. In one embodiment, the storage system 124 may utilize mirroring and/or erasure coding schemes as part of storing data into addressable fast write storage and/or as part of writing data into allocation units associated with erase blocks. Erasure codes may be used across storage devices, as well as within erase blocks or allocation units, or within and across Flash memory devices on a single storage device, to provide redundancy against single or multiple storage device failures or to protect against internal corruptions of Flash memory pages resulting from Flash memory operations or from degradation of Flash memory cells. Mirroring and erasure coding at various levels may be used to recover from multiple types of failures that occur separately or in combination. The embodiments depicted with reference to FIGS. 2A-G illustrate a storage cluster that stores user data, such as user data originating from one or more user or client systems or other sources external to the storage cluster. The storage cluster distributes user data across storage nodes housed within a chassis, or across multiple chassis, using erasure coding and redundant copies of metadata. Erasure coding refers to a method of data protection or reconstruction in which data is stored across a set of different locations, such as disks, storage nodes or geographic locations. Flash memory is one type of solid-state memory that may be integrated with the embodiments, although the embodiments may be extended to other types of solid-state memory or other storage media, including non-solid state memory. Control of storage locations and workloads is distributed across the storage locations in a clustered peer-to-peer system. Tasks such as mediating communications between the various storage nodes, detecting when a storage node has become unavailable, and balancing I/Os (inputs and outputs) across the various storage nodes, are all handled on a distributed basis. Data is laid out or distributed across multiple storage nodes in data fragments or stripes that support data recovery in some embodiments. Ownership of data can be reassigned within a cluster, independent of input and output patterns.
This architecture, described in more detail below, allows a storage node in the cluster to fail with the system remaining operational, since the data can be reconstructed from other storage nodes and thus remain available for input and output operations. In various embodiments, a storage node may be referred to as a cluster node, a blade, or a server. The storage cluster may be contained within a chassis, i.e., an enclosure housing one or more storage nodes. A mechanism to provide power to each storage node, such as a power distribution bus, and a communication mechanism, such as a communication bus that enables communication between the storage nodes, are included within the chassis. The storage cluster can run as an independent system in one location according to some embodiments. In one embodiment, a chassis contains at least two instances of both the power distribution and the communication bus, which may be enabled or disabled independently. The internal communication bus may be an Ethernet bus; however, other technologies such as PCIe, InfiniBand, and others are equally suitable. The chassis provides a port for an external communication bus for enabling communication between multiple chassis, directly or through a switch, and with client systems. The external communication may use a technology such as Ethernet, InfiniBand, Fibre Channel, etc. In some embodiments, the external communication bus uses different communication bus technologies for inter-chassis and client communication. If a switch is deployed within or between chassis, the switch may act as a translation between multiple protocols or technologies. When multiple chassis are connected to define a storage cluster, the storage cluster may be accessed by a client using either proprietary interfaces or standard interfaces such as network file system (‘NFS’), common internet file system (‘CIFS’), small computer system interface (‘SCSI’) or hypertext transfer protocol (‘HTTP’). Translation from the client protocol may occur at the switch, the chassis external communication bus, or within each storage node. In some embodiments, multiple chassis may be coupled or connected to each other through an aggregator switch. A portion and/or all of the coupled or connected chassis may be designated as a storage cluster. As discussed above, each chassis can have multiple blades, each blade has a media access control (‘MAC’) address, but the storage cluster is presented to an external network as having a single cluster IP address and a single MAC address in some embodiments. Each storage node may be one or more storage servers, and each storage server is connected to one or more non-volatile solid state memory units, which may be referred to as storage units or storage devices. One embodiment includes a single storage server in each storage node and between one and eight non-volatile solid state memory units; however, this one example is not meant to be limiting. The storage server may include a processor, DRAM and interfaces for the internal communication bus and power distribution for each of the power buses. Inside the storage node, the interfaces and storage unit share a communication bus, e.g., PCI Express, in some embodiments. The non-volatile solid state memory units may directly access the internal communication bus interface through a storage node communication bus, or request the storage node to access the bus interface.
The non-volatile solid state memory unit contains an embedded CPU, solid state storage controller, and a quantity of solid state mass storage, e.g., between 2 and 32 terabytes (‘TB’) in some embodiments. An embedded volatile storage medium, such as DRAM, and an energy reserve apparatus are included in the non-volatile solid state memory unit. In some embodiments, the energy reserve apparatus is a capacitor, super-capacitor, or battery that enables transferring a subset of DRAM contents to a stable storage medium in the case of power loss. In some embodiments, the non-volatile solid state memory unit is constructed with a storage class memory, such as phase change or magnetoresistive random access memory (‘MRAM’) that substitutes for DRAM and enables a reduced power hold-up apparatus. One of many features of the storage nodes and non-volatile solid state storage is the ability to proactively rebuild data in a storage cluster. The storage nodes and non-volatile solid state storage can determine when a storage node or non-volatile solid state storage in the storage cluster is unreachable, independent of whether there is an attempt to read data involving that storage node or non-volatile solid state storage. The storage nodes and non-volatile solid state storage then cooperate to recover and rebuild the data in at least partially new locations. This constitutes a proactive rebuild, in that the system rebuilds data without waiting until the data is needed for a read access initiated from a client system employing the storage cluster. These and further details of the storage memory and operation thereof are discussed below. FIG.2Ais a perspective view of a storage cluster161, with multiple storage nodes150and internal solid-state memory coupled to each storage node to provide network attached storage or storage area network, in accordance with some embodiments. A network attached storage, storage area network, or a storage cluster, or other storage memory, could include one or more storage clusters161, each having one or more storage nodes150, in a flexible and reconfigurable arrangement of both the physical components and the amount of storage memory provided thereby. The storage cluster161is designed to fit in a rack, and one or more racks can be set up and populated as desired for the storage memory. The storage cluster161has a chassis138having multiple slots142. It should be appreciated that chassis138may be referred to as a housing, enclosure, or rack unit. In one embodiment, the chassis138has fourteen slots142, although other numbers of slots are readily devised. For example, some embodiments have four slots, eight slots, sixteen slots, thirty-two slots, or other suitable number of slots. Each slot142can accommodate one storage node150in some embodiments. Chassis138includes flaps148that can be utilized to mount the chassis138on a rack. Fans144provide air circulation for cooling of the storage nodes150and components thereof, although other cooling components could be used, or an embodiment could be devised without cooling components. A switch fabric146couples storage nodes150within chassis138together and to a network for communication to the memory. In the embodiment depicted herein, the slots142to the left of the switch fabric146and fans144are shown occupied by storage nodes150, while the slots142to the right of the switch fabric146and fans144are empty and available for insertion of storage nodes150for illustrative purposes.
This configuration is one example, and one or more storage nodes150could occupy the slots142in various further arrangements. The storage node arrangements need not be sequential or adjacent in some embodiments. Storage nodes150are hot pluggable, meaning that a storage node150can be inserted into a slot142in the chassis138, or removed from a slot142, without stopping or powering down the system. Upon insertion or removal of storage node150from slot142, the system automatically reconfigures in order to recognize and adapt to the change. Reconfiguration, in some embodiments, includes restoring redundancy and/or rebalancing data or load. Each storage node150can have multiple components. In the embodiment shown here, the storage node150includes a printed circuit board159populated by a CPU156, i.e., processor, a memory154coupled to the CPU156, and a non-volatile solid state storage152coupled to the CPU156, although other mountings and/or components could be used in further embodiments. The memory154has instructions which are executed by the CPU156and/or data operated on by the CPU156. As further explained below, the non-volatile solid state storage152includes flash or, in further embodiments, other types of solid-state memory. Referring toFIG.2A, storage cluster161is scalable, meaning that storage capacity with non-uniform storage sizes is readily added, as described above. One or more storage nodes150can be plugged into or removed from each chassis and the storage cluster self-configures in some embodiments. Plug-in storage nodes150, whether installed in a chassis as delivered or later added, can have different sizes. For example, in one embodiment a storage node150can have any multiple of 4 TB, e.g., 8 TB, 12 TB, 16 TB, 32 TB, etc. In further embodiments, a storage node150could have any multiple of other storage amounts or capacities. Storage capacity of each storage node150is broadcast, and influences decisions of how to stripe the data. For maximum storage efficiency, an embodiment can self-configure as wide as possible in the stripe, subject to a predetermined requirement of continued operation with loss of up to one, or up to two, non-volatile solid state storage units152or storage nodes150within the chassis. FIG.2Bis a block diagram showing a communications interconnect173and power distribution bus172coupling multiple storage nodes150. Referring back toFIG.2A, the communications interconnect173can be included in or implemented with the switch fabric146in some embodiments. Where multiple storage clusters161occupy a rack, the communications interconnect173can be included in or implemented with a top of rack switch, in some embodiments. As illustrated inFIG.2B, storage cluster161is enclosed within a single chassis138. External port176is coupled to storage nodes150through communications interconnect173, while external port174is coupled directly to a storage node. External power port178is coupled to power distribution bus172. Storage nodes150may include varying amounts and differing capacities of non-volatile solid state storage152as described with reference toFIG.2A. In addition, one or more storage nodes150may be a compute only storage node as illustrated inFIG.2B. Authorities168are implemented on the non-volatile solid state storages152, for example as lists or other data structures stored in memory. 
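As a brief illustrative aside, the striping decision described above, in which the system configures as wide a stripe as possible subject to surviving the loss of one or two units or nodes, might be sketched in Python as follows. The function name and the sixteen-shard ceiling are assumptions made for this example only.

    def stripe_shape(num_units: int, tolerated_failures: int, max_width: int = 16):
        """Widest (data_shards, parity_shards) split that still tolerates the
        configured number of failed storage units or nodes."""
        width = min(num_units, max_width)
        if width <= tolerated_failures:
            raise ValueError("not enough units to tolerate the requested failures")
        return width - tolerated_failures, tolerated_failures

    # Ten units, required to keep operating with up to two lost:
    assert stripe_shape(10, 2) == (8, 2)  # eight data shards plus two parity shards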
In some embodiments the authorities are stored within the non-volatile solid state storage152and supported by software executing on a controller or other processor of the non-volatile solid state storage152. In a further embodiment, authorities168are implemented on the storage nodes150, for example as lists or other data structures stored in the memory154and supported by software executing on the CPU156of the storage node150. Authorities168control how and where data is stored in the non-volatile solid state storages152in some embodiments. This control assists in determining which type of erasure coding scheme is applied to the data, and which storage nodes150have which portions of the data. Each authority168may be assigned to a non-volatile solid state storage152. Each authority may control a range of inode numbers, segment numbers, or other data identifiers which are assigned to data by a file system, by the storage nodes150, or by the non-volatile solid state storage152, in various embodiments. Every piece of data, and every piece of metadata, has redundancy in the system in some embodiments. In addition, every piece of data and every piece of metadata has an owner, which may be referred to as an authority. If that authority is unreachable, for example through failure of a storage node, there is a plan of succession for how to find that data or that metadata. In various embodiments, there are redundant copies of authorities168. Authorities168have a relationship to storage nodes150and non-volatile solid state storage152in some embodiments. Each authority168, covering a range of data segment numbers or other identifiers of the data, may be assigned to a specific non-volatile solid state storage152. In some embodiments the authorities168for all of such ranges are distributed over the non-volatile solid state storages152of a storage cluster. Each storage node150has a network port that provides access to the non-volatile solid state storage(s)152of that storage node150. Data can be stored in a segment, which is associated with a segment number and that segment number is an indirection for a configuration of a RAID (redundant array of independent disks) stripe in some embodiments. The assignment and use of the authorities168thus establishes an indirection to data. Indirection may be referred to as the ability to reference data indirectly, in this case via an authority168, in accordance with some embodiments. A segment identifies a set of non-volatile solid state storage152and a local identifier into the set of non-volatile solid state storage152that may contain data. In some embodiments, the local identifier is an offset into the device and may be reused sequentially by multiple segments. In other embodiments the local identifier is unique for a specific segment and never reused. The offsets in the non-volatile solid state storage152are applied to locating data for writing to or reading from the non-volatile solid state storage152(in the form of a RAID stripe). Data is striped across multiple units of non-volatile solid state storage152, which may include or be different from the non-volatile solid state storage152having the authority168for a particular data segment. If there is a change in where a particular segment of data is located, e.g., during a data move or a data reconstruction, the authority168for that data segment should be consulted, at that non-volatile solid state storage152or storage node150having that authority168. 
In order to locate a particular piece of data, embodiments calculate a hash value for a data segment or apply an inode number or a data segment number. The output of this operation points to a non-volatile solid state storage152having the authority168for that particular piece of data. In some embodiments there are two stages to this operation. The first stage maps an entity identifier (ID), e.g., a segment number, inode number, or directory number to an authority identifier. This mapping may include a calculation such as a hash or a bit mask. The second stage is mapping the authority identifier to a particular non-volatile solid state storage152, which may be done through an explicit mapping. The operation is repeatable, so that when the calculation is performed, the result of the calculation repeatably and reliably points to a particular non-volatile solid state storage152having that authority168. The operation may include the set of reachable storage nodes as input. If the set of reachable non-volatile solid state storage units changes, the optimal set changes. In some embodiments, the persisted value is the current assignment (which is always true) and the calculated value is the target assignment the cluster will attempt to reconfigure towards. This calculation may be used to determine the optimal non-volatile solid state storage152for an authority in the presence of a set of non-volatile solid state storage152that are reachable and constitute the same cluster. The calculation also determines an ordered set of peer non-volatile solid state storage152that will also record the authority to non-volatile solid state storage mapping so that the authority may be determined even if the assigned non-volatile solid state storage is unreachable. A duplicate or substitute authority168may be consulted if a specific authority168is unavailable in some embodiments. With reference toFIGS.2A and2B, two of the many tasks of the CPU156on a storage node150are to break up write data and reassemble read data. When the system has determined that data is to be written, the authority168for that data is located as above. When the segment ID for data is already determined, the request to write is forwarded to the non-volatile solid state storage152currently determined to be the host of the authority168determined from the segment. The host CPU156of the storage node150, on which the non-volatile solid state storage152and corresponding authority168reside, then breaks up or shards the data and transmits the data out to various non-volatile solid state storage152. The transmitted data is written as a data stripe in accordance with an erasure coding scheme. In some embodiments, data is requested to be pulled, and in other embodiments, data is pushed. In reverse, when data is read, the authority168for the segment ID containing the data is located as described above. The host CPU156of the storage node150on which the non-volatile solid state storage152and corresponding authority168reside requests the data from the non-volatile solid state storage and corresponding storage nodes pointed to by the authority. In some embodiments the data is read from flash storage as a data stripe. The host CPU156of storage node150then reassembles the read data, correcting any errors (if present) according to the appropriate erasure coding scheme, and forwards the reassembled data to the network. In further embodiments, some or all of these tasks can be handled in the non-volatile solid state storage152.
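As an illustration only, the two-stage lookup described above might be sketched in Python as follows; the names NUM_AUTHORITIES, authority_for, and owners_for are hypothetical, and a deterministic ranking of the reachable units stands in here for the explicit second-stage mapping the passage mentions. Because the ranking depends only on the authority identifier and the reachable set, every node arrives at the same target and the same ordered set of peers.

    import hashlib

    NUM_AUTHORITIES = 1 << 10  # a fixed space of authority identifiers (assumed)

    def authority_for(entity_id: int) -> int:
        # Stage one: hash the entity identifier and apply a bit mask.
        digest = hashlib.sha256(str(entity_id).encode()).digest()
        return int.from_bytes(digest[:8], "big") & (NUM_AUTHORITIES - 1)

    def owners_for(authority_id: int, reachable_units: list) -> list:
        # Stage two: rank reachable units deterministically; the head of the
        # list is the target assignment, and the tail is the ordered set of
        # peers that also record the authority-to-storage mapping.
        def rank(unit: str) -> int:
            h = hashlib.sha256(f"{authority_id}:{unit}".encode()).digest()
            return int.from_bytes(h[:8], "big")
        return sorted(reachable_units, key=rank, reverse=True)

    units = ["nvss-0", "nvss-1", "nvss-2", "nvss-3"]
    owner, *peers = owners_for(authority_for(42), units)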
In some embodiments, the segment host requests the data be sent to storage node150by requesting pages from storage and then sending the data to the storage node making the original request. In embodiments, authorities168operate to determine how operations will proceed against particular logical elements. Each of the logical elements may be operated on through a particular authority across a plurality of storage controllers of a storage system. The authorities168may communicate with the plurality of storage controllers so that the plurality of storage controllers collectively perform operations against those particular logical elements. In embodiments, logical elements could be, for example, files, directories, object buckets, individual objects, delineated parts of files or objects, other forms of key-value pair databases, or tables. In embodiments, performing an operation can involve, for example, ensuring consistency, structural integrity, and/or recoverability with other operations against the same logical element, reading metadata and data associated with that logical element, determining what data should be written durably into the storage system to persist any changes for the operation, or where metadata and data can be determined to be stored across modular storage devices attached to a plurality of the storage controllers in the storage system. In some embodiments the operations are token based transactions to efficiently communicate within a distributed system. Each transaction may be accompanied by or associated with a token, which gives permission to execute the transaction. The authorities168are able to maintain a pre-transaction state of the system until completion of the operation in some embodiments. The token based communication may be accomplished without a global lock across the system, and also enables restart of an operation in case of a disruption or other failure. In some systems, for example in UNIX-style file systems, data is handled with an index node or inode, which specifies a data structure that represents an object in a file system. The object could be a file or a directory, for example. Metadata may accompany the object, as attributes such as permission data and a creation timestamp, among other attributes. A segment number could be assigned to all or a portion of such an object in a file system. In other systems, data segments are handled with a segment number assigned elsewhere. For purposes of discussion, the unit of distribution is an entity, and an entity can be a file, a directory or a segment. That is, entities are units of data or metadata stored by a storage system. Entities are grouped into sets called authorities. Each authority has an authority owner, which is a storage node that has the exclusive right to update the entities in the authority. In other words, a storage node contains the authority, and the authority, in turn, contains entities. A segment is a logical container of data in accordance with some embodiments. A segment is an address space between medium address space and physical flash locations; data segment numbers are in this address space. Segments may also contain metadata, which enable data redundancy to be restored (rewritten to different flash locations or devices) without the involvement of higher level software. In one embodiment, an internal format of a segment contains client data and medium mappings to determine the position of that data.
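By way of a toy sketch only, the token based transactions described above, in which a pre-transaction state is retained until completion so that an operation can be aborted or restarted without a global lock, might look as follows in Python. The Authority class and its methods are hypothetical names introduced for this illustration.

    import uuid

    class Authority:
        def __init__(self):
            self.state = {}    # committed state of the logical elements
            self.pending = {}  # token -> (key, prior_value)

        def begin(self, key, new_value):
            token = uuid.uuid4().hex  # the token gives permission to execute
            self.pending[token] = (key, self.state.get(key))
            self.state[key] = new_value
            return token

        def commit(self, token):
            self.pending.pop(token)  # drop the saved pre-transaction state

        def abort(self, token):
            key, prior = self.pending.pop(token)
            if prior is None:
                self.state.pop(key, None)
            else:
                self.state[key] = prior  # restore the pre-transaction state

A disrupted operation can simply be aborted using its token and reissued, which is the restart property the passage describes.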
Each data segment is protected, e.g., from memory and other failures, by breaking the segment into a number of data and parity shards, where applicable. The data and parity shards are distributed, i.e., striped, across non-volatile solid state storage152coupled to the host CPUs156(seeFIGS.2E and2G) in accordance with an erasure coding scheme. Usage of the term segments refers to the container and its place in the address space of segments in some embodiments. Usage of the term stripe refers to the same set of shards as a segment and includes how the shards are distributed along with redundancy or parity information in accordance with some embodiments. A series of address-space transformations takes place across an entire storage system. At the top are the directory entries (file names) which link to an inode. Inodes point into medium address space, where data is logically stored. Medium addresses may be mapped through a series of indirect mediums to spread the load of large files, or implement data services like deduplication or snapshots. Segment addresses are then translated into physical flash locations. Physical flash locations have an address range bounded by the amount of flash in the system in accordance with some embodiments. Medium addresses and segment addresses are logical containers, and in some embodiments use a 128 bit or larger identifier so as to be practically infinite, with a likelihood of reuse calculated as longer than the expected life of the system. Addresses from logical containers are allocated in a hierarchical fashion in some embodiments. Initially, each non-volatile solid state storage unit152may be assigned a range of address space. Within this assigned range, the non-volatile solid state storage152is able to allocate addresses without synchronization with other non-volatile solid state storage152. Data and metadata are stored by a set of underlying storage layouts that are optimized for varying workload patterns and storage devices. These layouts incorporate multiple redundancy schemes, compression formats and index algorithms. Some of these layouts store information about authorities and authority masters, while others store file metadata and file data. The redundancy schemes include error correction codes that tolerate corrupted bits within a single storage device (such as a NAND flash chip), erasure codes that tolerate the failure of multiple storage nodes, and replication schemes that tolerate data center or regional failures. In some embodiments, low density parity check (‘LDPC’) code is used within a single storage unit. Reed-Solomon encoding is used within a storage cluster, and mirroring is used within a storage grid in some embodiments. Metadata may be stored using an ordered log structured index (such as a Log Structured Merge Tree), and large data may not be stored in a log structured layout. In order to maintain consistency across multiple copies of an entity, the storage nodes agree implicitly on two things through calculations: (1) the authority that contains the entity, and (2) the storage node that contains the authority. The assignment of entities to authorities can be done by pseudo randomly assigning entities to authorities, by splitting entities into ranges based upon an externally produced key, or by placing a single entity into each authority.
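Before turning to those pseudorandom assignment schemes, the sharding described at the start of this passage can be illustrated with a deliberately simplified Python sketch. A single XOR parity shard stands in here for the Reed-Solomon style erasure codes the embodiments describe; it tolerates the loss of any one shard, whereas the schemes above tolerate more.

    def shard_segment(segment: bytes, data_shards: int) -> list:
        size = -(-len(segment) // data_shards)  # ceiling division
        shards = [segment[i * size:(i + 1) * size].ljust(size, b"\0")
                  for i in range(data_shards)]
        parity = bytearray(size)
        for shard in shards:
            for i, b in enumerate(shard):
                parity[i] ^= b
        return shards + [bytes(parity)]

    def rebuild_missing(shards: list) -> bytes:
        """Recover the one shard given as None by XOR-ing all of the others."""
        size = len(next(s for s in shards if s is not None))
        out = bytearray(size)
        for s in shards:
            if s is not None:
                for i, b in enumerate(s):
                    out[i] ^= b
        return bytes(out)

    pieces = shard_segment(b"example segment payload", data_shards=4)
    lost = pieces[1]
    pieces[1] = None
    assert rebuild_missing(pieces) == lost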
Examples of pseudorandom schemes are linear hashing and the Replication Under Scalable Hashing (‘RUSH’) family of hashes, including Controlled Replication Under Scalable Hashing (‘CRUSH’). In some embodiments, pseudo-random assignment is utilized only for assigning authorities to nodes because the set of nodes can change. The set of authorities cannot change, so any subjective function may be applied in these embodiments. Some placement schemes automatically place authorities on storage nodes, while other placement schemes rely on an explicit mapping of authorities to storage nodes. In some embodiments, a pseudorandom scheme is utilized to map from each authority to a set of candidate authority owners. A pseudorandom data distribution function related to CRUSH may assign authorities to storage nodes and create a list of where the authorities are assigned. Each storage node has a copy of the pseudorandom data distribution function, and can arrive at the same calculation for distributing, and later finding or locating an authority. Each of the pseudorandom schemes requires the reachable set of storage nodes as input in some embodiments in order to arrive at the same target nodes. Once an entity has been placed in an authority, the entity may be stored on physical devices so that no expected failure will lead to unexpected data loss. In some embodiments, rebalancing algorithms attempt to store the copies of all entities within an authority in the same layout and on the same set of machines. Examples of expected failures include device failures, stolen machines, datacenter fires, and regional disasters, such as nuclear or geological events. Different failures lead to different levels of acceptable data loss. In some embodiments, a stolen storage node impacts neither the security nor the reliability of the system, while depending on system configuration, a regional event could lead to no loss of data, a few seconds or minutes of lost updates, or even complete data loss. In the embodiments, the placement of data for storage redundancy is independent of the placement of authorities for data consistency. In some embodiments, storage nodes that contain authorities do not contain any persistent storage. Instead, the storage nodes are connected to non-volatile solid state storage units that do not contain authorities. The communications interconnect between storage nodes and non-volatile solid state storage units consists of multiple communication technologies and has non-uniform performance and fault tolerance characteristics. In some embodiments, as mentioned above, non-volatile solid state storage units are connected to storage nodes via PCI express, storage nodes are connected together within a single chassis using an Ethernet backplane, and chassis are connected together to form a storage cluster. Storage clusters are connected to clients using Ethernet or Fibre Channel in some embodiments. If multiple storage clusters are configured into a storage grid, the multiple storage clusters are connected using the Internet or other long-distance networking links, such as a “metro scale” link or private link that does not traverse the Internet. Authority owners have the exclusive right to modify entities, to migrate entities from one non-volatile solid state storage unit to another non-volatile solid state storage unit, and to add and remove copies of entities. This allows for maintaining the redundancy of the underlying data.
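For illustration only, the three entity-to-authority assignment options named above (pseudorandom assignment, range splitting on an external key, and one entity per authority) might be sketched in Python as follows. The toy hash below is not CRUSH or linear hashing itself, and all function names are assumptions for the example.

    import hashlib

    def assign_pseudorandom(entity_key: str, num_authorities: int) -> int:
        # Pseudorandom assignment, in the spirit of the RUSH/CRUSH family.
        h = hashlib.sha256(entity_key.encode()).digest()
        return int.from_bytes(h[:8], "big") % num_authorities

    def assign_by_range(external_key: int, upper_bounds: list) -> int:
        # Range splitting on an externally produced key; one sorted upper
        # bound per authority.
        for authority_id, bound in enumerate(upper_bounds):
            if external_key <= bound:
                return authority_id
        return len(upper_bounds) - 1

    def assign_one_per_authority(next_free_authority: int) -> int:
        # Degenerate scheme: place a single entity into each authority.
        return next_free_authority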
When an authority owner fails, is going to be decommissioned, or is overloaded, the authority is transferred to a new storage node. Transient failures make it non-trivial to ensure that all non-faulty machines agree upon the new authority location. The ambiguity that arises due to transient failures can be resolved automatically by a consensus protocol such as Paxos or by hot-warm failover schemes, via manual intervention by a remote system administrator, or by a local hardware administrator (such as by physically removing the failed machine from the cluster, or pressing a button on the failed machine). In some embodiments, a consensus protocol is used, and failover is automatic. If too many failures or replication events occur in too short a time period, the system goes into a self-preservation mode and halts replication and data movement activities until an administrator intervenes in accordance with some embodiments. As authorities are transferred between storage nodes and authority owners update entities in their authorities, the system transfers messages between the storage nodes and non-volatile solid state storage units. With regard to persistent messages, messages that have different purposes are of different types. Depending on the type of the message, the system maintains different ordering and durability guarantees. As the persistent messages are being processed, the messages are temporarily stored in multiple durable and non-durable storage hardware technologies. In some embodiments, messages are stored in RAM, NVRAM and on NAND flash devices, and a variety of protocols are used in order to make efficient use of each storage medium. Latency-sensitive client requests may be persisted in replicated NVRAM, and then later in NAND, while background rebalancing operations are persisted directly to NAND. Persistent messages are persistently stored prior to being transmitted. This allows the system to continue to serve client requests despite failures and component replacement. Although many hardware components contain unique identifiers that are visible to system administrators, manufacturer, hardware supply chain and ongoing monitoring quality control infrastructure, applications running on top of the infrastructure address virtualized addresses. These virtualized addresses do not change over the lifetime of the storage system, regardless of component failures and replacements. This allows each component of the storage system to be replaced over time without reconfiguration or disruptions of client request processing, i.e., the system supports non-disruptive upgrades. In some embodiments, the virtualized addresses are stored with sufficient redundancy. A continuous monitoring system correlates hardware and software status and the hardware identifiers. This allows detection and prediction of failures due to faulty components and manufacturing details. The monitoring system also enables the proactive transfer of authorities and entities away from impacted devices before failure occurs by removing the component from the critical path in some embodiments. FIG.2Cis a multiple level block diagram, showing contents of a storage node150and contents of a non-volatile solid state storage152of the storage node150. Data is communicated to and from the storage node150by a network interface controller (‘NIC’)202in some embodiments. Each storage node150has a CPU156, and one or more non-volatile solid state storage152, as discussed above.
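A hedged sketch of the type dependent message durability described above follows: latency-sensitive messages are first persisted to replicated NVRAM and destaged to NAND later, while background traffic is persisted directly to NAND. The store names and the two-way type split are assumptions for this Python illustration; plain lists stand in for the storage media.

    from enum import Enum

    class MsgType(Enum):
        CLIENT_WRITE = 1  # latency sensitive
        REBALANCE = 2     # background work

    def persist_message(msg: bytes, kind: MsgType, nvram_replicas: list, nand: list):
        if kind is MsgType.CLIENT_WRITE:
            for replica in nvram_replicas:  # replicate in NVRAM before acknowledging
                replica.append(msg)
        else:
            nand.append(msg)                # background work goes straight to NAND

    def destage(nvram_replicas: list, nand: list):
        """Later, move NVRAM contents to NAND and truncate the NVRAM copies."""
        primary = nvram_replicas[0]
        nand.extend(primary)
        for replica in nvram_replicas:
            replica.clear()

    nvram = [[], [], []]  # three replicated NVRAM partitions
    nand = []
    persist_message(b"client write", MsgType.CLIENT_WRITE, nvram, nand)
    persist_message(b"rebalance step", MsgType.REBALANCE, nvram, nand)
    destage(nvram, nand)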
Moving down one level inFIG.2C, each non-volatile solid state storage152has a relatively fast non-volatile solid state memory, such as nonvolatile random access memory (‘NVRAM’)204, and flash memory206. In some embodiments, NVRAM204may be a component that does not require program/erase cycles (DRAM, MRAM, PCM), and can be a memory that can support being written vastly more often than the memory is read from. Moving down another level inFIG.2C, the NVRAM204is implemented in one embodiment as high speed volatile memory, such as dynamic random access memory (DRAM)216, backed up by energy reserve218. Energy reserve218provides sufficient electrical power to keep the DRAM216powered long enough for contents to be transferred to the flash memory206in the event of power failure. In some embodiments, energy reserve218is a capacitor, super-capacitor, battery, or other device, that supplies a suitable supply of energy sufficient to enable the transfer of the contents of DRAM216to a stable storage medium in the case of power loss. The flash memory206is implemented as multiple flash dies222, which may be referred to as packages of flash dies222or an array of flash dies222. It should be appreciated that the flash dies222could be packaged in any number of ways, with a single die per package, multiple dies per package (i.e. multichip packages), in hybrid packages, as bare dies on a printed circuit board or other substrate, as encapsulated dies, etc. In the embodiment shown, the non-volatile solid state storage152has a controller212or other processor, and an input output (I/O) port210coupled to the controller212. I/O port210is coupled to the CPU156and/or the network interface controller202of the flash storage node150. Flash input output (I/O) port220is coupled to the flash dies222, and a direct memory access unit (DMA)214is coupled to the controller212, the DRAM216and the flash dies222. In the embodiment shown, the I/O port210, controller212, DMA unit214and flash I/O port220are implemented on a programmable logic device (‘PLD’)208, e.g., an FPGA. In this embodiment, each flash die222has pages, organized as sixteen kB (kilobyte) pages224, and a register226through which data can be written to or read from the flash die222. In further embodiments, other types of solid-state memory are used in place of, or in addition to flash memory illustrated within flash die222. Storage clusters161, in various embodiments as disclosed herein, can be contrasted with storage arrays in general. The storage nodes150are part of a collection that creates the storage cluster161. Each storage node150owns a slice of data and computing required to provide the data. Multiple storage nodes150cooperate to store and retrieve the data. Storage memory or storage devices, as used in storage arrays in general, are less involved with processing and manipulating the data. Storage memory or storage devices in a storage array receive commands to read, write, or erase data. The storage memory or storage devices in a storage array are not aware of a larger system in which they are embedded, or what the data means. Storage memory or storage devices in storage arrays can include various types of storage memory, such as RAM, solid state drives, hard disk drives, etc. The storage units152described herein have multiple interfaces active simultaneously and serving multiple purposes. 
In some embodiments, some of the functionality of a storage node150is shifted into a storage unit152, transforming the storage unit152into a combination of storage unit152and storage node150. Placing computing (relative to storage data) into the storage unit152places this computing closer to the data itself. The various system embodiments have a hierarchy of storage node layers with different capabilities. By contrast, in a storage array, a controller owns and knows everything about all of the data that the controller manages in a shelf or storage devices. In a storage cluster161, as described herein, multiple controllers in multiple storage units152and/or storage nodes150cooperate in various ways (e.g., for erasure coding, data sharding, metadata communication and redundancy, storage capacity expansion or contraction, data recovery, and so on). FIG.2Dshows a storage server environment, which uses embodiments of the storage nodes150and storage units152ofFIGS.2A-C. In this version, each storage unit152has a processor such as controller212(seeFIG.2C), an FPGA, flash memory206, and NVRAM204(which is super-capacitor backed DRAM216, seeFIGS.2B and2C) on a PCIe (peripheral component interconnect express) board in a chassis138(seeFIG.2A). The storage unit152may be implemented as a single board containing storage, and may be the largest tolerable failure domain inside the chassis. In some embodiments, up to two storage units152may fail and the device will continue with no data loss. The physical storage is divided into named regions based on application usage in some embodiments. The NVRAM204is a contiguous block of reserved memory in the storage unit152DRAM216, and is backed by NAND flash. NVRAM204is logically divided into multiple memory regions written to as spools (e.g., a spool region). Space within the NVRAM204spools is managed by each authority168independently. Each device provides an amount of storage space to each authority168. That authority168further manages lifetimes and allocations within that space. Examples of a spool include distributed transactions or notions. When the primary power to a storage unit152fails, onboard super-capacitors provide a short duration of power hold up. During this holdup interval, the contents of the NVRAM204are flushed to flash memory206. On the next power-on, the contents of the NVRAM204are recovered from the flash memory206. As for the storage unit controller, the responsibility of the logical “controller” is distributed across each of the blades containing authorities168. This distribution of logical control is shown inFIG.2Das a host controller242, mid-tier controller244and storage unit controller(s)246. Management of the control plane and the storage plane are treated independently, although parts may be physically co-located on the same blade. Each authority168effectively serves as an independent controller. Each authority168provides its own data and metadata structures and its own background workers, and maintains its own lifecycle. FIG.2Eis a blade252hardware block diagram, showing a control plane254, compute and storage planes256,258, and authorities168interacting with underlying physical resources, using embodiments of the storage nodes150and storage units152ofFIGS.2A-Cin the storage server environment ofFIG.2D. The control plane254is partitioned into a number of authorities168which can use the compute resources in the compute plane256to run on any of the blades252.
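Purely as an illustration of the holdup behavior described above, the following Python sketch models flushing the NVRAM region of DRAM to flash on power failure and recovering it on the next power-on. The StorageUnit class and its dictionaries are hypothetical stand-ins for the hardware.

    class StorageUnit:
        def __init__(self):
            self.nvram = {}  # reserved DRAM region, logically NVRAM
            self.flash = {}  # backing NAND flash

        def on_power_failure(self):
            # Super-capacitors provide just enough holdup time for this copy.
            self.flash["nvram_image"] = dict(self.nvram)

        def on_power_on(self):
            self.nvram = dict(self.flash.get("nvram_image", {}))

    unit = StorageUnit()
    unit.nvram["spool/tx-7"] = b"pending update"
    unit.on_power_failure()  # holdup window: copy the NVRAM image to flash
    unit.nvram.clear()       # power is lost
    unit.on_power_on()
    assert unit.nvram["spool/tx-7"] == b"pending update"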
The storage plane258is partitioned into a set of devices, each of which provides access to flash206and NVRAM204resources. In one embodiment, the compute plane256may perform the operations of a storage array controller, as described herein, on one or more devices of the storage plane258(e.g., a storage array). In the compute and storage planes256,258ofFIG.2E, the authorities168interact with the underlying physical resources (i.e., devices). From the point of view of an authority168, its resources are striped over all of the physical devices. From the point of view of a device, it provides resources to all authorities168, irrespective of where the authorities happen to run. Each authority168has allocated or has been allocated one or more partitions260of storage memory in the storage units152, e.g., partitions260in flash memory206and NVRAM204. Each authority168uses those allocated partitions260that belong to it, for writing or reading user data. Authorities can be associated with differing amounts of physical storage of the system. For example, one authority168could have a larger number of partitions260or larger sized partitions260in one or more storage units152than one or more other authorities168. FIG.2Fdepicts elasticity software layers in blades252of a storage cluster, in accordance with some embodiments. In the elasticity structure, elasticity software is symmetric, i.e., each blade's compute module270runs the three identical layers of processes depicted inFIG.2F. Storage managers274execute read and write requests from other blades252for data and metadata stored in local storage unit152NVRAM204and flash206. Authorities168fulfill client requests by issuing the necessary reads and writes to the blades252on whose storage units152the corresponding data or metadata resides. Endpoints272parse client connection requests received from switch fabric146supervisory software, relay the client connection requests to the authorities168responsible for fulfillment, and relay the authorities'168responses to clients. The symmetric three-layer structure enables the storage system's high degree of concurrency. Elasticity scales out efficiently and reliably in these embodiments. In addition, elasticity implements a unique scale-out technique that balances work evenly across all resources regardless of client access pattern, and maximizes concurrency by eliminating much of the need for inter-blade coordination that typically occurs with conventional distributed locking. Still referring toFIG.2F, authorities168running in the compute modules270of a blade252perform the internal operations required to fulfill client requests. One feature of elasticity is that authorities168are stateless, i.e., they cache active data and metadata in their own blades'252DRAMs for fast access, but the authorities store every update in their NVRAM204partitions on three separate blades252until the update has been written to flash206. All the storage system writes to NVRAM204are in triplicate to partitions on three separate blades252in some embodiments. With triple-mirrored NVRAM204and persistent storage protected by parity and Reed-Solomon RAID checksums, the storage system can survive concurrent failure of two blades252with no loss of data, metadata, or access to either. Because authorities168are stateless, they can migrate between blades252. Each authority168has a unique identifier. NVRAM204and flash206partitions are associated with authorities'168identifiers, not with the blades252on which they are running, in some embodiments.
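A minimal Python sketch of the triplicated NVRAM write described above follows: an update is acknowledged only after it reaches NVRAM partitions on three separate blades, and the NVRAM copies are released once the update has been persisted to flash. The Blade class is hypothetical, and taking the first three blades stands in for whatever blade selection an authority actually uses.

    class Blade:
        def __init__(self, num_authorities: int):
            self.nvram_partitions = {a: [] for a in range(num_authorities)}

    def write_update(update: bytes, blades: list, authority_id: int) -> bool:
        targets = blades[:3]  # partitions on three separate blades (assumed choice)
        if len(targets) < 3:
            return False      # cannot meet the triplication rule
        for blade in targets:
            blade.nvram_partitions[authority_id].append(update)
        return True           # now safe to acknowledge the client

    def on_flash_persisted(update: bytes, blades: list, authority_id: int) -> None:
        # Once the update reaches flash, the NVRAM copies can be released.
        for blade in blades[:3]:
            blade.nvram_partitions[authority_id].remove(update)

    blades = [Blade(num_authorities=8) for _ in range(4)]
    assert write_update(b"update", blades, authority_id=3)
    on_flash_persisted(b"update", blades, authority_id=3)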
Thus, when an authority168migrates, the authority168continues to manage the same storage partitions from its new location. When a new blade252is installed in an embodiment of the storage cluster, the system automatically rebalances load by: partitioning the new blade's252storage for use by the system's authorities168, migrating selected authorities168to the new blade252, starting endpoints272on the new blade252and including them in the switch fabric's146client connection distribution algorithm. From their new locations, migrated authorities168persist the contents of their NVRAM204partitions on flash206, process read and write requests from other authorities168, and fulfill the client requests that endpoints272direct to them. Similarly, if a blade252fails or is removed, the system redistributes its authorities168among the system's remaining blades252. The redistributed authorities168continue to perform their original functions from their new locations. FIG.2Gdepicts authorities168and storage resources in blades252of a storage cluster, in accordance with some embodiments. Each authority168is exclusively responsible for a partition of the flash206and NVRAM204on each blade252. The authority168manages the content and integrity of its partitions independently of other authorities168. Authorities168compress incoming data and preserve it temporarily in their NVRAM204partitions, and then consolidate, RAID-protect, and persist the data in segments of the storage in their flash206partitions. As the authorities168write data to flash206, storage managers274perform the necessary flash translation to optimize write performance and maximize media longevity. In the background, authorities168“garbage collect,” or reclaim space occupied by data that clients have made obsolete by overwriting the data. It should be appreciated that since authorities'168partitions are disjoint, there is no need for distributed locking to execute client reads and writes or to perform background functions. The embodiments described herein may utilize various software, communication and/or networking protocols. In addition, the configuration of the hardware and/or software may be adjusted to accommodate various protocols. For example, the embodiments may utilize Active Directory, which is a database based system that provides authentication, directory, policy, and other services in a WINDOWS™ environment. In these embodiments, LDAP (Lightweight Directory Access Protocol) is one example application protocol for querying and modifying items in directory service providers such as Active Directory. In some embodiments, a network lock manager (‘NLM’) is utilized as a facility that works in cooperation with the Network File System (‘NFS’) to provide a System V style of advisory file and record locking over a network. The Server Message Block (‘SMB’) protocol, one version of which is also known as Common Internet File System (‘CIFS’), may be integrated with the storage systems discussed herein. SMB operates as an application-layer network protocol typically used for providing shared access to files, printers, and serial ports and miscellaneous communications between nodes on a network. SMB also provides an authenticated inter-process communication mechanism. AMAZON™ S3 (Simple Storage Service) is a web service offered by Amazon Web Services, and the systems described herein may interface with Amazon S3 through web services interfaces (REST (representational state transfer), SOAP (simple object access protocol), and BitTorrent).
A RESTful API (application programming interface) breaks down a transaction to create a series of small modules. Each module addresses a particular underlying part of the transaction. The control or permissions provided with these embodiments, especially for object data, may include utilization of an access control list (‘ACL’). The ACL is a list of permissions attached to an object and the ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects. The systems may utilize Internet Protocol version 6 (‘IPv6’), as well as IPv4, for the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. The routing of packets between networked systems may include Equal-cost multi-path routing (‘ECMP’), which is a routing strategy where next-hop packet forwarding to a single destination can occur over multiple “best paths” which tie for top place in routing metric calculations. Multi-path routing can be used in conjunction with most routing protocols, because it is a per-hop decision limited to a single router. The software may support Multi-tenancy, which is an architecture in which a single instance of a software application serves multiple customers. Each customer may be referred to as a tenant. Tenants may be given the ability to customize some parts of the application, but may not customize the application's code, in some embodiments. The embodiments may maintain audit logs. An audit log is a document that records an event in a computing system. In addition to documenting what resources were accessed, audit log entries typically include destination and source addresses, a timestamp, and user login information for compliance with various regulations. The embodiments may support various key management policies, such as encryption key rotation. In addition, the system may support dynamic root passwords or some variation of dynamically changing passwords. FIG.3Asets forth a diagram of a storage system306that is coupled for data communications with a cloud services provider302in accordance with some embodiments of the present disclosure. Although depicted in less detail, the storage system306depicted inFIG.3Amay be similar to the storage systems described above with reference toFIGS.1A-1DandFIGS.2A-2G. In some embodiments, the storage system306depicted inFIG.3Amay be embodied as a storage system that includes imbalanced active/active controllers, as a storage system that includes balanced active/active controllers, as a storage system that includes active/active controllers where less than all of each controller's resources are utilized such that each controller has reserve resources that may be used to support failover, as a storage system that includes fully active/active controllers, as a storage system that includes dataset-segregated controllers, as a storage system that includes dual-layer architectures with front-end controllers and back-end integrated storage controllers, as a storage system that includes scale-out clusters of dual-controller arrays, as well as combinations of such embodiments. In the example depicted inFIG.3A, the storage system306is coupled to the cloud services provider302via a data communications link304.
The data communications link304may be embodied as a dedicated data communications link, as a data communications pathway that is provided through the use of one or more data communications networks such as a wide area network (‘WAN’) or LAN, or as some other mechanism capable of transporting digital information between the storage system306and the cloud services provider302. Such a data communications link304may be fully wired, fully wireless, or some aggregation of wired and wireless data communications pathways. In such an example, digital information may be exchanged between the storage system306and the cloud services provider302via the data communications link304using one or more data communications protocols. For example, digital information may be exchanged between the storage system306and the cloud services provider302via the data communications link304using the handheld device transfer protocol (‘HDTP’), hypertext transfer protocol (‘HTTP’), internet protocol (‘IP’), real-time transfer protocol (‘RTP’), transmission control protocol (‘TCP’), user datagram protocol (‘UDP’), wireless application protocol (‘WAP’), or other protocol. The cloud services provider302depicted inFIG.3Amay be embodied, for example, as a system and computing environment that provides a vast array of services to users of the cloud services provider302through the sharing of computing resources via the data communications link304. The cloud services provider302may provide on-demand access to a shared pool of configurable computing resources such as computer networks, servers, storage, applications and services, and so on. The shared pool of configurable resources may be rapidly provisioned and released to a user of the cloud services provider302with minimal management effort. Generally, the user of the cloud services provider302is unaware of the exact computing resources utilized by the cloud services provider302to provide the services. Although in many cases such a cloud services provider302may be accessible via the Internet, readers of skill in the art will recognize that any system that abstracts the use of shared resources to provide services to a user through any data communications link may be considered a cloud services provider302. In the example depicted inFIG.3A, the cloud services provider302may be configured to provide a variety of services to the storage system306and users of the storage system306through the implementation of various service models. For example, the cloud services provider302may be configured to provide services through the implementation of an infrastructure as a service (‘IaaS’) service model, through the implementation of a platform as a service (‘PaaS’) service model, through the implementation of a software as a service (‘SaaS’) service model, through the implementation of an authentication as a service (‘AaaS’) service model, through the implementation of a storage as a service model where the cloud services provider302offers access to its storage infrastructure for use by the storage system306and users of the storage system306, and so on.
Readers will appreciate that the cloud services provider302may be configured to provide additional services to the storage system306and users of the storage system306through the implementation of additional service models, as the service models described above are included only for explanatory purposes and in no way represent a limitation of the services that may be offered by the cloud services provider302or a limitation as to the service models that may be implemented by the cloud services provider302. In the example depicted inFIG.3A, the cloud services provider302may be embodied, for example, as a private cloud, as a public cloud, or as a combination of a private cloud and public cloud. In an embodiment in which the cloud services provider302is embodied as a private cloud, the cloud services provider302may be dedicated to providing services to a single organization rather than providing services to multiple organizations. In an embodiment where the cloud services provider302is embodied as a public cloud, the cloud services provider302may provide services to multiple organizations. In still alternative embodiments, the cloud services provider302may be embodied as a mix of private and public cloud services with a hybrid cloud deployment. Although not explicitly depicted inFIG.3A, readers will appreciate that a vast amount of additional hardware components and additional software components may be necessary to facilitate the delivery of cloud services to the storage system306and users of the storage system306. For example, the storage system306may be coupled to (or even include) a cloud storage gateway. Such a cloud storage gateway may be embodied, for example, as a hardware-based or software-based appliance that is located on premise with the storage system306. Such a cloud storage gateway may operate as a bridge between local applications that are executing on the storage array306and remote, cloud-based storage that is utilized by the storage array306. Through the use of a cloud storage gateway, organizations may move primary iSCSI or NAS to the cloud services provider302, thereby enabling the organization to save space on their on-premises storage systems. Such a cloud storage gateway may be configured to emulate a disk array, a block-based device, a file server, or other storage system that can translate the SCSI commands, file server commands, or other appropriate command into REST-space protocols that facilitate communications with the cloud services provider302. In order to enable the storage system306and users of the storage system306to make use of the services provided by the cloud services provider302, a cloud migration process may take place during which data, applications, or other elements from an organization's local systems (or even from another cloud environment) are moved to the cloud services provider302. In order to successfully migrate data, applications, or other elements to the cloud services provider's302environment, middleware such as a cloud migration tool may be utilized to bridge gaps between the cloud services provider's302environment and an organization's environment. Such cloud migration tools may also be configured to address potentially high network costs and long transfer times associated with migrating large volumes of data to the cloud services provider302, as well as addressing security concerns associated with migrating sensitive data to the cloud services provider302over data communications networks.
In order to further enable the storage system306and users of the storage system306to make use of the services provided by the cloud services provider302, a cloud orchestrator may also be used to arrange and coordinate automated tasks in pursuit of creating a consolidated process or workflow. Such a cloud orchestrator may perform tasks such as configuring various components, whether those components are cloud components or on-premises components, as well as managing the interconnections between such components. The cloud orchestrator can simplify the inter-component communication and connections to ensure that links are correctly configured and maintained. In the example depicted inFIG.3A, and as described briefly above, the cloud services provider302may be configured to provide services to the storage system306and users of the storage system306through the usage of a SaaS service model, eliminating the need to install and run the application on local computers, which may simplify maintenance and support of the application. Such applications may take many forms in accordance with various embodiments of the present disclosure. For example, the cloud services provider302may be configured to provide access to data analytics applications to the storage system306and users of the storage system306. Such data analytics applications may be configured, for example, to receive vast amounts of telemetry data phoned home by the storage system306. Such telemetry data may describe various operating characteristics of the storage system306and may be analyzed for a vast array of purposes including, for example, to determine the health of the storage system306, to identify workloads that are executing on the storage system306, to predict when the storage system306will run out of various resources, to recommend configuration changes, hardware or software upgrades, workflow migrations, or other actions that may improve the operation of the storage system306. The cloud services provider302may also be configured to provide access to virtualized computing environments to the storage system306and users of the storage system306. Such virtualized computing environments may be embodied, for example, as a virtual machine or other virtualized computer hardware platforms, virtual storage devices, virtualized computer network resources, and so on. Examples of such virtualized environments can include virtual machines that are created to emulate an actual computer, virtualized desktop environments that separate a logical desktop from a physical machine, virtualized file systems that allow uniform access to different types of concrete file systems, and many others. Although the example depicted inFIG.3Aillustrates the storage system306being coupled for data communications with the cloud services provider302, in other embodiments the storage system306may be part of a hybrid cloud deployment in which private cloud elements (e.g., private cloud services, on-premises infrastructure, and so on) and public cloud elements (e.g., public cloud services, infrastructure, and so on that may be provided by one or more cloud services providers) are combined to form a single solution, with orchestration among the various platforms. Such a hybrid cloud deployment may leverage hybrid cloud management software such as, for example, Azure™ Arc from Microsoft™, that centralizes the management of the hybrid cloud deployment to any infrastructure and enables the deployment of services anywhere.
In such an example, the hybrid cloud management software may be configured to create, update, and delete resources (both physical and virtual) that form the hybrid cloud deployment, to allocate compute and storage to specific workloads, to monitor workloads and resources for performance, policy compliance, updates and patches, security status, or to perform a variety of other tasks. Readers will appreciate that by pairing the storage systems described herein with one or more cloud services providers, various offerings may be enabled. For example, disaster recovery as a service (‘DRaaS’) may be provided where cloud resources are utilized to protect applications and data from disruption caused by disaster, including in embodiments where the storage systems may serve as the primary data store. In such embodiments, a total system backup may be taken that allows for business continuity in the event of system failure. In such embodiments, cloud data backup techniques (by themselves or as part of a larger DRaaS solution) may also be integrated into an overall solution that includes the storage systems and cloud services providers described herein. The storage systems described herein, as well as the cloud services providers, may be utilized to provide a wide array of security features. For example, the storage systems may encrypt data at rest (and data may be sent to and from the storage systems encrypted) and may make use of Key Management-as-a-Service (‘KMaaS’) to manage encryption keys, keys for locking and unlocking storage devices, and so on. Likewise, cloud data security gateways or similar mechanisms may be utilized to ensure that data stored within the storage systems does not improperly end up being stored in the cloud as part of a cloud data backup operation. Furthermore, microsegmentation or identity-based segmentation may be utilized in a data center that includes the storage systems or within the cloud services provider, to create secure zones in data centers and cloud deployments that enable the isolation of workloads from one another. For further explanation,FIG.3Bsets forth a diagram of a storage system306in accordance with some embodiments of the present disclosure. Although depicted in less detail, the storage system306depicted inFIG.3Bmay be similar to the storage systems described above with reference toFIGS.1A-1DandFIGS.2A-2Gas the storage system may include many of the components described above. The storage system306depicted inFIG.3Bmay include a vast amount of storage resources308, which may be embodied in many forms. For example, the storage resources308can include nano-RAM or another form of nonvolatile random access memory that utilizes carbon nanotubes deposited on a substrate, 3D crosspoint non-volatile memory, flash memory including single-level cell (‘SLC’) NAND flash, multi-level cell (‘MLC’) NAND flash, triple-level cell (‘TLC’) NAND flash, quad-level cell (‘QLC’) NAND flash, or others. Likewise, the storage resources308may include non-volatile magnetoresistive random-access memory (‘MRAM’), including spin transfer torque (‘STT’) MRAM. The example storage resources308may alternatively include non-volatile phase-change memory (‘PCM’), quantum memory that allows for the storage and retrieval of photonic quantum information, resistive random-access memory (‘ReRAM’), storage class memory (‘SCM’), or other form of storage resources, including any combination of resources described herein.
Readers will appreciate that other forms of computer memories and storage devices may be utilized by the storage systems described above, including DRAM, SRAM, EEPROM, universal memory, and many others. The storage resources 308 depicted in FIG. 3B may be embodied in a variety of form factors, including but not limited to, dual in-line memory modules (‘DIMMs’), non-volatile dual in-line memory modules (‘NVDIMMs’), M.2, U.2, and others. The storage resources 308 depicted in FIG. 3B may include various forms of SCM. SCM may effectively treat fast, non-volatile memory (e.g., NAND flash) as an extension of DRAM such that an entire dataset may be treated as an in-memory dataset that resides entirely in DRAM. SCM may include non-volatile media such as, for example, NAND flash. Such NAND flash may be accessed utilizing NVMe, which can use the PCIe bus as its transport, providing for relatively low access latencies compared to older protocols. In fact, the network protocols used for SSDs in all-flash arrays can include NVMe over Ethernet (RoCE, iWARP, NVMe/TCP), NVMe over Fibre Channel (NVMe FC), NVMe over InfiniBand, and others that make it possible to treat fast, non-volatile memory as an extension of DRAM. In view of the fact that DRAM is often byte-addressable while fast, non-volatile memory such as NAND flash is block-addressable, a controller software/hardware stack may be needed to convert the block data to the bytes that are stored in the media. Examples of media and software that may be used as SCM can include, for example, 3D XPoint, Intel Memory Drive Technology, Samsung's Z-SSD, and others. The storage resources 308 depicted in FIG. 3B may also include racetrack memory (also referred to as domain-wall memory). Such racetrack memory may be embodied as a form of non-volatile, solid-state memory that relies on the intrinsic strength and orientation of the magnetic field created by an electron as it spins, in addition to its electronic charge, in solid-state devices. Through the use of spin-coherent electric current to move magnetic domains along a nanoscopic permalloy wire, the domains may pass by magnetic read/write heads positioned near the wire as current is passed through the wire, which alter the domains to record patterns of bits. In order to create a racetrack memory device, many such wires and read/write elements may be packaged together. The example storage system 306 depicted in FIG. 3B may implement a variety of storage architectures. For example, storage systems in accordance with some embodiments of the present disclosure may utilize block storage, where data is stored in blocks and each block essentially acts as an individual hard drive. Storage systems in accordance with some embodiments of the present disclosure may utilize object storage, where data is managed as objects. Each object may include the data itself, a variable amount of metadata, and a globally unique identifier, where object storage can be implemented at multiple levels (e.g., device level, system level, interface level). Storage systems in accordance with some embodiments of the present disclosure may utilize file storage, in which data is stored in a hierarchical structure. Such data may be saved in files and folders, and presented to both the system storing it and the system retrieving it in the same format.
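As a minimal illustration of the object layout just described (the data itself, a variable amount of metadata, and a globally unique identifier), the following Python sketch shows one hypothetical in-memory representation; the class and field names are illustrative assumptions rather than details drawn from the disclosure.

    import uuid
    from dataclasses import dataclass, field

    @dataclass
    class StorageObject:
        """An object as described above: payload, variable metadata, and a global ID."""
        data: bytes
        metadata: dict = field(default_factory=dict)  # variable amount of metadata
        object_id: str = field(default_factory=lambda: str(uuid.uuid4()))

    obj = StorageObject(b"example payload", {"content-type": "text/plain"})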
The example storage system 306 depicted in FIG. 3B may be embodied as a storage system in which additional storage resources can be added through the use of a scale-up model, additional storage resources can be added through the use of a scale-out model, or through some combination thereof. In a scale-up model, additional storage may be added by adding additional storage devices. In a scale-out model, however, additional storage nodes may be added to a cluster of storage nodes, where such storage nodes can include additional processing resources, additional networking resources, and so on. The example storage system 306 depicted in FIG. 3B may leverage the storage resources described above in a variety of different ways. For example, some portion of the storage resources may be utilized to serve as a write cache, where data is initially written to storage resources with relatively fast write latencies, relatively high write bandwidth, or similar characteristics. In such an example, data that is written to the storage resources that serve as a write cache may later be written to other storage resources that may be characterized by slower write latencies, lower write bandwidth, or similar characteristics relative to the storage resources that are utilized to serve as a write cache. In a similar manner, storage resources within the storage system may be utilized as a read cache, where the read cache is populated in accordance with a set of predetermined rules or heuristics. In other embodiments, tiering may be achieved within the storage systems by placing data within the storage system in accordance with one or more policies such that, for example, data that is accessed frequently is stored in faster storage tiers while data that is accessed infrequently is stored in slower storage tiers. The storage system 306 depicted in FIG. 3B also includes communications resources 310 that may be useful in facilitating data communications between components within the storage system 306, as well as data communications between the storage system 306 and computing devices that are outside of the storage system 306, including embodiments where those resources are separated by a relatively vast expanse. The communications resources 310 may be configured to utilize a variety of different protocols and data communication fabrics to facilitate data communications between components within the storage systems as well as computing devices that are outside of the storage system. For example, the communications resources 310 can include fibre channel (‘FC’) technologies such as FC fabrics and FC protocols that can transport SCSI commands over an FC network, FC over Ethernet (‘FCoE’) technologies through which FC frames are encapsulated and transmitted over Ethernet networks, InfiniBand (‘IB’) technologies in which a switched fabric topology is utilized to facilitate transmissions between channel adapters, NVM Express (‘NVMe’) technologies and NVMe over fabrics (‘NVMeoF’) technologies through which non-volatile storage media attached via a PCI Express (‘PCIe’) bus may be accessed, and others. In fact, the storage systems described above may, directly or indirectly, make use of neutrino communication technologies and devices through which information (including binary information) is transmitted using a beam of neutrinos.
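The frequency-based tiering policy described above can be sketched as a simple sliding-window rule. The Python below is a toy model, not the disclosed implementation: the two tier names, the window length, and the hot threshold are all assumptions made for illustration.

    import time
    from collections import defaultdict

    class TieringPolicy:
        """Places data on a fast or slow tier based on observed access frequency."""

        def __init__(self, hot_threshold=10, window_seconds=3600):
            self.hot_threshold = hot_threshold   # accesses per window that count as 'hot'
            self.window = window_seconds
            self.accesses = defaultdict(list)    # block id -> recent access timestamps

        def record_access(self, block_id):
            now = time.time()
            # Keep only accesses that fall inside the sliding window.
            self.accesses[block_id] = [
                t for t in self.accesses[block_id] if now - t <= self.window
            ] + [now]

        def tier_for(self, block_id):
            # Frequently accessed data belongs on the faster tier; everything
            # else is placed on (or demoted to) the slower, cheaper tier.
            if len(self.accesses[block_id]) >= self.hot_threshold:
                return "fast-tier"
            return "slow-tier"

In a real system the same decision would be driven by the policies noted above, and a promotion or demotion would trigger an actual data move between tiers.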
The communications resources 310 can also include mechanisms for accessing storage resources 308 within the storage system 306 utilizing serial attached SCSI (‘SAS’), serial ATA (‘SATA’) bus interfaces for connecting storage resources 308 within the storage system 306 to host bus adapters within the storage system 306, internet small computer systems interface (‘iSCSI’) technologies to provide block-level access to storage resources 308 within the storage system 306, and other communications resources that may be useful in facilitating data communications between components within the storage system 306, as well as data communications between the storage system 306 and computing devices that are outside of the storage system 306. The storage system 306 depicted in FIG. 3B also includes processing resources 312 that may be useful in executing computer program instructions and performing other computational tasks within the storage system 306. The processing resources 312 may include one or more ASICs that are customized for some particular purpose as well as one or more CPUs. The processing resources 312 may also include one or more DSPs, one or more FPGAs, one or more systems on a chip (‘SoCs’), or other forms of processing resources 312. The storage system 306 may utilize the processing resources 312 to perform a variety of tasks including, but not limited to, supporting the execution of software resources 314 that will be described in greater detail below. The storage system 306 depicted in FIG. 3B also includes software resources 314 that, when executed by processing resources 312 within the storage system 306, may perform a vast array of tasks. The software resources 314 may include, for example, one or more modules of computer program instructions that when executed by processing resources 312 within the storage system 306 are useful in carrying out various data protection techniques to preserve the integrity of data that is stored within the storage systems. Readers will appreciate that such data protection techniques may be carried out, for example, by system software executing on computer hardware within the storage system, by a cloud services provider, or in other ways. Such data protection techniques can include, for example, data archiving techniques that cause data that is no longer actively used to be moved to a separate storage device or separate storage system for long-term retention, data backup techniques through which data stored in the storage system may be copied and stored in a distinct location to avoid data loss in the event of equipment failure or some other form of catastrophe with the storage system, data replication techniques through which data stored in the storage system is replicated to another storage system such that the data may be accessible via multiple storage systems, data snapshotting techniques through which the state of data within the storage system is captured at various points in time, data and database cloning techniques through which duplicate copies of data and databases may be created, and other data protection techniques. The software resources 314 may also include software that is useful in implementing software-defined storage (‘SDS’). In such an example, the software resources 314 may include one or more modules of computer program instructions that, when executed, are useful in policy-based provisioning and management of data storage that is independent of the underlying hardware.
Such software resources 314 may be useful in implementing storage virtualization to separate the storage hardware from the software that manages the storage hardware. The software resources 314 may also include software that is useful in facilitating and optimizing I/O operations that are directed to the storage resources 308 in the storage system 306. For example, the software resources 314 may include software modules that carry out various data reduction techniques such as, for example, data compression, data deduplication, and others. The software resources 314 may include software modules that intelligently group together I/O operations to facilitate better usage of the underlying storage resources 308, software modules that perform data migration operations to migrate data from within a storage system, as well as software modules that perform other functions. Such software resources 314 may be embodied as one or more software containers or in many other ways. For further explanation, the embodiments may be integrated into a cloud-based storage system. In this example, the cloud-based storage system is created entirely in a cloud computing environment such as, for example, Amazon Web Services (‘AWS’), Microsoft Azure, Google Cloud Platform, IBM Cloud, Oracle Cloud, and others. The cloud-based storage system may be used to provide services similar to the services that may be provided by the storage systems described above. For example, the cloud-based storage system may be used to provide block storage services to users of the cloud-based storage system, the cloud-based storage system may be used to provide storage services to users of the cloud-based storage system through the use of solid-state storage, and so on. The cloud-based storage system may include two cloud computing instances that are each used to support the execution of a storage controller application. The cloud computing instances may be embodied, for example, as instances of cloud computing resources (e.g., virtual machines) that may be provided by the cloud computing environment to support the execution of software applications such as the storage controller application. In one embodiment, the cloud computing instances may be embodied as Amazon Elastic Compute Cloud (‘EC2’) instances. In such an example, an Amazon Machine Image (‘AMI’) that includes a storage controller application may be booted to create and configure a virtual machine that may execute the storage controller application. A storage controller application may be embodied as a module of computer program instructions that, when executed, carries out various storage tasks. For example, the storage controller application may be embodied as a module of computer program instructions that, when executed, carries out the same tasks as the controllers 110A, 110B in FIG. 1A described above, such as writing data received from the users of the cloud-based storage system to the cloud-based storage system, erasing data from the cloud-based storage system, retrieving data from the cloud-based storage system and providing such data to users of the cloud-based storage system, monitoring and reporting of disk utilization and performance, performing redundancy operations, such as RAID or RAID-like data redundancy operations, compressing data, encrypting data, deduplicating data, and so forth.
Readers will appreciate that because there are two cloud computing instances that each may include a storage controller application, in some embodiments one cloud computing instance may operate as the primary controller as described above while the other cloud computing instance may operate as the secondary controller as described above. Readers will appreciate that the storage controller application may include identical source code that is executed within the different cloud computing instances. Consider an example in which the cloud computing environment is embodied as AWS and the cloud computing instances are embodied as EC2 instances. In such an example, the cloud computing instance that operates as the primary controller may be deployed on one of the instance types that has a relatively large amount of memory and processing power while the cloud computing instance that operates as the secondary controller may be deployed on one of the instance types that has a relatively small amount of memory and processing power. In such an example, upon the occurrence of a failover event where the roles of primary and secondary are switched, a double failover may actually be carried out such that: 1) a first failover event occurs where the cloud computing instance that formerly operated as the secondary controller begins to operate as the primary controller, and 2) a second failover event occurs where a third cloud computing instance (not shown), of an instance type that has a relatively large amount of memory and processing power, is spun up with a copy of the storage controller application, where the third cloud computing instance begins operating as the primary controller while the cloud computing instance that originally operated as the secondary controller begins operating as the secondary controller again. In such an example, the cloud computing instance that formerly operated as the primary controller may be terminated. Readers will appreciate that in alternative embodiments, the cloud computing instance that is operating as the secondary controller after the failover event may continue to operate as the secondary controller, and the cloud computing instance that operated as the primary controller after the occurrence of the failover event may be terminated once the primary role has been assumed by the third cloud computing instance (not shown). Readers will appreciate that while the embodiments described above relate to embodiments where one cloud computing instance operates as the primary controller and the second cloud computing instance operates as the secondary controller, other embodiments are within the scope of the present disclosure. For example, each cloud computing instance may operate as a primary controller for some portion of the address space supported by the cloud-based storage system, each cloud computing instance may operate as a primary controller where the servicing of I/O operations directed to the cloud-based storage system is divided in some other way, and so on. In fact, in other embodiments where cost savings may be prioritized over performance demands, only a single cloud computing instance may exist that contains the storage controller application. The cloud-based storage system may include cloud computing instances with local storage. The cloud computing instances may be embodied, for example, as instances of cloud computing resources that may be provided by the cloud computing environment to support the execution of software applications. The cloud computing instances may have local storage resources.
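The double-failover sequence described above might be orchestrated along the following lines. This is a hedged sketch using the AWS SDK for Python (boto3): the AMI identifier, the instance type, and the promote/demote hooks are hypothetical placeholders, since the disclosure does not specify how the role change is signaled to the controller application.

    import boto3

    ec2 = boto3.client("ec2")
    CONTROLLER_AMI = "ami-0123456789abcdef0"  # hypothetical AMI containing the controller app

    def promote(instance_id):
        # Placeholder for the controller-level role change (mechanism unspecified).
        print(f"{instance_id} now operating as primary controller")

    def demote(instance_id):
        print(f"{instance_id} now operating as secondary controller")

    def double_failover(failed_primary_id, secondary_id):
        promote(secondary_id)      # 1) first failover: the small secondary takes over
        resp = ec2.run_instances(  # 2) second failover: boot a large replacement
            ImageId=CONTROLLER_AMI,
            InstanceType="m5.4xlarge",  # relatively large memory and processing power
            MinCount=1,
            MaxCount=1,
        )
        new_primary_id = resp["Instances"][0]["InstanceId"]
        ec2.get_waiter("instance_running").wait(InstanceIds=[new_primary_id])
        promote(new_primary_id)  # the new instance assumes the primary role
        demote(secondary_id)     # the original secondary resumes the secondary role
        ec2.terminate_instances(InstanceIds=[failed_primary_id])
        return new_primary_id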
The cloud computing instances with local storage may be embodied, for example, as EC2 M5 instances that include one or more SSDs, as EC2 R5 instances that include one or more SSDs, as EC2 I3 instances that include one or more SSDs, and so on. In some embodiments, the local storage must be embodied as solid-state storage (e.g., SSDs) rather than storage that makes use of hard disk drives. Each of the cloud computing instances with local storage can include a software daemon that, when executed by a cloud computing instance, can present itself to the storage controller applications as if the cloud computing instance were a physical storage device (e.g., one or more SSDs). In such an example, the software daemon may include computer program instructions similar to those that would normally be contained on a storage device such that the storage controller applications can send and receive the same commands that a storage controller would send to storage devices. In such a way, the storage controller applications may include code that is identical to (or substantially identical to) the code that would be executed by the controllers in the storage systems described above. In these and similar embodiments, communications between the storage controller applications and the cloud computing instances with local storage may utilize iSCSI, NVMe over TCP, messaging, a custom protocol, or some other mechanism. Each of the cloud computing instances with local storage may also be coupled to block-storage that is offered by the cloud computing environment. The block-storage that is offered by the cloud computing environment may be embodied, for example, as Amazon Elastic Block Store (‘EBS’) volumes. For example, a first EBS volume may be coupled to a first cloud computing instance, a second EBS volume may be coupled to a second cloud computing instance, and a third EBS volume may be coupled to a third cloud computing instance. In such an example, the block-storage that is offered by the cloud computing environment may be utilized in a manner that is similar to how the NVRAM devices described above are utilized, as the software daemon (or some other module) that is executing within a particular cloud computing instance may, upon receiving a request to write data, initiate a write of the data to its attached EBS volume as well as a write of the data to its local storage resources. In some alternative embodiments, data may only be written to the local storage resources within a particular cloud computing instance. In an alternative embodiment, rather than using the block-storage that is offered by the cloud computing environment as NVRAM, actual RAM on each of the cloud computing instances with local storage may be used as NVRAM, thereby decreasing the network utilization costs that would be associated with using an EBS volume as the NVRAM. The cloud computing instances with local storage may be utilized by the cloud computing instances that support the execution of the storage controller application to service I/O operations that are directed to the cloud-based storage system. Consider an example in which a first cloud computing instance that is executing the storage controller application is operating as the primary controller. In such an example, the first cloud computing instance that is executing the storage controller application may receive (directly or indirectly via the secondary controller) requests to write data to the cloud-based storage system from users of the cloud-based storage system.
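One way to picture the daemon's write path described above is a mirrored write to the local SSD and the attached EBS volume before the write is acknowledged. The sketch below is an assumption-laden simplification: the device paths are hypothetical, and a production daemon would speak iSCSI, NVMe over TCP, or another protocol rather than writing raw device files directly.

    import os

    LOCAL_DEVICE = "/dev/nvme1n1"  # hypothetical instance-store SSD
    EBS_DEVICE = "/dev/nvme2n1"    # hypothetical attached EBS volume used like NVRAM

    def write_block(offset: int, data: bytes) -> None:
        """Mirror an incoming write to local storage and to the EBS volume."""
        for path in (LOCAL_DEVICE, EBS_DEVICE):
            fd = os.open(path, os.O_WRONLY)
            try:
                os.pwrite(fd, data, offset)
                os.fsync(fd)  # make the write durable before acknowledging it
            finally:
                os.close(fd)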
In such an example, the first cloud computing instance that is executing the storage controller application may perform various tasks such as, for example, deduplicating the data contained in the request, compressing the data contained in the request, determining where to write the data contained in the request, and so on, before ultimately sending a request to write a deduplicated, encrypted, or otherwise possibly updated version of the data to one or more of the cloud computing instances with local storage. Either cloud computing instance, in some embodiments, may receive a request to read data from the cloud-based storage system and may ultimately send a request to read data to one or more of the cloud computing instances with local storage. Readers will appreciate that when a request to write data is received by a particular cloud computing instance with local storage, the software daemon or some other module of computer program instructions that is executing on the particular cloud computing instance may be configured to not only write the data to its own local storage resources and any appropriate block-storage that is offered by the cloud computing environment, but the software daemon or some other module of computer program instructions that is executing on the particular cloud computing instance may also be configured to write the data to cloud-based object storage that is attached to the particular cloud computing instance. The cloud-based object storage that is attached to the particular cloud computing instance may be embodied, for example, as Amazon Simple Storage Service (‘S3’) storage that is accessible by the particular cloud computing instance. In other embodiments, the cloud computing instances that each include the storage controller application may initiate the storage of the data in the local storage of the cloud computing instances and the cloud-based object storage. Readers will appreciate that, as described above, the cloud-based storage system may be used to provide block storage services to users of the cloud-based storage system. While the local storage resources and the block-storage resources that are utilized by the cloud computing instances may support block-level access, the cloud-based object storage that is attached to the particular cloud computing instance supports only object-based access. In order to address this, the software daemon or some other module of computer program instructions that is executing on the particular cloud computing instance may be configured to take blocks of data, package those blocks into objects, and write the objects to the cloud-based object storage that is attached to the particular cloud computing instance. Consider an example in which data is written to the local storage resources and the block-storage resources that are utilized by the cloud computing instances in 1 MB blocks. In such an example, assume that a user of the cloud-based storage system issues a request to write data that, after being compressed and deduplicated by the storage controller application, results in the need to write 5 MB of data. In such an example, writing the data to the local storage resources and the block-storage resources that are utilized by the cloud computing instances is relatively straightforward, as 5 blocks that are 1 MB in size are written to the local storage resources and the block-storage resources that are utilized by the cloud computing instances.
In such an example, the software daemon or some other module of computer program instructions that is executing on the particular cloud computing instance may be configured to: 1) create a first object that includes the first 1 MB of data and write the first object to the cloud-based object storage, 2) create a second object that includes the second 1 MB of data and write the second object to the cloud-based object storage, 3) create a third object that includes the third 1 MB of data and write the third object to the cloud-based object storage, and so on. As such, in some embodiments, each object that is written to the cloud-based object storage may be identical (or nearly identical) in size. Readers will appreciate that in such an example, metadata that is associated with the data itself may be included in each object (e.g., the first 1 MB of the object is data and the remaining portion is metadata associated with the data). Readers will appreciate that the cloud-based object storage may be incorporated into the cloud-based storage system to increase the durability of the cloud-based storage system. Continuing with the example described above where the cloud computing instances are EC2 instances, readers will understand that EC2 instances are only guaranteed to have a monthly uptime of 99.9% and data stored in the local instance store only persists during the lifetime of the EC2 instance. As such, relying on the cloud computing instances with local storage as the only source of persistent data storage in the cloud-based storage system may result in a relatively unreliable storage system. Likewise, EBS volumes are designed for 99.999% availability. As such, even relying on EBS as the persistent data store in the cloud-based storage system may result in a storage system that is not sufficiently durable. Amazon S3, however, is designed to provide 99.999999999% durability, meaning that a cloud-based storage system that can incorporate S3 into its pool of storage is substantially more durable than various other options. Readers will appreciate that while a cloud-based storage system that can incorporate S3 into its pool of storage is substantially more durable than various other options, utilizing S3 as the primary pool of storage may result in a storage system that has relatively slow response times and relatively long I/O latencies. As such, the cloud-based storage system may not only store data in S3 but may also store data in the local storage resources and block-storage resources that are utilized by the cloud computing instances, such that read operations can be serviced from the local storage resources and the block-storage resources that are utilized by the cloud computing instances, thereby reducing read latency when users of the cloud-based storage system attempt to read data from the cloud-based storage system. In some embodiments, all data that is stored by the cloud-based storage system may be stored in both: 1) the cloud-based object storage, and 2) at least one of the local storage resources or block-storage resources that are utilized by the cloud computing instances. In such embodiments, the local storage resources and block-storage resources that are utilized by the cloud computing instances may effectively operate as a cache that generally includes all data that is also stored in S3, such that all reads of data may be serviced by the cloud computing instances without requiring the cloud computing instances to access the cloud-based object storage.
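The block-to-object packaging just described can be sketched with a few lines of boto3. The bucket name, key scheme, and helper name below are illustrative assumptions; the disclosure only requires that fixed-size blocks become (nearly) fixed-size objects.

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "cloud-storage-backing"  # hypothetical backing bucket
    BLOCK_SIZE = 1024 * 1024          # 1 MB blocks, as in the example above

    def flush_blocks_to_objects(volume_id: str, start_block: int, data: bytes) -> None:
        """Package equally sized blocks into one object apiece and upload them."""
        for i in range(0, len(data), BLOCK_SIZE):
            block_no = start_block + i // BLOCK_SIZE
            # Keys encode volume and block number so a block can later be
            # located (and rehydrated) without reading object contents.
            key = f"{volume_id}/{block_no:012d}"
            s3.put_object(Bucket=BUCKET, Key=key, Body=data[i:i + BLOCK_SIZE])

Under this sketch, a 5 MB write becomes five 1 MB objects, matching the enumeration above.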
Readers will appreciate that in other embodiments, however, all data that is stored by the cloud-based storage system may be stored in the cloud-based object storage, but less than all data that is stored by the cloud-based storage system may be stored in at least one of the local storage resources or block-storage resources that are utilized by the cloud computing instances. In such an example, various policies may be utilized to determine which subset of the data that is stored by the cloud-based storage system should reside in both: 1) the cloud-based object storage, and 2) at least one of the local storage resources or block-storage resources that are utilized by the cloud computing instances. As described above, when the cloud computing instances with local storage are embodied as EC2 instances, the cloud computing instances with local storage are only guaranteed to have a monthly uptime of 99.9% and data stored in the local instance store only persists during the lifetime of each cloud computing instance with local storage. As such, one or more modules of computer program instructions that are executing within the cloud-based storage system (e.g., a monitoring module that is executing on its own EC2 instance) may be designed to handle the failure of one or more of the cloud computing instances with local storage. In such an example, the monitoring module may handle the failure of one or more of the cloud computing instances with local storage by creating one or more new cloud computing instances with local storage, retrieving data that was stored on the failed cloud computing instances from the cloud-based object storage, and storing the data retrieved from the cloud-based object storage in local storage on the newly created cloud computing instances. Readers will appreciate that many variants of this process may be implemented. Consider an example in which all cloud computing instances with local storage failed. In such an example, the monitoring module may create new cloud computing instances with local storage, where high-bandwidth instance types are selected that allow for the maximum data transfer rates between the newly created high-bandwidth cloud computing instances with local storage and the cloud-based object storage. Readers will appreciate that instance types are selected that allow for the maximum data transfer rates between the new cloud computing instances and the cloud-based object storage so that the new high-bandwidth cloud computing instances can be rehydrated with data from the cloud-based object storage as quickly as possible. Once the new high-bandwidth cloud computing instances are rehydrated with data from the cloud-based object storage, less expensive lower-bandwidth cloud computing instances may be created, data may be migrated to the less expensive lower-bandwidth cloud computing instances, and the high-bandwidth cloud computing instances may be terminated. Readers will appreciate that in some embodiments, the number of new cloud computing instances that are created may substantially exceed the number of cloud computing instances that are needed to locally store all of the data stored by the cloud-based storage system.
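The monitoring module's recovery step, rehydrating a replacement instance from the object store, could look roughly like the following; the bucket name and key scheme continue the assumptions from the earlier sketch, and the function name is invented for illustration.

    import os
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "cloud-storage-backing"  # hypothetical backing bucket

    def rehydrate(volume_id: str, local_dir: str) -> int:
        """Pull every object for a volume back into local storage; returns the count."""
        os.makedirs(local_dir, exist_ok=True)
        restored = 0
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=BUCKET, Prefix=f"{volume_id}/"):
            for obj in page.get("Contents", []):
                dest = os.path.join(local_dir, obj["Key"].split("/")[-1])
                s3.download_file(BUCKET, obj["Key"], dest)
                restored += 1
        return restored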
The number of new cloud computing instances that are created may substantially exceed the number of cloud computing instances that are needed to locally store all of the data stored by the cloud-based storage system in order to more rapidly pull data from the cloud-based object storage and into the new cloud computing instances, as each new cloud computing instance can (in parallel) retrieve some portion of the data stored by the cloud-based storage system. In such embodiments, once the data stored by the cloud-based storage system has been pulled into the newly created cloud computing instances, the data may be consolidated within a subset of the newly created cloud computing instances, and those newly created cloud computing instances that are excessive may be terminated. Consider an example in which 1,000 cloud computing instances are needed in order to locally store all valid data that users of the cloud-based storage system have written to the cloud-based storage system. In such an example, assume that all 1,000 cloud computing instances fail. In such an example, the monitoring module may cause 100,000 cloud computing instances to be created, where each cloud computing instance is responsible for retrieving, from the cloud-based object storage, a distinct 1/100,000th chunk of the valid data that users of the cloud-based storage system have written to the cloud-based storage system and locally storing the distinct chunk of the dataset that it retrieved. In such an example, because each of the 100,000 cloud computing instances can retrieve data from the cloud-based object storage in parallel, the caching layer may be restored 100 times faster as compared to an embodiment where the monitoring module only creates 1,000 replacement cloud computing instances. In such an example, over time the data that is stored locally in the 100,000 cloud computing instances could be consolidated into 1,000 cloud computing instances and the remaining 99,000 cloud computing instances could be terminated. Readers will appreciate that various performance aspects of the cloud-based storage system may be monitored (e.g., by a monitoring module that is executing in an EC2 instance) such that the cloud-based storage system can be scaled up or scaled out as needed. Consider an example in which the monitoring module monitors the performance of the cloud-based storage system via communications with one or more of the cloud computing instances that are each used to support the execution of a storage controller application, via monitoring communications between cloud computing instances, via monitoring communications between cloud computing instances and the cloud-based object storage, or in some other way. In such an example, assume that the monitoring module determines that the cloud computing instances that are used to support the execution of a storage controller application are undersized and not sufficiently servicing the I/O requests that are issued by users of the cloud-based storage system. In such an example, the monitoring module may create a new, more powerful cloud computing instance (e.g., a cloud computing instance of a type that includes more processing power, more memory, etc.) that includes the storage controller application such that the new, more powerful cloud computing instance can begin operating as the primary controller.
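The parallel-recovery arithmetic above reduces to a simple partitioning of the keyspace across however many temporary instances are created; the helper below is an illustrative sketch of that division, not code from the disclosure.

    def chunk_assignments(total_objects: int, instance_count: int):
        """Give each temporary instance a distinct 1/instance_count share to fetch."""
        per_instance = -(-total_objects // instance_count)  # ceiling division
        return [
            (i * per_instance, min((i + 1) * per_instance, total_objects))
            for i in range(instance_count)
        ]

    # With 100,000 instances rather than 1,000, each instance pulls 1/100th as
    # much data, so (bandwidth permitting) the cache layer restores ~100x faster.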
Likewise, if the monitoring module determines that the cloud computing instances that are used to support the execution of a storage controller application are oversized and that cost savings could be gained by switching to a smaller, less powerful cloud computing instance, the monitoring module may create a new, less powerful (and less expensive) cloud computing instance that includes the storage controller application such that the new, less powerful cloud computing instance can begin operating as the primary controller. Consider, as an additional example of dynamically sizing the cloud-based storage system, an example in which the monitoring module determines that the utilization of the local storage that is collectively provided by the cloud computing instances has reached a predetermined utilization threshold (e.g., 95%). In such an example, the monitoring module may create additional cloud computing instances with local storage to expand the pool of local storage that is offered by the cloud computing instances. Alternatively, the monitoring module may create one or more new cloud computing instances that have larger amounts of local storage than the already existing cloud computing instances, such that data stored in an already existing cloud computing instance can be migrated to the one or more new cloud computing instances and the already existing cloud computing instance can be terminated, thereby expanding the pool of local storage that is offered by the cloud computing instances. Likewise, if the pool of local storage that is offered by the cloud computing instances is unnecessarily large, data can be consolidated and some cloud computing instances can be terminated. Readers will appreciate that the cloud-based storage system may be sized up and down automatically by a monitoring module applying a predetermined set of rules that may be relatively simple or relatively complicated. In fact, the monitoring module may not only take into account the current state of the cloud-based storage system, but the monitoring module may also apply predictive policies that are based on, for example, observed behavior (e.g., every night from 10 PM until 6 AM usage of the storage system is relatively light), predetermined fingerprints (e.g., every time a virtual desktop infrastructure adds 100 virtual desktops, the number of IOPS directed to the storage system increases by X), and so on. In such an example, the dynamic scaling of the cloud-based storage system may be based on current performance metrics, predicted workloads, and many other factors, including combinations thereof. Readers will further appreciate that because the cloud-based storage system may be dynamically scaled, the cloud-based storage system may even operate in a way that is more dynamic. Consider the example of garbage collection. In a traditional storage system, the amount of storage is fixed. As such, at some point the storage system may be forced to perform garbage collection as the amount of available storage has become so constrained that the storage system is on the verge of running out of storage. In contrast, the cloud-based storage system described here can always ‘add’ additional storage (e.g., by adding more cloud computing instances with local storage). Because the cloud-based storage system described here can always ‘add’ additional storage, the cloud-based storage system can make more intelligent decisions regarding when to perform garbage collection.
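The rule-based sizing decisions described above can be caricatured as a small decision function. Every threshold below other than the 95% utilization figure is an invented example; real policies could also fold in the predictive signals just mentioned.

    def scaling_action(utilization: float, iops: int,
                       util_threshold: float = 0.95, busy_iops: int = 50_000) -> str:
        """Map current metrics to one of the scaling actions described above."""
        if utilization >= util_threshold:
            return "add-local-storage-instances"  # expand the local storage pool
        if iops > busy_iops:
            return "replace-controller-with-larger-instance"
        if utilization < 0.40 and iops < busy_iops // 10:
            return "consolidate-and-terminate"    # shrink the system to save cost
        return "no-change"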
For example, the cloud-based storage system may implement a policy that garbage collection only be performed when the number of IOPS being serviced by the cloud-based storage system falls below a certain level. In some embodiments, other system-level functions (e.g., deduplication, compression) may also be turned off and on in response to system load, given that the size of the cloud-based storage system is not constrained in the same way that traditional storage systems are constrained. Readers will appreciate that embodiments of the present disclosure resolve an issue with block-storage services offered by some cloud computing environments, as some cloud computing environments only allow for one cloud computing instance to connect to a block-storage volume at a single time. For example, in Amazon AWS, only a single EC2 instance may be connected to an EBS volume. Through the use of EC2 instances with local storage, embodiments of the present disclosure can offer multi-connect capabilities where multiple EC2 instances can connect to another EC2 instance with local storage (‘a drive instance’). In such embodiments, the drive instances may include software executing within the drive instance that allows the drive instance to support I/O directed to a particular volume from each connected EC2 instance. As such, some embodiments of the present disclosure may be embodied as multi-connect block storage services. In some embodiments, especially in embodiments where the cloud-based object storage resources are embodied as Amazon S3, the cloud-based storage system may include one or more modules (e.g., a module of computer program instructions executing on an EC2 instance) that are configured to ensure that when the local storage of a particular cloud computing instance is rehydrated with data from S3, the appropriate data is actually in S3. This issue arises largely because S3 implements an eventual consistency model where, when overwriting an existing object, reads of the object will eventually (but not necessarily immediately) become consistent and will eventually (but not necessarily immediately) return the overwritten version of the object. To address this issue, in some embodiments of the present disclosure, objects in S3 are never overwritten. Instead, a traditional ‘overwrite’ would result in the creation of a new object (that includes the updated version of the data) and the eventual deletion of the old object (that includes the previous version of the data). In some embodiments of the present disclosure, as part of an attempt to never (or almost never) overwrite an object, when data is written to S3 the resultant object may be tagged with a sequence number. In some embodiments, these sequence numbers may be persisted elsewhere (e.g., in a database) such that at any point in time, the sequence number associated with the most up-to-date version of some piece of data can be known. In such a way, a determination can be made as to whether S3 has the most recent version of some piece of data by merely reading the sequence number associated with an object—and without actually reading the data from S3. The ability to make this determination may be particularly important when a cloud computing instance with local storage crashes, as it would be undesirable to rehydrate the local storage of a replacement cloud computing instance with out-of-date data.
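A sketch of the never-overwrite, sequence-numbered object scheme follows. The in-memory dict stands in for the external database that, as the text notes, would persist the sequence numbers; the key format and function names are assumptions made for illustration.

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "cloud-storage-backing"  # hypothetical backing bucket
    sequence_db: dict[str, int] = {}  # stand-in for the external sequence database

    def write_versioned(base_key: str, body: bytes) -> str:
        """Never overwrite: each logical write creates a brand-new object."""
        seq = sequence_db.get(base_key, 0) + 1
        key = f"{base_key}.{seq:016d}"  # sequence number embedded in the key
        s3.put_object(Bucket=BUCKET, Key=key, Body=body)
        sequence_db[base_key] = seq     # recorded so the newest version is known
        return key

    def is_current(base_key: str, seq: int) -> bool:
        # Freshness is decided from the sequence number alone; the object
        # data is never fetched, so it can stay encrypted and unread.
        return sequence_db.get(base_key) == seq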
In fact, because the cloud-based storage system does not need to access the data to verify its validity, the data can stay encrypted and access charges can be avoided. The storage systems described above may carry out intelligent data backup techniques through which data stored in the storage system may be copied and stored in a distinct location to avoid data loss in the event of equipment failure or some other form of catastrophe. For example, the storage systems described above may be configured to examine each backup to avoid restoring the storage system to an undesirable state. Consider an example in which malware infects the storage system. In such an example, the storage system may include software resources 314 that can scan each backup to identify backups that were captured before the malware infected the storage system and those backups that were captured after the malware infected the storage system. In such an example, the storage system may restore itself from a backup that does not include the malware—or at least not restore the portions of a backup that contained the malware. In such an example, the storage system may include software resources 314 that can scan each backup to identify the presence of malware (or a virus, or some other undesirable code), for example, by identifying write operations that were serviced by the storage system and originated from a network subnet that is suspected to have delivered the malware, by identifying write operations that were serviced by the storage system and originated from a user that is suspected to have delivered the malware, by identifying write operations that were serviced by the storage system and examining the content of the write operation against fingerprints of the malware, and in many other ways. Readers will further appreciate that the backups (often in the form of one or more snapshots) may also be utilized to perform rapid recovery of the storage system. Consider an example in which the storage system is infected with ransomware that locks users out of the storage system. In such an example, software resources 314 within the storage system may be configured to detect the presence of ransomware and may be further configured to restore the storage system to a point-in-time, using the retained backups, prior to the point-in-time at which the ransomware infected the storage system. In such an example, the presence of ransomware may be explicitly detected through the use of software tools utilized by the system, through the use of a key (e.g., a USB drive) that is inserted into the storage system, or in a similar way. Likewise, the presence of ransomware may be inferred in response to system activity meeting a predetermined fingerprint such as, for example, no reads or writes coming into the system for a predetermined period of time. Readers will appreciate that the various components described above may be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage, and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software-driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways.
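The inferred-detection fingerprint mentioned above (a normally busy system that suddenly receives no reads or writes) can be expressed as a tiny heuristic; the thirty-minute quiet window below is an invented illustration, not a value from the disclosure.

    import time

    class RansomwareHeuristic:
        """Flags suspicious silence: no I/O for a predetermined period."""

        def __init__(self, quiet_limit_seconds: float = 1800):
            self.quiet_limit = quiet_limit_seconds
            self.last_io = time.time()

        def record_io(self) -> None:
            self.last_io = time.time()  # called on every serviced read or write

        def suspicious(self) -> bool:
            return (time.time() - self.last_io) > self.quiet_limit

A real implementation would combine signals like this with the explicit detection tools described above before triggering a restore from retained backups.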
Readers will appreciate that the storage systems described above may be useful for supporting various types of software applications. For example, the storage system 306 may be useful in supporting artificial intelligence (‘AI’) applications, database applications, DevOps projects, electronic design automation tools, event-driven software applications, high performance computing applications, simulation applications, high-speed data capture and analysis applications, machine learning applications, media production applications, media serving applications, picture archiving and communication systems (‘PACS’) applications, software development applications, virtual reality applications, augmented reality applications, and many other types of applications by providing storage resources to such applications. The storage systems described above may operate to support a wide variety of applications. In view of the fact that the storage systems include compute resources, storage resources, and a wide variety of other resources, the storage systems may be well suited to support applications that are resource intensive such as, for example, AI applications. AI applications may be deployed in a variety of fields, including: predictive maintenance in manufacturing and related fields, healthcare applications such as patient data & risk analytics, retail and marketing deployments (e.g., search advertising, social media advertising), supply chain solutions, fintech solutions such as business analytics & reporting tools, operational deployments such as real-time analytics tools, application performance management tools, IT infrastructure management tools, and many others. Such AI applications may enable devices to perceive their environment and take actions that maximize their chance of success at some goal. Examples of such AI applications can include IBM Watson, Microsoft Oxford, Google DeepMind, Baidu Minwa, and others. The storage systems described above may also be well suited to support other types of applications that are resource intensive such as, for example, machine learning applications. Machine learning applications may perform various types of data analysis to automate analytical model building. Using algorithms that iteratively learn from data, machine learning applications can enable computers to learn without being explicitly programmed. One particular area of machine learning is referred to as reinforcement learning, which involves taking suitable actions to maximize reward in a particular situation. Reinforcement learning may be employed to find the best possible behavior or path that a particular software application or machine should take in a specific situation. Reinforcement learning differs from other areas of machine learning (e.g., supervised learning, unsupervised learning) in that correct input/output pairs need not be presented for reinforcement learning and sub-optimal actions need not be explicitly corrected. In addition to the resources already described, the storage systems described above may also include graphics processing units (‘GPUs’), occasionally referred to as visual processing units (‘VPUs’). Such GPUs may be embodied as specialized electronic circuits that rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device.
Such GPUs may be included within any of the computing devices that are part of the storage systems described above, including as one of many individually scalable components of a storage system, where other examples of individually scalable components of such a storage system can include storage components, memory components, compute components (e.g., CPUs, FPGAs, ASICs), networking components, software components, and others. In addition to GPUs, the storage systems described above may also include neural network processors (‘NNPs’) for use in various aspects of neural network processing. Such NNPs may be used in place of (or in addition to) GPUs and may also be independently scalable. As described above, the storage systems described herein may be configured to support artificial intelligence applications, machine learning applications, big data analytics applications, and many other types of applications. The rapid growth in these sorts of applications is being driven by three technologies: deep learning (DL), GPU processors, and Big Data. Deep learning is a computing model that makes use of massively parallel neural networks inspired by the human brain. Instead of experts handcrafting software, a deep learning model writes its own software by learning from lots of examples. Such GPUs may include thousands of cores that are well-suited to run algorithms that loosely represent the parallel nature of the human brain. Advances in deep neural networks, including the development of multi-layer neural networks, have ignited a new wave of algorithms and tools for data scientists to tap into their data with artificial intelligence (AI). With improved algorithms, larger data sets, and various frameworks (including open-source software libraries for machine learning across a range of tasks), data scientists are tackling new use cases like autonomous driving vehicles, natural language processing and understanding, computer vision, machine reasoning, strong AI, and many others. Applications of such techniques may include: machine and vehicular object detection, identification and avoidance; visual recognition, classification and tagging; algorithmic financial trading strategy performance management; simultaneous localization and mapping; predictive maintenance of high-value machinery; prevention of cyber security threats; expertise automation; image recognition and classification; question answering; robotics; text analytics (extraction, classification) and text generation and translation; and many others. Applications of AI techniques have materialized in a wide array of products including, for example, Amazon Echo's speech recognition technology that allows users to talk to their machines, Google Translate™ which allows for machine-based language translation, Spotify's Discover Weekly that provides recommendations on new songs and artists that a user may like based on the user's usage and traffic analysis, Quill's text generation offering that takes structured data and turns it into narrative stories, Chatbots that provide real-time, contextually specific answers to questions in a dialog format, and many others. Data is the heart of modern AI and deep learning algorithms. Before training can begin, one problem that must be addressed revolves around collecting the labeled data that is crucial for training an accurate AI model. A full scale AI deployment may be required to continuously collect, clean, transform, label, and store large amounts of data.
Adding additional high quality data points directly translates to more accurate models and better insights. Data samples may undergo a series of processing steps including, but not limited to: 1) ingesting the data from an external source into the training system and storing the data in raw form, 2) cleaning and transforming the data in a format convenient for training, including linking data samples to the appropriate label, 3) exploring parameters and models, quickly testing with a smaller dataset, and iterating to converge on the most promising models to push into the production cluster, 4) executing training phases to select random batches of input data, including both new and older samples, and feeding those into production GPU servers for computation to update model parameters, and 5) evaluating, including using a holdback portion of the data not used in training to evaluate model accuracy on the holdout data. This lifecycle may apply for any type of parallelized machine learning, not just neural networks or deep learning; a sketch of the lifecycle as code follows this paragraph. For example, standard machine learning frameworks may rely on CPUs instead of GPUs but the data ingest and training workflows may be the same. Readers will appreciate that a single shared storage data hub creates a coordination point throughout the lifecycle without the need for extra data copies among the ingest, preprocessing, and training stages. Rarely is the ingested data used for only one purpose, and shared storage gives the flexibility to train multiple different models or apply traditional analytics to the data. Readers will appreciate that each stage in the AI data pipeline may have varying requirements from the data hub (e.g., the storage system or collection of storage systems). Scale-out storage systems must deliver uncompromising performance for all manner of access types and patterns—from small, metadata-heavy workloads to large files, from random to sequential access patterns, and from low to high concurrency. The storage systems described above may serve as an ideal AI data hub as the systems may service unstructured workloads. In the first stage, data is ideally ingested and stored on to the same data hub that following stages will use, in order to avoid excess data copying. The next two steps can be done on a standard compute server that optionally includes a GPU, and then in the fourth and last stage, full training production jobs are run on powerful GPU-accelerated servers. Often, there is a production pipeline alongside an experimental pipeline operating on the same dataset. Further, the GPU-accelerated servers can be used independently for different models or joined together to train on one larger model, even spanning multiple systems for distributed training. If the shared storage tier is slow, then data must be copied to local storage for each phase, resulting in wasted time staging data onto different servers. The ideal data hub for the AI training pipeline delivers performance similar to data stored locally on the server node while also having the simplicity and performance to enable all pipeline stages to operate concurrently. In order for the storage systems described above to serve as a data hub or as part of an AI deployment, in some embodiments the storage systems may be configured to provide DMA between storage devices that are included in the storage systems and one or more GPUs that are used in an AI or big data analytics pipeline.
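Read as code, the five-stage lifecycle above chains together over one shared data hub. The stub functions below are placeholders invented for illustration; only the stage ordering and the holdback split reflect the text.

    def ingest(source):                   # 1) ingest and store the data in raw form
        return list(source)

    def transform_and_label(raw):         # 2) clean/transform and attach labels
        return [(sample, 0) for sample in raw]

    def explore_models(subset):           # 3) quick iteration on a smaller dataset
        return {"candidate": "model-a", "tried": len(subset)}

    def train(candidate, dataset):        # 4) full training on the production cluster
        return {"model": candidate["candidate"], "trained_on": len(dataset)}

    def evaluate(model, holdback):        # 5) accuracy on data never used in training
        return {"model": model["model"], "holdback_size": len(holdback)}

    def run_pipeline(raw_source, data_hub):
        data_hub["raw"] = ingest(raw_source)        # every stage shares one data hub
        clean = transform_and_label(data_hub["raw"])
        data_hub["clean"] = clean
        split = max(1, len(clean) // 10)            # hold back ~10% for evaluation
        candidate = explore_models(clean[:split])
        model = train(candidate, clean[:-split])
        return evaluate(model, clean[-split:])

    print(run_pipeline(range(100), {}))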
The one or more GPUs may be coupled to the storage system, for example, via NVMe-over-Fabrics (‘NVMe-oF’) such that bottlenecks such as the host CPU can be bypassed and the storage system (or one of the components contained therein) can directly access GPU memory. In such an example, the storage systems may leverage API hooks to the GPUs to transfer data directly to the GPUs. For example, the GPUs may be embodied as Nvidia™ GPUs and the storage systems may support GPUDirect Storage (‘GDS’) software, or have similar proprietary software, that enables the storage system to transfer data to the GPUs via RDMA or a similar mechanism. Readers will appreciate that in embodiments where the storage systems are embodied as cloud-based storage systems as described below, virtual drives or other components within such a cloud-based storage system may also be configured in a similar manner. Although the preceding paragraphs discuss deep learning applications, readers will appreciate that the storage systems described herein may also be part of a distributed deep learning (‘DDL’) platform to support the execution of DDL algorithms. The storage systems described above may also be paired with other technologies such as TensorFlow, an open-source software library for dataflow programming across a range of tasks that may be used for machine learning applications such as neural networks, to facilitate the development of such machine learning models, applications, and so on. The storage systems described above may also be used in a neuromorphic computing environment. Neuromorphic computing is a form of computing that mimics brain cells. To support neuromorphic computing, an architecture of interconnected “neurons” replaces traditional computing models with low-powered signals that go directly between neurons for more efficient computation. Neuromorphic computing may make use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system, as well as analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems for perception, motor control, or multisensory integration. Readers will appreciate that the storage systems described above may be configured to support the storage or use of (among other types of data) blockchains. In addition to supporting the storage and use of blockchain technologies, the storage systems described above may also support the storage and use of derivative items such as, for example, open source blockchains and related tools that are part of the IBM™ Hyperledger project, permissioned blockchains in which a certain number of trusted parties are allowed to access the block chain, blockchain products that enable developers to build their own distributed ledger projects, and others. Blockchains and the storage systems described herein may be leveraged to support on-chain storage of data as well as off-chain storage of data. Off-chain storage of data can be implemented in a variety of ways and can occur when the data itself is not stored within the blockchain. For example, in one embodiment, a hash function may be utilized and the data itself may be fed into the hash function to generate a hash value. In such an example, the hashes of large pieces of data may be embedded within transactions, instead of the data itself. Readers will appreciate that, in other embodiments, alternatives to blockchains may be used to facilitate the decentralized storage of information.
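The off-chain pattern described above, embedding only a hash of the data in the transaction, is easy to make concrete; the transaction shape below is a minimal assumption, with SHA-256 standing in for whatever hash function an embodiment would choose.

    import hashlib

    def off_chain_record(data: bytes):
        """Keep the payload in the storage system; put only its digest on-chain."""
        digest = hashlib.sha256(data).hexdigest()
        transaction = {"payload_hash": digest}  # embedded within the transaction
        return transaction, data                # the data itself stays off-chain

    def verify(transaction: dict, data: bytes) -> bool:
        # Anyone holding the off-chain data can prove it matches the chain.
        return hashlib.sha256(data).hexdigest() == transaction["payload_hash"]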
Readers will appreciate that, in other embodiments, alternatives to blockchains may be used to facilitate the decentralized storage of information. For example, one alternative to a blockchain that may be used is a blockweave. While conventional blockchains store every transaction to achieve validation, a blockweave permits secure decentralization without the usage of the entire chain, thereby enabling low cost on-chain storage of data. Such blockweaves may utilize a consensus mechanism that is based on proof of access (PoA) and proof of work (PoW). The storage systems described above may, either alone or in combination with other computing devices, be used to support in-memory computing applications. In-memory computing involves the storage of information in RAM that is distributed across a cluster of computers. Readers will appreciate that the storage systems described above, especially those that are configurable with customizable amounts of processing resources, storage resources, and memory resources (e.g., those systems in which blades contain configurable amounts of each type of resource), may be configured so as to provide an infrastructure that can support in-memory computing. Likewise, the storage systems described above may include component parts (e.g., NVDIMMs, 3D crosspoint storage that provides fast, persistent random access memory) that can actually provide for an improved in-memory computing environment as compared to in-memory computing environments that rely on RAM distributed across dedicated servers. In some embodiments, the storage systems described above may be configured to operate as a hybrid in-memory computing environment that includes a universal interface to all storage media (e.g., RAM, flash storage, 3D crosspoint storage). In such embodiments, users may have no knowledge regarding the details of where their data is stored, but they can still use the same full, unified API to address data. In such embodiments, the storage system may (in the background) move data to the fastest layer available, including intelligently placing the data in dependence upon various characteristics of the data or in dependence upon some other heuristic. In such an example, the storage systems may even make use of existing products such as Apache Ignite and GridGain to move data between the various storage layers, or may make use of custom software to do so. The storage systems described herein may implement various optimizations to improve the performance of in-memory computing such as, for example, having computations occur as close to the data as possible.
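One way to picture such a hybrid in-memory environment is as a single put/get interface with placement handled in the background. The following deliberately simplified Python sketch assumes a hypothetical promotion heuristic and tier names; it is not any product's API:

```python
# Minimal sketch of the hybrid in-memory idea described above: one unified
# put/get API, with the system (not the user) choosing RAM, flash, or
# 3D crosspoint in the background. The placement heuristic is hypothetical.
class HybridStore:
    TIERS = ("ram", "3d_crosspoint", "flash")  # fastest to slowest

    def __init__(self):
        self.data = {}       # key -> value, regardless of tier
        self.tier = {}       # key -> current tier
        self.hits = {}       # key -> access count (the heuristic's input)

    def put(self, key, value):
        self.data[key] = value
        self.hits[key] = 0
        self.tier[key] = "flash"          # new data lands on the slow tier

    def get(self, key):
        self.hits[key] += 1
        # Background promotion: hot data migrates toward the fastest tier.
        if self.hits[key] > 10:
            self.tier[key] = "ram"
        elif self.hits[key] > 3:
            self.tier[key] = "3d_crosspoint"
        return self.data[key]             # caller never sees the tier

store = HybridStore()
store.put("model-params", b"...")
for _ in range(12):
    store.get("model-params")
print(store.tier["model-params"])  # "ram": promoted without caller involvement
```

The essential property is that the caller's interface never changes as data migrates between layers.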
Readers will further appreciate that in some embodiments, the storage systems described above may be paired with other resources to support the applications described above. For example, one infrastructure could include primary compute in the form of servers and workstations which specialize in using General-purpose computing on graphics processing units ('GPGPU') to accelerate deep learning applications and that are interconnected into a computation engine to train parameters for deep neural networks. Each system may have Ethernet external connectivity, InfiniBand external connectivity, some other form of external connectivity, or some combination thereof. In such an example, the GPUs can be grouped for a single large training or used independently to train multiple models. The infrastructure could also include a storage system such as those described above to provide, for example, a scale-out all-flash file or object store through which data can be accessed via high-performance protocols such as NFS, S3, and so on. The infrastructure can also include, for example, redundant top-of-rack Ethernet switches connected to storage and compute via ports in MLAG port channels for redundancy. The infrastructure could also include additional compute in the form of whitebox servers, optionally with GPUs, for data ingestion, pre-processing, and model debugging. Readers will appreciate that additional infrastructures are also possible. Readers will appreciate that the storage systems described above, either alone or in coordination with other computing machinery, may be configured to support other AI related tools. For example, the storage systems may make use of tools like ONNX or other open neural network exchange formats that make it easier to transfer models written in different AI frameworks. Likewise, the storage systems may be configured to support tools like Amazon's Gluon that allow developers to prototype, build, and train deep learning models. In fact, the storage systems described above may be part of a larger platform, such as IBM™ Cloud Private for Data, that includes integrated data science, data engineering and application building services. Readers will further appreciate that the storage systems described above may also be deployed as an edge solution. Such an edge solution may be in place to optimize cloud computing systems by performing data processing at the edge of the network, near the source of the data. Edge computing can push applications, data and computing power (i.e., services) away from centralized points to the logical extremes of a network. Through the use of edge solutions such as the storage systems described above, computational tasks may be performed using the compute resources provided by such storage systems, data may be stored using the storage resources of the storage system, and cloud-based services may be accessed through the use of various resources of the storage system (including networking resources). By performing computational tasks on the edge solution, storing data on the edge solution, and generally making use of the edge solution, the consumption of expensive cloud-based resources may be avoided and, in fact, performance improvements may be experienced relative to a heavier reliance on cloud-based resources. While many tasks may benefit from the utilization of an edge solution, some particular uses may be especially suited for deployment in such an environment. For example, devices like drones, autonomous cars, robots, and others may require extremely rapid processing, so fast, in fact, that sending data up to a cloud environment and back to receive data processing support may simply be too slow. As an additional example, some IoT devices such as connected video cameras may not be well-suited for the utilization of cloud-based resources, as it may be impractical, not only from a privacy, security, or financial perspective, to send the data to the cloud simply because of the sheer volume of data that is involved. As such, many tasks that rely on data processing, storage, or communications may be better served by platforms that include edge solutions such as the storage systems described above.
The storage systems described above may, alone or in combination with other computing resources, serve as a network edge platform that combines compute resources, storage resources, networking resources, cloud technologies and network virtualization technologies, and so on. As part of the network, the edge may take on characteristics similar to other network facilities, from the customer premises and backhaul aggregation facilities to Points of Presence (PoPs) and regional data centers. Readers will appreciate that network workloads, such as Virtual Network Functions (VNFs) and others, will reside on the network edge platform. Enabled by a combination of containers and virtual machines, the network edge platform may rely on controllers and schedulers that are no longer geographically co-located with the data processing resources. The functions, as microservices, may split into control planes, user and data planes, or even state machines, allowing for independent optimization and scaling techniques to be applied. Such user and data planes may be enabled through increased accelerators, both those residing in server platforms, such as FPGAs and Smart NICs, and through SDN-enabled merchant silicon and programmable ASICs. The storage systems described above may also be optimized for use in big data analytics. Big data analytics may be generally described as the process of examining large and varied data sets to uncover hidden patterns, unknown correlations, market trends, customer preferences and other useful information that can help organizations make more-informed business decisions. As part of that process, semi-structured and unstructured data such as, for example, internet clickstream data, web server logs, social media content, text from customer emails and survey responses, mobile-phone call-detail records, IoT sensor data, and other data may be converted to a structured form. The storage systems described above may also support (including implementing as a system interface) applications that perform tasks in response to human speech. For example, the storage systems may support the execution of intelligent personal assistant applications such as, for example, Amazon's Alexa, Apple Siri, Google Voice, Samsung Bixby, Microsoft Cortana, and others. While the examples described in the previous sentence make use of voice as input, the storage systems described above may also support chatbots, talkbots, chatterbots, or artificial conversational entities or other applications that are configured to conduct a conversation via auditory or textual methods. Likewise, the storage system may actually execute such an application to enable a user such as a system administrator to interact with the storage system via speech. Such applications are generally capable of voice interaction, music playback, making to-do lists, setting alarms, streaming podcasts, playing audiobooks, and providing weather, traffic, and other real time information, such as news, although in embodiments in accordance with the present disclosure, such applications may be utilized as interfaces to various system management operations. The storage systems described above may also implement AI platforms for delivering on the vision of self-driving storage. Such AI platforms may be configured to deliver global predictive intelligence by collecting and analyzing large amounts of storage system telemetry data points to enable effortless management, analytics and support. In fact, such storage systems may be capable of predicting both capacity and performance, as well as generating intelligent advice on workload deployment, interaction and optimization. Such AI platforms may be configured to scan all incoming storage system telemetry data against a library of issue fingerprints to predict and resolve incidents in real-time, before they impact customer environments, and to capture hundreds of variables related to performance that are used to forecast performance load.
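Fingerprint scanning of telemetry can be illustrated compactly. In the hypothetical Python sketch below, both the fingerprint library and the telemetry fields are invented for illustration; a real platform would match far richer signatures across fleets of arrays:

```python
# Minimal sketch of the fingerprint-scanning idea described above: incoming
# telemetry is compared against a library of known issue signatures. The
# fingerprints and telemetry fields here are hypothetical.
FINGERPRINTS = [
    {"issue": "drive-wearout", "match": lambda t: t["media_errors"] > 100},
    {"issue": "overload", "match": lambda t: t["latency_ms"] > 50 and t["iops"] > 90000},
]

def scan(telemetry: dict) -> list:
    """Return the names of every known issue this sample matches."""
    return [f["issue"] for f in FINGERPRINTS if f["match"](telemetry)]

sample = {"media_errors": 250, "latency_ms": 12, "iops": 40000}
print(scan(sample))  # ['drive-wearout'] -> raise the incident before impact
```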
The storage systems described above may support the serialized or simultaneous execution of artificial intelligence applications, machine learning applications, data analytics applications, data transformations, and other tasks that collectively may form an AI ladder. Such an AI ladder may effectively be formed by combining such elements to form a complete data science pipeline, where dependencies exist between elements of the AI ladder. For example, AI may require that some form of machine learning has taken place, machine learning may require that some form of analytics has taken place, analytics may require that some form of data and information architecting has taken place, and so on. As such, each element may be viewed as a rung in an AI ladder that collectively can form a complete and sophisticated AI solution. The storage systems described above may also, either alone or in combination with other computing environments, be used to deliver an AI everywhere experience where AI permeates wide and expansive aspects of business and life. For example, AI may play an important role in the delivery of deep learning solutions, deep reinforcement learning solutions, artificial general intelligence solutions, autonomous vehicles, cognitive computing solutions, commercial UAVs or drones, conversational user interfaces, enterprise taxonomies, ontology management solutions, machine learning solutions, smart dust, smart robots, smart workplaces, and many others. The storage systems described above may also, either alone or in combination with other computing environments, be used to deliver a wide range of transparently immersive experiences (including those that use digital twins of various "things" such as people, places, processes, systems, and so on) where technology can introduce transparency between people, businesses, and things. Such transparently immersive experiences may be delivered as augmented reality technologies, connected homes, virtual reality technologies, brain-computer interfaces, human augmentation technologies, nanotube electronics, volumetric displays, 4D printing technologies, or others. The storage systems described above may also, either alone or in combination with other computing environments, be used to support a wide variety of digital platforms. Such digital platforms can include, for example, 5G wireless systems and platforms, digital twin platforms, edge computing platforms, IoT platforms, quantum computing platforms, serverless PaaS, software-defined security, neuromorphic computing platforms, and so on. The storage systems described above may also be part of a multi-cloud environment in which multiple cloud computing and storage services are deployed in a single heterogeneous architecture. In order to facilitate the operation of such a multi-cloud environment, DevOps tools may be deployed to enable orchestration across clouds.
Likewise, continuous development and continuous integration tools may be deployed to standardize processes around continuous integration and delivery, new feature rollout, and the provisioning of cloud workloads. By standardizing these processes, a multi-cloud strategy may be implemented that enables the utilization of the best provider for each workload. The storage systems described above may be used as a part of a platform to enable the use of crypto-anchors that may be used to authenticate a product's origins and contents to ensure that it matches a blockchain record associated with the product. Similarly, as part of a suite of tools to secure data stored on the storage system, the storage systems described above may implement various encryption technologies and schemes, including lattice cryptography. Lattice cryptography can involve constructions of cryptographic primitives that involve lattices, either in the construction itself or in the security proof. Unlike public-key schemes such as the RSA, Diffie-Hellman or Elliptic-Curve cryptosystems, which could be attacked by a sufficiently powerful quantum computer, some lattice-based constructions appear to be resistant to attack by both classical and quantum computers. A quantum computer is a device that performs quantum computing. Quantum computing is computing using quantum-mechanical phenomena, such as superposition and entanglement. Quantum computers differ from traditional computers that are based on transistors, as such traditional computers require that data be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1). In contrast to traditional computers, quantum computers use quantum bits, which can be in superpositions of states. A quantum computer maintains a sequence of qubits, where a single qubit can represent a one, a zero, or any quantum superposition of those two qubit states. A pair of qubits can be in any quantum superposition of 4 states, and three qubits in any superposition of 8 states. A quantum computer with n qubits can generally be in an arbitrary superposition of up to 2^n different states simultaneously, whereas a traditional computer can only be in one of these states at any one time. A quantum Turing machine is a theoretical model of such a computer.
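The 2^n scaling noted above can be made concrete with a short worked example. The following Python fragment merely tabulates the growth and builds an equal superposition over a 3-qubit register as a classical list of amplitudes; it is an illustration of the arithmetic, not a quantum simulator:

```python
# Worked illustration of the scaling claim above: simulating n qubits on a
# classical machine requires tracking 2^n complex amplitudes, one per basis
# state, whereas the classical machine itself occupies only one state at a time.
for n in (1, 2, 3, 10, 50):
    print(f"{n:>2} qubits -> {2**n} basis states in superposition")

# A 3-qubit register as a normalized state vector of 2^3 = 8 amplitudes:
import math
amplitude = 1 / math.sqrt(8)
state = [amplitude] * 8          # equal superposition of all 8 basis states
assert abs(sum(a * a for a in state) - 1.0) < 1e-9  # probabilities sum to 1
```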
The storage systems described above may also be paired with FPGA-accelerated servers as part of a larger AI or ML infrastructure. Such FPGA-accelerated servers may reside near (e.g., in the same data center as) the storage systems described above or even be incorporated into an appliance that includes one or more storage systems, one or more FPGA-accelerated servers, networking infrastructure that supports communications between the one or more storage systems and the one or more FPGA-accelerated servers, as well as other hardware and software components. Alternatively, FPGA-accelerated servers may reside within a cloud computing environment that may be used to perform compute-related tasks for AI and ML jobs. Any of the embodiments described above may be used to collectively serve as an FPGA-based AI or ML platform. Readers will appreciate that, in some embodiments of the FPGA-based AI or ML platform, the FPGAs that are contained within the FPGA-accelerated servers may be reconfigured for different types of ML models (e.g., LSTMs, CNNs, GRUs). The ability to reconfigure the FPGAs that are contained within the FPGA-accelerated servers may enable the acceleration of a ML or AI application based on the most optimal numerical precision and memory model being used. Readers will appreciate that by treating the collection of FPGA-accelerated servers as a pool of FPGAs, any CPU in the data center may utilize the pool of FPGAs as a shared hardware microservice, rather than limiting a server to dedicated accelerators plugged into it. The FPGA-accelerated servers and the GPU-accelerated servers described above may implement a model of computing where, rather than keeping a small amount of data in a CPU and running a long stream of instructions over it as occurs in more traditional computing models, the machine learning model and parameters are pinned into the high-bandwidth on-chip memory, with lots of data streaming through that memory. FPGAs may even be more efficient than GPUs for this computing model, as the FPGAs can be programmed with only the instructions needed to run this kind of computing model. The storage systems described above may be configured to provide parallel storage, for example, through the use of a parallel file system such as BeeGFS. Such parallel file systems may include a distributed metadata architecture. For example, the parallel file system may include a plurality of metadata servers across which metadata is distributed, as well as components that include services for clients and storage servers. The systems described above can support the execution of a wide array of software applications. Such software applications can be deployed in a variety of ways, including container-based deployment models. Containerized applications may be managed using a variety of tools. For example, containerized applications may be managed using Docker Swarm, Kubernetes, and others. Containerized applications may be used to facilitate a serverless, cloud native computing deployment and management model for software applications. In support of a serverless, cloud native computing deployment and management model for software applications, containers may be used as part of an event handling mechanism (e.g., AWS Lambda) such that various events cause a containerized application to be spun up to operate as an event handler. The systems described above may be deployed in a variety of ways, including being deployed in ways that support fifth generation ('5G') networks. 5G networks may support substantially faster data communications than previous generations of mobile communications networks and, as a consequence, may lead to the disaggregation of data and computing resources as modern massive data centers may become less prominent and may be replaced, for example, by more-local, micro data centers that are close to the mobile-network towers. The systems described above may be included in such local, micro data centers and may be part of or paired with multi-access edge computing ('MEC') systems. Such MEC systems may enable cloud computing capabilities and an IT service environment at the edge of the cellular network. By running applications and performing related processing tasks closer to the cellular customer, network congestion may be reduced and applications may perform better. The storage systems described above may also be configured to implement NVMe Zoned Namespaces. Through the use of NVMe Zoned Namespaces, the logical address space of a namespace is divided into zones.
Each zone provides a logical block address range that must be written sequentially and explicitly reset before rewriting, thereby enabling the creation of namespaces that expose the natural boundaries of the device and offload management of internal mapping tables to the host. In order to implement NVMe Zoned Namespaces ('ZNS'), ZNS SSDs or some other form of zoned block devices may be utilized that expose a namespace logical address space using zones. With the zones aligned to the internal physical properties of the device, several inefficiencies in the placement of data can be eliminated. In such embodiments, each zone may be mapped, for example, to a separate application such that functions like wear levelling and garbage collection could be performed on a per-zone or per-application basis rather than across the entire device. In order to support ZNS, the storage controllers described herein may be configured to interact with zoned block devices through the usage of, for example, the Linux™ kernel zoned block device interface or other tools. The storage systems described above may also be configured to implement zoned storage in other ways such as, for example, through the usage of shingled magnetic recording (SMR) storage devices. In examples where zoned storage is used, device-managed embodiments may be deployed where the storage devices hide this complexity by managing it in the firmware, presenting an interface like any other storage device. Alternatively, zoned storage may be implemented via a host-managed embodiment that depends on the operating system to know how to handle the drive, and to write only sequentially to certain regions of the drive. Zoned storage may similarly be implemented using a host-aware embodiment in which a combination of drive-managed and host-managed implementations is deployed.
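The sequential-write-then-reset contract described above can be modeled in a few lines. The Python sketch below is a toy model of a single zone, with hypothetical sizes and method names rather than the actual Linux zoned block device API:

```python
# Minimal sketch of the zone semantics described above: each zone is a logical
# block address range that must be written sequentially at a write pointer and
# explicitly reset before it can be rewritten. Sizes and the API are illustrative.
class Zone:
    def __init__(self, start_lba: int, size: int):
        self.start = start_lba
        self.size = size
        self.write_pointer = start_lba   # next LBA that may be written

    def write(self, lba: int, nblocks: int):
        if lba != self.write_pointer:
            raise IOError("zone requires sequential writes at the write pointer")
        if lba + nblocks > self.start + self.size:
            raise IOError("write crosses zone boundary")
        self.write_pointer += nblocks    # advance; no in-place overwrite allowed

    def reset(self):
        self.write_pointer = self.start  # whole zone becomes writable again

zone = Zone(start_lba=0, size=1024)
zone.write(0, 128)
zone.write(128, 128)       # sequential: allowed
try:
    zone.write(0, 8)       # rewrite without reset: rejected
except IOError as e:
    print(e)
zone.reset()
zone.write(0, 8)           # allowed again after explicit reset
```

Aligning such zones with the device's physical boundaries is what allows wear levelling and garbage collection to be scoped per zone rather than across the whole device.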
The embodiments may be integrated with a computing device that may be specifically configured to perform one or more of the processes described herein. The computing device may include a communication interface, a processor, a storage device, and an input/output ('I/O') module communicatively connected to one another via a communication infrastructure. A communication interface may be configured to communicate with one or more computing devices. Examples of a communication interface include, without limitation, a wired network interface (such as a network interface card), a wireless network interface (such as a wireless network interface card), a modem, an audio/video connection, and any other suitable interface. A processor generally represents any type or form of processing unit capable of processing data and/or interpreting, executing, and/or directing execution of one or more of the instructions, processes, and/or operations described herein. The processor may perform operations by executing computer-executable instructions (e.g., an application, software, code, and/or other executable data instance) stored in the storage device. A storage device may include one or more data storage media, devices, or configurations and may employ any type, form, and combination of data storage media and/or device. For example, the storage device may include, but is not limited to, any combination of the non-volatile media and/or volatile media described herein. Electronic data, including data described herein, may be temporarily and/or permanently stored in the storage device. For example, data representative of computer-executable instructions configured to direct the processor to perform any of the operations described herein may be stored within the storage device. In some examples, data may be arranged in one or more databases residing within the storage device. An I/O module may be configured to receive user input and provide user output. The I/O module may include any hardware, firmware, software, or combination thereof supportive of input and output capabilities. For example, the I/O module may include hardware and/or software for capturing user input, including, but not limited to, a keyboard or keypad, a touchscreen component (e.g., touchscreen display), a receiver (e.g., an RF or infrared receiver), motion sensors, and/or one or more input buttons. The I/O module may include one or more devices for presenting output to a user, including, but not limited to, a graphics engine, a display (e.g., a display screen), one or more output drivers (e.g., display drivers), one or more audio speakers, and one or more audio drivers. In certain embodiments, the I/O module is configured to provide graphical data to a display for presentation to a user. The graphical data may be representative of one or more graphical user interfaces and/or any other graphical content as may serve a particular implementation. In some examples, any of the systems, computing devices, and/or other components described herein may be implemented by such a computing device. The storage systems described above may, either alone or in combination, be configured to serve as a continuous data protection store. A continuous data protection store is a feature of a storage system that records updates to a dataset in such a way that consistent images of prior contents of the dataset can be accessed with a low time granularity (often on the order of seconds, or even less), stretching back for a reasonable period of time (often hours or days). These allow access to very recent consistent points in time for the dataset, and also allow access to points in time for a dataset that might have just preceded some event that, for example, caused parts of the dataset to be corrupted or otherwise lost, while retaining close to the maximum number of updates that preceded that event. Conceptually, they are like a sequence of snapshots of a dataset taken very frequently and kept for a long period of time, though continuous data protection stores are often implemented quite differently from snapshots. A storage system implementing a continuous data protection store may further provide a means of accessing these points in time, accessing one or more of these points in time as snapshots or as cloned copies, or reverting the dataset back to one of those recorded points in time. Over time, to reduce overhead, some points in time held in a continuous data protection store can be merged with other nearby points in time, essentially deleting some of these points in time from the store. This can reduce the capacity needed to store updates. It may also be possible to convert a limited number of these points in time into longer duration snapshots. For example, such a store might keep a low granularity sequence of points in time stretching back a few hours from the present, with some points in time merged or deleted to reduce overhead for up to an additional day. Stretching back further into the past than that, some of these points in time could be converted to snapshots representing consistent point-in-time images from only every few hours.
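A reduced illustration of that retention behavior follows. In this hypothetical Python sketch the fine-grained window and the coarsening step are invented parameters, and real continuous data protection stores merge recorded updates rather than bare timestamps:

```python
# Minimal sketch of the retention policy described above: recent points in
# time are kept at fine granularity, older ones are merged (deleted) so that
# only roughly one point per coarse step survives. Thresholds are hypothetical.
def coarsen(points, now, fine_window=24 * 3600, coarse_step=4 * 3600):
    """points: sorted list of point-in-time timestamps (seconds)."""
    kept, last_kept = [], None
    for t in points:
        if now - t <= fine_window:
            kept.append(t)                # recent: keep every point
        elif last_kept is None or t - last_kept >= coarse_step:
            kept.append(t)                # old: keep roughly one per 4 hours
            last_kept = t
    return kept

now = 100 * 3600
points = list(range(0, now + 1, 600))     # one point every 10 minutes
print(len(points), "->", len(coarsen(points, now)), "points retained")
```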
The present disclosure relates to independent scaling of compute resources and storage resources in a storage system. Storage systems described herein may include a plurality of blades. Each of the blades in the storage system may be embodied, for example, as a computing device that includes one or more computer processors, dynamic random access memory ('DRAM'), flash memory, interfaces for one or more communication busses, interfaces for one or more power distribution busses, cooling components, one or more chassis, a switch, and so on. Although the blades will be described in more detail below, readers will appreciate that the blades may be embodied as different types of blades, such that the collective set of blades include heterogeneous members. Each of the blades in the storage system may be mounted within one of a plurality of chassis. Each chassis may be embodied, for example, as a physical structure that helps protect and organize components within the storage system. Each chassis may include a plurality of slots, where each slot is configured to receive a blade. Each chassis may also include one or more mechanisms such as a power distribution bus that is utilized to provide power to each blade that is mounted within the chassis, one or more data communication mechanisms such as a data communication bus that enables communication between each blade that is mounted within the chassis, one or more data communication mechanisms such as a data communication bus that enables communication between each blade that is mounted within the chassis and an external data communications network, and so on. In fact, each chassis may include at least two instances of both the power distribution mechanism and the communication mechanisms, where each instance of the power distribution mechanism and each instance of the communication mechanisms may be enabled or disabled independently. As mentioned above, the present disclosure relates to independent scaling of compute resources and storage resources. Compute resources may be scaled independently of storage resources, for example, by altering the amount of compute resources that are provided by the storage system without changing the amount of storage resources that are provided by the storage system or by changing the amount of storage resources that are provided by the storage system without changing the amount of compute resources that are provided by the storage system. Compute resources and storage resources may be independently scaled, for example, by adding blades that only include storage resources, by adding blades that only include compute resources, by enabling compute resources on a blade to be powered up or powered down with no impact to the storage resources in the storage system, by enabling storage resources on a blade to be powered up or powered down with no impact to the compute resources in the storage system, and so on. As such, embodiments of the present disclosure will be described that include hardware support for independent scaling of compute resources and storage resources, software support for independent scaling of compute resources and storage resources, or any combination thereof.
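Before turning to the figures, the scaling model itself can be summarized in a short sketch. The Python fragment below models blades only as bundles of compute and storage totals; the blade shapes and class names are hypothetical, chosen to mirror the 8 TB and 256 TB storage-only examples discussed below:

```python
# Minimal sketch of the scaling model described above: compute and storage
# are tallied independently, and adding a storage-only or compute-only blade
# changes one total without touching the other. Blade shapes are hypothetical.
from dataclasses import dataclass

@dataclass
class Blade:
    cpu_cores: int = 0     # compute resources contributed to the system
    storage_tb: int = 0    # storage resources contributed to the system

class StorageSystem:
    def __init__(self):
        self.blades = []

    def add_blade(self, blade: Blade):
        self.blades.append(blade)

    @property
    def compute(self):
        return sum(b.cpu_cores for b in self.blades)

    @property
    def storage(self):
        return sum(b.storage_tb for b in self.blades)

system = StorageSystem()
system.add_blade(Blade(cpu_cores=32, storage_tb=8))    # hybrid blade
system.add_blade(Blade(storage_tb=256))                # storage-only blade
before = system.compute
system.add_blade(Blade(storage_tb=256))                # scale storage only
assert system.compute == before                        # compute unchanged
print(system.compute, "cores /", system.storage, "TB")
```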
Example apparatuses and storage systems that support independent scaling of compute resources and storage resources in accordance with the present disclosure are described with reference to the accompanying drawings, beginning with FIG. 4. FIG. 4 sets forth a diagram of a chassis (424) for use in a storage system that supports independent scaling of compute resources and storage resources. The chassis (424) depicted in FIG. 4 may be embodied, for example, as an enclosure that may be mounted within a larger enclosure (e.g., a rack) to form a multi-chassis storage system. The chassis (424) depicted in FIG. 4 may include a plurality of slots (420) where each slot is configured to receive a blade (422). Although not depicted in the example of FIG. 4, readers will appreciate that each slot may include various support structures such as rails, locking mechanisms, and other physical components for securing a blade (422) within a particular slot. Furthermore, in alternative embodiments, a single blade may span multiple slots. The blade (422) depicted in FIG. 4 may be embodied, for example, as a computing device that includes one or more computer processors, dynamic random access memory ('DRAM'), flash memory, interfaces for one or more communication busses, interfaces for one or more power distribution busses, cooling components, and so on. Although blades will be described in more detail below, readers will appreciate that the chassis (424) may be configured to support different types of blades, such that the collective set of blades may include heterogeneous members. Blades may be of different types as some blades may only provide processing resources to the overall storage system, some blades may only provide storage resources to the overall storage system, and some blades may provide both processing resources and storage resources to the overall storage system. Furthermore, even the blades that are identical in type may be different in terms of the amount of storage resources that the blades provide to the overall storage system. For example, a first blade that only provides storage resources to the overall storage system may provide 8 TB of storage while a second blade that only provides storage resources to the overall storage system may provide 256 TB of storage. The blades that are identical in type may also be different in terms of the amount of processing resources that the blades provide to the overall storage system. For example, a first blade that only provides processing resources to the overall storage system may include more processors or more powerful processors than a second blade that only provides processing resources to the overall storage system. Readers will appreciate that other differences may also exist between two individual blades and that blade uniformity is not required according to embodiments described herein. The chassis (424) depicted in FIG. 4 may also include a compartment (416) that is used to house computing devices and computing components that are utilized by the blades that are mounted within the chassis (424).
The compartment (416) may include, for example, one or more power supplies that are used to provide power to one or more blades mounted within the chassis (424), one or more power busses that are used to deliver power from one or more power supplies to one or more blades mounted within the chassis (424), one or more network switches that are used to route data communications between blades mounted within the chassis (424), one or more network switches that are used to route data communications between blades mounted within the chassis (424) and a data communications network that is external to the chassis (424), one or more data communications busses, and so on. Readers will appreciate that additional computing devices and computing components may be mounted within the compartment (416) according to embodiments of the present disclosure. The chassis (424) depicted in FIG. 4 may also include a connector panel (418) that is used to support various interfaces and connectors that allow components within the blades that are mounted within the chassis (424) to couple to computing devices and computing components that are housed in the compartment (416). The connector panel (418) may be used to provide various interfaces and connectors to each blade (422), as each slot may have a unique set of interfaces and connectors mounted on the connector panel (418), such that a blade that is mounted within a particular slot may couple to the unique set of interfaces and connectors mounted on the connector panel (418) when the blade is inserted into the particular slot. In the example depicted in FIG. 4, four network interfaces (402,404,406,408) are mounted on the connector panel (418) for use by the blade (422) depicted in FIG. 4 when the blade (422) is inserted into a slot (426). The four network interfaces (402,404,406,408) may be embodied, for example, as an RJ45 connector that is coupled to an Ethernet cable and inserted into an Ethernet port on the blade (422), as a 9-pin DE-9 cable connector that is coupled to an optical fiber cable and inserted into a Fibre Channel port on the blade (422), as a copper or optical Quad Small Form-factor Pluggable ('QSFP') for Ethernet, InfiniBand, or another high speed signaling interface, as other interfaces that enable an Ethernet adapter in the blade (422) to be coupled to a data communications network, as other interfaces that enable a Fibre Channel adapter in the blade (422) to be coupled to a data communications network, as other interfaces that enable other types of host bus adapters in the blade (422) to be coupled to a data communications network, and so on. Readers will appreciate that each of the four network interfaces (402,404,406,408) may be used to couple the blade (422) to distinct data communications networks, two or more of the network interfaces (402,404,406,408) may be used to couple the blade (422) to the same data communications networks, one or more of the network interfaces (402,404,406,408) may be used to couple the blade (422) to other blades or computing devices for point-to-point communications with the blade (422), and so on. In the example depicted in FIG. 4, two power interfaces (412,414) are also mounted on the connector panel (418) for use by the blade (422) depicted in FIG. 4 when the blade (422) is inserted into a slot (426). The power interfaces (412,414) may be embodied, for example, as an interface to a power bus that is coupled to a power supply for delivering power to one or more of the blades in the chassis (424).
Readers will appreciate that each power interface (412,414) may be coupled to an independently controlled power domain, such that enabling or disabling the delivery of power to the blade (422) via the first power interface (412) has no impact on the delivery of power to the blade (422) via the second power interface (414), and vice versa. Readers will appreciate that some components within the blade (422) may be configured to receive power via the first power interface (412) while other components within the blade (422) may be configured to receive power via the second power interface (414), so that the delivery of power to different components within the blade (422) may be independently controlled. For example, compute resources within the blade (422) may receive power via the first power interface (412) while storage resources within the blade (422) may receive power via the second power interface (414). In the example depicted in FIG. 4, a cooling apparatus (410) is also mounted on the connector panel (418). The cooling apparatus (410) may be embodied, for example, as a fan that is configured to deliver air flow to the blade (422) when the blade is inserted into the slot (426). Readers will appreciate that the connector panel (418) may include other interfaces not depicted here, different numbers of interfaces than are depicted here, and so on. Readers will further appreciate that while a connector panel (418) is one possible way to enable the blades that are mounted within the chassis (424) to couple to computing devices and computing components that are housed in the compartment (416), chassis for use in storage systems according to embodiments of the present disclosure can utilize other mechanisms to enable the blades that are mounted within the chassis (424) to couple to computing devices and computing components that are housed in the compartment (416). Furthermore, such computing devices and computing components do not have to be contained within a distinct compartment (416), as chassis (424) for use in storage systems according to embodiments of the present disclosure may be embodied in other ways. For further explanation, FIG. 5 sets forth a diagram of a hybrid blade (502) useful in storage systems that support independent scaling of compute resources and storage resources according to embodiments of the present disclosure. The hybrid blade (502) depicted in FIG. 5 is referred to as a 'hybrid' blade because the hybrid blade (502) includes both compute resources and storage resources. The compute resources in the hybrid blade (502) depicted in FIG. 5 include a host server (504) that contains a computer processor (506) coupled to memory (510) via a memory bus (508). The computer processor (506) depicted in FIG. 5 may be embodied, for example, as a central processing unit ('CPU') or other form of electronic circuitry configured to execute computer program instructions. The computer processor (506) may utilize the memory (510) to store data or other information useful during the execution of computer program instructions by the computer processor (506). Such memory (510) may be embodied, for example, as DRAM that is utilized by the computer processor (506) to store information when the computer processor (506) is performing computational tasks such as creating and sending I/O operations to one of the storage units (512,514), breaking up data, reassembling data, and other tasks.
In the example depicted in FIG. 5, the host server (504) can represent compute resources that the hybrid blade (502) may offer for use by entities executing on a storage system that includes the hybrid blade (502). For example, one or more authorities (which will be described in greater detail below) that are executing on the storage system may execute on the host server (504). In the example depicted in FIG. 5, the host server (504) is coupled to two data communication links (532,534). Such data communications links (532,534) may be embodied, for example, as Ethernet links, such that the host server (504) can be coupled to a data communication network via a network adapter (not shown) contained in the host server (504). Through the use of such data communications links (532,534), the host server (504) may receive input/output operations that are directed to the attached storage units (512,514), such as requests to read data from the attached storage units (512,514) or requests to write data to the attached storage units (512,514), from other blades in a storage system that includes the hybrid blade (502). The hybrid blade (502) depicted in FIG. 5 also includes storage resources in the form of one or more storage units (512,514). Each storage unit (512,514) may include flash (528,530) memory as well as other forms of memory (524,526), such as non-volatile random access memory ('NVRAM'), which will be discussed in greater detail below. In the example depicted in FIG. 5, each storage unit (512,514) can represent storage resources that the hybrid blade (502) may offer for use by users of a storage system that includes the hybrid blade (502). In the example depicted in FIG. 5, the storage units (512,514) may include integrated circuits such as a field-programmable gate array ('FPGA') (520,522), microprocessors such as an Advanced RISC Machine ('ARM') microprocessor that are utilized to write data to and read data from the flash (528,530) memory as well as the other forms of memory (524,526) in the storage unit (512,514), or any other form of computer processor. The FPGAs (520,522) and the ARM (516,518) microprocessors may, in some embodiments, perform operations other than strict memory accesses. For example, in some embodiments the FPGAs (520,522) and the ARM (516,518) microprocessors may break up data, reassemble data, and so on. In the example depicted in FIG. 5, the computer processor (506) may access the storage units (512,514) via a data communication bus (536) such as a Peripheral Component Interconnect Express ('PCIe') bus. In the example depicted in FIG. 5, the data communication bus (536), ARM (516,518) microprocessors, and FPGAs (520,522) form a local access interface through which the local compute resources (e.g., the host server (504)) can access the local storage resources (e.g., the flash memory (528,530) and other forms of memory (524,526)). In the example depicted in FIG. 5, the hybrid blade (502) also includes data communications links (538,540) that may be used to communicatively couple one or more of the storage units (512,514) to other blades in the storage system. The data communications links (538,540) may be embodied, for example, as an Ethernet link that enables an FPGA (520,522) in the storage unit (512,514) to couple to a data communications network.
The data communications links (538,540) and the FPGAs (520,522) may collectively form a remote access interface through which compute resources on a remote blade can access the local storage resources (e.g., the flash memory (528,530) and other forms of memory (524,526)) without utilizing the local compute resources (e.g., the host server (504)). In such an example, compute resources on a remote blade may send an instruction to write data to, or read data from, the local storage resources directly to the FPGA (520,522) in the storage unit (512,514) via the data communications links (538,540). In such a way, compute resources on a remote blade can directly access local storage resources on the hybrid blade (502) without needing to route such an access request through the local compute resources on the hybrid blade (502). Although in some embodiments the remote access interface may be embodied as an Ethernet interface and the local access interface may be embodied as a PCIe interface, readers will appreciate that hybrid blades (502) according to embodiments of the present disclosure may utilize other types of interfaces for the remote access interface and the local access interface. In some embodiments the remote access interface and the local access interface may be implemented using the same technologies, in other embodiments the remote access interface and the local access interface may be implemented using different technologies, and so on. In the example depicted in FIG. 5, the hybrid blade (502) also includes a power interface (546) and a power distribution bus (548) through which power can be provided to the host server (504). The power interface (546) may be coupled, for example, to a first power supply, to a first power bus that is external to the hybrid blade (502) and provided by the chassis that the blade is mounted within, and so on. Readers will appreciate that the power interface (546) and the power distribution bus (548) may collectively form a first local power domain that is configured to deliver power to the local compute resources (e.g., the host server (504)). In the example depicted in FIG. 5, the hybrid blade (502) also includes a power interface (542) and a power distribution bus (544) through which power can be provided to one or more of the storage units (512,514). The power interface (542) may be coupled, for example, to a second power supply, to a second power bus that is external to the hybrid blade (502) and provided by the chassis that the blade is mounted within, and so on. Readers will appreciate that the power interface (542) and the power distribution bus (544) may collectively form a second local power domain that is configured to deliver power to the local storage resources (e.g., the storage units (512,514)). In the example depicted in FIG. 5, the first local power domain and the second local power domain can be independently operated as the power interfaces (542,546) may be enabled or disabled independently, the distinct power supplies that are coupled to the power interfaces (542,546) may be enabled or disabled independently, the distinct power busses that are coupled to the power interfaces (542,546) may be enabled or disabled independently, and so on. In such a way, the delivery of power to the host server (504) may be enabled or disabled independently of the delivery of power to one or more of the storage units (512,514), and vice versa.
Readers will appreciate that in the example depicted in FIG. 5, the second local power domain described in the preceding paragraph can also include a remote access interface such as the data communications links (538,540). As described above, the data communications links (538,540) may be embodied as an Ethernet link that enables an FPGA (520,522) in the storage unit (512,514) to couple to a data communications network. Power may therefore be delivered to the local storage resources (e.g., the storage units (512,514)) via the data communications links (538,540), for example, through the use of Power over Ethernet ('PoE') techniques. In such a way, when a remote blade is accessing the local storage resources via the remote access interface, the storage units (512,514) may be powered using the remote access interface, whereas the storage units (512,514) may be powered using the power interface (542) and the power distribution bus (544) when the local compute resources are accessing the local storage resources. In alternative embodiments, power may be provided to the storage units (512,514) in different ways, so long as the delivery of power to the host server (504) may be enabled or disabled independently of the delivery of power to one or more of the storage units (512,514), and vice versa. The preceding paragraphs describe non-limiting, example embodiments of a first local power domain and a second local power domain. In alternative embodiments, the first local power domain and the second local power domain may include fewer or additional components. The first local power domain and the second local power domain may also be configured to deliver power to components within the hybrid blade (502) in coordination with components that are external to the hybrid blade (502) such as, for example, external power supplies, external power busses, external data communications networks, and so on. The first local power domain and the second local power domain may also be coupled to receive power from the same power source (e.g., the same power supply), so long as the delivery of power to the host server (504) may be enabled or disabled independently of the delivery of power to one or more of the storage units (512,514), and vice versa. In an embodiment where the first local power domain and the second local power domain may receive power from the same power source, the delivery of power to the host server (504) may be enabled or disabled independently of the delivery of power to one or more of the storage units (512,514), and vice versa, through the use of a switching mechanism, power delivery network, or other mechanism that enables the delivery of power to each power domain to be blocked or enabled independently. Readers will appreciate that additional embodiments are possible that are consistent with the spirit of the present disclosure. Readers will appreciate that other types of blades may also exist. For example, a compute blade may be similar to the hybrid blade (502) depicted in FIG. 5 as the compute blade may include one or more host servers that are similar to the host server (504) depicted in FIG. 5. Such a compute blade may be different than the hybrid blade (502) depicted in FIG. 5, however, as the compute blade may lack the storage units (512,514) depicted in FIG. 5.
Readers will further appreciate that a storage blade may be similar to the hybrid blade (502) depicted in FIG. 5 as the storage blade may include one or more storage units that are similar to the storage units (512,514) depicted in FIG. 5. Such a storage blade may be different than the hybrid blade (502) depicted in FIG. 5, however, as the storage blade may lack the host server (504) depicted in FIG. 5. The example blade (502) depicted in FIG. 5 is included only for explanatory purposes. In other embodiments, the blades may include additional processors, additional storage units, compute resources that are packaged in a different manner, storage resources that are packaged in a different manner, and so on. For further explanation, FIG. 6 sets forth a diagram of an additional hybrid blade (602) useful in storage systems that support independent scaling of compute resources and storage resources according to embodiments of the present disclosure. The hybrid blade (602) depicted in FIG. 6 is similar to the hybrid blade (502) depicted in FIG. 5, as the hybrid blade (602) depicted in FIG. 6 also includes local storage resources such as the storage units (512,514), local compute resources such as the host server (504), a local access interface through which the local compute resources can access the local storage resources, a remote access interface through which compute resources on a remote blade can access the local storage resources without utilizing the local compute resources, a first local power domain configured to deliver power to the local compute resources, and a second local power domain configured to deliver power to the local storage resources, where the first local power domain and the second local power domain can be independently operated. The hybrid blade (602) depicted in FIG. 6 also includes a switching mechanism (604,606) that is configured to provide access to local storage resources such as the storage units (512,514). Each switching mechanism (604,606) may be configured to couple to the local compute resources via a first interface and further configured to couple to remote compute resources via a second interface. The first switching mechanism (604), for example, may be coupled to local compute resources in the form of a host server (504) via a first interface such as the local access interface and also coupled to remote compute resources in the form of a host server on a remote blade (not shown) via a second interface such as the remote access interface that includes the data communications link (538). The second switching mechanism (606) may be coupled to local compute resources in the form of a host server (504) via a first interface such as the local access interface and also coupled to remote compute resources in the form of a host server on a remote blade (not shown) via a second interface such as the remote access interface that includes the data communications link (540). In the specific example illustrated in FIG. 6, the first switching mechanism (604) is coupled to the remote access interface that includes the data communications link (538), such that the storage unit (512) may be accessed by a host server on a remote blade without utilizing the local compute resources in the hybrid blade (602). The second switching mechanism (606), however, is coupled to the local access interface, such that the storage unit (514) may be accessed by the local compute resources in the hybrid blade (602).
In such an example, however, the dashed lines in the switching mechanisms (604,606) are used to illustrate that each switching mechanism (604,606) may be reconfigured to couple the storage units (512,514) to a different data communications pathway. In the example depicted in FIG. 6, each switching mechanism (604,606) may be embodied as a mechanical device that can facilitate a data communications connection between a particular storage unit (512,514) and a plurality of data communications pathways, although at any particular time each switching mechanism (604,606) may only facilitate data communications between the particular storage unit (512,514) and a single data communications pathway. For further explanation, FIG. 7 sets forth a diagram of a storage blade (702) useful in storage systems that support independent scaling of compute resources and storage resources according to embodiments of the present disclosure. The storage blade (702) depicted in FIG. 7 is similar to the hybrid blade described above with reference to FIG. 5 and FIG. 6, although the storage blade (702) is different than the hybrid blades described above as the storage blade (702) does not include any local compute resources that are available for use by a storage system that the storage blade (702) is included within. The storage blade (702) depicted in FIG. 7 includes storage resources in the form of one or more storage units (512,514). Each storage unit (512,514) may include flash (528,530) memory as well as other forms of memory (524,526), such as NVRAM, which will be discussed in greater detail below. In the example depicted in FIG. 7, each storage unit (512,514) can represent storage resources that the storage blade (702) may offer for use by users of a storage system that includes the storage blade (702). In the example depicted in FIG. 7, the storage units (512,514) may include integrated circuits such as an FPGA (520,522), microprocessors such as an ARM microprocessor that are utilized to write data to and read data from the flash (528,530) memory as well as the other forms of memory (524,526) in the storage unit (512,514), or any other form of computer processor. The FPGAs (520,522) and the ARM (516,518) microprocessors may, in some embodiments, perform operations other than strict memory accesses. For example, in some embodiments the FPGAs (520,522) and the ARM (516,518) microprocessors may break up data, reassemble data, and so on. In the example depicted in FIG. 7, the storage blade (702) also includes data communications links (538,540) that may be used to couple one or more of the storage units (512,514) to other blades in the storage system. The data communications links (538,540) may be embodied, for example, as an Ethernet link that enables an FPGA (520,522) in the storage unit (512,514) to couple to a data communications network. The data communications links (538,540) and the FPGAs (520,522) may collectively form a remote access interface through which compute resources on a remote blade can access the local storage resources (e.g., the flash memory (528,530) and other forms of memory (524,526)) without utilizing any local compute resources on the storage blade (702). In such an example, compute resources on a remote blade may send an instruction to write data to, or read data from, the local storage resources directly to the FPGA (520,522) in the storage unit (512,514) via the data communications links (538,540).
In such a way, compute resources on a remote blade can directly access local storage resources on the storage blade (702) without needing to route such an access request through local compute resources on the storage blade (702). In the example depicted in FIG. 7, the storage blade (702) also includes a power interface (542) and a power distribution bus (544) through which power can be provided to one or more of the storage units (512,514). The power interface (542) may be coupled, for example, to a power supply, to a power bus that is external to the storage blade (702) and provided by the chassis that the blade is mounted within, and so on. Readers will appreciate that the power interface (542) and the power distribution bus (544) may collectively form a local power domain configured to deliver power to the local storage resources (e.g., the storage units (512,514)). Readers will appreciate that in the example depicted in FIG. 7, the local power domain can also include a remote access interface such as the data communications links (538,540). As described above, the data communications links (538,540) may be embodied as an Ethernet link that enables an FPGA (520,522) in the storage unit (512,514) to couple to a data communications network. Power may therefore be delivered to the local storage resources (e.g., the storage units (512,514)) via the data communications links (538,540), for example, through the use of PoE techniques. In such a way, power may be delivered to the storage units (512,514) via the remote access interface, via the power interface (542) and power distribution bus (544), or any combination thereof. For further explanation, FIG. 8 sets forth a diagram of a compute blade (802) useful in storage systems that support independent scaling of compute resources and storage resources according to embodiments of the present disclosure. The compute blade (802) depicted in FIG. 8 is similar to the hybrid blade described above with reference to FIG. 5 and FIG. 6, although the compute blade (802) is different than the hybrid blades described above as the compute blade (802) does not include any local storage resources that are available for use by a storage system that the compute blade (802) is included within. The compute resources in the compute blade (802) depicted in FIG. 8 include a host server (504) that contains a computer processor (506) coupled to memory (510) via a memory bus (508). The computer processor (506) depicted in FIG. 8 may be embodied, for example, as a CPU or other form of electronic circuitry configured to execute computer program instructions. The computer processor (506) may utilize the memory (510) to store data or other information useful during the execution of computer program instructions by the computer processor (506). Such memory (510) may be embodied, for example, as DRAM that is utilized by the computer processor (506) to store information when the computer processor (506) is performing computational tasks such as creating and sending I/O operations to one of the storage units (512,514), breaking up data, reassembling data, and other tasks. In the example depicted in FIG. 8, the host server (504) can represent compute resources that the compute blade (802) may offer for use by entities executing on a storage system that includes the compute blade (802). For example, one or more authorities (which will be described in greater detail below) that are executing on the storage system may execute on the host server (504).
In the example depicted in FIG. 8, the host server (504) is coupled to two data communication links (532,534). Such data communications links (532,534) may be embodied, for example, as Ethernet links, such that the host server (504) can be coupled to a data communication network via a network adapter (not shown) contained in the host server (504). In the example depicted in FIG. 8, the compute blade (802) also includes a power interface (546) and a power distribution bus (548) through which power can be provided to the host server (504). The power interface (546) may be coupled, for example, to a power supply, to a power bus that is external to the compute blade (802) and provided by the chassis that the blade is mounted within, and so on. Readers will appreciate that the power interface (546) and the power distribution bus (548) may collectively form a local power domain that is configured to deliver power to the local compute resources (e.g., the host server (504)) in the compute blade (802). For further explanation, FIG. 9 sets forth a diagram of a storage system that supports independent scaling of compute resources and storage resources according to embodiments of the present disclosure. The storage system of FIG. 9 includes a plurality of chassis (602,606,610,614) mounted within a rack (600). The rack (600) depicted in FIG. 9 may be embodied as a standardized frame or enclosure for mounting multiple equipment modules, such as multiple chassis (602,606,610,614). The rack (600) may be embodied, for example, as a 19-inch rack that includes edges or ears that protrude on each side, thereby enabling a chassis (602,606,610,614) or other module to be fastened to the rack (600) with screws or some other form of fastener. Readers will appreciate that while the storage system depicted in FIG. 9 includes a plurality of chassis (602,606,610,614) mounted within a single rack (600), in other embodiments the plurality of chassis (602,606,610,614) may be distributed across multiple racks. For example, a first chassis in the storage system may be mounted within a first rack, a second chassis in the storage system may be mounted within a second rack, and so on. Although depicted in less detail, each of the chassis (602,606,610,614) depicted in FIG. 9 may be similar to the chassis described above with reference to FIG. 1, as the chassis (602,606,610,614) include a plurality of slots, where each slot is configured to receive a blade. The chassis (602,606,610,614) depicted in FIG. 9 may be embodied, for example, as passive elements that include no logic. Each chassis (602,606,610,614) may include a mechanism, such as a power distribution bus, that is utilized to provide power to each blade that is mounted within the chassis (602,606,610,614). Each chassis (602,606,610,614) may further include a communication mechanism, such as a communication bus, that enables communication between each blade that is mounted within the chassis (602,606,610,614). The communication mechanism may be embodied, for example, as an Ethernet bus, a PCIe bus, an InfiniBand bus, and so on. In some embodiments, each chassis (602,606,610,614) may include at least two instances of both the power distribution mechanism and the communication mechanism, where each instance of the power distribution mechanism and each instance of the communication mechanism may be enabled or disabled independently.
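A brief sketch may make the redundancy of these mechanisms concrete. The following models a chassis with two instances of each mechanism, each of which can be toggled alone; the class and method names are illustrative assumptions, not elements of the disclosure.

```python
# A sketch of the dual, independently controlled distribution mechanisms a
# chassis might expose. Names and structure are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class BusInstance:
    name: str
    enabled: bool = True

    def set_enabled(self, value: bool) -> None:
        self.enabled = value

@dataclass
class Chassis:
    # Two instances of each mechanism, each of which can be toggled alone.
    power_busses: list = field(default_factory=lambda: [BusInstance("power-0"), BusInstance("power-1")])
    comm_busses: list = field(default_factory=lambda: [BusInstance("comm-0"), BusInstance("comm-1")])

    def has_power(self) -> bool:
        return any(b.enabled for b in self.power_busses)

    def has_comms(self) -> bool:
        return any(b.enabled for b in self.comm_busses)

chassis = Chassis()
chassis.power_busses[0].set_enabled(False)          # take one power bus down
assert chassis.has_power() and chassis.has_comms()  # blades stay powered and reachable
```

Disabling one instance for service or upgrade leaves the other instance carrying power and data communications.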
Each chassis (602,606,610,614) depicted in FIG. 9 may also include one or more ports for receiving an external communication bus that enables communication between multiple chassis (602,606,610,614), directly or through a switch, as well as communications between a chassis (602,606,610,614) and an external client system. The external communication bus may use a technology such as Ethernet, InfiniBand, Fibre Channel, and so on. In some embodiments, the external communication bus may use different communication bus technologies for inter-chassis communication than is used for communication with an external client system. In embodiments where one or more switches are deployed, each switch may act as a translation layer between multiple protocols or technologies. When multiple chassis (602,606,610,614) are connected to define a storage cluster, the storage cluster may be accessed by a client using either proprietary interfaces or standard interfaces such as a PCIe interface, a SAS interface, a SATA interface, or another interface using protocols such as network file system (‘NFS’), common internet file system (‘CIFS’), small computer system interface (‘SCSI’), hypertext transfer protocol (‘HTTP’), object storage protocols, and so on. Translation from the client protocol may occur at the switch, external communication bus, or within each blade. Each chassis (602,606,610,614) depicted in FIG. 9 houses fifteen blades (604,608,612,616), although in other embodiments each chassis (602,606,610,614) may house more or fewer blades. Each of the blades (604,608,612,616) depicted in FIG. 9 may be embodied, for example, as a computing device that includes one or more computer processors, DRAM, flash memory, interfaces for one or more communication busses, interfaces for one or more power distribution busses, cooling components, and so on. Readers will appreciate that the blades (604,608,612,616) depicted in FIG. 9 may be embodied as different types of blades, such that the collective set of blades (604,608,612,616) include heterogeneous members. Blades may be of different types as some blades (604,608,612,616) may only provide processing resources to the overall storage system, some blades (604,608,612,616) may only provide storage resources to the overall storage system, and some blades (604,608,612,616) may provide both processing resources and storage resources to the overall storage system. Furthermore, even the blades (604,608,612,616) that are identical in type may be different in terms of the amount of storage resources that the blades (604,608,612,616) provide to the overall storage system. For example, a first blade that only provides storage resources to the overall storage system may provide 8 TB of storage while a second blade that only provides storage resources to the overall storage system may provide 256 TB of storage. The blades (604,608,612,616) that are identical in type may also be different in terms of the amount of processing resources that the blades (604,608,612,616) provide to the overall storage system. For example, a first blade that only provides processing resources to the overall storage system may include more processors or more powerful processors than a second blade that only provides processing resources to the overall storage system. Readers will appreciate that other differences may also exist between two individual blades and that blade uniformity is not required according to embodiments described herein.
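The heterogeneity described above can be summarized in a small inventory model. The sketch below assumes hypothetical field names, and its capacity figures simply echo the 8 TB and 256 TB examples given in the preceding paragraph.

```python
# A sketch of how heterogeneous blades might be described and aggregated.
# Field names and the profile structure are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class BladeProfile:
    slot: int
    kind: str          # "storage", "compute", or "hybrid"
    storage_tb: int    # 0 for compute-only blades
    processors: int    # 0 for storage-only blades

blades = [
    BladeProfile(slot=1, kind="storage", storage_tb=8, processors=0),
    BladeProfile(slot=2, kind="storage", storage_tb=256, processors=0),
    BladeProfile(slot=3, kind="compute", storage_tb=0, processors=2),
    BladeProfile(slot=4, kind="hybrid", storage_tb=16, processors=1),
]

total_tb = sum(b.storage_tb for b in blades)
total_cpus = sum(b.processors for b in blades)
print(f"{total_tb} TB of storage, {total_cpus} processors across {len(blades)} blades")
```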
Although not explicitly depicted in FIG. 9, each chassis (602,606,610,614) may include one or more modules, data communications busses, or other apparatus that is used to identify which type of blade is inserted into a particular slot of the chassis (602,606,610,614). In such an example, a management module may be configured to request information from each blade in each chassis (602,606,610,614) when each blade is powered on, when the blade is inserted into a chassis (602,606,610,614), or at some other time. The information received by the management module can include, for example, a special purpose identifier maintained by the blade that identifies the type (e.g., storage blade, compute blade, hybrid blade) of blade that has been inserted into the chassis (602,606,610,614). In an alternative embodiment, each blade (604,608,612,616) may be configured to automatically provide such information to a management module as part of a registration process. In the example depicted in FIG. 9, the storage system may be initially configured by a management module that is executing remotely. The management module may be executing, for example, in a network switch control processor. Readers will appreciate that such a management module may be executing on any remote CPU and may be coupled to the storage system via one or more data communication networks. Alternatively, the management module may be executing locally as the management module may be executing on one or more of the blades (604,608,612,616) in the storage system. The storage system depicted in FIG. 9 includes a first blade (618) mounted within one of the chassis (602) that includes one or more storage resources but does not include compute resources. The first blade (618) may be embodied, for example, as a storage blade such as the storage blade described above with reference to FIG. 7. The storage system depicted in FIG. 9 also includes a second blade (620) mounted within one of the chassis (606) that includes one or more compute resources but does not include storage resources. The second blade (620) may be embodied, for example, as a compute blade such as the compute blade described above with reference to FIG. 8. The storage system depicted in FIG. 9 also includes a third blade (622) mounted within one of the chassis (610) that includes one or more storage resources and one or more compute resources. The third blade (622) may be embodied, for example, as a hybrid blade such as the hybrid blades described above with reference to FIG. 5 and FIG. 6. The third blade (622) depicted in FIG. 9 may include a local access interface through which the compute resources in the third blade (622) can access the storage resources in the third blade (622). The compute resources in the third blade (622) may be embodied, for example, as one or more host servers that include a computer processor coupled to memory via a memory bus. The storage resources in the third blade (622) may be embodied, for example, as one or more storage units that include flash memory as well as other forms of memory, such as NVRAM, which will be discussed in greater detail below.
In such an example, the compute resources in the third blade (622) may access the storage resources in the third blade (622), for example, via a local access interface such as a data communication bus that forms a data communications path between the compute resources in the third blade (622) and the storage resources in the third blade (622), as well as any other microprocessors, FPGAs, or other computing devices required to carry out data communications between the compute resources in the third blade (622) and the storage resources in the third blade (622). The third blade (622) depicted in FIG. 9 may also include a remote access interface through which compute resources in a remote blade can access the storage resources in the third blade (622) without utilizing the compute resources in the third blade (622). The remote access interface may be embodied, for example, as a data communications interface in the third blade (622) that enables an FPGA, microprocessor, or other form of computing device that is part of the storage resources in the third blade (622) to couple to a data communications network. In such an example, compute resources on a remote blade may send an instruction to write data to, or read data from, the storage resources on the third blade (622) directly to an FPGA, microprocessor, or other form of computing device that is part of the storage resources in the third blade (622). In such a way, compute resources on a remote blade can directly access storage resources on the third blade (622) without needing to route such an access request through the compute resources on the third blade (622). Readers will appreciate that the remote access interface in the third blade (622) may utilize a first data communications protocol while the local access interface in the third blade (622) may utilize a different, second data communications protocol. The third blade (622) depicted in FIG. 9 may also include a first power interface for delivering power to the compute resources in the third blade (622). The first power interface may be embodied, for example, as a port for coupling to a power source that is external to the third blade (622) and a power distribution bus that couples the port to one or more compute resources such as a host server. The port may be coupled, for example, to a first power supply, to a first power bus that is external to the third blade (622) and provided by the chassis (610) that the blade is mounted within, and so on. The third blade (622) depicted in FIG. 9 may also include a second power interface for delivering power to the storage resources in the third blade (622). The second power interface may be embodied, for example, as a port for coupling to a power source that is external to the third blade (622) and a power distribution bus that couples the port to one or more storage resources such as one or more storage units. The port may be coupled, for example, to a second power supply, to a second power bus that is external to the third blade (622) and provided by the chassis (610) that the blade is mounted within, and so on. In the example depicted in FIG. 9, power delivery to the first power interface in the third blade (622) may be controlled independently of power delivery to the second power interface in the third blade (622).
Power delivery to the first power interface may be controlled independently of power delivery to the second power interface, for example, because the first power interface is coupled to a first power source and the second power interface is coupled to a second power source. In such an example, powering up or down either power source would result in power delivery to the first power interface being controlled independently of power delivery to the second power interface. Power delivery to the first power interface may also be controlled independently of power delivery to the second power interface, for example, because the first power interface can be enabled or disabled independently of enabling or disabling the second power interface, the second power interface can be enabled or disabled independently of enabling or disabling the first power interface, and so on. In such an example, each of the power interfaces may include some mechanism that allows the power interface to block the flow of electricity through the power interface, such that the power interface is disabled. Each power interface may likewise include some mechanism, which may be the same mechanism as described in the preceding sentence, that allows the power interface to permit the flow of electricity through the power interface, such that the power interface is enabled. In the example depicted in FIG. 9, the second power interface in the third blade (622) may be included within the remote access interface in the third blade (622). As described above, the remote access interface in the third blade (622) may be embodied as an Ethernet link that enables an FPGA, microprocessor, or other computing device in a storage unit in the third blade (622) to couple to a data communications network. Power may therefore be delivered to the storage unit in the third blade (622) via such an Ethernet link, for example, through the use of PoE techniques. In such a way, when a remote blade is accessing the storage unit in the third blade (622) via the remote access interface in the third blade (622), such a storage unit may be powered using the remote access interface. The third blade (622) depicted in FIG. 9 may also include a switching mechanism configured to provide access to the storage resources in the third blade (622), where the switching mechanism is configured to couple to compute resources in the third blade (622) via a first interface and also configured to couple to compute resources on a remote blade via a second interface. The switching mechanism may be coupled to local storage resources via a first interface such as a data communications link that is coupled to compute resources within the third blade (622). The switching mechanism may also be coupled to local storage resources via a second data communications link that is coupled to compute resources on another blade in the storage system, such that the local storage resources may be accessed without utilizing compute resources within the third blade (622). The switching mechanism may be embodied as a mechanical device that can facilitate a data communications connection between a particular storage unit and a plurality of data communications pathways, although at any particular time the switching mechanism may only facilitate data communications between the particular storage unit and a single data communications pathway.
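The one-pathway-at-a-time constraint described above can be captured in a few lines. The following is a minimal sketch, assuming hypothetical class and pathway names; the actual mechanism is described as mechanical, so the code models only its selection behavior.

```python
# A sketch of a switching mechanism that couples one storage unit to exactly
# one of several data communications pathways at a time.
class SwitchingMechanism:
    def __init__(self, pathways):
        self.pathways = set(pathways)      # e.g. {"local", "remote"}
        self.active = None                 # at most one pathway at a time

    def select(self, pathway: str) -> None:
        if pathway not in self.pathways:
            raise ValueError(f"unknown pathway: {pathway}")
        self.active = pathway              # reconfiguring implicitly drops the old path

    def route(self, request: str) -> str:
        if self.active is None:
            raise RuntimeError("no pathway selected")
        return f"{request} via {self.active} interface"

switch = SwitchingMechanism({"local", "remote"})
switch.select("remote")                    # a remote blade reaches the storage unit
print(switch.route("read 4 KB at offset 0"))
switch.select("local")                     # reconfigure for the local host server
```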
For further explanation, FIG. 10 sets forth a diagram of a storage system (702) that supports independent scaling of compute resources and storage resources according to embodiments of the present disclosure. The storage system (702) depicted in FIG. 10 includes one or more chassis (704,738). Although depicted in less detail, each of the chassis (704,738) depicted in FIG. 10 may be similar to the chassis described above with reference to FIG. 4, as each chassis (704,738) includes a plurality of slots, where each slot is configured to receive a blade. Each chassis (704,738) may include mechanisms, such as one or more power distribution busses, that are utilized to provide power to blades that are mounted within the chassis (704,738). Each chassis (704,738) may further include mechanisms, such as one or more communication busses, that facilitate data communications between one or more blades that are mounted within the chassis (704,738), as well as other data communications devices such as network switches that are mounted within the chassis (704,738). The communication mechanisms may be embodied, for example, as one or more Ethernet busses, as one or more PCIe busses, as one or more InfiniBand busses, and so on. In some embodiments, each chassis (704,738) may include at least two instances of both a power distribution mechanism and a communication mechanism, where each instance of the power distribution mechanism and each instance of the communication mechanism may be enabled or disabled independently. Each chassis (704,738) depicted in FIG. 10 may also include one or more ports for receiving an external communication bus that enables communication between multiple chassis (704,738), directly or through a switch, as well as communications between a chassis (704,738) and an external client system. The external communication bus may use a technology such as Ethernet, InfiniBand, Fibre Channel, and so on. In some embodiments, the external communication bus may use different communication bus technologies for inter-chassis communication than is used for communication with an external client system. In embodiments where one or more switches are deployed, each switch may act as a translation layer between multiple protocols or technologies. When multiple chassis (704,738) are connected to define a storage cluster, the storage cluster may be accessed by a client using either proprietary interfaces or standard interfaces such as a PCIe interface, a SAS interface, a SATA interface, or another interface using protocols such as NFS, CIFS, SCSI, HTTP, object storage protocols, and so on. Translation from the client protocol may occur at the switch, external communication bus, or within each blade. Although not explicitly depicted in FIG. 10, each chassis (704,738) may include one or more modules, data communications busses, or other apparatus that is used to identify which type of blade is inserted into a particular slot of the chassis (704,738). In such an example, a management module may be configured to request information from each blade in each chassis (704,738) when each blade is powered on, when the blade is inserted into a chassis (704,738), or at some other time. The information received by the management module can include, for example, a special purpose identifier maintained by the blade that identifies the type (e.g., storage blade, compute blade, hybrid blade) of blade that has been inserted into the chassis (704,738).
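A minimal sketch of this identification exchange follows, in which the management module polls each blade for its special purpose identifier when the blade powers on. The query method, the inventory keying, and all names are assumptions made for illustration only.

```python
# A sketch of a management module requesting the type identifier from each
# blade as it powers on. Protocol and naming are illustrative assumptions.
BLADE_TYPES = {"storage", "compute", "hybrid"}

class Blade:
    def __init__(self, blade_type: str):
        assert blade_type in BLADE_TYPES
        self.blade_type = blade_type

    def identify(self) -> str:
        """Answer a management module's request for the blade's identifier."""
        return self.blade_type

class ManagementModule:
    def __init__(self):
        self.inventory = {}                     # (chassis, slot) -> blade type

    def on_power_up(self, chassis: int, slot: int, blade: Blade) -> None:
        self.inventory[(chassis, slot)] = blade.identify()

mgmt = ManagementModule()
mgmt.on_power_up(chassis=704, slot=1, blade=Blade("storage"))
mgmt.on_power_up(chassis=738, slot=4, blade=Blade("hybrid"))
print(mgmt.inventory)
```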
In an alternative embodiment, each blade may be configured to automatically provide such information to a management module as part of a registration process. The storage system (702) depicted in FIG. 10 also includes a plurality of compute resources (714,716,748). The compute resources (714,716,748) in the storage system (702) depicted in FIG. 10 may be embodied, for example, as one or more host servers such as the host servers described above with reference to FIGS. 5, 6, and 8. Such host servers may reside in blades (752,754,756) that are mounted within one or more slots (706,708,740) in the storage system (702). The storage system (702) depicted in FIG. 10 also includes a plurality of storage resources (734,736,750). The storage resources (734,736,750) in the storage system (702) depicted in FIG. 10 may be embodied, for example, as one or more storage units such as the storage units described above with reference to FIGS. 5, 6, and 7. Such storage units may reside in blades (752,754,758) that are mounted within one or more slots (706,708,742) in the storage system (702). The storage system (702) depicted in FIG. 10 also includes a plurality of blades (752,754,756,758). In the example depicted in FIG. 10, each of the blades (752,754,756,758) includes at least one compute resource (714,716,748) or at least one storage resource (734,736,750). Each of the blades (752,754,756,758) may therefore be embodied, for example, as a hybrid blade, as a compute blade, or as a storage blade as described above with reference to FIGS. 5, 6, 7, and 8. In the example storage system (702) depicted in FIG. 10, each of the storage resources (734,736,750) may be directly accessed by each of the compute resources (714,716,748) without utilizing an intermediate compute resource (714,716,748). Each of the storage resources (734,736,750) may be directly accessed by each of the compute resources (714,716,748) without utilizing an intermediate compute resource (714,716,748), for example, through the use of a remote access interface that provides access to the storage resources (734,736,750). Such a remote access interface may be embodied, for example, as an Ethernet link that is coupled to an FPGA in a storage unit, thereby enabling the storage unit to be coupled for data communications with a data communications network that each of the remote compute resources (714,716,748) may also be coupled to. In such an example, compute resources on a remote blade can access a storage unit on a local blade without utilizing the compute resources on the local blade as the compute resources on the remote blade may send an instruction to write data to, or read data from, the storage unit, without routing such an instruction through compute resources on the local blade. In the example depicted in FIG. 10, the storage system (702) also includes a first power domain configured to deliver power to one or more of the compute resources. The first power domain may be embodied, for example, as a power supply, power distribution bus, and power interface to a host server, where the first power interface is configured to deliver power to one or more of the compute resources. In the example depicted in FIG. 10, three power domains (710,712,744) are depicted that may serve as distinct instances of a first power domain that is configured to deliver power to one or more of the compute resources (714,716,748).
Readers will appreciate that although each of the compute resources (714,716,748) depicted in FIG. 10 receives power from a distinct instance of a first power domain, in other embodiments, one or more of the compute resources (714,716,748) may be configured to receive power from the same instance of a first power domain, such that multiple compute resources (714,716,748) may be powered up or powered down by enabling or disabling the delivery of power by a single instance of a first power domain. In the example depicted in FIG. 10, the storage system (702) also includes a second power domain configured to deliver power to the storage resources. The second power domain may be embodied, for example, as a power supply, power distribution bus, and power interface to a storage unit, where the second power domain is configured to deliver power to one or more of the storage resources. In the example depicted in FIG. 10, three power domains (730,732,746) are depicted that may serve as distinct instances of a second power domain that is configured to deliver power to one or more of the storage resources (734,736,750). Readers will appreciate that although each of the storage resources (734,736,750) depicted in FIG. 10 receives power from a distinct instance of a second power domain, in other embodiments, one or more of the storage resources (734,736,750) may be configured to receive power from the same instance of a second power domain, such that multiple storage resources (734,736,750) may be powered up or powered down by enabling or disabling the delivery of power by a single instance of a second power domain. The preceding paragraphs describe non-limiting, example embodiments of a first power domain and a second power domain. In some embodiments, the first power domain and the second power domain may include fewer or additional components. The first power domain and the second power domain may also be configured to deliver power to components within the storage system (702) in coordination with components such as, for example, external power supplies, external power busses, external data communications networks, and so on. The first power domain and the second power domain may also be coupled to receive power from the same power source (e.g., the same power supply), so long as the delivery of power to one or more compute resources (714,716,748) may be enabled or disabled independently of the delivery of power to one or more storage resources (734,736,750), and vice versa. In an embodiment where the first power domain and the second power domain receive power from the same power source, the delivery of power to one or more compute resources (714,716,748) may be enabled or disabled independently of the delivery of power to one or more storage resources (734,736,750), and vice versa, through the use of a switching mechanism, power delivery network, or other mechanism that enables the delivery of power to each power domain to be blocked or enabled independently. Readers will appreciate that additional embodiments are possible that are consistent with the spirit of the present disclosure. In the example depicted in FIG. 10, each instance of a first power domain can be operated independently of each instance of a second power domain.
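The independence just described can be sketched directly, with one domain feeding compute resources and another feeding storage resources; disabling one has no effect on the other. The modeling below is an illustrative assumption, not the disclosed power delivery network.

```python
# A sketch of two power domains whose delivery can be enabled or disabled
# independently. Resource names echo FIG. 10's reference numerals.
class PowerDomain:
    def __init__(self, resources):
        self.resources = resources     # names of resources this domain feeds
        self.enabled = True

    def available(self):
        return self.resources if self.enabled else []

compute_domain = PowerDomain(["host-server-714"])
storage_domain = PowerDomain(["storage-unit-734"])

storage_domain.enabled = False         # power down storage only
assert compute_domain.available() == ["host-server-714"]   # compute unaffected
assert storage_domain.available() == []

storage_domain.enabled = True          # bring storage back independently
```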
Each instance of a first power domain can be operated independently of each instance of a second power domain as the power interfaces within each power domain (710,712,730,732,744,746) may be enabled or disabled independently, the distinct power supplies that provide power to each power domain (710,712,730,732,744,746) may be enabled or disabled independently, the distinct power busses that are included in each power domain (710,712,730,732,744,746) may be enabled or disabled independently, and so on. In such a way, the delivery of power to one or more compute resources (714,716,748) may be enabled or disabled independently of the delivery of power to one or more storage resources (734,736,750), and vice versa. Because the delivery of power to one or more compute resources (714,716,748) may be enabled or disabled independently of the delivery of power to one or more storage resources (734,736,750), independent scaling of each type of resource may be achieved by enabling or disabling the delivery of power to only one type (i.e., storage or compute) of resource. For example, enabling the delivery of power to one or more storage resources increases the amount of storage resources available in the storage system (702) without impacting the amount of compute resources available in the storage system (702), while disabling the delivery of power to one or more storage resources decreases the amount of storage resources available in the storage system (702) without impacting the amount of compute resources available in the storage system (702). Likewise, enabling the delivery of power to one or more compute resources increases the amount of compute resources available in the storage system (702) without impacting the amount of storage resources available in the storage system (702), while disabling the delivery of power to one or more compute resources decreases the amount of compute resources available in the storage system (702) without impacting the amount of storage resources available in the storage system (702). The storage system (702) depicted in FIG. 10 includes a blade (756) that includes compute resources (748) but does not include storage resources. Although the blade (756) that includes compute resources (748) but does not include storage resources is depicted in less detail, readers will appreciate that the blade (756) may be similar to the compute blade described above with reference to FIG. 8. The storage system (702) depicted in FIG. 10 also includes a blade (758) that includes storage resources (750) but does not include any compute resources. Although the blade (758) that includes storage resources (750) but does not include any compute resources is depicted in less detail, readers will appreciate that the blade (758) may be similar to the storage blade described above with reference to FIG. 7. The storage system (702) depicted in FIG. 10 also includes blades (752,754) that include storage resources (734,736) and compute resources (714,716). Although the blades (752,754) that include storage resources (734,736) and compute resources (714,716) are depicted in less detail, readers will appreciate that the blades (752,754) may be similar to the hybrid blades described above with reference to FIG. 5 and FIG. 6. In the example depicted in FIG. 10, each of the blades (752,754) that include storage resources (734,736) and compute resources (714,716) can also include a local access interface (718,720) through which the local compute resources (714,716) can access the local storage resources (734,736).
Each local access interface (718,720) may be embodied, for example, as a data communication bus that forms a data communications path between the compute resources (714,716) in a particular blade (752,754) and one or more storage resources (734,736) within the same particular blade (752,754), as well as any other microprocessors, FPGAs, or other computing devices required to carry out data communications between the compute resources (714,716) in a particular blade (752,754) and one or more storage resources (734,736) within the same particular blade (752,754). In the example depicted in FIG. 10, each of the blades (752,754) that include storage resources (734,736) and compute resources (714,716) can also include a remote access interface (722,724) through which compute resources (748) on a remote blade (756) can access the local storage resources (734,736) without utilizing the local compute resources (714,716). Each remote access interface (722,724) may be embodied, for example, as a data communications interface in each of the blades (752,754) that enables an FPGA, microprocessor, or other form of computing device that is part of the storage resources (734,736) in a particular blade (752,754) to couple to a data communications network. In such an example, compute resources (714,716,748) on a remote blade (752,754,756) may send an instruction to write data to, or read data from, the storage resources (734,736) in a different blade (752,754) directly to an FPGA, microprocessor, or other form of computing device that is part of the storage resources (734,736) in the different blade (752,754). For example, compute resources (714,748) on two of the blades (752,756) may directly send an instruction to write data to, or read data from, the storage resources (736) in another blade (754) without utilizing the compute resources (716) on the targeted blade (754), compute resources (716,748) on two of the blades (754,756) may directly send an instruction to write data to, or read data from, the storage resources (734) in another blade (752) without utilizing the compute resources (714) on the targeted blade (752), and so on. In the example depicted in FIG. 10, each of the blades (752,754) that include storage resources (734,736) and compute resources (714,716) can also include a switching mechanism (726,728) configured to provide access to the local storage resources (734,736), where the switching mechanism (726,728) is coupled to the local compute resources (714,716) via the local access interface (718,720) and the switching mechanism (726,728) is coupled to the compute resources (714,716,748) on a remote blade (752,754,756) via a remote access interface (722,724). For example, the switching mechanism (726) in one of the illustrated blades (752) may be coupled to the local compute resources (714) on the illustrated blade (752) via the local access interface (718) and the switching mechanism (726) may also be coupled to the compute resources (716,748) on a remote blade (754,756) via a remote access interface (722) on the illustrated blade (752). Likewise, the switching mechanism (728) in another illustrated blade (754) may be coupled to its local compute resources (716) via the local access interface (720) and the switching mechanism (728) may also be coupled to the compute resources (714,748) on a remote blade (752,756) via a remote access interface (724) on the blade (754).
In the example depicted in FIG. 10, each switching mechanism (726,728) may be similar to the switching mechanisms described above with reference to FIG. 6. In the example depicted in FIG. 10, each remote access interface (722,724) may utilize a first data communications protocol and each local access interface (718,720) may utilize a second data communications protocol. For example, the storage resources (734,736) may be accessed by local compute resources (714,716) via the local access interface (718,720) by utilizing a PCIe data communications link, whereas the storage resources (734,736) may be accessed by compute resources (714,716,748) on a remote blade (752,754,756) via the remote access interface (722,724) by utilizing an Ethernet data communications link. In the example depicted in FIG. 10, each of the blades (752,754) that include storage resources (734,736) and compute resources (714,716) can also include a first local power domain (710,712) configured to deliver power to the local compute resources (714,716). The first local power domain (710,712) in each of the blades (752,754) that include storage resources (734,736) and compute resources (714,716) may be embodied, for example, as a power supply, power distribution bus, and power interface to a host server, where the first power interface is configured to deliver power to one or more of the compute resources (714,716) in the blade (752,754). In the example depicted in FIG. 10, each of the blades (752,754) that include storage resources (734,736) and compute resources (714,716) can also include a second local power domain (730,732) configured to deliver power to the local storage resources (734,736). The second local power domain (730,732) in each of the blades (752,754) that include storage resources (734,736) and compute resources (714,716) may be embodied, for example, as a power supply, power distribution bus, and power interface to a storage unit, where the second power domain is configured to deliver power to one or more of the storage resources (734,736) in the blade (752,754). Readers will appreciate that, for each of the blades (752,754) that include storage resources (734,736) and compute resources (714,716), the first local power domain (710,712) and the second local power domain (730,732) may be independently operated. The first local power domain (710,712) and the second local power domain (730,732) in a particular blade (752,754) may be operated independently as the power interfaces within each power domain (710,712,730,732) may be enabled or disabled independently, the distinct power supplies that provide power to each power domain (710,712,730,732) may be enabled or disabled independently, the distinct power busses that are included in each power domain (710,712,730,732) may be enabled or disabled independently, and so on. In such a way, the delivery of power to one or more compute resources (714,716) may be enabled or disabled independently of the delivery of power to one or more storage resources (734,736), and vice versa. In the example depicted in FIG. 10, the storage resources (734,736,750) within at least one blade (752,754,758) may receive power via a remote access interface. As described above, the storage resources (734,736,750) within each blade (752,754,758) may be accessed via a remote access interface (722,724,760). Such remote access interfaces (722,724,760) can include an Ethernet link that enables a storage unit to couple to a data communications network.
Power may therefore be delivered to the storage resources (734,736,750), for example, through the use of PoE techniques. In such a way, when a remote blade is accessing the storage resources (734,736,750) within a particular blade (752,754,758) via the remote access interface (722,724,760), the storage resources (734,736,750) may be powered using the remote access interface (722,724,760). In alternative embodiments, power may be provided to the storage resources (734,736,750) in different ways. For further explanation, FIG. 11 sets forth a diagram of a set of blades (802,804,806,808) useful in a storage system that supports independent scaling of compute resources and storage resources according to embodiments of the present disclosure. Although blades will be described in greater detail below, the blades (802,804,806,808) depicted in FIG. 11 may include compute resources (810,812,814), storage resources in the form of flash memory (830,832,834), storage resources in the form of non-volatile random access memory (‘NVRAM’) (836,838,840), or any combination thereof. In the example depicted in FIG. 11, the blades (802,804,806,808) are of differing types. For example, one blade (806) includes only compute resources (814), another blade (808) includes only storage resources, depicted here as flash (834) memory and NVRAM (840), and two of the blades (802,804) include compute resources (810,812) as well as storage resources in the form of flash (830,832) memory and NVRAM (836,838). In such an example, the blade (806) that includes only compute resources (814) may be referred to as a compute blade, the blade (808) that includes only storage resources may be referred to as a storage blade, and the blades (802,804) that include both compute resources (810,812) and storage resources may be referred to as hybrid blades. The compute resources (810,812,814) depicted in FIG. 11 may be embodied, for example, as one or more computer processors, as well as memory that is utilized by the computer processor but not included as part of general storage within the storage system. The compute resources (810,812,814) may be coupled for data communication with other blades and with external client systems, for example, via one or more data communication busses that are coupled to the compute resources (810,812,814) via one or more data communication adapters. The flash memory (830,832,834) depicted in FIG. 11 may be embodied, for example, as multiple flash dies, which may be referred to as packages of flash dies or an array of flash dies. Such flash dies may be packaged in any number of ways, with a single die per package, multiple dies per package, in hybrid packages, as bare dies on a printed circuit board or other substrate, as encapsulated dies, and so on. Although not illustrated in FIG. 11, an input output (‘I/O’) port may be coupled to the flash dies and a direct memory access (‘DMA’) unit may also be coupled directly or indirectly to the flash dies. Such components may be implemented, for example, on a programmable logic device (‘PLD’) such as a field programmable gate array (‘FPGA’). The flash memory (830,832,834) depicted in FIG. 11 may be organized as pages of a predetermined size, blocks that include a predetermined number of pages, and so on.
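As a brief illustration of that page-and-block organization, the sketch below translates a flat byte offset into a block, page, and byte position. The geometry constants are assumptions chosen for the example, not values taken from the disclosure.

```python
# A sketch of mapping a byte offset onto the page/block layout described
# above. The page size and pages-per-block values are assumed.
PAGE_SIZE = 16 * 1024          # bytes per page (assumed)
PAGES_PER_BLOCK = 512          # pages per erase block (assumed)

def locate(offset: int):
    """Map a byte offset to (block, page-within-block, byte-within-page)."""
    page_index, byte_in_page = divmod(offset, PAGE_SIZE)
    block, page_in_block = divmod(page_index, PAGES_PER_BLOCK)
    return block, page_in_block, byte_in_page

print(locate(0))                    # (0, 0, 0)
print(locate(9 * 1024 * 1024))      # (1, 64, 0): block 1, page 64
```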
The NVRAM (836,838,840) depicted in FIG. 11 may be embodied, for example, as one or more non-volatile dual in-line memory modules (‘NVDIMMs’), as one or more DRAM dual in-line memory modules (‘DIMMs’) that receive primary power through a DIMM slot but are also attached to a backup power source such as a supercapacitor, and so on. The NVRAM (836,838,840) depicted in FIG. 11 may be utilized as a memory buffer for temporarily storing data that will be written to flash memory (830,832,834), as writing data to the NVRAM (836,838,840) may be carried out more quickly than writing data to flash memory (830,832,834). In this way, the latency of write requests may be significantly improved relative to a system in which data is written directly to the flash memory (830,832,834). In the example depicted in FIG. 11, a first blade (802) includes a first authority (168) that is executing on the compute resources (810) within the first blade (802) and a second blade (806) includes a second authority (168) that is executing on the compute resources (814) within the second blade (806). Each authority (168) represents a logical partition of control and may be embodied as a module of software executing on the compute resources (810,812,814) of a particular blade (802,804,806). Each authority (168) may be configured to control how and where data is stored in the storage system. For example, authorities (168) may assist in determining which type of erasure coding scheme is applied to the data, authorities (168) may assist in determining where one or more portions of the data may be stored in the storage system, and so on. Each authority (168) may control a range of inode numbers, segment numbers, or other data identifiers which are assigned to data by a file system or some other entity. Each authority (168) may operate independently and autonomously on its partition of each of the entity spaces defined within the system. Each authority (168) may serve as an independent controller over those spaces, each providing its own data and metadata structures, its own background workers, and maintaining its own lifecycle. Each authority (168) may, for example, allocate its own segments, maintain its own log/pyramid, maintain its own NVRAM, define its own sequence ranges for advancing persistent state, boot independently, and so on. Readers will appreciate that every piece of data and every piece of metadata stored in the storage system is owned by a particular authority (168). Each authority (168) may cause data that is owned by the authority (168) to be stored within storage that is located within the same blade whose computing resources are supporting the authority (168) or within storage that is located on some other blade. In the example depicted in FIG. 11:

a. The authority (168) that is executing on the compute resources (810) within a first blade (802) has caused data to be stored within a portion (820) of flash (830) that is physically located within the first blade (802).

b. The authority (168) that is executing on the compute resources (810) within the first blade (802) has also caused data to be stored in a portion (842) of NVRAM (836) that is physically located within the first blade (802).

c. The authority (168) that is executing on the compute resources (810) within the first blade (802) has also caused data to be stored within a portion (822) of flash (832) on the second blade (804) in the storage system.
d. The authority (168) that is executing on the compute resources (810) within the first blade (802) has also caused data to be stored within a portion (826) of flash (834) and a portion (846) of NVRAM (840) on the fourth blade (808) in the storage system.

e. The authority (168) that is executing on the compute resources (814) within the third blade (806) has caused data to be stored within a portion (844) of NVRAM (836) that is physically located within the first blade (802).

f. The authority (168) that is executing on the compute resources (814) within the third blade (806) has also caused data to be stored within a portion (824) of flash (832) within the second blade (804).

g. The authority (168) that is executing on the compute resources (814) within the third blade (806) has also caused data to be stored within a portion (828) of flash (834) within the fourth blade (808).

h. The authority (168) that is executing on the compute resources (814) within the third blade (806) has also caused data to be stored within a portion (848) of NVRAM (840) within the fourth blade (808).

Readers will appreciate that many embodiments other than the embodiment depicted in FIG. 11 are contemplated as it relates to the relationship between data, authorities, and system components. In some embodiments, every piece of data and every piece of metadata has redundancy in the storage system. In some embodiments, the owner of a particular piece of data or a particular piece of metadata may be a ward, with an authority being a group or set of wards. Likewise, in some embodiments there are redundant copies of authorities. In some embodiments, authorities have a relationship to blades and the storage resources contained therein. For example, each authority may cover a range of data segment numbers or other identifiers of the data and each authority may be assigned to a specific storage resource. Data may be stored in a segment according to some embodiments of the present disclosure, and such segments may be associated with a segment number which serves as indirection for a configuration of a RAID stripe. A segment may identify a set of storage resources and a local identifier into the set of storage resources that may contain data. In some embodiments, the local identifier may be an offset into a storage device and may be reused sequentially by multiple segments. In other embodiments the local identifier may be unique for a specific segment and never reused. The offsets in the storage device may be applied to locating data for writing to or reading from the storage device. Readers will appreciate that if there is a change in where a particular segment of data is located (e.g., during a data move or a data reconstruction), the authority for that data segment should be consulted. In order to locate a particular piece of data, a hash value for a data segment may be calculated, an inode number may be applied, a data segment number may be applied, and so on. The output of such an operation can point to a storage resource for the particular piece of data. In some embodiments the operation described above may be carried out in two stages. The first stage maps an entity identifier (ID) such as a segment number, an inode number, or a directory number to an authority identifier. This mapping may include a calculation such as a hash or a bit mask. The second stage maps the authority identifier to a particular storage resource, which may be done through an explicit mapping.
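A minimal sketch of this two-stage lookup follows. The hash function, authority count, and placement table below are all assumptions made for illustration; the disclosure only requires that stage one be a calculation such as a hash or bit mask and that stage two be an explicit mapping.

```python
# A sketch of the two-stage lookup described above: an entity identifier is
# hashed and masked to an authority identifier (stage 1), which an explicit
# table then maps to a storage resource (stage 2).
import zlib

AUTHORITY_COUNT = 8  # assumed power of two so a bit mask can be used
# Stage 2: explicit mapping from authority identifier to a storage resource.
AUTHORITY_PLACEMENT = {a: f"storage-unit-{a % 4}" for a in range(AUTHORITY_COUNT)}

def authority_for(entity_id: int) -> int:
    """Stage 1: hash an inode or segment number, then mask to an authority."""
    return zlib.crc32(entity_id.to_bytes(8, "big")) & (AUTHORITY_COUNT - 1)

def storage_resource_for(entity_id: int) -> str:
    """Full operation: deterministic, so the same input yields the same target."""
    return AUTHORITY_PLACEMENT[authority_for(entity_id)]

segment = 123456
assert storage_resource_for(segment) == storage_resource_for(segment)
print(authority_for(segment), storage_resource_for(segment))
```

The assertion highlights the repeatability that the next paragraph relies on.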
The operation may be repeatable, so that when the calculation is performed, the result of the calculation reliably points to a particular storage resource. The operation may take the set of reachable storage resources as input, and if the set of reachable storage resources changes, the optimal set changes. In some embodiments, a persisted value represents the current assignment and the calculated value represents the target assignment the cluster will attempt to reconfigure towards. The compute resources (810,812,814) within the blades (802,804,806) may be tasked with breaking up data to be written to storage resources in the storage system. When data is to be written to a storage resource, the authority for that data is located as described above. When the segment ID for data is already determined, the request to write the data is forwarded to the blade that is hosting the authority, as determined using the segment ID. The computing resources on such a blade may be utilized to break up the data and transmit the data for writing to a storage resource, at which point the transmitted data may be written as a data stripe in accordance with an erasure coding scheme. In some embodiments, data is requested to be pulled and in other embodiments data is pushed. The compute resources (810,812,814) within the blades (802,804,806) may also be tasked with reassembling data read from storage resources in the storage system. When such data is to be reassembled, the authority for the segment ID containing the data is located as described above. The compute resources (810,812,814) that support the authority that owns the data may request the data from the appropriate storage resource. In some embodiments, the data may be read from flash storage as a data stripe. The compute resources (810,812,814) that support the authority that owns the data may be utilized to reassemble the read data, including correcting any errors according to the appropriate erasure coding scheme, and forward the reassembled data to the network. In other embodiments, breaking up and reassembling data, or some portion thereof, may be performed by the storage resources themselves. The preceding paragraphs discuss the concept of a segment. A segment may represent a logical container of data in accordance with some embodiments. A segment may be embodied, for example, as an address space between medium address space and physical flash locations. Segments may also contain metadata that enables data redundancy to be restored (rewritten to different flash locations or devices) without the involvement of higher level software. In some embodiments, an internal format of a segment contains client data and medium mappings to determine the position of that data. Each data segment may be protected from memory and other failures, for example, by breaking the segment into a number of data and parity shards. The data and parity shards may be distributed by striping the shards across storage resources in accordance with an erasure coding scheme. The examples described above relate, at least to some extent, to chassis for use in a storage system that supports independent scaling of compute resources and storage resources, blades for use in storage systems that support independent scaling of compute resources and storage resources, and storage systems that support independent scaling of compute resources and storage resources.
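Before turning to those broader configurations, the segment write path described above can be made concrete in a short sketch: data is staged in NVRAM, broken into data shards plus a parity shard, and striped across blades. Single-parity XOR stands in here for whatever erasure coding scheme an authority actually selects, and all names and sizes are illustrative assumptions.

```python
# A sketch of a segment write: stage in NVRAM, split into data shards, add an
# XOR parity shard, and stripe across blades. XOR parity is a stand-in for
# the (unspecified) erasure coding scheme; everything here is illustrative.
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def write_segment(data: bytes, nvram: list, blades: list, shard_count: int = 3):
    nvram.append(data)                         # 1. fast acknowledgment path
    size = -(-len(data) // shard_count)        # ceiling division
    shards = [data[i * size:(i + 1) * size].ljust(size, b"\0")
              for i in range(shard_count)]
    parity = reduce(xor, shards)               # 2. compute the parity shard
    for blade, shard in zip(blades, shards + [parity]):
        blade.append(shard)                    # 3. stripe across blades
    nvram.remove(data)                         # 4. staged copy can be dropped

nvram: list = []
blades = [[] for _ in range(4)]
write_segment(b"client data to protect", nvram, blades)

# Any single lost data shard can be rebuilt by XOR-ing the survivors.
lost = blades[1][0]
rebuilt = reduce(xor, [blades[0][0], blades[2][0], blades[3][0]])
assert rebuilt == lost
```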
Readers will appreciate that the resources that are independently scaled, compute resources and storage resources, are those resources that are generally available to users of the storage system. For example, the storage resources that are independently scaled may be storage resources that a user of the storage system can use to persistently store user data. Likewise, the compute resources that are independently scaled may be compute resources that a user of the storage system can use to support the execution of applications, authorities, and the like. Readers will appreciate that while the host servers described above with reference to FIGS. 5, 6, and 8 include memory, such memory is not considered to be part of the storage resources that are independently scaled. Such memory is included in the host server for the purpose of supporting the execution of instructions by a processor that is also included in the host server. Such memory is not, however, included in the host server for the purpose of expanding the amount of storage that a storage system that includes the blades depicted in FIGS. 5, 6, and 8 can make available to users of the storage system. As such, a compute blade is described above as lacking storage resources, in spite of the fact that the compute blade can include some form of memory that may be used to support the execution of computer program instructions by the compute resources in the compute blade. Readers will similarly appreciate that while the storage units described above with reference to FIGS. 5, 6, and 7 include an ARM and an FPGA, such devices are not considered to be part of the compute resources that are independently scaled. Such devices are included in the storage units for the purpose of accessing storage in the storage units in much the same way that a memory controller accesses memory. Such devices are not, however, included in the storage units for the purpose of expanding the amount of computing resources that a storage system that includes the blades depicted in FIGS. 5, 6, and 7 can make available to users of the storage system. As such, a storage blade is described above as lacking compute resources, in spite of the fact that the storage blade can include some devices that may be used to support the execution of computer program instructions that read and write data to attached storage. For further explanation, FIG. 12 sets forth a block diagram of automated computing machinery comprising an example computer (952) useful in supporting independent scaling of compute resources and storage resources according to embodiments of the present disclosure. The computer (952) of FIG. 12 includes at least one computer processor (956) or “CPU” as well as random access memory (“RAM”) (968) which is connected through a high speed memory bus (966) and bus adapter (958) to processor (956) and to other components of the computer (952). Stored in RAM (968) is a dynamic configuration module (926), a module of computer program instructions useful in supporting independent scaling of compute resources and storage resources according to embodiments of the present disclosure.
The dynamic configuration module (926) may be configured for supporting independent scaling of compute resources and storage resources by performing a variety of support functions such as, for example, detecting the insertion of a blade into a chassis through the receipt of one or more device registration messages, admitting a blade that has been powered up into the storage system, logically removing a blade that has been powered down from the storage system, maintaining information identifying available and unavailable resources in the storage system, and so on. Also stored in RAM (968) is an operating system (954). Operating systems useful in computers configured for supporting independent scaling of compute resources and storage resources according to embodiments described herein include UNIX™, Linux™, Microsoft XP™, AIX™, IBM's i5/OS™, and others as will occur to those of skill in the art. The operating system (954) and dynamic configuration module (926) in the example of FIG. 12 are shown in RAM (968), but many components of such software typically are stored in non-volatile memory also, such as, for example, on a disk drive (970). The example computer (952) of FIG. 12 also includes disk drive adapter (972) coupled through expansion bus (960) and bus adapter (958) to processor (956) and other components of the computer (952). Disk drive adapter (972) connects non-volatile data storage to the computer (952) in the form of disk drive (970). Disk drive adapters useful in computers configured for supporting independent scaling of compute resources and storage resources according to embodiments described herein include Integrated Drive Electronics (“IDE”) adapters, Small Computer System Interface (“SCSI”) adapters, and others as will occur to those of skill in the art. Non-volatile computer memory also may be implemented as an optical disk drive, electrically erasable programmable read-only memory (so-called “EEPROM” or “Flash” memory), RAM drives, and so on, as will occur to those of skill in the art. The example computer (952) of FIG. 12 includes one or more input/output (“I/O”) adapters (978). I/O adapters implement user-oriented input/output through, for example, software drivers and computer hardware for controlling output to display devices such as computer display screens, as well as user input from user input devices (982) such as keyboards and mice. The example computer (952) of FIG. 12 includes a video adapter (909), which is an example of an I/O adapter specially designed for graphic output to a display device (980) such as a display screen or computer monitor. Video adapter (909) is connected to processor (956) through a high speed video bus (964), bus adapter (958), and the front side bus (962), which is also a high speed bus. The example computer (952) of FIG. 12 includes a communications adapter (967) for data communications with a storage system (984) as described above and for data communications with a data communications network (900). Such data communications may be carried out serially through RS-232 connections, through external buses such as a Universal Serial Bus (‘USB’), a Fibre Channel data communications link, an Infiniband data communications link, through data communications networks such as IP data communications networks, and in other ways as will occur to those of skill in the art.
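The support functions attributed to the dynamic configuration module (926) can be pictured with a brief sketch. The message format, class name, and method names below are assumptions for illustration; the patent does not specify them.

```python
# Hypothetical sketch of the support functions attributed to the dynamic
# configuration module (926). Message fields and method names are assumptions.

class DynamicConfigurationModule:
    def __init__(self):
        self.available = {}      # blade_id -> description of its resources
        self.unavailable = set() # blades logically removed from the system

    def on_device_registration(self, message: dict) -> None:
        """Detect blade insertion via one or more device registration messages
        and admit the powered-up blade into the storage system."""
        blade_id = message["blade_id"]
        self.available[blade_id] = message["resources"]
        self.unavailable.discard(blade_id)

    def on_power_down(self, blade_id: str) -> None:
        """Logically remove a powered-down blade from the storage system."""
        self.available.pop(blade_id, None)
        self.unavailable.add(blade_id)
```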
Communications adapters implement the hardware level of data communications through which one computer sends data communications to another computer, directly or through a data communications network. Examples of communications adapters useful in computers configured for supporting independent scaling of compute resources and storage resources according to embodiments described herein include Ethernet (IEEE 802.3) adapters for wired data communications, Fibre Channel adapters, Infiniband adapters, and so on. The computer (952) may implement certain instructions stored on RAM (968) for execution by processor (956) for supporting independent scaling of compute resources and storage resources. In some embodiments, dynamically configuring the storage system to facilitate independent scaling of resources may be implemented as part of a larger set of executable instructions. For example, the dynamic configuration module (926) may be part of an overall system management process. FIG. 13A sets forth a diagram of a single chassis 1116 storage system that has a switch 1002 for direct network-connected communication among computing resources 1004, 1006, 1008 and storage resources 1010, 1012, 1014 of the storage system according to embodiments of the present disclosure. The switch 1002 can be integrated with the chassis 1116, or separate from the chassis 1116, and is implemented as a network switch, an Ethernet switch, switch fabric, a switch matrix, a switch module, a fabric module or multiple switches in various embodiments. Multiple blades, which can be heterogeneous or homogeneous and include compute-only blades, storage-only blades or hybrid compute and storage blades in various combinations, populate the chassis 1116. Each compute resource 1004, 1006, 1008 and each storage resource 1010, 1012, 1014 in the blades is direct network-connected to the switch 1002, for example without bridging to PCIe (peripheral component interconnect express) or other bridging or routing to other networks to communicate with a compute resource 1004, 1006, 1008 or a storage resource 1010, 1012, 1014. That is, the switch 1002 direct network-connects processors or compute resources and solid-state storage memory or storage resources in the storage system. Each compute resource 1004, 1006, 1008 can communicate with each other compute resource 1004, 1006, 1008 and with each storage resource 1010, 1012, 1014, through the switch 1002. Each storage resource 1010, 1012, 1014 can communicate with each other storage resource 1010, 1012, 1014 and with each compute resource 1004, 1006, 1008, through the switch 1002. In some embodiments, communication uses Ethernet protocol, or other network protocol. FIG. 13B sets forth a diagram of a multi-chassis 1116 storage system that has a switch 1002, 1016 for direct network-connected communication among compute resources 1004, 1008, 1018, 1020 and storage resources 1010, 1014, 1022, 1024 of the storage system according to embodiments of the present disclosure. Multiple chassis 1116 can be arranged on one or more racks or otherwise coupled by a switch 1016, such as a top of rack switch or other switch such as described with reference to FIG. 13A. Each chassis 1116 has multiple blades in heterogeneous or homogeneous arrangement with compute resources and storage resources, in various embodiments such as described with reference to FIG. 13A.
The combination of the switches 1002 in each of the multiple chassis 1116 and the switch 1016 coupling the multiple chassis 1116 acts as a switch 1002, 1016 that direct network-connects processors or compute resources and solid-state storage memory or storage resources in the storage system. In further embodiments, the switch 1002, 1016 is an integrated switch that both couples multiple chassis 1116 and couples compute resources 1004, 1008, 1018, 1020 and storage resources 1010, 1014, 1022, 1024 in the multiple chassis 1116. As in single chassis embodiments, each compute resource 1004, 1008, 1018, 1020 can communicate with each other compute resource 1004, 1008, 1018, 1020 and with each storage resource 1010, 1014, 1022, 1024, through the switch 1002, 1016. Each storage resource 1010, 1014, 1022, 1024 can communicate with each other storage resource 1010, 1014, 1022, 1024 and with each compute resource 1004, 1008, 1018, 1020, through the switch 1002, 1016. In some embodiments, communication uses Ethernet protocol, or other network protocol. Switch 1002, in single chassis storage systems such as shown in FIG. 13A, and switch 1002, 1016, in multi-chassis storage systems such as shown in FIG. 13B, support disaggregated compute resources and storage resources in the storage system. A storage resource and a compute resource do not need to be in the same blade, or even in the same chassis 1116, when communicating with each other. There is little or no penalty in terms of communication delay or latency when communicating between any two compute resources or storage resources, or any compute resource and any storage resource, regardless of location in the storage system. All such resources can be treated as being approximately equally close, without need of aggregating resources in a given blade. Disaggregation of compute resources and storage resources supports storage system expansion and scalability, because read and write accesses, data striping and all forms of communication among resources do not suffer worsening delays as the system grows. At most, there is a small communication delay penalty when going from a single chassis system to a multi-chassis system, as a result of the additional layer of switching in some embodiments, but no penalty for adding blades to either system, and no penalty for adding more chassis to a multi-chassis system. FIG. 14A sets forth a diagram of a storage resource 1102 for embodiments of a storage system, with flash memory 1104 and a flash controller 1106 connected to a switch 1002 such as shown in FIGS. 13A and 13B. A suitable example of a flash controller 1106 is shown in FIGS. 5-7, with an ARM 516 (processor or CPU) and FPGA 520, and other flash controllers are readily devised. Further versions with other types of solid-state storage memory and other types of controller suitable to those memories are readily devised. The flash controller 1106 manages the flash memory 1104 and communicates with other resources using an appropriate network protocol through the switch 1002, thus supporting direct network-connection of the storage resource 1102. This version of a storage resource 1102 could be in a storage-only blade, or a hybrid compute and storage blade, in various embodiments. FIG. 14B sets forth a diagram of a storage resource 1114 for embodiments of a storage system, with flash memory 1112, a NIC (network interface card or network interface controller) 1108 and a packet processing control processor 1110, with the NIC 1108 connected to a switch 1002 such as shown in FIGS. 13A and 13B.
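Before continuing with FIG. 14B, the delay argument above can be restated as a toy model: intra-chassis traffic crosses one switching layer, inter-chassis traffic crosses at most one additional layer, and adding blades changes neither count. The counts below are illustrative assumptions, not measured figures.

```python
# Toy model of the scaling argument: communication cost depends only on
# whether two resources share a chassis, never on how many blades exist.

def switching_layers(chassis_a: int, chassis_b: int) -> int:
    # Within a chassis: only the chassis switch 1002 is crossed.
    # Across chassis: the top-of-rack switch 1016 adds one more layer.
    return 1 if chassis_a == chassis_b else 2

# Adding blades to either chassis never changes these counts, and adding
# more chassis adds at most the single extra layer, matching the text above.
assert switching_layers(0, 0) == 1
assert switching_layers(0, 1) == 2
```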
The NIC 1108 is implemented on a card, one or more integrated circuits, or as a module in an integrated circuit such as a full custom chip, ASIC or FPGA, in various embodiments, and can be local to the flash memory 1112 or remote from the flash memory 1112. A packet processing control processor 1110 connects to the flash memory 1112 and composes and decomposes packets with the NIC 1108, so that the flash memory 1112 can communicate over the switch 1002 with other resources in the storage system. As with the version of a storage resource 1102 shown in FIG. 14A, this version supports direct network-connection of the storage resource and could be in a storage-only blade or a hybrid compute and storage blade. FIG. 14C sets forth a diagram of a storage resource 1118 for embodiments of a storage system, with network-connectable flash memory 1116 connected to a switch 1002 such as shown in FIGS. 13A and 13B. Network-connectable flash memory 1116 has a NIC 1108, flash controller 1106 or other module for network connection on chip or in package, in some embodiments. This supports direct network-connection of the storage resource 1118, and could be in a storage-only blade or a hybrid compute and storage blade. FIG. 15 sets forth a diagram depicting compute resources 1004, 1006, 1008 voting to assign a host controller 1206 for a storage resource 1012 to one of the computing resources 1008. The host controller 1206 does not need to be assigned to a compute resource 1008 that has the storage resource 1012 on the same blade. That is, the host controller 1206 could be assigned to a compute resource 1008 on a different blade from the storage resource 1012 that is controlled by the host controller 1206, for example two hybrid blades, a compute-only blade and a storage-only blade, a hybrid blade and a storage-only blade, or a compute-only blade and a hybrid blade. Or the host controller 1206 could be assigned to a compute resource 1008 on the same blade that has the storage resource 1012 corresponding to the host controller 1206, e.g., a hybrid blade. Various voting mechanisms and communication for voting are readily devised in keeping with the teachings herein. In some embodiments, each storage resource has an assigned host controller, in the compute resources. There could be zero, one, or more than one host controller on a given blade, in various embodiments, and host controllers could be transferred, reassigned to another blade, or replaced as resources are shifted or blades are added to or removed from the storage system, or a failure occurs. This ability to hold a vote 1202 and assign 1204 the host controller 1206 to any of a number of available compute resources supports disaggregated compute resources and storage resources in the storage system, because the storage memory is not required to be aggregated with the host controller that is managing the storage memory or processor(s) that are communicating with the storage memory for any specific communication. Host controller and corresponding storage memory are not required to be in the same blade, or even in the same chassis. In FIG. 15, the host controller 1206, no matter which compute resource 1004, 1006, 1008 and blade is assigned to have the host controller 1206, communicates with a corresponding storage resource 1012 through the switch 1208. FIG. 16 is a flow diagram of a method of communicating in a storage system that has disaggregated compute resources and storage memory, which can be practiced by embodiments of storage systems described herein and variations thereof.
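Before turning to the method of FIG. 16, the voting and assignment of a host controller 1206 depicted in FIG. 15 can be sketched as follows. The patent deliberately leaves the voting mechanism open ("various voting mechanisms ... are readily devised"), so the majority rule and lowest-id tie-break used here are assumptions for illustration.

```python
# Sketch of compute resources voting to assign the host controller for a
# storage resource. The majority-vote rule and tie-break are assumptions.

from collections import Counter

def assign_host_controller(votes: dict[str, str]) -> str:
    """votes maps each voting compute resource to its preferred assignee.
    Requires at least one vote."""
    tally = Counter(votes.values())
    top = max(tally.values())
    # Break ties deterministically so every voter reaches the same result.
    return min(c for c, n in tally.items() if n == top)

# Example: three compute resources elect a host controller for storage 1012.
winner = assign_host_controller({"1004": "1008", "1006": "1008", "1008": "1004"})
assert winner == "1008"
```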
The method can be practiced by various processors in the computing resources and storage resources in embodiments of storage systems. In an action 1302, computing resources and storage resources in various blades of a storage cluster are coupled through a network switch. The storage cluster could be single-chassis or multi-chassis, and the blades could be homogeneous or heterogeneous. In an action 1304, compute resources cooperate to select, elect and/or assign host controllers in the computing resources for the solid-state storage memories. Each host controller communicates with and manages a corresponding solid-state storage memory, but the host controller and corresponding solid-state storage memory are not required to be on the same blade (although they can be so). In an action 1306 of FIG. 16, computing resources and storage resources communicate with each other through the network switch. For example, this communication could use Ethernet protocol. Communication can be among resources in a single chassis, or among resources in multiple chassis. In an action 1308, data stripes are written from computing resources and blades to storage resources in blades, by communication through the network switch. The storage resources have solid-state storage memories, in various embodiments. In an action 1310, data stripes are read from storage resources in the blades to computing resources in the blades, by communication through the network switch. Further embodiments of the method are readily devised, including variations with specific communications, specific resources, and various types of switches, blades, compute resources and storage resources as described herein for embodiments of storage systems. FIG. 17 illustrates a 10 slot 1704 chassis 1702 with removable compute blades 1706 with 4 slots 1708 for flash modules in a 5 rack unit. The compute blades 1706 are removable from the front of the chassis 1702. Each compute blade 1706 in this example has four slots for removable modules, and each module in the compute blade 1706 is a flash memory module, or other type of storage memory module. Other types of modules, as further discussed below, could be used in various embodiments to reconfigure the blades. The five rack unit (5RU) chassis has two integrated external fabric modules (EFMs) in some embodiments. FIG. 18 illustrates a system 1802 in an 8 rack unit with two fabric modules 1804 and 4 controllers 1806. Slots 1808 are shown occupied by blades with solid-state memory, in this example flash memory. In some embodiments, the blades have removable modules, with various amounts of solid-state memory, and the storage system supports heterogeneous mixes of modules and blades. FIG. 19 illustrates 5 controllers 1902 and 22 custom storage modules 1904 with 2 flash modules in a 5 rack unit. Each of five CPU slots 1906 has a controller 1902 installed. In some embodiments, the flash modules in each storage module 1904 are removable, and may be replaced with various capacity flash modules, for example as a storage memory upgrade. In some embodiments, a flash module is replaceable with a compute module, to reconfigure the storage module 1904 as a combination compute and storage module, or, with both flash modules replaced by compute modules, as a compute-only module. Accordingly, this embodiment enables the addition of compute and storage together and/or separately for scaling or other purposes as the blade no longer has compute and flash embedded together.
In some embodiments, the storage blade 702 of FIG. 7 may be modified to remove the ARM 516 and FPGA 520 and thus would be a storage module as described herein. In another embodiment, compute resource 810 of blade 802 of FIG. 11 is optional and thus there may exist a modular architecture where there is compute and flash, compute and accelerators or offload engines, or just flash, i.e., no compute. As described further below, the system is designed to be modular for flexibility so that changes can be made easily through the replacement/substitution of modular components. Furthermore, the modular components may include accelerators or graphic processing units that are compatible with the form factor of the storage module and plug into or combine with the compute module. In some embodiments, the accelerators may replace the compute module on the main board. Various types of accelerators are described below. FIG. 20 illustrates a 3 rack unit expansion shelf 2006. Both the horizontal slots 2004 and the vertical slots 2002 have blades/modules installed. The blades/modules have removable modules, or non-removable modules, in various embodiments. A heterogeneous mix of blades and/or a heterogeneous mix of modules is supported in various embodiments of storage systems. In some embodiments, there may be no modules added to a blade of the storage system. In this embodiment a new blade type can be inserted into the system where the new blade type uses the entire volume of the slot. The new blade type may have additional compute (such as a dual-socketed processor), a GPU, some other suitable accelerator, etc. FIG. 21 illustrates servers 2102 with up to 8 drive slots each and corresponding fabric modules 2108. In this example, each server 2102 has NVRAM 2104 and one or more solid-state drives 2106 in the drive slots. Fully populated, the server 2102 can have eight solid-state drives 2106 in the drive slots. In some embodiments, the solid-state drives 2106 are removable modules. In some embodiments, the NVRAM 2104 is in a removable module. FIG. 22 illustrates rack unit servers 2202 that are stateless plus an external shelf 2204. The external shelf 2204 is populated with storage memory blades 2206. In some embodiments, the blades 2206 have removable storage memory modules. FIG. 23 illustrates rack unit servers 2302 that include non-volatile random access memory 2304 plus an external shelf 2306. Storage memory blades 2308 are inserted in the external shelf 2306, and are removable. In some embodiments, each blade 2308 has one or more removable storage memory modules. It should be appreciated that FIGS. 17-23 illustrate various combinations of combining storage and compute that provide numerous axes of freedom for scaling. The compute and storage for the various embodiments may be added together into the system and/or separately as discussed herein. In addition, the fabric module coupling the blades may incorporate PCI and/or Ethernet. With reference to FIGS. 14A-14C, the NIC 1108 and CPU 1110 are detachable modules. In addition, NIC 1108 may be a SmartNIC or data processing unit that offloads work from the main engine. The offloaded work can include storage, compression, packet processing, and security functions. The data processing unit may be optimized for data movement in some embodiments. FIG. 24A illustrates a blade 2402 with one type of slot 2404 for a removable module 2406, accessible by removing the blade 2402 from a chassis. Further embodiments with more than one slot 2404, and more than one removable module 2406, accessible with the blade 2402 removed are readily devised.
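Returning briefly to the SmartNIC/DPU offload described above with reference to FIGS. 14A-14C, a hypothetical dispatch might look as follows; the class, function names, and task set are illustrative assumptions rather than anything the patent specifies.

```python
# Hypothetical dispatch: run a function on a SmartNIC/DPU when one is present,
# falling back to the main engine (host CPU) otherwise.

OFFLOADABLE = {"storage", "compression", "packet_processing", "security"}

class Dpu:
    """Stand-in for a SmartNIC/DPU that can execute offloaded functions."""
    def execute(self, task: str, payload: bytes) -> bytes:
        return payload  # a real DPU would compress, encrypt, etc.

def cpu_execute(task: str, payload: bytes) -> bytes:
    return payload      # host-side fallback implementation of the same task

def run_task(task: str, payload: bytes, dpu: Dpu | None = None) -> bytes:
    if dpu is not None and task in OFFLOADABLE:
        return dpu.execute(task, payload)   # work offloaded from the main engine
    return cpu_execute(task, payload)       # otherwise run on the host CPU
```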
FIG. 24B illustrates a blade 2412 with another type of slot 2408 for a removable module 2410, accessible without removing the blade 2412 from the chassis. For example, if the blade 2412 is inserted to the left, into a chassis, the module 2410 can be inserted from the right, into the slot 2408 in the blade 2412 with the blade 2412 in the chassis. The module 2410 is also removable from the slot 2408 with the blade 2412 in the chassis. FIG. 24C illustrates a blade 2414 with multiple slots 2408 for a removable module 2410. The module 2410 can be inserted into, or removed or exchanged from, any of the slots 2408, as can further modules 2410. Variations with various numbers of slots 2408 are readily devised. With reference at least to FIGS. 24A-24C, there are many possible combinations and configurations of blades with various removable modules in various embodiments. One blade embodiment can be configured as a compute-only blade, a storage-only blade, or a combination compute and storage blade (also called a hybrid compute/storage blade), through selection of appropriate removable module(s). For example, a compute-only blade could have one or multiple removable modules each with compute resources such as one or more processors, a processor cluster, or multiple processor clusters. A storage-only blade could have one or multiple removable modules each with flash memory, other solid-state memory, or other storage memory, of homogeneous or heterogeneous amounts of memory, NVRAM, or a combination of NVRAM and storage memory. A combination compute and storage blade could have one or more compute resources modules, and one or more storage resources modules. A compute-only blade can be reconfigured as or converted to a compute/storage blade or a storage-only blade. A storage-only blade can be reconfigured as or converted to a compute/storage blade or a compute-only blade. A compute/storage blade can be reconfigured as or converted to a compute-only or a storage-only blade. In essence, the blades described with the embodiments described herein can optionally have storage, and that storage is detachable. The compute complex for the blades can optionally connect to the detachable storage. Thus, the blade can be changed from a compute blade to a storage blade or vice versa, rather than starting with a blade that is limited to a compute blade or a storage blade. One embodiment of a removable module has one or more accelerators, as an accelerator module. The removable module may include one or more graphics processing units (GPUs), which can be used as processing resources. In some embodiments, the removable module has one or more neural networks, for example with appropriate processor(s), data structuring and connectivity. As mentioned above, the removable module may have a smart network interface controller (SmartNIC), or more than one. One removable module has a data processing unit (DPU), or more than one. One removable module has a SmartNIC with a programmable DPU that performs data processing tasks such as compression/decompression and encryption/decryption in cooperation with the network interface controller, to offload network data handling and communication tasks from another processor(s), e.g., a blade processor or a storage controller. Through selectability of a variety of removable modules, a blade is configurable and reconfigurable multiple ways, in various embodiments. The accelerator may offload any software function from the main engine or host in some embodiments.
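The configurability just described, where a blade's role follows from whichever removable modules occupy its slots, can be summarized in a short sketch. The module type names are assumptions for illustration.

```python
# Sketch: a blade is classified by the removable modules installed in it.

COMPUTE_TYPES = {"processor", "gpu", "accelerator", "dpu", "smartnic"}
STORAGE_TYPES = {"flash", "nvram", "storage_class_memory"}

def classify_blade(modules: list[str]) -> str:
    has_compute = any(m in COMPUTE_TYPES for m in modules)
    has_storage = any(m in STORAGE_TYPES for m in modules)
    if has_compute and has_storage:
        return "hybrid compute/storage blade"
    if has_compute:
        return "compute-only blade"
    if has_storage:
        return "storage-only blade"
    return "unpopulated blade"

# Swapping a flash module for a processor module reconfigures the blade:
assert classify_blade(["flash", "flash"]) == "storage-only blade"
assert classify_blade(["processor", "flash"]) == "hybrid compute/storage blade"
```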
It should be appreciated that the type of memory integrated into the embodiments is not limited to flash, as other types of memory, such as RAM, 3D crosspoint storage, etc., may be included. FIG. 25 illustrates a flow diagram of a method that is practiced on or by embodiments of storage systems. This method of configuring a storage system, and variations thereof, makes use of removable modules, blades and storage systems described herein, and variations thereof. In an action 2502, first data is accessed in the storage system. This could be user data, system data, metadata, etc., in various embodiments of storage systems that have blades. In an action 2504, a removable module is added to a blade, or to each of multiple blades. Various suitable modules are described above. The addition of a removable module could be an addition of a new module, or a replacement of an existing module, on a blade. The removable module may be an accelerator or data processing unit as described herein. The blade could be made into a hybrid blade by adding compute resources or storage resources in some embodiments. Because the components are modular, the complexity and cost of replacing an entire blade are avoided. In an action 2506, the first data or second data is accessed in the storage system. One or more of the blades has been reconfigured by adding the removable module. It is appreciated that the storage system is operational both before and after the addition of the removable module(s), be it a new addition, the replacement of one or more removable modules, or a combination of replacement and addition. The storage system may show new capabilities, features, or improvement, for example in storage capacity, type of storage memory, computational capacity, data handling, throughput and/or latency or other aspects of data management and access, from the addition or replacement of one or more removable modules. Although some embodiments are described largely in the context of a storage system, readers of skill in the art will recognize that embodiments of the present disclosure may also take the form of a computer program product disposed upon computer readable storage media for use with any suitable processing system. Such computer readable storage media may be any storage medium for machine-readable information, including magnetic media, optical media, solid-state media, or other suitable media. Examples of such media include magnetic disks in hard drives or diskettes, compact disks for optical drives, magnetic tape, and others as will occur to those of skill in the art. Persons skilled in the art will immediately recognize that any computer system having suitable programming means will be capable of executing the steps described herein as embodied in a computer program product. Persons skilled in the art will recognize also that, although some of the embodiments described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative embodiments implemented as firmware or as hardware are well within the scope of the present disclosure. In some examples, a non-transitory computer-readable medium storing computer-readable instructions may be provided in accordance with the principles described herein. The instructions, when executed by a processor of a computing device, may direct the processor and/or computing device to perform one or more operations, including one or more of the operations described herein.
Such instructions may be stored and/or transmitted using any of a variety of known computer-readable media. A non-transitory computer-readable medium as referred to herein may include any non-transitory storage medium that participates in providing data (e.g., instructions) that may be read and/or executed by a computing device (e.g., by a processor of a computing device). For example, a non-transitory computer-readable medium may include, but is not limited to, any combination of non-volatile storage media and/or volatile storage media. Exemplary non-volatile storage media include, but are not limited to, read-only memory, flash memory, a solid-state drive, a magnetic storage device (e.g., a hard disk, a floppy disk, magnetic tape, etc.), ferroelectric random-access memory (“RAM”), and an optical disc (e.g., a compact disc, a digital video disc, a Blu-ray disc, etc.). Exemplary volatile storage media include, but are not limited to, RAM (e.g., dynamic RAM). Advantages and features of the present disclosure can be further described by the following statements:
1. A storage system, comprising: a plurality of blades comprising, in aggregate, compute resources and storage resources; and at least one of the plurality of blades having one or more removable modules.
2. The storage system of statement 1, wherein the one or more removable modules comprises storage memory as a portion of the storage resources.
3. The storage system of statement 1, wherein the one or more removable modules comprises one or more processors as a portion of the compute resources.
4. The storage system of statement 1, wherein the at least one of the plurality of blades, through selectability of the one or more removable modules, is configurable as each of a compute-only blade, a storage-only blade, and a combination compute and storage blade.
5. The storage system of statement 1, wherein the one or more removable modules comprises an accelerator module.
6. The storage system of statement 1, wherein the one or more removable modules comprises a data processing unit (DPU).
7. The storage system of statement 1, wherein the one or more removable modules comprises a smart network interface controller (SmartNIC).
8. The storage system of statement 1, wherein the one or more removable modules comprises one or more graphics processing units (GPU) as a portion of the compute resources.
9. The storage system of statement 1, wherein the one or more removable modules comprises a neural network as a portion of the compute resources.
10. The storage system of statement 1, wherein: blades of the plurality of blades are coupled as a storage cluster or a storage array; and the blades of the plurality of blades are heterogeneous as to the compute resources, the storage resources and the one or more removable modules.
11. The storage system of statement 1, wherein the at least one of the plurality of blades comprises a blade having a plurality of slots for a plurality of removable modules.
12. A storage system, comprising: a plurality of blades coupled as a storage array or a storage cluster; and each of one or more of the plurality of blades having one or more removable modules.
13. The storage system of statement 12, wherein the one or more removable modules comprises various amounts of flash memory.
14. The storage system of statement 12, wherein the one or more removable modules comprises one or more processors, data processing units, graphics processing units, or accelerators.
15. The storage system of statement 12, wherein the one or more of the plurality of blades, through the one or more removable modules, is configurable as a compute-only blade, configurable as a storage-only blade, and configurable as a combination compute and storage blade.
16. The storage system of statement 12, wherein the one or more removable modules comprises a SmartNIC having a DPU.
17. A method of configuring a storage system, comprising: accessing first data in the storage system; adding a removable module to each of one or more of a plurality of blades of a storage system; and accessing the first data or second data in the storage system, with the one or more of the plurality of blades reconfigured by the adding the removable module to each of the one or more of the plurality of blades.
18. The method of statement 17, wherein the adding the removable module comprises adding a processor, storage memory, a SmartNIC, an accelerator, a DPU or a GPU.
19. The method of statement 17, further comprising: removing a further removable module from another of the plurality of blades.
20. The method of statement 17, further comprising: replacing a further removable module in another of the plurality of blades by a still further removable module.
One or more embodiments may be described herein with the aid of method steps illustrating the performance of specified functions and relationships thereof. The boundaries and sequence of these functional building blocks and method steps have been arbitrarily defined herein for convenience of description. Alternate boundaries and sequences can be defined so long as the specified functions and relationships are appropriately performed. Any such alternate boundaries or sequences are thus within the scope and spirit of the claims. Further, the boundaries of these functional building blocks have been arbitrarily defined for convenience of description. Alternate boundaries could be defined as long as the certain significant functions are appropriately performed. Similarly, flow diagram blocks may also have been arbitrarily defined herein to illustrate certain significant functionality. To the extent used, the flow diagram block boundaries and sequence could have been defined otherwise and still perform the certain significant functionality. Such alternate definitions of both functional building blocks and flow diagram blocks and sequences are thus within the scope and spirit of the claims. One of average skill in the art will also recognize that the functional building blocks, and other illustrative blocks, modules and components herein, can be implemented as illustrated or by discrete components, application specific integrated circuits, processors executing appropriate software and the like or any combination thereof. While particular combinations of various functions and features of the one or more embodiments are expressly described herein, other combinations of these features and functions are likewise possible. The present disclosure is not limited by the particular examples disclosed herein and expressly incorporates these other combinations.
| 308,206
11861189 | DETAILED DESCRIPTION Various embodiments of the present disclosure are described below with reference to the accompanying drawings. Elements and features of the present disclosure, however, may be configured or arranged differently to form other embodiments, which may be variations of any of the disclosed embodiments. In this disclosure, the terms “comprise,” “comprising,” “include,” and “including” are open-ended. As used in the appended claims, these terms specify the presence of the stated elements and do not preclude the presence or addition of one or more other elements. The terms in a claim do not foreclose the apparatus from including additional components (e.g., an interface unit, circuitry, etc.). In this disclosure, references to various features (e.g., elements, structures, modules, components, steps, operations, characteristics, etc.) included in “one embodiment”, “example embodiment”, “an embodiment”, “another embodiment”, “some embodiments”, “various embodiments”, “other embodiments”, “alternative embodiment”, and the like are intended to mean that any such features are included in one or more embodiments of the present disclosure, but may or may not necessarily be combined in the same embodiments. In this disclosure, various units, circuits, or other components may be described or claimed as “configured to” perform a task or tasks. In such contexts, “configured to” is used to connote structure by indicating that the blocks/units/circuits/components include structure (e.g., circuitry) that performs one or more tasks during operation. As such, the block/unit/circuit/component can be said to be configured to perform the task even when the specified block/unit/circuit/component is not currently operational (e.g., is not turned on or activated). The block/unit/circuit/component used with the “configured to” language includes hardware, for example, circuits, memory storing program instructions executable to implement the operation, etc. Additionally, “configured to” can include a generic structure (e.g., generic circuitry) that is manipulated by software and/or firmware (e.g., an FPGA or a general-purpose processor executing software) to operate in a manner that is capable of performing the task(s) at issue. “Configured to” may also include adapting a manufacturing process (e.g., a semiconductor fabrication facility) to fabricate devices (e.g., integrated circuits) that are adapted to implement or perform one or more tasks. As used in the disclosure, the term ‘circuitry’ may refer to all of the following: (a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry), (b) combinations of circuits and software (and/or firmware), such as (as applicable): (i) a combination of processor(s) or (ii) portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions, and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present. This definition of ‘circuitry’ applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term “circuitry” also covers an implementation of merely a processor (or multiple processors) or a portion of a processor and its (or their) accompanying software and/or firmware.
The term “circuitry” also covers, for example, and if applicable to a particular claim element, an integrated circuit for a storage device. As used herein, the terms “first,” “second,” “third,” and so on are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.). The terms “first” and “second” do not necessarily imply that the first value must be written before the second value. Further, although the terms may be used herein to identify various elements, these elements are not limited by these terms. These terms are used to distinguish one element from another element that otherwise have the same or similar names. For example, a first circuitry may be distinguished from a second circuitry. Further, the term “based on” is used to describe one or more factors that affect a determination. This term does not foreclose additional factors that may affect a determination. That is, a determination may be solely based on those factors or based, at least in part, on those factors. Consider the phrase “determine A based on B.” While, in this case, B is a factor that affects the determination of A, such a phrase does not foreclose the determination of A from also being based on C. In other instances, A may be determined based solely on B. Herein, an item of data or a data item may be a sequence of bits. For example, the data item may include the contents of a file, a portion of the file, a page in memory, an object in an object-oriented program, a digital message, a digital scanned image, a part of a video or audio signal, or any other entity which can be represented by a sequence of bits. According to an embodiment, the data item may include a discrete object. According to another embodiment, the data item may include a unit of information within a transmission packet between two different components. An embodiment of the disclosure can provide a data processing system and a method for operating the data processing system, which includes components and resources such as a memory system and a host, and is capable of dynamically allocating plural data paths used for data communication between the components based on usages of the components and the resources. A memory system according to an embodiment of the present disclosure can support a zoned namespace (ZNS) command set and include a zoned storage device interface allowing a non-volatile memory device and a host to collaborate on data placement, such that data can be aligned to the physical media of the non-volatile memory device, improving the overall performance of the memory system and increasing the capacity that can be exposed to the host. Even while a data input/output operation is performed within the non-volatile memory device, an apparatus or a method can make the memory system adjust the number of bits of data stored in a non-volatile memory cell included in a memory block allocated for a specific zone, in response to a host's request. In addition, in a memory system according to an embodiment of the present disclosure, a namespace divided into plural zones can be established during an initial operation for performing a data input/output operation, after being connected to a host, based on a characteristic of data stored in the non-volatile memory device.
The embodiment can provide an apparatus and method capable of improving data input/output operation performance and improving durability of the non-volatile memory device, because the namespace can be changed or adjusted in response to the characteristic of data. Further, a memory system according to an embodiment of the present disclosure can overcome the limitation of operation according to an initial setting value in a non-volatile memory device that supports a namespace divided by plural zones, thus increasing availability of the non-volatile memory device. In an embodiment of the present disclosure, a memory system can include a memory device including a plurality of memory blocks, each memory block including memory cells capable of storing multi-bit data; and a controller configured to allocate the plurality of memory blocks for plural zoned namespaces input from an external device and access a memory block allocated for one of the plural zoned namespaces, which is input along with a data input/output request. In response to a first request input from the external device, the controller can adjust a number of bits of data stored in a memory cell included in a memory block, which is allocated for at least one zoned namespace among the plural zoned namespaces, and fix a storage capacity of the at least one zoned namespace. The controller can be configured to, when the plurality of memory blocks allocated for the plural zoned namespaces includes memory cells storing n-bit data, adjust the plurality of memory blocks to store 1-bit data in each of the memory cells included therein and deactivate (n−1) zoned namespaces among the plural zoned namespaces, wherein the n is an integer greater than 1. The controller can be configured to, when the first request is input along with a specific zoned namespace among the plural zoned namespaces, adjust at least one memory block allocated for the specific zoned namespace to store 1-bit data in each of memory cells in the at least one memory block and deactivate at least one zoned namespace other than the specific zoned namespace among the plural zoned namespaces to fix a storage capacity of the specific zoned namespace. The controller can be further configured to, in response to a second request input from the external device, adjust the at least one memory block storing the 1-bit data in each of memory cells to store the multi-bit data therein. The controller can be configured to, in response to the second request, activate the at least one zoned namespace previously deactivated to store data. The first request can be associated with a characteristic of data stored in the memory device via the at least one zoned namespace of the plural zoned namespaces. The first request can be associated with a memory block storing data which has a shorter update period than, or is more frequently read than, other data stored in other memory blocks allocated for other zoned namespaces. The controller can be further configured to perform garbage collection on at least one memory block among the plurality of memory blocks and release zoned namespace allocation of the at least one memory block when the at least one memory block is erased. The controller can be further configured to transmit, to the external device, a result of the adjusting of the number of bits of data stored in the memory cell included in the memory block allocated for the at least one zoned namespace.
In another embodiment of the present disclosure, a method for operating a memory system can include allocating a plurality of memory blocks, each memory block including memory cells capable of storing multi-bit data, for plural zoned namespaces input from an external device; accessing a memory block allocated for one of the plural zoned namespaces, which is input along with a data input/output request; and adjusting a number of bits of data stored in a memory cell included in a memory block, which is allocated for at least one zoned namespace among the plural zoned namespaces, and fixing a storage capacity of the at least one zoned namespace, in response to a first request input from the external device. The adjusting of the number of bits of data can include, when the plurality of memory blocks allocated for the plural zoned namespaces includes memory cells storing n-bit data, in response to the first request, adjusting the plurality of memory blocks to store 1-bit data in each of the memory cells included therein; and deactivating (n−1) zoned namespaces among the plural zoned namespaces. Herein, the n is an integer greater than 1. The adjusting of the number of bits of data can include, when the first request is input along with a specific zoned namespace among the plural zoned namespaces, adjusting at least one memory block allocated for the specific zoned namespace to store 1-bit data in each of memory cells in the at least one memory block; and deactivating at least one zoned namespace other than the specific zoned namespace among the plural zoned namespaces to fix a storage capacity of the specific zoned namespace. The method can further include adjusting, in response to a second request input from the external device, the at least one memory block storing the 1-bit data in each of memory cells to store the multi-bit data therein. The method can further include activating, in response to the second request, the at least one zoned namespace previously deactivated to store data. The first request can be associated with a characteristic of data stored in the memory device via the at least one zoned namespace of the plural zoned namespaces. The first request can be associated with a memory block storing data which has a shorter update period than, or is more frequently read than, other data stored in other memory blocks. The method can further include performing garbage collection on at least one memory block among the plurality of memory blocks; and releasing zoned namespace allocation of the at least one memory block when the at least one memory block is erased. The method can further include transmitting, to the external device, a result of the adjusting of the number of bits of data stored in the memory cell included in the memory block. In another embodiment of the present disclosure, a memory system can include a memory device including a plurality of memory blocks, each memory block including memory cells capable of storing multi-bit data; and a controller configured to allocate the plurality of memory blocks for plural zoned namespaces input from an external device, access a memory block allocated for one of the plural zoned namespaces, which is input along with a data input/output request, adjust, in response to a first request, a number of bits of data stored in a memory cell included in a memory block, which is allocated for at least one zoned namespace among the plural zoned namespaces, and fix a storage capacity of the at least one zoned namespace.
The controller can be configured to deactivate at least one zoned namespace other than the at least one zoned namespace among the plural zoned namespaces to fix or maintain a storage capacity of the at least one zoned namespace corresponding to the first request. In another embodiment, a memory system can include a memory device including first and second groups of memory blocks each having memory cells each capable of having any of first and second cell storage capacities, each group having a predetermined group storage capacity due to a predetermined one of the first and second cell storage capacities; and a controller configured to change, between the first and second cell storage capacities, a cell storage capacity of the memory cells within the respective first and second groups while keeping the predetermined group storage capacity of the first group by incorporating, into the first group, one or more of the memory blocks within the second group. In another embodiment, a memory system can include a memory device including a group of memory blocks having memory cells each capable of having any of first and second cell storage capacities, the group having a predetermined group storage capacity due to a predetermined one of the first and second cell storage capacities; and a controller configured to change, between the first and second cell storage capacities, a cell storage capacity of the memory cells within the group while keeping the predetermined group storage capacity of the group by incorporating, into another group within the memory device, one or more of the memory blocks within the group. Embodiments of the present disclosure will now be described with reference to the accompanying drawings, wherein like numbers reference like elements. FIG. 1 illustrates a data processing system according to an embodiment of the present disclosure. Referring to FIG. 1, a memory system 110 may include a memory device 150 and a controller 130. The memory device 150 and the controller 130 in the memory system 110 may be considered components or elements physically separated from each other. The memory device 150 and the controller 130 may be connected via at least one data path. For example, the data path may include a channel and/or a way. According to an embodiment, the memory device 150 and the controller 130 may be components or elements functionally divided. Further, according to an embodiment, the memory device 150 and the controller 130 may be implemented with a single chip or a plurality of chips. The controller 130 may perform a data input/output operation in response to a request input from the external device. For example, when the controller 130 performs a read operation in response to a read request input from an external device, data stored in a plurality of non-volatile memory cells included in the memory device 150 is transferred to the controller 130. Although not shown in FIG. 1, the memory device 150 may include a memory die. The memory die may include a plurality of memory blocks. The memory block may be understood as a group of non-volatile memory cells in which data is removed together by a single erase operation. Although not illustrated, the memory block may include a page which is a group of non-volatile memory cells that store data together during a single program operation or output data together during a single read operation. For example, one memory block may include a plurality of pages. For example, the memory device 150 may include a plurality of memory planes or a plurality of memory dies.
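The hierarchy described here, with dies containing planes, planes containing blocks, and blocks containing pages (the block being the erase unit and the page the program/read unit), can be modeled compactly. The counts chosen below are illustrative assumptions, not values from the disclosure.

```python
# Compact model of the die / plane / block / page hierarchy described above.

from dataclasses import dataclass, field

@dataclass
class Page:
    data: bytes | None = None          # programmed together, read together

@dataclass
class Block:
    pages: list[Page] = field(default_factory=lambda: [Page() for _ in range(128)])

    def erase(self) -> None:
        # A single erase operation removes data from the whole block at once.
        for page in self.pages:
            page.data = None

@dataclass
class Plane:
    blocks: list[Block] = field(default_factory=lambda: [Block() for _ in range(64)])

@dataclass
class Die:
    planes: list[Plane] = field(default_factory=lambda: [Plane() for _ in range(2)])
```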
According to an embodiment, the memory plane may be considered a logical or a physical partition including at least one memory block, a driving circuit capable of controlling an array including a plurality of non-volatile memory cells, and a buffer that can temporarily store data inputted to, or outputted from, non-volatile memory cells. In addition, according to an embodiment, the memory die may include at least one memory plane. The memory die may be understood as a set of components implemented on a physically distinguishable substrate. Each memory die may be connected to the controller 130 through a data path. Each memory die may include an interface to exchange a piece of data and a signal with the controller 130. According to an embodiment, the memory device 150 may include at least one memory block, at least one memory plane, or at least one memory die. The internal configuration of the memory device 150 shown in FIG. 1 may be different according to performance of the memory system 110. An embodiment of the present disclosure is not limited to the internal configuration shown in FIG. 1. Referring to FIG. 1, the memory device 150 may include a voltage supply circuit capable of supplying at least some voltage into the memory block. The voltage supply circuit may supply a read voltage Vrd, a program voltage Vprog, a pass voltage Vpass, or an erase voltage Vers into a non-volatile memory cell included in the memory block. For example, during a read operation for reading data stored in the non-volatile memory cell included in the memory block, the voltage supply circuit may supply the read voltage Vrd into a selected non-volatile memory cell. During the program operation for storing data in the non-volatile memory cell included in the memory block, the voltage supply circuit may supply the program voltage Vprog into a selected non-volatile memory cell. Also, during a read operation or a program operation performed on the selected non-volatile memory cell, the voltage supply circuit may supply a pass voltage Vpass into a non-selected non-volatile memory cell. During the erase operation for erasing data stored in the non-volatile memory cell included in the memory block, the voltage supply circuit may supply the erase voltage Vers into the memory block. The memory device 150 may store information regarding various voltages which are supplied to the memory block 60 based on which operation is performed. For example, when a non-volatile memory cell in the memory block can store multi-bit data, plural levels of the read voltage Vrd for recognizing or reading the multi-bit data may be required. The memory device 150 may include a table including information corresponding to plural levels of the read voltage Vrd, corresponding to the multi-bit data. For example, the table can include bias values stored in a register, each bias value corresponding to a specific level of the read voltage Vrd. The number of bias values for the read voltage Vrd that is used for a read operation may be limited to a preset range. Also, the bias values can be quantized. At least some of the plurality of memory blocks included in the memory device 150 can be allocated for a namespace divided into plural zones, hereinafter referred to as zoned namespaces (ZNSs). According to an embodiment, the controller 130 may evenly allocate all memory blocks included in the memory device 150 for respective ZNSs.
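The read-voltage information described above can be sketched as a small table: a memory cell storing n bits of data has 2^n threshold states and therefore needs 2^n − 1 read levels of Vrd to distinguish them, each level backed by a quantized bias value in a register. The concrete bias numbers below are illustrative assumptions.

```python
# Sketch of a table of quantized read-voltage bias values, one entry per
# read level. A cell storing n bits needs 2**n - 1 levels of Vrd.

READ_BIAS_TABLE = {
    # bits per cell -> quantized bias value per read level (arbitrary units)
    1: [0],                          # SLC: one level separates two states
    2: [-3, 0, 3],                   # MLC: three levels separate four states
    3: [-7, -5, -3, 0, 2, 4, 6],     # TLC: seven levels separate eight states
}

def read_levels(bits_per_cell: int) -> list[int]:
    table = READ_BIAS_TABLE[bits_per_cell]
    assert len(table) == 2 ** bits_per_cell - 1
    return table
```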
In this case, plural memory blocks allocated for a specific ZNS can include a memory block storing data therein (such as an open block or closed block) and an empty memory block not storing any data therein (such as a free block). According to an embodiment, the controller 130 may allocate, for a ZNS, at least some of the memory blocks included in the memory device 150, the at least some memory blocks corresponding to a storage capacity required by the ZNS. Herein, the storage capacity may refer to how much storage space the memory device 150 provides. A memory block allocated for a specific ZNS can be released according to garbage collection, and another free block can be newly allocated for the specific ZNS. Or, when the specific ZNS is deactivated, the at least some memory blocks of the deactivated ZNS may become unallocated for any ZNS. The controller 130 may allocate the unallocated memory block for a specific ZNS additionally, as needed, while performing an input/output operation or receiving a request input from an external device. The ZNS may refer to a scheme of using a namespace divided by plural zones. Herein, the namespace may be considered as a logical storage space which is formattable within the memory device 150. The namespace may have a preset or adjustable storage capacity. When the ZNS is applicable to the memory system 110, data input/output operations may be performed differently from a conventional non-volatile memory system which does not support a scheme of the ZNS. For example, the host 102 can execute a plurality of applications APPL1, APPL2, APPL3, and the plurality of applications APPL1, APPL2, APPL3 can generate data items, individually, and store the generated data items in the memory system 110. First, in the conventional non-volatile memory system, data items input from a host 102 are sequentially stored in a memory block within the memory device. That is, data items generated by the plurality of applications APPL1, APPL2, APPL3 may be sequentially stored in the memory device, without separation or distinction, according to an order of the data items which have been transferred from the host 102 to the conventional non-volatile memory system. The data items generated by the plurality of applications APPL1, APPL2, APPL3 may be sequentially stored in a memory block opened for programming data within the memory device. In the memory block, the data items generated by the plurality of applications APPL1, APPL2, APPL3 can be mixed or jumbled. In these processes, the controller is configured to generate map data items, each associating a logical address input from the host 102 with a physical address indicating a location where data is stored in the memory device. Thereafter, when the plurality of applications APPL1, APPL2, APPL3 executed by the host 102 request data items stored in the memory system, the controller can output the data items requested by the plurality of applications APPL1, APPL2, APPL3 based on the map data items. In the conventional non-volatile memory system, various types of data items generated by various applications may be mixed or jumbled in a single memory block. In this case, the data items stored in the single memory block (where a valid data item is the latest version of the data) can have different validity, and it may be difficult to predict the validity of the data items. For this reason, when garbage collection is performed, a lot of resources may be consumed to select a valid data item or to check whether the data items are valid.
The ZNS may refer to a scheme of using a namespace divided by plural zones. Herein, the namespace may be considered a logical storage space which is formattable within the memory device150. The namespace may have a preset or adjustable storage capacity. When the ZNS is applicable to the memory system110, data input/output operations may be performed differently from a conventional non-volatile memory system which does not support a scheme of the ZNS. For example, the host102can execute a plurality of applications APPL1, APPL2, APPL3, and the plurality of applications APPL1, APPL2, APPL3 can individually generate data items and store the generated data items in the memory system110. First, in the conventional non-volatile memory system, data items input from a host102are sequentially stored in a memory block within the memory device. That is, data items generated by the plurality of applications APPL1, APPL2, APPL3 may be sequentially stored in the memory device, without separation or distinction, according to the order in which the data items have been transferred from the host102to the conventional non-volatile memory system. The data items generated by the plurality of applications APPL1, APPL2, APPL3 may be sequentially stored in a memory block opened for programming data within the memory device. In the memory block, the data items generated by the plurality of applications APPL1, APPL2, APPL3 can be mixed or jumbled. In these processes, the controller is configured to generate map data items, each associating a logical address input from the host102with a physical address indicating a location where data is stored in the memory device. Thereafter, when the plurality of applications APPL1, APPL2, APPL3 executed by the host102request data items stored in the memory system, the controller can output the data items requested by the plurality of applications APPL1, APPL2, APPL3 based on the map data items. In the conventional non-volatile memory system, various types of data items generated by various applications may thus be mixed or jumbled in a single memory block. In this case, the data items stored in the single memory block (valid data being the latest version of the data) can have different validity, and it may be difficult to predict the validity of the data items. For this reason, when garbage collection is performed, a lot of resources may be consumed to select a valid data item or to check whether the data items are valid. In addition, because plural applications are associated with a single memory block, a data input/output operation requested by one of the plural applications may be delayed by another operation requested or caused by another application. When garbage collection is performed on the memory block, plural operations requested by the plural applications may be delayed. However, the ZNS can avoid or prevent the above-described issues which occur in the conventional non-volatile memory system. In a scheme of the ZNS, the plurality of applications APPL1, APPL2, APPL3 may sequentially store data items in respectively assigned zoned namespaces ZNS1, ZNS2, ZNS3. Here, the zone may have a predetermined storage space corresponding to a logical address scheme used by the host102. Plural memory blocks included in the memory device150may be allocated for individual zones. Referring toFIG.1, the plurality of zoned namespaces ZNS1, ZNS2, ZNS3 included in the memory device150can correspond to the plurality of applications APPL1, APPL2, APPL3. A data item associated with the first application (APPL1, 312) can be programmed in, or read from, a memory block allocated for the first zoned namespace (ZNS1, 322). The second application (APPL2, 314) can store a data item in, or read a data item from, another memory block allocated for the second zoned namespace (ZNS2, 324). In addition, the third application (APPL3, 316) may store a data item in, or read a data item from, another memory block allocated for the third zoned namespace (ZNS3, 326). In this case, data items generated by the first application APPL1 are sequentially stored in the memory block allocated for the first zoned namespace ZNS1, so that the memory system110does not have to check memory blocks allocated for ZNSs other than the first zoned namespace ZNS1 among the plurality of zoned namespaces ZNS1, ZNS2, ZNS3 when performing a data input/output operation or garbage collection. In addition, until the storage space in the first zoned namespace ZNS1 allocated to the first application APPL1 becomes insufficient to store data, garbage collection need not be performed on the memory blocks allocated for the first zoned namespace ZNS1. For this reason, the efficiency of garbage collection for the memory device150may increase, and the frequency of performing garbage collection may decrease. This can lead to a decrease in the write amplification factor (WAF), which indicates a degree to which the amount of data writes (or data programs) is amplified in the memory device150, and can increase the lifespan of the memory device150. In addition, in the memory system110to which the ZNS is applied, media over-provisioning in the memory device150can be reduced, and the utilization (or occupancy) rate of the volatile memory144(refer toFIGS.2to3) can also be reduced. It is possible to reduce the amount of data processed, transmitted, or received within the memory system110, so that the overheads generated in the memory system110might decrease. Through this, the performance of the data input/output operation of the memory system110may be improved or enhanced.
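By way of illustration and not limitation, the sequential storage described above can be sketched with a per-zone write pointer: an append is accepted only at the pointer, so data items from one application land strictly in order within their own zone. The names (zone_wp, zone_append) are hypothetical.

    #include <stdint.h>
    #include <stdbool.h>

    /* A zone accepts writes only at its write pointer, so data lands in the
     * zone strictly sequentially. Names are illustrative assumptions. */
    struct zone_wp {
        uint64_t start_lba;  /* first logical block of the zone             */
        uint64_t wp;         /* next logical block expected to be written   */
        uint64_t capacity;   /* zone size in logical blocks                 */
    };

    static bool zone_append(struct zone_wp *z, uint64_t lba, uint64_t nblocks)
    {
        if (lba != z->wp)                                 /* must be sequential */
            return false;
        if (z->wp + nblocks > z->start_lba + z->capacity) /* must fit in zone   */
            return false;
        z->wp += nblocks;                                 /* advance pointer    */
        return true;
    }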
According to an embodiment, the plurality of zoned namespaces ZNS1, ZNS2, ZNS3 may be individually allocated for each of the plurality of applications APPL1, APPL2, APPL3. In another embodiment, the plurality of applications APPL1, APPL2, APPL3 may share a specific ZNS. In addition, in another embodiment, plural ZNSs may be allocated for each of the plurality of applications APPL1, APPL2, APPL3. Each of the applications APPL1, APPL2, APPL3 can use the plural ZNSs according to the characteristics of data to be stored in the memory system110. For example, when the first zoned namespace ZNS1 and the second zoned namespace ZNS2 are allocated for the first application APPL1, the first application APPL1 can store a hot data item (e.g., a data item frequently accessed, read, or updated) in the first zoned namespace ZNS1, and store a cold data item (e.g., a data item less frequently accessed, read, or updated) in the second zoned namespace ZNS2. The hot data item is more frequently read, updated, or re-programmed than the cold data item, so that the validity period of the hot data item is shorter than that of the cold data item. During an initial operation for engagement between the host102and the memory system110, the host102and the memory system110may exchange information regarding the ZNSs allocated for the respective applications APPL1, APPL2, APPL3. A data input/output operation may be performed for each of the applications APPL1, APPL2, APPL3 through a corresponding ZNS. Depending on the characteristics of data during the data input/output operation or the characteristics of the applications, the host102can require a faster data input/output speed of the memory system110, or may need to securely store a data item with a very high priority in the memory system110. Furthermore, the plurality of non-volatile memory cells included in the memory device150may include memory cells, each memory cell storing multi-bit data. However, the memory system110can adjust a memory cell to store one-bit data. When a fast input/output speed is required or data should be safely stored, the memory system110may adjust a memory block including memory cells to store one-bit data in each memory cell although the memory cells are capable of storing multi-bit data. Further, if necessary, a specific memory block in the memory device150may be used as a single-level cell (SLC) buffer for a fast data input/output operation or for data safety. Sometimes, the memory system110can adjust the number of bits of data stored in a memory cell for wear leveling. For various reasons, after the host102and the memory system110set the ZNS and exchange information regarding the set ZNS with each other, the memory system110might independently change the number of bits of data stored in a memory cell included in a memory block allocated for the ZNS. However, an issue could arise in that the preset storage capacity of the ZNS would change when the number of bits of data stored in the memory cells included in the memory blocks allocated for the already set ZNS is changed. Referring toFIG.1, the host102can send a first request RES to the memory system110to change the number of bits of data stored in a non-volatile memory cell in a memory block allocated for the ZNS. For example, when the host102transmits the first request RES in relation to the second zoned namespace ZNS2 to the memory system110, the memory system110can adjust a memory block AR1 allocated for the second zoned namespace ZNS2 to store single-bit data in each memory cell included in the memory block AR1. That is, the memory block AR1 can be adjusted from a multi-level cell (MLC) memory block including memory cells capable of storing multi-bit data to a single-level cell (SLC) memory block including memory cells capable of storing single-bit data.
Further, when the host102transmits a second request RCS in relation to the second zoned namespace ZNS2 to the memory system110, the memory system110can adjust the memory block AR1 allocated for the second zoned namespace ZNS2 to store multi-bit data in each memory cell included in the memory block AR1. That is, the memory block AR1 can be adjusted from the single-level cell (SLC) memory block including memory cells capable of storing one-bit data back to the multi-level cell (MLC) memory block including memory cells capable of storing multi-bit data. During these operations, the memory system110may fix or maintain the storage capacity of the second zoned namespace ZNS2 even when the number of bits of data stored in the memory cells included in the memory block is adjusted. The memory system110may deactivate another ZNS not to be used by the host102, so that the memory system110could release the memory blocks allocated for the deactivated ZNS and re-allocate the released memory blocks for the second zoned namespace ZNS2. The memory system110may adjust the number of bits of data that can be stored in the non-volatile memory cells in the memory block and then notify the host102of the adjusted result. According to an embodiment, the first and second requests RES, RCS transmitted from the host102can be generated while the plurality of applications APPL1, APPL2, APPL3 are running (e.g., when a fast operation speed is requested or data having a high priority is generated). In addition, in another embodiment, when a deteriorated operation state of a specific memory block in the memory device150is recognized through a background operation such as bad block management, garbage collection, or wear leveling performed in the memory system110, the memory system110may transmit relevant information to the host102for inducing a change in a storage space in the ZNS associated with the specific memory block. For example, when the operation state of the memory block AR1 allocated for the second zoned namespace ZNS2 reaches a preset lifetime (e.g., P/E cycles) for storing multi-bit data (e.g., as a TLC block, a QLC block, etc.), the memory system110may transmit information regarding the second zoned namespace ZNS2 associated with the memory block AR1 to the host102for the purpose of adjusting the memory block AR1 to store one-bit data in its memory cells (e.g., the memory block AR1 is adjusted into an SLC block). Through this operation, the lifespan of the memory device150can be extended.
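By way of illustration and not limitation, the capacity-preserving adjustment triggered by the first and second requests RES and RCS can be sketched as follows: when the bits stored per cell are divided by n, the zone's block count is multiplied by n (using blocks released from deactivated ZNSs), so the product of the two, a proxy for the zone's storage capacity, stays fixed. The structure and function names are hypothetical.

    /* Capacity-preserving mode switch: bits_per_cell * nblocks stays
     * constant across the switch. All names are hypothetical. */
    struct zone_mode {
        int bits_per_cell;  /* e.g., 3 for TLC mode, 1 for SLC mode         */
        int nblocks;        /* blocks currently allocated to the zone       */
    };

    /* First request (RES in the text above): switch the zone to SLC mode,
     * taking over blocks released from deactivated ZNSs. */
    static void zone_to_slc(struct zone_mode *z)
    {
        z->nblocks *= z->bits_per_cell;
        z->bits_per_cell = 1;
    }

    /* Second request (RCS in the text above): restore multi-bit mode and
     * return the surplus blocks so other ZNSs can be re-activated. */
    static void zone_to_mlc(struct zone_mode *z, int bits)
    {
        z->nblocks /= bits;
        z->bits_per_cell = bits;
    }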
FIGS.2and3illustrate some operations that may be performed by the memory system110according to one or more embodiments of the present disclosure. Referring toFIG.2, a data processing system100may include a host102engaged or coupled with a memory system, such as the memory system110. The host102may include a portable electronic device (e.g., a mobile phone, an MP3 player, a laptop computer, etc.) or a non-portable electronic device (e.g., a desktop computer, a game player, a television, a projector, etc.). The host102may also include at least one operating system (OS), which can control functions and operations performed in the host102. The OS can provide interoperability between the host102engaged operatively with the memory system110and a user who intends to store data in the memory system110. The OS may support functions and operations corresponding to user requests. By way of example but not limitation, the OS can be classified into a general operating system and a mobile operating system according to the mobility of the host102. The general operating system may be split into a personal operating system and an enterprise operating system according to system requirements or a user environment. As compared with the personal operating system, the enterprise operating system can be specialized for securing and supporting high-performance computing. The mobile operating system may be specialized to support services or functions for mobility (e.g., a power saving function). The host102may include a plurality of operating systems. The host102may execute multiple operating systems interlocked with the memory system110, corresponding to a user request. The host102may transmit a plurality of commands corresponding to the user's requests into the memory system110, thereby performing operations corresponding to the commands within the memory system110. The controller130in the memory system110may control the memory device150in response to a request or a command input from the host102. For example, the controller130may perform a read operation to provide a piece of data read from the memory device150for the host102and may perform a write operation (or a program operation) to store a piece of data input from the host102in the memory device150. In order to perform data input/output (I/O) operations, the controller130may control and manage internal operations for data read, data program, data erase, or the like. According to an embodiment, the controller130can include a host interface132, a processor134, error correction circuitry138, a power management unit (PMU)140, a memory interface142, and a memory144. The components included in the controller130as illustrated inFIG.2may vary according to structure, function, operation performance, or the like, among various embodiments. For example, the memory system110may be implemented with any of various types of storage devices, which may be electrically coupled with the host102, according to a protocol of a host interface. Non-limiting examples of suitable storage devices include a solid state drive (SSD), a multimedia card (MMC), an embedded MMC (eMMC), a reduced size MMC (RS-MMC), a micro-MMC, a secure digital (SD) card, a mini-SD, a micro-SD, a universal serial bus (USB) storage device, a universal flash storage (UFS) device, a compact flash (CF) card, a smart media (SM) card, a memory stick, and the like. Components in the controller130may be added or omitted based on the implementation of the memory system110. The host102and the memory system110may include a controller or an interface for transmitting and receiving signals, a piece of data, and the like, in accordance with one or more predetermined protocols. For example, the host interface132in the memory system110may include an apparatus capable of transmitting signals, a piece of data, and the like, to the host102, or receiving signals, a piece of data, and the like, input from the host102. The host interface132included in the controller130may receive signals, commands (or requests), and/or a piece of data input from the host102. For example, the host102and the memory system110may use a predetermined protocol to transmit and receive a piece of data between each other.
Examples of protocols or interfaces supported by the host102and the memory system110for sending and receiving a piece of data include Universal Serial Bus (USB), Multi-Media Card (MMC), Parallel Advanced Technology Attachment (PATA), Small Computer System Interface (SCSI), Enhanced Small Disk Interface (ESDI), Integrated Drive Electronics (IDE), Peripheral Component Interconnect Express (PCIe), Serial-attached SCSI (SAS), Serial Advanced Technology Attachment (SATA), Mobile Industry Processor Interface (MIPI), and the like. According to an embodiment, the host interface132is a type of layer for exchanging a piece of data with the host102and is implemented with, or driven by, firmware called a host interface layer (HIL). The Integrated Drive Electronics (IDE) or Advanced Technology Attachment (ATA) may be used as one of the interfaces for transmitting and receiving a piece of data and, for example, may use a cable including 40 wires connected in parallel to support data transmission and reception between the host102and the memory system110. When a plurality of memory systems110are connected to a single host102, the plurality of memory systems110may be divided into a master and a slave by using a position or a dip switch to which the plurality of memory systems110are connected. The memory system110set as the master may be used as the main memory device. The IDE (ATA) may include, for example, Fast-ATA, ATAPI, and Enhanced IDE (EIDE). Serial Advanced Technology Attachment (SATA) is a type of serial data communication interface that is compatible with various ATA standards of parallel data communication interfaces which are used by Integrated Drive Electronics (IDE) devices. The 40 wires in the IDE interface can be reduced to six wires in the SATA interface. For example, 40 parallel signals for the IDE can be converted into 6 serial signals for the SATA interface. SATA has been widely used because of its faster data transmission and reception rate and its lower resource consumption in the host102for data transmission and reception. SATA may support connections of up to 30 external devices to a single transceiver included in the host102. In addition, SATA can support hot plugging, which allows an external device to be attached to, or detached from, the host102even while data communication between the host102and another device is being executed. Thus, the memory system110can be connected or disconnected as an additional device, like a device supported by a Universal Serial Bus (USB), even when the host102is powered on. For example, in the host102having an eSATA port, the memory system110may be freely detached like an external hard disk. Small Computer System Interface (SCSI) is a type of serial data communication interface used for connections between a computer, a server, and/or other peripheral devices. The SCSI can provide a high transmission speed as compared with other interfaces such as IDE and SATA. In SCSI, the host102and at least one peripheral device (e.g., the memory system110) are connected in series, but data transmission and reception between the host102and each peripheral device may be performed through parallel data communication. In SCSI, it is easy to connect a device such as the memory system110to, or disconnect it from, the host102. SCSI can support connections of 15 other devices to a single transceiver included in the host102. Serial Attached SCSI (SAS) can be understood as a serial data communication version of the SCSI.
In SAS, not only are the host102and a plurality of peripheral devices connected in series, but data transmission and reception between the host102and each peripheral device may also be performed in a serial data communication scheme. SAS can support connection between the host102and the peripheral device through a serial cable instead of a parallel cable, which makes it easy to manage equipment using SAS and enhances or improves operational reliability and communication performance. SAS may support connections of eight external devices to a single transceiver included in the host102. Non-volatile memory express (NVMe) is a type of interface based at least on Peripheral Component Interconnect Express (PCIe), designed to increase the performance and design flexibility of the host102, servers, computing devices, and the like equipped with the non-volatile memory system110. PCIe can use a slot or a specific cable for connecting the host102(e.g., a computing device) and the memory system110(e.g., a peripheral device). For example, PCIe can use a plurality of pins (for example, 18 pins, 32 pins, 49 pins, 82 pins, etc.) and at least one wire (e.g., x1, x4, x8, x16, etc.) to achieve high-speed data communication over several hundred MB per second (e.g., 250 MB/s, 500 MB/s, 984.6250 MB/s, 1969 MB/s, etc.). According to an embodiment, the PCIe scheme may achieve bandwidths of tens to hundreds of gigabits per second. A system using the NVMe can make the most of an operation speed of the non-volatile memory system110, such as an SSD, which operates at a higher speed than a hard disk. According to an embodiment, the host102and the memory system110may be connected through a universal serial bus (USB). The Universal Serial Bus (USB) is a type of scalable, hot-pluggable, plug-and-play serial interface that can provide cost-effective standard connectivity between the host102and a peripheral device, such as a keyboard, a mouse, a joystick, a printer, a scanner, a storage device, a modem, a video camera, and the like. A plurality of peripheral devices such as the memory system110may be coupled to a single transceiver included in the host102. Referring toFIG.2, the error correction circuitry138can correct error bits of the data to be processed in (e.g., output from) the memory device150, and may include an error correction code (ECC) encoder and an ECC decoder. The ECC encoder can perform error correction encoding of data to be programmed in the memory device150to generate encoded data into which a parity bit is added, and store the encoded data in the memory device150. The ECC decoder can detect and correct errors contained in data read from the memory device150when the controller130reads the data stored in the memory device150. For example, after performing error correction decoding on the data read from the memory device150, the error correction circuitry138can determine whether the error correction decoding has succeeded and output an instruction signal (e.g., a correction success signal or a correction fail signal). The error correction circuitry138can use the parity bit generated during the ECC encoding process for correcting the error bits of the read data. When the number of error bits is greater than or equal to a threshold number of correctable error bits, the error correction circuitry138might not correct the error bits but instead may output an error correction fail signal indicating failure in correcting the error bits.
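By way of illustration and not limitation, the encode-then-check flow just described can be pictured with a deliberately simplified, hypothetical one-parity-bit-per-byte code, which can detect (but not locate or correct) a single bit error; the codes enumerated next are far more capable.

    #include <stdint.h>
    #include <stdbool.h>

    /* Toy stand-in for the real ECC: one even-parity bit per byte. Actual
     * BCH/LDPC codes add many parity bits and can correct errors. */
    static uint8_t parity_bit(uint8_t data)
    {
        uint8_t p = 0;
        for (int i = 0; i < 8; i++)
            p ^= (uint8_t)((data >> i) & 1u);
        return p;  /* stored alongside the data at program time            */
    }

    /* On read: recompute and compare; a mismatch plays the role of the
     * "correction fail" signal for this one-bit, detect-only code. */
    static bool parity_ok(uint8_t data, uint8_t stored_parity)
    {
        return parity_bit(data) == stored_parity;
    }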
According to an embodiment, the error correction circuitry138may perform an error correction operation based on a coded modulation such as a low density parity check (LDPC) code, a Bose-Chaudhuri-Hocquenghem (BCH) code, a turbo code, a Reed-Solomon (RS) code, a convolution code, a recursive systematic code (RSC), a trellis-coded modulation (TCM), a block coded modulation (BCM), and so on. The error correction circuitry138may include all circuits, modules, systems, and/or devices for performing the error correction operation based on at least one of the above-described codes. For example, the ECC decoder may perform hard decision decoding or soft decision decoding on data transmitted from the memory device150. Hard decision decoding can be understood as one of two methods broadly classified for error correction. Hard decision decoding may include an operation of correcting an error by reading digital data of '0' or '1' from a non-volatile memory cell in the memory device150. Because hard decision decoding handles a binary logic signal, the circuit/algorithm design or configuration may be simpler and the processing speed may be faster than soft decision decoding. Soft decision decoding may quantize a threshold voltage of a non-volatile memory cell in the memory device150by two or more quantized values (e.g., multiple bit data, approximate values, an analog value, and the like) in order to correct an error based on the two or more quantized values. The controller130can receive two or more alphabets or quantized values from a plurality of non-volatile memory cells in the memory device150, and then perform decoding based on information generated by characterizing the quantized values as a combination of information such as conditional probability or likelihood. According to an embodiment, the ECC decoder may use a low-density parity-check and generator matrix (LDPC-GM) code among methods designed for soft decision decoding. The low-density parity-check (LDPC) code uses an algorithm that can read values of data from the memory device150in several bits according to reliability, not simply data of 1 or 0 as in hard decision decoding, and iteratively repeats the reading through message exchange in order to improve the reliability of the values. Then, the values are finally determined as data of 1 or 0. For example, a decoding algorithm using LDPC codes can be understood as probabilistic decoding, unlike hard decision decoding in which the value output from a non-volatile memory cell is simply coded as 0 or 1. Compared to hard decision decoding, soft decision decoding can determine the value stored in the non-volatile memory cell based on the stochastic information. Regarding bit-flipping (which may be considered an error that can occur in the memory device150), soft decision decoding may provide an improved probability of correcting errors and recovering data, as well as providing reliability and stability of corrected data. The LDPC-GM code may have a scheme in which internal LDGM codes can be concatenated in series with high-speed LDPC codes. According to an embodiment, the ECC decoder may use, for example, low-density parity-check convolutional codes (LDPC-CCs) for soft decision decoding. The LDPC-CCs may have a scheme using linear time encoding and pipeline decoding based on a variable block length and a shift register. According to an embodiment, the ECC decoder may use, for example, a Log Likelihood Ratio Turbo Code (LLR-TC) for soft decision decoding. The Log Likelihood Ratio (LLR) may be calculated as a non-linear function of the distance between a sampled value and an ideal value. In addition, a Turbo Code (TC) may include a simple code (for example, a Hamming code) in two or three dimensions and may repeat decoding in a row direction and a column direction to improve the reliability of values.
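By way of illustration and not limitation, an LLR for a sampled cell voltage can be sketched under an assumed equal-variance Gaussian noise model; the model and all names are assumptions for illustration and are not the disclosed method.

    /* LLR of a sampled cell voltage y against two ideal levels mu0 ('0')
     * and mu1 ('1'), assuming Gaussian noise with standard deviation sigma:
     *   log P(y|0) - log P(y|1) = (d1*d1 - d0*d0) / (2*sigma*sigma).
     * A positive result favors '0', a negative result favors '1', and the
     * magnitude carries the reliability that hard decision decoding drops. */
    static double llr(double y, double mu0, double mu1, double sigma)
    {
        double d0 = y - mu0;
        double d1 = y - mu1;
        return (d1 * d1 - d0 * d0) / (2.0 * sigma * sigma);
    }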
The power management unit (PMU)140may control the electrical power provided to the controller130. The PMU140may monitor the electrical power supplied to the memory system110(e.g., a voltage supplied to the controller130) and provide the electrical power to the components included in the controller130. The PMU140can not only detect power-on or power-off, but can also generate a trigger signal to enable the memory system110to urgently back up its current state when the electrical power supplied to the memory system110is unstable. According to an embodiment, the PMU140may include a device or a component capable of accumulating electrical power that may be used in an emergency. The memory interface142may serve as an interface for handling commands and data transferred between the controller130and the memory device150, in order to allow the controller130to control the memory device150in response to a command or a request input from the host102. The memory interface142may generate a control signal for the memory device150and may process data input to, or output from, the memory device150under the control of the processor134when the memory device150is a flash memory. For example, when the memory device150includes a NAND flash memory, the memory interface142includes a NAND flash controller (NFC). The memory interface142can provide an interface for handling commands and data between the controller130and the memory device150. In accordance with an embodiment, the memory interface142can be implemented through, or driven by, firmware called a Flash Interface Layer (FIL) for exchanging data with the memory device150. According to an embodiment, the memory interface142may support an open NAND flash interface (ONFi), a toggle mode, or the like, for data input/output with the memory device150. For example, the ONFi may use a data path (e.g., a channel, a way, etc.) that includes at least one signal line capable of supporting bi-directional transmission and reception in a unit of 8-bit or 16-bit data. Data communication between the controller130and the memory device150can be achieved through at least one interface regarding an asynchronous single data rate (SDR), a synchronous double data rate (DDR), and a toggle double data rate (DDR). The memory144may be a type of working memory in the memory system110or the controller130, storing temporary or transactional data which occurs in, or is delivered for, operations in the memory system110and the controller130. For example, the memory144may temporarily store read data output from the memory device150in response to a request from the host102, before the read data is output to the host102. In addition, the controller130may temporarily store write data input from the host102in the memory144, before programming the write data in the memory device150. When the controller130controls operations such as data read, data write, data program, data erase, etc., of the memory device150, a piece of data transmitted or generated between the controller130and the memory device150of the memory system110may be stored in the memory144.
In addition to the read data or write data, the memory144may store information (e.g., map data, read requests, program requests, etc.) used for inputting or outputting data between the host102and the memory device150. According to an embodiment, the memory144may include a command queue, a program memory, a data memory, a write buffer/cache, a read buffer/cache, a data buffer/cache, a map buffer/cache, and/or the like. The controller130may allocate some storage space in the memory144for a component which is established to carry out a data input/output operation. For example, the write buffer established in the memory144may be used to temporarily store target data subject to a program operation. In an embodiment, the memory144may be implemented with a volatile memory. For example, the memory144may be implemented with a static random access memory (SRAM), a dynamic random access memory (DRAM), or both. AlthoughFIG.2illustrates, for example, the memory144disposed within the controller130, the embodiments are not limited thereto. The memory144may be located within or external to the controller130. For instance, the memory144may be embodied by an external volatile memory having a memory interface transferring data and/or signals between the memory144and the controller130. The processor134may control the overall operations of the memory system110. For example, the processor134can control a program operation or a read operation of the memory device150in response to a write request or a read request entered from the host102. According to an embodiment, the processor134may execute firmware to control the program operation or the read operation in the memory system110. Herein, the firmware may be referred to as a flash translation layer (FTL). An example of the FTL is described in detail later, with reference toFIG.3. According to an embodiment, the processor134may be implemented with a microprocessor or a central processing unit (CPU). According to an embodiment, the memory system110may be implemented with at least one multi-core processor. The multi-core processor is a type of circuit or chip in which two or more cores, which are considered distinct processing regions, are integrated. For example, when a plurality of cores in the multi-core processor drive or execute a plurality of flash translation layers (FTLs) independently, the data input/output speed (or performance) of the memory system110may be improved. According to an embodiment, the data input/output (I/O) operations in the memory system110may be independently performed through different cores in the multi-core processor. The processor134in the controller130may perform an operation corresponding to a request or a command input from the host102. Further, the memory system110may perform an operation independently of a command or a request input from an external device such as the host102. In one case, an operation performed by the controller130in response to the request or the command input from the host102may be considered a foreground operation, while an operation performed by the controller130independently (e.g., regardless of the request or the command input from the host102) may be considered a background operation. The controller130can perform foreground or background operations for read, write or program, erase, and the like, regarding a piece of data in the memory device150.
In addition, a parameter set operation corresponding to a set parameter command or a set feature command as a set command transmitted from the host102may be considered a foreground operation. As a background operation performed without a command transmitted from the host102, the controller130can perform garbage collection (GC), wear leveling (WL), bad block management for identifying and processing bad blocks, or the like. The background operations may be performed in relation to the plurality of memory blocks152,154,156included in the memory device150. According to an embodiment, substantially similar operations may be performed as both a foreground operation and a background operation. For example, when the memory system110performs garbage collection in response to a request or a command input from the host102(e.g., Manual GC), the garbage collection can be considered a foreground operation. When the memory system110performs garbage collection independently of the host102(e.g., Auto GC), the garbage collection can be considered a background operation. When the memory device150includes a plurality of dies (or a plurality of chips) including non-volatile memory cells, the controller130may be configured to perform parallel processing regarding plural requests or commands input from the host102in order to improve the performance of the memory system110. For example, the transmitted requests or commands may be divided and processed in parallel within at least some of a plurality of planes, a plurality of dies, or a plurality of chips included in the memory device150. The memory interface142in the controller130may be connected to the plurality of planes, dies, or chips in the memory device150through at least one channel and at least one way. When the controller130distributes and stores data in the plurality of dies through each channel or each way in response to requests or commands associated with a plurality of pages including non-volatile memory cells, plural operations corresponding to the requests or the commands can be performed individually or in parallel. Such a processing method or scheme can be considered an interleaving method. Because the data input/output speed of the memory system110operating with the interleaving method may be faster than that without the interleaving method, the data I/O performance of the memory system110can be improved. By way of example but not limitation, the controller130can recognize statuses regarding a plurality of channels (or ways) associated with a plurality of memory dies included in the memory device150. The controller130may determine the status of each channel or each way as one of, for example, a busy status, a ready status, an active status, an idle status, a normal status, and/or an abnormal status. The controller's determination of which channel or way an instruction (and/or data) is delivered through can be associated with a physical block address, e.g., which die(s) the instruction (and/or the data) is delivered into. For this, the controller130can refer to descriptors delivered from the memory device150. The descriptors can include a block or page of parameters that describe something about the memory device150, which is data with a set format or structure. For instance, the descriptors may include device descriptors, configuration descriptors, unit descriptors, and the like. The controller130can refer to, or use, the descriptors to determine which channel(s) or way(s) an instruction or data is exchanged via.
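By way of illustration and not limitation, the interleaving method described above can be sketched as a round-robin mapping of consecutive requests across channels and ways; the counts and names are hypothetical assumptions.

    /* Round-robin interleaving: stride consecutive requests first across
     * channels, then across ways, so they land on different dies and can
     * proceed in parallel. Counts are illustrative assumptions. */
    #define N_CHANNELS 4
    #define N_WAYS     2

    struct die_addr { int channel; int way; };

    static struct die_addr interleave(unsigned int i)
    {
        struct die_addr a;
        a.channel = (int)(i % N_CHANNELS);
        a.way     = (int)((i / N_CHANNELS) % N_WAYS);
        return a;
    }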
Referring toFIG.2, the memory device150in the memory system110may include the plurality of memory blocks152,154,156. Each of the plurality of memory blocks152,154,156includes a plurality of non-volatile memory cells. According to an embodiment, the memory block152,154,156can be a group of non-volatile memory cells erased together. The memory block152,154,156may include a plurality of pages, each of which is a group of non-volatile memory cells read or programmed together. In an embodiment, each memory block152,154,156may have a three-dimensional stack structure for high integration. Further, the memory device150may include a plurality of dies, each die including a plurality of planes, each plane including the plurality of memory blocks152,154,156. The configuration of the memory device150can vary depending on the desired performance of the memory system110. The plurality of memory blocks152,154,156are included in the memory device150shown inFIG.2. The plurality of memory blocks152,154,156can be any of single-level cell (SLC) memory blocks, multi-level cell (MLC) memory blocks, or the like, according to the number of bits that can be stored or represented in one memory cell. An SLC memory block includes a plurality of pages implemented by memory cells, each storing one bit of data. An SLC memory block can have high data I/O operation performance and high durability. The MLC memory block includes a plurality of pages implemented by memory cells, each storing multi-bit data (e.g., two bits or more). The MLC memory block can have a larger storage capacity for the same space compared to the SLC memory block. The MLC memory block can be highly integrated in view of storage capacity. In an embodiment, the memory device150may be implemented with MLC memory blocks such as a double-level cell (DLC) memory block, a triple-level cell (TLC) memory block, a quadruple-level cell (QLC) memory block, or a combination thereof. The double-level cell (DLC) memory block may include a plurality of pages implemented by memory cells, each capable of storing 2-bit data. The triple-level cell (TLC) memory block can include a plurality of pages implemented by memory cells, each capable of storing 3-bit data. The quadruple-level cell (QLC) memory block can include a plurality of pages implemented by memory cells, each capable of storing 4-bit data. In another embodiment, the memory device150can be implemented with a block including a plurality of pages implemented by memory cells, each capable of storing five or more bits of data. According to an embodiment, the controller130may use an MLC memory block included in the memory device150as an SLC memory block that stores one-bit data in one memory cell. The data input/output speed of an MLC memory block can be slower than that of an SLC memory block. That is, when the MLC memory block is used as an SLC memory block, the margin for a read or program operation can be reduced, so the controller130can utilize the faster data input/output speed of the MLC memory block when using the MLC memory block as an SLC memory block. For example, the controller130can use the MLC memory block as a buffer to temporarily store a piece of data, because the buffer may require a high data input/output speed for improving the performance of the memory system110. Further, according to an embodiment, the controller130may program pieces of data in an MLC memory block a number of times without performing an erase operation on the MLC memory block included in the memory device150.
Non-volatile memory cells do not support data overwrite. However, the controller130may use a feature in which an MLC may store multi-bit data, in order to program plural pieces of 1-bit data in the MLC a number of times. For an MLC overwrite operation, the controller130may store the number of program times as separate operation information when a single piece of 1-bit data is programmed in a non-volatile memory cell. According to an embodiment, an operation for uniformly levelling the threshold voltages of the non-volatile memory cells can be carried out before another piece of data is overwritten in the same non-volatile memory cells. In an embodiment, the memory device150is embodied as a non-volatile memory such as a flash memory, for example, a NAND flash memory, a NOR flash memory, and the like. In an embodiment, the memory device150may be implemented by at least one of a phase change random access memory (PCRAM), a ferroelectric random access memory (FRAM), a spin injection magnetic memory (STT-RAM), a spin transfer torque magnetic random access memory (STT-MRAM), or the like. Referring toFIG.3, a controller130in a memory system operates along with the host102and the memory device150. As illustrated, the controller130includes a host interface132, a flash translation layer (FTL)240, as well as the memory interface142and the memory144previously identified in connection withFIG.2. According to an embodiment, the error correction circuitry138illustrated inFIG.2may be included in the flash translation layer (FTL)240. In another embodiment, the error correction circuitry138may be implemented as a separate module, a circuit, firmware, or the like, which is included in, or associated with, the controller130. The host interface132may be capable of handling commands, data, and the like transmitted from the host102. By way of example but not limitation, the host interface132may include a command queue56, a buffer manager52, and an event queue54. The command queue56may sequentially store the commands, data, and the like received from the host102and output them to the buffer manager52, for example, in the order in which they are stored. The buffer manager52may classify, manage, or adjust the commands, the data, and the like, which are received from the command queue56. The event queue54may sequentially transmit events for processing the commands, the data, and the like, received from the buffer manager52. A plurality of commands or data having the same characteristic (e.g., read or write commands) may be transmitted from the host102, or a plurality of commands and data having different characteristics may be transmitted to the memory system110after being mixed or jumbled by the host102. For example, a plurality of commands for reading data (read commands) may be delivered, or commands for reading data (read commands) and programming/writing data (write commands) may be alternately transmitted to the memory system110. The host interface132may sequentially store the commands, data, and the like, which are transmitted from the host102, in the command queue56. Thereafter, the host interface132may estimate or predict what type of internal operation the controller130will perform according to the characteristics of the commands, data, and the like, which have been entered from the host102. The host interface132can determine a processing order and a priority of commands, data, and the like, based at least on their characteristics.
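By way of illustration and not limitation, the command queue56storing and outputting commands in order can be pictured with the following hypothetical ring-buffer sketch; the queue depth and all names are assumptions introduced here.

    #include <stddef.h>

    /* FIFO command queue: commands are stored in arrival order and handed
     * to the buffer manager in that same order. Names are hypothetical. */
    enum cmd_kind { CMD_READ, CMD_WRITE };

    struct host_cmd { enum cmd_kind kind; unsigned long lba; };

    #define QDEPTH 32

    struct cmd_queue {
        struct host_cmd q[QDEPTH];
        size_t head, tail;  /* head: next to output; tail: next free slot   */
    };

    static int cq_push(struct cmd_queue *c, struct host_cmd cmd)
    {
        size_t next = (c->tail + 1) % QDEPTH;
        if (next == c->head)
            return -1;       /* queue full: the host must retry later       */
        c->q[c->tail] = cmd;
        c->tail = next;
        return 0;
    }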
According to the characteristics of the commands, data, and the like transmitted from the host102, the buffer manager52in the host interface132is configured to determine whether the buffer manager should store the commands, data, and the like in the memory144, or whether the buffer manager should deliver the commands, data, and the like to the flash translation layer (FTL)240. The event queue54receives events, entered from the buffer manager52, which are to be internally executed and processed by the memory system110or the controller130in response to the commands, data, and the like transmitted from the host102, and delivers the events to the flash translation layer (FTL)240in the order received. In accordance with an embodiment, the flash translation layer (FTL)240illustrated inFIG.3may implement a multi-thread scheme to perform data input/output (I/O) operations. A multi-thread FTL may be implemented through a multi-core processor using multi-threading, included in the controller130. In accordance with an embodiment, the flash translation layer (FTL)240can include a host request manager (HRM)46, a map manager (MM)44, a state manager42, and a block manager48. The host request manager (HRM)46can manage the events entered from the event queue54. The map manager (MM)44can handle or control map data. The state manager42can perform garbage collection (GC) or wear leveling (WL). The block manager48can execute commands or instructions onto a block in the memory device150. By way of example but not limitation, the host request manager (HRM)46can use the map manager (MM)44and the block manager48to handle or process requests according to the read and program commands and the events which are delivered from the host interface132. The host request manager (HRM)46can send an inquiry request to the map manager (MM)44to determine a physical address corresponding to the logical address which is entered with the events. The host request manager (HRM)46can send a read request with the physical address to the memory interface142to process the read request (handle the events). In an embodiment, the host request manager (HRM)46can send a program request (write request) to the block manager48to program data to a specific empty page (no data) in the memory device150, and then can transmit a map update request corresponding to the program request to the map manager (MM)44in order to update an item relevant to the programmed data in the information mapping logical addresses and physical addresses to each other.
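By way of illustration and not limitation, the inquiry-then-read path described above can be sketched with a flat in-memory logical-to-physical array standing in for the map manager's cached mapping table; real mapping tables are multi-level and partially cached, and all names here are hypothetical.

    #include <stdint.h>

    #define MAP_ENTRIES 1024
    #define PPA_INVALID UINT32_MAX

    /* Flat logical-to-physical map standing in for the cached mapping
     * table; entries should be set to PPA_INVALID at initialization. */
    static uint32_t l2p[MAP_ENTRIES];

    uint32_t map_query(uint32_t lpa)  /* map manager inquiry                */
    {
        return (lpa < MAP_ENTRIES) ? l2p[lpa] : PPA_INVALID;
    }

    int handle_read(uint32_t lpa)     /* host request manager read path     */
    {
        uint32_t ppa = map_query(lpa);
        if (ppa == PPA_INVALID)
            return -1;                /* unmapped logical address           */
        /* ...send a read request carrying ppa to the memory interface...  */
        return 0;
    }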
The block manager48can convert a program request delivered from the host request manager (HRM)46, the map manager (MM)44, and/or the state manager42into a flash program request used for the memory device150, in order to manage flash blocks in the memory device150. To maximize or enhance the program or write performance of the memory system110(e.g., seeFIG.2), the block manager48may collect program requests and send flash program requests for multiple-plane and one-shot program operations to the memory interface142. In an embodiment, the block manager48sends several flash program requests to the memory interface142to enhance or maximize the parallel processing of the multi-channel and multi-directional flash controller. In an embodiment, the block manager48can be configured to manage blocks in the memory device150according to the number of valid pages, select and erase blocks having no valid pages when a free block is needed, and select a block including the least number of valid pages when it is determined that garbage collection is to be performed. The state manager42can perform garbage collection to move valid data to an empty block and erase the blocks containing the moved valid data so that the block manager48may have enough free blocks (empty blocks with no data). When the block manager48provides information regarding a block to be erased to the state manager42, the state manager42may check all flash pages of the block to be erased to determine whether each page is valid. For example, to determine the validity of each page, the state manager42can identify a logical address recorded in an out-of-band (OOB) area of each page. To determine whether each page is valid, the state manager42can compare the physical address of the page with the physical address mapped to the logical address obtained from the inquiry request. The state manager42sends a program request to the block manager48for each valid page. A mapping table can be updated through the update of the map manager44when the program operation is complete. The map manager44can manage the logical-physical mapping table. The map manager44can process various requests, for example, queries, updates, and the like, which are generated by the host request manager (HRM)46or the state manager42. The map manager44may store the entire mapping table in the memory device150(e.g., a flash/non-volatile memory) and cache mapping entries according to the storage capacity of the memory144. When a map cache miss occurs while processing inquiry or update requests, the map manager44may send a read request to the memory interface142to load the relevant mapping table stored in the memory device150. When the number of dirty cache blocks in the map manager44exceeds a certain threshold, a program request can be sent to the block manager48so that a clean cache block is made and the dirty map table may be stored in the memory device150. When garbage collection is performed, the state manager42copies valid page(s) into a free block, and the host request manager (HRM)46can program the latest version of the data for the same logical address of the page and concurrently issue an update request. When the state manager42requests the map update in a state in which the copying of valid page(s) is not completed normally, the map manager44might not perform the mapping table update. This is because the map request is issued with old physical information if the state manager42requests a map update and a valid page copy is completed later. The map manager44may perform a map update operation to ensure accuracy only if the latest map table still points to the old physical address.
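By way of illustration and not limitation, the page-validity check described above can be sketched as follows, reusing the hypothetical map_query() lookup from the preceding sketch: a page is valid only while the current mapping of the logical address recorded in its OOB area still points back at that page.

    #include <stdint.h>
    #include <stdbool.h>

    struct page_oob { uint32_t lpa; };  /* logical address kept in the OOB area */

    /* map_query() is the logical-to-physical lookup from the earlier sketch. */
    extern uint32_t map_query(uint32_t lpa);

    /* A page at physical address ppa is valid only while the current mapping
     * of its recorded logical address still points back at ppa; stale copies
     * left behind by updates fail this test and need not be moved. */
    static bool page_is_valid(uint32_t ppa, const struct page_oob *oob)
    {
        return map_query(oob->lpa) == ppa;
    }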
FIG.4illustrates a first example of a method for operating a data processing system including a host and a memory system. Particularly,FIG.4shows how the host102and the memory system110set a ZNS and perform a data input/output operation based on the ZNS. Referring toFIG.4, during an initial setup procedure (SETUP PROCEDURE) after power is applied to the host102and the memory system110, the host102and the memory system110can check whether the power is normally applied (POWER-ON/CONNECTION). The host102may recognize the memory system110and set a ZNS to store data in the memory system110or read stored data from the memory system110. The host102can transmit, to the memory system110, information regarding the ZNSs corresponding to a plurality of applications being executed therein. The memory system110can allocate non-volatile memory blocks for the ZNS transmitted from the host102(ZNS CONFIGURATION). After the initial setup procedure, the host102and the memory system110may perform an operation such as a data input/output operation corresponding to a user request (RUN-TIME PROCEDURE). After the memory system110allocates the non-volatile memory blocks for the ZNS transmitted from the host102, the host102may execute an application in response to the user's request. When the application is executed, the host102can request data input/output of the memory system110based on the ZNS. For example, the host102can send a read request or a write/program request along with the ZNS to the memory system110, and the memory system110can perform a data input/output operation corresponding to the read request or the write/program request (DATA I/O OPERATIONS WITH INITIAL CONFIGURATION). While at least one data input/output operation has been performed or a data input/output operation is being performed, the host102or the memory system110may perform an operation of resetting a storage space associated with the ZNS (RECONFIGURED ALLOCATION OPERATION). For example, the operation of resetting the storage space associated with the ZNS may be performed through the first and second requests RES/RCS described inFIG.1. In addition, in response to a characteristic of data generated by the host102or an operation state of a memory block in the memory system110, the host102can send the first and second requests RES/RCS to the memory system110. The number of bits of data stored in a non-volatile memory cell included in a memory block in the memory system110may be adjusted (increased or decreased). According to an embodiment, in response to the first request RES, the memory system110can maintain the storage capacity corresponding to the ZNS used by the host102and deactivate another ZNS which was configured during the initial setup procedure between the memory system110and the host102but is not used after the initial setup procedure, so that the memory system110can notify the host102of information regarding the reconfiguration. In addition, in response to the second request RCS, the memory system110can maintain the storage capacity corresponding to the ZNS and activate the deactivated ZNS, so that the memory system110can notify the host102of information regarding the resetting. In this case, the memory system110can read data from a plurality of memory blocks, each including memory cells storing 1-bit data, and re-program the read data in a single memory block including memory cells storing multi-bit data, so that the memory system110can secure available memory blocks to be allocated for the activated ZNS. After the operation of resetting the storage space of the ZNS is performed, the host102may continue the data input/output operation through the ZNS. At this time, as the storage capacity of the ZNS is kept unchanged, an application can request data input/output of the memory system110in the same manner as before.
However, when the memory system110deactivates at least one ZNS (e.g., releases the memory block allocation for the at least one ZNS), the host102may not use the ZNS deactivated by the memory system110(DATA I/O OPERATIONS WITH RECONFIGURED ALLOCATION). FIG.5illustrates a memory system including a non-volatile memory device supporting a namespace divided by plural zones. Referring toFIG.5, a plurality of applications312,314,316(refer toFIGS.1to4) can use ZNSs to request data I/O operations of the controller130. A plurality of memory blocks included in the memory device150may be allocated for each of three ZNSs322,324,326. A memory block322_1allocated for a first ZNS322will be described. Herein, the memory block322_1may be considered a group of memory cells which are erased together at a time by an erase operation. A first application (APPL1,312) can use the first ZNS (322, zone #0). The ZNS can reduce the influence of a difference between the logical address scheme used by the host102and the physical address scheme used in the memory device150. The first application (APPL1, 312) may generate data and assign, to the data, a specific logical address in the range of logical addresses assigned to the first ZNS (322, zone #0). Such data generated by the first application (APPL1,312) may be sequentially stored in the memory block322_1allocated for the first application (APPL1,312). Each of the plurality of applications312,314,316may use a designated or assigned ZNS among the plurality of ZNSs zone #0, zone #1, zone #3, zone #n. As described inFIG.1, according to an embodiment, plural ZNSs may be allocated for one application. According to an embodiment, the plurality of applications312,314,316may share a single ZNS. In a logical address scheme, different ranges of logical addresses can be assigned in advance to each of the plurality of ZNSs zone #0, zone #1, zone #3, zone #n, which individually correspond to the plurality of applications312,314,316. Each of the plurality of applications312,314,316might not use undesignated or unassigned ones of the plurality of ZNSs zone #0, zone #1, zone #3, zone #n. That is, a logical address pre-allocated to a specific ZNS may not be used by other applications using other ZNSs among the plurality of ZNSs zone #0, zone #1, zone #3, zone #n. When a ZNS is not shared by plural applications, this scheme of the ZNS can avoid a phenomenon in which data generated by plural applications is mixed and jumbled in a single memory block, which is common in a conventional non-volatile memory device. In a scheme using different addresses such as a logical address and a physical address, both a logical address and a physical address are sequentially assigned to the data items generated by an application, thereby making it easier to perform garbage collection.
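By way of illustration and not limitation, because each zone owns a disjoint, pre-assigned logical address range, the zone responsible for a request can be derived from the logical address alone; a one-line hypothetical sketch, assuming equal-sized zones:

    #include <stdint.h>

    /* Each zone owns a disjoint logical address range; with equal-sized
     * zones, the zone index follows from the logical address directly. */
    static int zone_of_lba(uint64_t lba, uint64_t zone_size)
    {
        return (int)(lba / zone_size);
    }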
According to an embodiment, the host102may change the storage spaces allocated for the ZNSs zone #0, zone #1, zone #3, zone #n. At least some unallocated memory blocks in the memory device150may be additionally allocated to the ZNSs322,324,326. According to an embodiment, when all data stored in a specific ZNS is deleted or when the specific ZNS is no longer used, the host102may notify the memory system110thereof. The memory system110may deactivate the specific ZNS according to the notification input from the host102, perform an erase operation on the memory blocks allocated for the specific ZNS, or initialize setting values for the memory blocks allocated for the specific ZNS. In response to a request input from the host102, the memory system110can deactivate a specific ZNS in which no data is stored, or additionally allocate a memory block, which is not allocated for any ZNS, for the specific ZNS. FIG.6illustrates a setting change of a storage space in the non-volatile memory device supporting the namespace divided by plural zones. InFIG.6, as an example, a memory block allocated for a plurality of ZNSs in a memory system includes non-volatile memory cells (TLCs), each capable of storing 3-bit data. WhileFIG.6describes a memory device including non-volatile memory cells (TLCs) capable of storing 3-bit data as an example, embodiments of the present disclosure are not limited thereto. For example, an embodiment can be applicable to a memory device including non-volatile memory cells, each non-volatile memory cell being capable of storing 2-bit data or 4-bit data. Referring toFIG.6, the data that the host stores in the memory system may have various characteristics. For example, a data temperature may be determined according to an access frequency, an update frequency, and the like. Data can be classified into four types: hot, warm, cold, and icy. The access frequency of data decreases in the order of hot, warm, cold, and icy, and the update frequency of data may go down in the same order. An application executed by the host may transmit a request for programming data to the memory system in response to the data temperature.
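By way of illustration and not limitation, the four data-temperature classes can be sketched as a simple classification on access frequency; the thresholds are arbitrary assumptions introduced here:

    /* Four data-temperature classes keyed on access frequency; the
     * thresholds are arbitrary illustrative values, not from the patent. */
    enum data_temp { TEMP_ICY, TEMP_COLD, TEMP_WARM, TEMP_HOT };

    static enum data_temp classify(unsigned int accesses_per_day)
    {
        if (accesses_per_day >= 1000) return TEMP_HOT;
        if (accesses_per_day >= 100)  return TEMP_WARM;
        if (accesses_per_day >= 10)   return TEMP_COLD;
        return TEMP_ICY;
    }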
Referring toFIG.6, three TLC ZNSs associated with non-volatile memory cells TLC, each capable of storing 3-bit data, can be adjusted into one SLC ZNS associated with non-volatile memory cells SLC, each capable of storing 1-bit data. According to another embodiment, one SLC ZNS associated with non-volatile memory cells SLC, each capable of storing 1-bit data, can be adjusted into three TLC ZNSs associated with non-volatile memory cells TLC, each capable of storing 3-bit data. FIG.7illustrates a change in memory blocks according to a setting change of the storage space in the non-volatile memory device supporting the namespace divided by plural zones. Referring toFIG.7, a host workload including a data input/output operation input from a host is transmitted along with a reference to a ZNS associated with non-volatile memory cells (TLCs) capable of storing 3-bit data. Data can be stored in the memory system through first to third TLC ZNSs TLC Zone #0, TLC Zone #1, TLC Zone #2. For example, 72 MB storage capacity may be originally allocated for each of the first to third ZNSs TLC Zone #0, TLC Zone #1, TLC Zone #2. Four TLC memory blocks included in the memory device may be originally allocated for each of the first to third ZNSs TLC Zone #0, TLC Zone #1, TLC Zone #2. Each TLC memory block can store 18 MB of data. To speed up the data input/output operation requested by the host, the memory system can adjust the number of bits of data stored in the memory cells of a memory block which is allocated for a ZNS. Four memory blocks including TLCs, each capable of storing 3-bit data, can be originally allocated for the second ZNS TLC Zone #1, which therefore has 72 MB storage capacity. As a result of adjusting the number of bits of data stored in the memory cells of the memory blocks allocated for the second ZNS TLC Zone #1 of 72 MB storage capacity, the second ZNS TLC Zone #1 may become a fourth ZNS SLC Zone #0 having four SLC memory blocks, each capable of storing 6 MB of data, and thus of 24 MB storage capacity. At this time, the eight TLC memory blocks originally allocated for the first and third ZNSs TLC Zone #0 and TLC Zone #2 may also be adjusted to SLC memory blocks and may be incorporated into the fourth ZNS SLC Zone #0. Now, the fourth ZNS SLC Zone #0 may have twelve SLC memory blocks and thus may be of 72 MB storage capacity, which stays the same as the original storage capacity. Because the eight TLC memory blocks originally allocated for the first and third ZNSs TLC Zone #0, TLC Zone #2 are re-allocated (i.e., adjusted to SLC memory blocks and incorporated) for the fourth ZNS SLC Zone #0, the first and third ZNSs TLC Zone #0, TLC Zone #2 may be deactivated. According to an embodiment, the fourth ZNS SLC Zone #0 having SLC memory blocks may be adjusted back to the first to third ZNSs TLC Zone #0, TLC Zone #1, TLC Zone #2 having TLC memory blocks without change to the original storage capacity. In this case, each SLC memory block is adjusted back to a TLC memory block, and the twelve TLC memory blocks are allocated, four each, to the first to third ZNSs TLC Zone #0, TLC Zone #1, TLC Zone #2. Through this operation, the deactivated first and third TLC ZNSs TLC Zone #0, TLC Zone #2 can be activated for a data input/output operation. FIG.8illustrates a second example of the method for operating the data processing system including the host and the memory system.
The method of operating the data processing system illustrated inFIG.8is a reconfiguration procedure between the host102and the memory system110for changing or adjusting a storage space of the ZNS. Referring toFIG.8, the host may perform a host workload corresponding to a user request (342). The host workload may include an operation for storing data, generated by an application executed by the host, in a memory system, or for reading data stored in the memory system. The host may determine whether to re-configure ZNSs based on a characteristic of data, a priority of data, a data input/output speed required by an application, an operation state of the memory device, and the like (344). The host may transmit data to the memory system with re-configuration information regarding the ZNSs (346). Referring toFIG.1, the first and second requests RES/RCS, transmitted by the host102to the memory system110, can be understood as examples of re-configuration requests regarding the ZNSs. Before programming data, the memory system may perform a reconfiguration for the ZNSs, based on a request which is input from the host (352). After reconfiguring the ZNSs, the memory system may program data in a location corresponding to the reconfigured ZNS (354). After the ZNS is reconfigured and the data are programmed, the memory system may transmit to the host a response ACK regarding the reconfiguration and data program (356). Although not shown, according to an embodiment, even after the reconfiguration for the ZNSs requested from the host and the program of data according to the reconfigured ZNSs are successfully completed, the memory system may reset or reconfigure the ZNSs in response to another request from the host. For example, the reconfiguration of the ZNSs may be performed during an initial procedure after the host and the memory system are powered on and coupled with each other. Further, as described inFIG.4, even after the host and the memory system perform data input/output operations for a certain period of time, the reconfiguration of the ZNSs can be performed in response to a user's request or the host's request. When the data program is completed according to the reconfiguration for the ZNSs, the memory system can perform an operation corresponding to the host request, so that the ZNSs can be reconfigured or reset. For reconfiguration, the memory system can initialize settings regarding the ZNSs or the memory blocks corresponding to the ZNSs, in response to the host request. As described above, a memory system or a data processing system according to an embodiment of the present disclosure may utilize a non-volatile memory device that supports a namespace divided by plural zones, each zone corresponding to characteristics of data. Further, the memory system according to another embodiment of the present disclosure may increase a lifespan of the non-volatile memory device supporting the namespace divided by the plural zones. While the present teachings have been illustrated and described with respect to the specific embodiments, it will be apparent to those skilled in the art in light of the present disclosure that various changes and modifications may be made without departing from the spirit and scope of the disclosure as defined in the following claims. | 89,493
11861190 | DESCRIPTION The present invention is directed to systems and methods of allocating RAM. According to embodiments, methods and apparatuses for optimizing access to RAM are provided. FIG.1illustrates a simplified block diagram of a system100comprising a RAM block allocator102according to an embodiment, in communication with a microcontroller104comprising an instruction port106and a data port108. Here, the microcontroller includes separate infrastructure dedicated to processing either: instructions for performing programming, or data that is handled (stored, retrieved, modified) by that programming. The RAM block allocator and microcontroller are in communication via two separate channels: instruction channel110in communication with respective instruction ports106and112; and data channel114in communication with respective data ports108and116. The instruction channel and the data channel are indirectly coupled to at least two RAM blocks118(one block exclusively for instructions, one block exclusively for data) via respective RAM ports120of the (configurable) RAM block allocator. Storage capacity of memory blocks in communication with the instruction channel may be combined logically to serve as instruction memory for the microcontroller. Capacity of memory blocks in communication with the data channel may be combined logically to serve as data memory. Program instructions are typically stored in the instruction memory, while the data upon which the program operates is commonly stored in the data memory. Generally, a memory block connected to the data channel cannot be used to store program instructions, and a memory block connected to the instruction channel cannot be used to store program data. However, some applications may require a larger amount of data, while other applications may involve a larger program size. Accordingly, it may be difficult for the designer to predict in advance how much memory should be physically connected to the instruction channel versus the data channel. Such an up-front decision by the designer may undesirably later result in an insufficient allocation of memory of one type, while excess memory of the other type is left idle. Accordingly, embodiments of the present invention allow configurable allocation of memory blocks between instruction storage and data storage purposes, utilizing the RAM block allocator. Specifically, by referencing incoming configuration signal122sent via configuration line124and received at configuration port126, an allocation engine128of the RAM block allocator may be configured at product deployment time to: logically associate at least one of the available memory blocks with the instruction channel; and logically associate at least one of the remaining available memory blocks with the data channel. This configuration allows the designer to determine at deployment time how much of the available memory is allocated for those two (data, instruction) purposes. This achieves flexibility together with high performance. In particular, the separate channels dedicated to instructions and data prevent bottlenecks that might arise if only a single channel were used for one memory access at a time. FIG.2is a simplified block diagram illustrating the operating environment200of an allocation engine202according to an embodiment. At runtime, configuration signal204is received. That configuration signal is transmitted by a designer and received at a configuration port of the RAM allocator.
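For illustration only, a minimal sketch of how an allocation engine might interpret such a configuration signal. The encoding (a set of block indices to attach to the instruction channel) is hypothetical, and the 64/32/16 KB block sizes anticipate the worked example given later in this description.

```python
# Illustration only: one possible deployment-time split of the available
# RAM blocks between the instruction channel and the data channel.

RAM_BLOCK_SIZES = [64, 32, 16]   # KB; three available RAM blocks

def allocate(instruction_block_ids):
    """Split the available blocks between the two channels."""
    alloc = {"instruction": [], "data": []}
    for block_id, size_kb in enumerate(RAM_BLOCK_SIZES):
        channel = "instruction" if block_id in instruction_block_ids else "data"
        alloc[channel].append((block_id, size_kb))
    return alloc

config_signal = {0, 1}           # e.g., blocks 0 and 1 serve instructions
print(allocate(config_signal))
# {'instruction': [(0, 64), (1, 32)], 'data': [(2, 16)]}
```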
This configuration signal indicates the memory that is to be specifically allocated for storage of instruction information, and the memory that is to be specifically allocated for storage of data information. The allocation engine processes the configuration signal, and in response generates206an address map208containing particular details for the routing of incoming instructions and data for storage in appropriate memory blocks allocated thereto. The address map is stored in a non-transitory storage medium210accessible to the allocation engine for later reference. An incoming instruction signal250is received by the allocation engine. That instruction signal may be a read or a write. The allocation engine references252the address map208, and in response issues a control signal254to a connection point256. As a result, the instruction signal is routed through the connection point to memory block258that has been allocated to store exclusively instruction information. Also at runtime, an incoming data signal260is received by the allocation engine. That data signal may again be a read or a write. The allocation engine references252the address map208, and in response issues a control signal262to a different connection point264. As a result, the data signal is routed through the connection point to memory block266that has been allocated to store exclusively data information. FIG.3illustrates a detailed view of a RAM allocator according to an embodiment. The instruction channel350and data channel352emanating from the ports of the microcontroller are shown at the top side of the diagram. Each of those (instruction, data) channels may typically comprise an Address bus354, a Payload bus356, and a Read/Write (R/W) signal358. The R/W signal determines if a READ or WRITE access is to be performed. As an example, only three RAM memory blocks310-312are shown inFIG.3. Each memory block may have its own Address, Payload and R/W signals. It is noted that each such memory block may be in the form of discrete memory chips that are soldered down onto a Printed Circuit Board (PCB) if the processor/memory complex is implemented on a system board. Alternatively, the memory block may be in the form of a memory "chiplet" or a RAM macro block, if the processor/memory complex is implemented as an integrated chip. For ease of illustration, the combined Address, Payload, and R/W signals of the Instruction/Data Channels and the memory blocks are represented as broad busses303in the middle ofFIG.3. Thick dark circles in the diagram represent configurable cross points305. These cross points may be activated at deployment time in order to selectively couple each of the memory blocks appropriately to the instruction channel or the data channel. For example, memory block310may be configured to be coupled to the instruction channel such that instruction accesses would be routed to memory block310. Memory blocks311and312may be coupled to the data channel such that data accesses are routed to those RAM blocks. Configurable Address Map units320,321,322ensure that each memory block responds to (and only to) the addresses allocated to that block. For example, if memory block311has a capacity of 2K words, while312has a capacity of 1K words, the RAM block allocator may be configured to map the first 2K words of its Data Memory to311, and the next 1K words to312. Subsequently, when a Data Channel READ access to word address "2176" is received, Address Map321may prevent memory block311from responding.
Address Map322causes memory block312to return the content of its physical location "128" (since "2176"−"2048"="128"). This description of the configurable cross points305, the address mapping units320,321,322, and other elements shown inFIG.3represents one possible exemplary embodiment of the current invention. Examples of RAM block allocators may be implemented through one or more different mechanisms, including but not limited to: configuration pins, integrated programmable eFuses, simple configuration registers, or other relevant mechanisms. For a sophisticated CPU, RAM block allocation could be implemented utilizing a simple micro-controller, micro-sequencer, or some Look Up Table (LUT) to provide flexibility in operation. FIG.4is a simplified flow chart showing a method400according to an embodiment. During design time401, a design input402is received from a designer. At403, an address map is created and stored. This address map includes a plurality of configurations for allocating instruction information and data information between available memory blocks. During runtime404, a configuration signal is received406. This configuration signal determines a particular configuration for allocating instruction information and data information between available memory blocks. At407, an incoming instruction signal is received. At408, the address map is referenced based upon the configuration signal and the characteristics of the instruction signal. At410, a connection point (e.g., pin, fuse, register) is controlled based upon the address map. At412, the instruction signal is routed to an appropriate memory block that has been allocated to store exclusively instruction information. At414, an incoming data signal is received. Returning to408, the address map is again referenced based upon the characteristics of the data signal. Returning to410, a different connection point is then controlled based upon the reference to the address map. At416, the data signal is routed to a different memory block that has been allocated to store exclusively data information. Embodiments of a RAM allocator may offer one or more benefits. For example, embodiments afford a degree of flexibility for the designer to allocate, at deployment time, the available memory to different purposes. This flexibility is allowed while still taking advantage of the performance that comes with having multiple independent (data, instruction) memory channels.

Example

To illustrate the mapping for RAM block allocation according to an exemplary embodiment, consider the following simplified scenario. A system has three RAM blocks, of sizes 64 KB, 32 KB, and 16 KB. At design time, the following two possible exemplary configurations are included in the address map.

Config # | RAM #1 (64 KB) | RAM #2 (32 KB) | RAM #3 (16 KB) | Total Instruction Memory | Total Data Memory
A | Allocate to INST | Allocate to INST | Allocate to DATA | 96 KB (64 + 32) | 16 KB
B | Allocate to INST | Allocate to DATA | Allocate to DATA | 64 KB | 48 KB (32 + 16)

Here, in Config #A, there is 96 KB worth of instruction memory. This instruction memory may be accessible via an address range such as 0x0_0000-0x0_5FFF (assuming each location is 32 bits wide). In Config #A, there is also 16 KB worth of data memory. This data memory may be accessed via another address range such as 0x8_0000-0x8_0FFF (again assuming 32 bits per location). These particular addresses just mentioned above are given as examples for purposes of illustration only.
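To make the address-map behavior concrete, here is a small sketch, for illustration only, of the lookup the Address Map units perform. It uses the Data Memory mapping from the 2K-word/1K-word example above, routing word address 2176 to block312at local offset 128; the table layout and function name are hypothetical.

```python
# Illustration only: an address-map lookup in the spirit of FIG. 3.
# Ranges are in word addresses; names are hypothetical.

DATA_MAP = [
    # (base word address, size in words, memory block reference numeral)
    (0,    2048, 311),   # first 2K words of Data Memory -> block 311
    (2048, 1024, 312),   # next 1K words -> block 312
]

def route(word_addr, address_map):
    """Return (memory_block, local_offset) for an incoming access."""
    for base, size, block in address_map:
        if base <= word_addr < base + size:
            return block, word_addr - base
    raise ValueError("access beyond mapped ranges: behavior undefined")

print(route(2176, DATA_MAP))   # (312, 128): block 312 returns location 128
```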
Accesses beyond those mapped ranges have undefined behavior, since that is all the memory that is available. In this case, when an instruction access comes in from the dedicated microcontroller port via the instruction channel, the memory allocator needs to look at the given address and route it to one of RAM blocks #1 or #2. For example, it may be such that 0x0_0000-0x0_3FFF (16K×32 bits=64 KB) is mapped to RAM #1, and 0x0_4000-0x0_5FFF is mapped to RAM #2. Let us say the incoming address is 0x0_4001. The allocator would cause the 2nd location of RAM #2 to be accessed by this request, the 1st location of RAM #2 being mapped to 0x0_4000. Similarly, if Config #B were used, the allocator would perform a corresponding mapping for RAM #2 and RAM #3, upon a data access. For implementation, the available memory may comprise more than the three RAM blocks described in the simple example above. And various configurations implemented within the address map created at design time can allow different combinations of those RAM blocks to be allocated. Specifically, at design time the number of available memory blocks may be determined and fixed, and a number of supported configurations {A,B,C,D . . . } may be chosen by the designer. The designer may choose to support all possible combinations of allocating instruction/data between available memory blocks for full flexibility, or to support a subset thereof in order to reduce design complexity. Those chosen configurations (and their corresponding mappings) are designed into the address map (e.g.,208inFIG.2), and made known to the software programmer. At deployment time, the programmer chooses from amongst the supported configurations by supplying the configuration signal (e.g.,204inFIG.2). No provision is made for the programmer to choose an unsupported configuration not already designed into the address map at design time. While the above is a full description of the specific embodiments, various modifications, alternative constructions and equivalents may be used. Given the various applications and embodiments as described herein, the above description and illustrations should not be taken as limiting the scope of the present invention which is defined by the appended claims. | 12,794
11861191 | DETAILED DESCRIPTION Some memory systems, such as non-volatile memory systems (e.g., memory systems that include non-volatile memory cells, such as NAND memory cells or Flash memory cells), may store data (e.g., parameter data) that contains information about the memory system. For example, the parameter data may instruct the memory system what commands it can perform, how to interface with a host system, what standard (or standards) the memory system is capable of employing, and other vital information. In some instances, the parameter data may not be able to be modified by a user of the memory system (e.g., a user may not be able to write, erase, or otherwise alter the data). As such, if the parameter data becomes corrupt (e.g., if one or more bits of the data become corrupt), the memory system may be incapable of functioning properly. To account for this issue, some memory systems may store multiple copies of parameter data. For example, a memory system may include n copies of the data (e.g., in a table format) such that, if one copy becomes corrupt, the copy can be replaced to ensure the memory system functions properly. However, a single copy of parameter data occupies a large storage area of the memory system; thus, storing multiple copies may diminish the amount of storage available to the memory system for storing other data. Accordingly, a memory system configured to correct errors in parameter data and thus reduce the quantity of stored copies may be desirable. A memory system configured to correct errors associated with parameter data is described herein. In some examples, a memory system may store data that contains instructions about what commands the memory system can perform, how the memory system interfaces with a host system, what standard (or standards) the memory system is capable of employing, and other vital information. Such data, which may be stored in a table at the memory system, may be collectively referred to as "parameter data." In some instances, when the parameter data becomes corrupt (e.g., when one or more bits of the parameter table "flip" to an incorrect bit value), the memory system or a host system coupled with the memory system may identify and correct the error using an error control code (ECC). In some examples, when a memory system boots for a first time, an ECC for the parameter data may be generated. The ECC may be generated either by the memory system or by the host system. Accordingly, during subsequent boot sequences, the memory system or host system may use the ECC to identify and correct the errors. In some instances, an error may occur when a single bit in the parameter data flips to an incorrect bit value (e.g., flips from a "1" to a "0" or vice versa). Such errors may be referred to herein as "bitflips", although the parameter data may be subject to other types of errors that are correctable by the ECC. Additionally or alternatively, the ECC generated by the memory system or host system may be configured to correct a finite quantity of errors of the parameter data. Thus, each time an error is corrected (e.g., during a single boot process), the memory system or host system may increment a counter. If the quantity of errors corrected satisfies a threshold value (e.g., a quantity of errors that the ECC is capable of correcting), a new ECC may be generated or selected.
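For illustration only, the counter-and-threshold behavior just described might look like the following sketch. The class and method names are hypothetical, and a real implementation would tie the upgrade step to generation or selection of an actual stronger code.

```python
# Illustration only: a correction counter that triggers selection of a
# stronger ECC once the threshold (the quantity of errors the current
# code can correct) is satisfied.

class ParameterTableProtector:
    def __init__(self, correctable_errors=1):
        self.correctable_errors = correctable_errors  # strength of current ECC
        self.corrected_count = 0                      # errors corrected so far

    def record_correction(self):
        self.corrected_count += 1
        if self.corrected_count >= self.correctable_errors:
            self.upgrade_ecc()

    def upgrade_ecc(self):
        # Generate or select a code able to correct more errors. A stronger
        # code costs more storage, which is why the upgrade is deferred
        # until the counter shows it is needed.
        self.correctable_errors += 1
        self.corrected_count = 0
```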
The new ECC (e.g., the ECC selected or generated when the threshold value is satisfied) may be configured to correct a different (e.g., a larger) quantity of errors than the prior ECC. Because the size and complexity of an ECC may depend on the quantity of errors it is configured to correct, employing a counter to determine when to generate or select another ECC may reduce the amount of storage of the memory device that is occupied by the ECC. Moreover, by utilizing an ECC to correct errors in parameter data, fewer copies of the parameter data may be stored to the memory system, which may improve the overall storage capabilities of the memory system. Features of the disclosure are initially described in the context of systems, devices, and circuits with reference toFIGS.1through2. Features of the disclosure are described in the context of process flow diagrams with reference toFIGS.3-6. These and other features of the disclosure are further illustrated by and described in the context of block diagrams and flowcharts that relate to parameter table protection for a memory system with reference toFIGS.7-10. FIG.1illustrates an example of a system100that supports parameter table protection for a memory system in accordance with examples as disclosed herein. The system100includes a host system105coupled with a memory system110. A memory system110may be or include any device or collection of devices, where the device or collection of devices includes at least one memory array. For example, a memory system110may be or include a Universal Flash Storage (UFS) device, an embedded Multi-Media Controller (eMMC) device, a flash device, a universal serial bus (USB) flash device, a secure digital (SD) card, a solid-state drive (SSD), a hard disk drive (HDD), a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), or a non-volatile DIMM (NVDIMM), among other possibilities. The system100may be included in a computing device such as a desktop computer, a laptop computer, a network server, a mobile device, a vehicle (e.g., airplane, drone, train, automobile, or other conveyance), an Internet of Things (IoT) enabled device, an embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or any other computing device that includes memory and a processing device. The system100may include a host system105, which may be coupled with the memory system110. In some examples, this coupling may include an interface with a host system controller106, which may be an example of a controller or control component configured to cause the host system105to perform various operations in accordance with examples as described herein. The host system105may include one or more devices, and in some cases may include a processor chipset and a software stack executed by the processor chipset. For example, the host system105may include an application configured for communicating with the memory system110or a device therein. The processor chipset may include one or more cores, one or more caches (e.g., memory local to or included in the host system105), a memory controller (e.g., NVDIMM controller), and a storage protocol controller (e.g., peripheral component interconnect express (PCIe) controller, serial advanced technology attachment (SATA) controller). The host system105may use the memory system110, for example, to write data to the memory system110and read data from the memory system110. 
Although one memory system110is shown inFIG.1, the host system105may be coupled with any quantity of memory systems110. The host system105may be coupled with the memory system110via at least one physical host interface. The host system105and the memory system110may in some cases be configured to communicate via a physical host interface using an associated protocol (e.g., to exchange or otherwise communicate control, address, data, and other signals between the memory system110and the host system105). Examples of a physical host interface may include, but are not limited to, a SATA interface, a UFS interface, an eMMC interface, a PCIe interface, a USB interface, a Fiber Channel interface, a Small Computer System Interface (SCSI), a Serial Attached SCSI (SAS), a Double Data Rate (DDR) interface, a DIMM interface (e.g., DIMM socket interface that supports DDR), an Open NAND Flash Interface (ONFI), and a Low Power Double Data Rate (LPDDR) interface. In some examples, one or more such interfaces may be included in or otherwise supported between a host system controller106of the host system105and a memory system controller115of the memory system110. In some examples, the host system105may be coupled with the memory system110(e.g., the host system controller106may be coupled with the memory system controller115) via a respective physical host interface for each memory device130included in the memory system110, or via a respective physical host interface for each type of memory device130included in the memory system110. The memory system110may include a memory system controller115and one or more memory devices130. A memory device130may include one or more memory arrays of any type of memory cells (e.g., non-volatile memory cells, volatile memory cells, or any combination thereof). Although two memory devices130-aand130-bare shown in the example ofFIG.1, the memory system110may include any quantity of memory devices130. Further, if the memory system110includes more than one memory device130, different memory devices130within the memory system110may include the same or different types of memory cells. The memory system controller115may be coupled with and communicate with the host system105(e.g., via the physical host interface) and may be an example of a controller or control component configured to cause the memory system110to perform various operations in accordance with examples as described herein. The memory system controller115may also be coupled with and communicate with memory devices130to perform operations such as reading data, writing data, erasing data, or refreshing data at a memory device130—among other such operations—which may generically be referred to as access operations. In some cases, the memory system controller115may receive commands from the host system105and communicate with one or more memory devices130to execute such commands (e.g., at memory arrays within the one or more memory devices130). For example, the memory system controller115may receive commands or operations from the host system105and may convert the commands or operations into instructions or appropriate commands to achieve the desired access of the memory devices130. In some cases, the memory system controller115may exchange data with the host system105and with one or more memory devices130(e.g., in response to or otherwise in association with commands from the host system105). 
For example, the memory system controller115may convert responses (e.g., data packets or other signals) associated with the memory devices130into corresponding signals for the host system105. The memory system controller115may be configured for other operations associated with the memory devices130. For example, the memory system controller115may execute or manage operations such as wear-leveling operations, garbage collection operations, error control operations such as error-detecting operations or error-correcting operations, encryption operations, caching operations, media management operations, background refresh, health monitoring, and address translations between logical addresses (e.g., logical block addresses (LBAs)) associated with commands from the host system105and physical addresses (e.g., physical block addresses) associated with memory cells within the memory devices130. The memory system controller115may include hardware such as one or more integrated circuits or discrete components, a buffer memory, or a combination thereof. The hardware may include circuitry with dedicated (e.g., hard-coded) logic to perform the operations ascribed herein to the memory system controller115. The memory system controller115may be or include a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), a digital signal processor (DSP)), or any other suitable processor or processing circuitry. The memory system controller115may also include a local memory120. In some cases, the local memory120may include read-only memory (ROM) or other memory that may store operating code (e.g., executable instructions) executable by the memory system controller115to perform functions ascribed herein to the memory system controller115. In some cases, the local memory120may additionally or alternatively include static random access memory (SRAM) or other memory that may be used by the memory system controller115for internal storage or calculations, for example, related to the functions ascribed herein to the memory system controller115. Additionally or alternatively, the local memory120may serve as a cache for the memory system controller115. For example, data may be stored in the local memory120if read from or written to a memory device130, and the data may be available within the local memory120for subsequent retrieval for or manipulation (e.g., updating) by the host system105(e.g., with reduced latency relative to a memory device130) in accordance with a cache policy. Although the example of the memory system110inFIG.1has been illustrated as including the memory system controller115, in some cases, a memory system110may not include a memory system controller115. For example, the memory system110may additionally or alternatively rely upon an external controller (e.g., implemented by the host system105) or one or more local controllers135, which may be internal to memory devices130, respectively, to perform the functions ascribed herein to the memory system controller115. In general, one or more functions ascribed herein to the memory system controller115may in some cases instead be performed by the host system105, a local controller135, or any combination thereof. In some cases, a memory device130that is managed at least in part by a memory system controller115may be referred to as a managed memory device. An example of a managed memory device is a managed NAND (MNAND) device. 
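Address translation, one of the controller operations listed above, can be pictured with a toy logical-to-physical (L2P) table, for illustration only; real tables also track validity and wear, and the names here are hypothetical.

```python
# Illustration only: a toy L2P table of the kind a memory system
# controller maintains between host LBAs and physical block addresses.

l2p = {}                           # logical block address -> physical address

def write(lba, physical_addr):
    l2p[lba] = physical_addr       # remap on every write (out-of-place update)

def read(lba):
    return l2p[lba]                # host sees stable LBAs despite remapping

write(0x10, 0x8F2)
write(0x11, 0x1A0)                 # contiguous LBAs, noncontiguous physical blocks
print(hex(read(0x10)), hex(read(0x11)))
```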
A memory device130may include one or more arrays of non-volatile memory cells. For example, a memory device130may include NAND (e.g., NAND flash) memory, ROM, phase change memory (PCM), self-selecting memory, other chalcogenide-based memories, ferroelectric random access memory (RAM) (FeRAM), magneto RAM (MRAM), NOR (e.g., NOR flash) memory, Spin Transfer Torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), electrically erasable programmable ROM (EEPROM), or any combination thereof. Additionally or alternatively, a memory device130may include one or more arrays of volatile memory cells. For example, a memory device130may include RAM memory cells, such as dynamic RAM (DRAM) memory cells and synchronous DRAM (SDRAM) memory cells. In some examples, a memory device130may include (e.g., on a same die or within a same package) a local controller135, which may execute operations on one or more memory cells of the respective memory device130. A local controller135may operate in conjunction with a memory system controller115or may perform one or more functions ascribed herein to the memory system controller115. For example, as illustrated inFIG.1, a memory device130-amay include a local controller135-aand a memory device130-bmay include a local controller135-b. In some cases, a memory device130may be or include a NAND device (e.g., NAND flash device). A memory device130may be or include a memory die160. For example, in some cases, a memory device130may be a package that includes one or more dies160. A die160may, in some examples, be a piece of electronics-grade semiconductor cut from a wafer (e.g., a silicon die cut from a silicon wafer). Each die160may include one or more planes165, and each plane165may include a respective set of blocks170, where each block170may include a respective set of pages175, and each page175may include a set of memory cells. In some cases, a NAND memory device130may include memory cells configured to each store one bit of information, which may be referred to as single level cells (SLCs). Additionally or alternatively, a NAND memory device130may include memory cells configured to each store multiple bits of information, which may be referred to as multi-level cells (MLCs) if configured to each store two bits of information, as tri-level cells (TLCs) if configured to each store three bits of information, as quad-level cells (QLCs) if configured to each store four bits of information, or more generically as multiple-level memory cells. Multiple-level memory cells may provide greater density of storage relative to SLC memory cells but may, in some cases, involve narrower read or write margins or greater complexities for supporting circuitry. In some cases, planes165may refer to groups of blocks170, and in some cases, concurrent operations may take place within different planes165. For example, concurrent operations may be performed on memory cells within different blocks170so long as the different blocks170are in different planes165. In some cases, an individual block170may be referred to as a physical block, and a virtual block180may refer to a group of blocks170within which concurrent operations may occur. For example, concurrent operations may be performed on blocks170-a,170-b,170-c, and170-dthat are within planes165-a,165-b,165-c, and165-d, respectively, and blocks170-a,170-b,170-c, and170-dmay be collectively referred to as a virtual block180.
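For illustration only, a sketch of the organization just described, assuming (as in the example of blocks170-athrough170-d) that a virtual block groups the blocks sharing one block address across planes; the constants and function name are hypothetical.

```python
# Illustration only: die/plane/block/page hierarchy with a virtual block
# grouping same-indexed blocks across planes for concurrent operations.

N_PLANES, BLOCKS_PER_PLANE, PAGES_PER_BLOCK = 4, 8, 64

def virtual_block(block_index):
    # One (plane, block) pair per plane, e.g., "block 0" of every plane.
    return [(plane, block_index) for plane in range(N_PLANES)]

print(virtual_block(0))
# [(0, 0), (1, 0), (2, 0), (3, 0)] -- like blocks 170-a..170-d in planes 165-a..165-d
```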
In some cases, a virtual block may include blocks170from different memory devices130(e.g., including blocks in one or more planes of memory device130-aand memory device130-b). In some cases, the blocks170within a virtual block may have the same block address within their respective planes165(e.g., block170-amay be “block 0” of plane165-a, block170-bmay be “block 0” of plane165-b, and so on). In some cases, performing concurrent operations in different planes165may be subject to one or more restrictions, such as concurrent operations being performed on memory cells within different pages175that have the same page address within their respective planes165(e.g., related to command decoding, page address decoding circuitry, or other circuitry being shared across planes165). In some cases, a block170may include memory cells organized into rows (pages175) and columns (e.g., strings, not shown). For example, memory cells in a same page175may share (e.g., be coupled with) a common word line, and memory cells in a same string may share (e.g., be coupled with) a common digit line (which may alternatively be referred to as a bit line). For some NAND architectures, memory cells may be read and programmed (e.g., written) at a first level of granularity (e.g., at the page level of granularity) but may be erased at a second level of granularity (e.g., at the block level of granularity). That is, a page175may be the smallest unit of memory (e.g., set of memory cells) that may be independently programmed or read (e.g., programed or read concurrently as part of a single program or read operation), and a block170may be the smallest unit of memory (e.g., set of memory cells) that may be independently erased (e.g., erased concurrently as part of a single erase operation). Further, in some cases, NAND memory cells may be erased before they can be re-written with new data. Thus, for example, a used page175may in some cases not be updated until the entire block170that includes the page175has been erased. The system100may include any quantity of non-transitory computer readable media that support parameter table protection for a memory system. For example, the host system105, the memory system controller115, or a memory device130(e.g., a local controller135) may include or otherwise may access one or more non-transitory computer readable media storing instructions (e.g., firmware) for performing the functions ascribed herein to the host system105, memory system controller115, or memory device130. For example, such instructions, if executed by the host system105(e.g., by the host system controller106), by the memory system controller115, or by a memory device130(e.g., by a local controller135), may cause the host system105, memory system controller115, or memory device130to perform one or more associated functions as described herein. In some cases, a memory system110may utilize a memory system controller115to provide a managed memory system that may include, for example, one or more memory arrays and related circuitry combined with a local (e.g., on-die or in-package) controller (e.g., local controller135). An example of a managed memory system is a managed NAND (MNAND) system. In some examples, the memory system110may store parameter data at one or more memory devices130or at the local memory120. As described herein, the host system105or the memory system110(e.g., the memory system controller115) may be configured to generate an ECC in order to correct errors in the parameter data. 
For example, upon booting the memory system110for a first time, the memory system controller115may generate and store an ECC. Thus, upon subsequent boot sequences, the memory system controller115may identify (e.g., using the ECC) one or more errors in the parameter data. Accordingly, the ECC may be used to correct the error(s), which may allow for the memory system110to function properly (e.g., to avoid a boot failure and, subsequently, a system crash). In other examples, upon booting the memory system110for a first time, the memory system controller115may transmit a copy of the parameter data to the host system105. The host system105, upon receiving the copy of the parameter data, may generate an ECC and transmit the ECC to the memory system110for storage. Thus, upon subsequent boot sequences of the memory system110, the host system105may receive a copy of the parameter data and access the ECC (e.g., stored at the memory system110) to identify and correct one or more errors in the parameter data. Accordingly, the host system105may transmit the corrected parameter data to the memory system110, and the memory system controller115may store the corrected parameter data as a new copy or overwrite the existing (e.g., corrupt) copy. By utilizing an ECC to correct errors in parameter data, either by the host system105or by the memory system110, fewer copies of the parameter data may be stored to the memory system110, which may improve its overall storage capabilities. FIG.2illustrates an example of a system200that supports parameter table protection for a memory system in accordance with examples as disclosed herein. The system200may be an example of a system100as described with reference toFIG.1or aspects thereof. The system200may include a memory system210configured to store data received from the host system205and to send data to the host system205, if requested by the host system205using access commands (e.g., read commands or write commands). The system200may implement aspects of the system100as described with reference toFIG.1. For example, the memory system210and the host system205may be examples of the memory system110and the host system105, respectively. The memory system210may include memory devices240to store data transferred between the memory system210and the host system205, e.g., in response to receiving access commands from the host system205, as described herein. The memory devices240may include one or more memory devices as described with reference toFIG.1. For example, the memory devices240may include NAND memory, PCM, self-selecting memory, 3D cross point, other chalcogenide-based memories, FERAM, MRAM, NOR (e.g., NOR flash) memory, STT-MRAM, CBRAM, RRAM, or OxRAM. The memory system210may include a storage controller230for controlling the passing of data directly to and from the memory devices240, e.g., for storing data, retrieving data, and determining memory locations in which to store data and from which to retrieve data. The storage controller230may communicate with memory devices240directly or via a bus (not shown) using a protocol specific to each type of memory device240. In some cases, a single storage controller230may be used to control multiple memory devices240of the same or different types. In some cases, the memory system210may include multiple storage controllers230, e.g., a different storage controller230for each type of memory device240. In some cases, a storage controller230may implement aspects of a local controller135as described with reference toFIG.1. 
The memory system210may additionally include an interface220for communication with the host system205and a buffer225for temporary storage of data being transferred between the host system205and the memory devices240. The interface220, buffer225, and storage controller230may be for translating data between the host system205and the memory devices240, e.g., as shown by a data path250, and may be collectively referred to as data path components. Using the buffer225to temporarily store data during transfers may allow data to be buffered as commands are being processed, thereby reducing latency between commands and allowing arbitrary data sizes associated with commands. This may also allow bursts of commands to be handled, and the buffered data may be stored or transmitted (or both) once a burst has stopped. The buffer225may include relatively fast memory (e.g., some types of volatile memory, such as SRAM or DRAM) or hardware accelerators or both to allow fast storage and retrieval of data to and from the buffer225. The buffer225may include data path switching components for bi-directional data transfer between the buffer225and other components. The temporary storage of data within a buffer225may refer to the storage of data in the buffer225during the execution of access commands. That is, upon completion of an access command, the associated data may no longer be maintained in the buffer225(e.g., may be overwritten with data for additional access commands). In addition, the buffer225may be a non-cache buffer. That is, data may not be read directly from the buffer225by the host system205. For example, read commands may be added to a queue without an operation to match the address to addresses already in the buffer225(e.g., without a cache address match or lookup operation). The memory system210may additionally include a memory system controller215for executing the commands received from the host system205and controlling the data path components in the moving of the data. The memory system controller215may be an example of the memory system controller115as described with reference toFIG.1. A bus235may be used to communicate between the system components. In some cases, one or more queues (e.g., a command queue260, a buffer queue265, and a storage queue270) may be used to control the processing of the access commands and the movement of the corresponding data. This may be beneficial, e.g., if more than one access command from the host system205is processed concurrently by the memory system210. The command queue260, buffer queue265, and storage queue270are depicted at the interface220, memory system controller215, and storage controller230, respectively, as examples of a possible implementation. However, queues, if used, may be positioned anywhere within the memory system210. Data transferred between the host system205and the memory devices240may take a different path in the memory system210than non-data information (e.g., commands, status information). For example, the system components in the memory system210may communicate with each other using a bus235, while the data may use the data path250through the data path components instead of the bus235. The memory system controller215may control how and if data is transferred between the host system205and the memory devices240by communicating with the data path components over the bus235(e.g., using a protocol specific to the memory system210). 
If a host system205transmits access commands to the memory system210, the commands may be received by the interface220, e.g., according to a protocol (e.g., a UFS protocol or an eMMC protocol). Thus, the interface220may be considered a front end of the memory system210. Upon receipt of each access command, the interface220may communicate the command to the memory system controller215, e.g., via the bus235. In some cases, each command may be added to a command queue260by the interface220to communicate the command to the memory system controller215. The memory system controller215may determine that an access command has been received based on the communication from the interface220. In some cases, the memory system controller215may determine the access command has been received by retrieving the command from the command queue260. The command may be removed from the command queue260after it has been retrieved therefrom, e.g., by the memory system controller215. In some cases, the memory system controller215may cause the interface220, e.g., via the bus235, to remove the command from the command queue260. Upon the determination that an access command has been received, the memory system controller215may execute the access command. For a read command, this may mean obtaining data from the memory devices240and transmitting the data to the host system205. For a write command, this may mean receiving data from the host system205and moving the data to the memory devices240. In either case, the memory system controller215may use the buffer225for, among other things, temporary storage of the data being received from or sent to the host system205. The buffer225may be considered a middle end of the memory system210. In some cases, buffer address management (e.g., pointers to address locations in the buffer225) may be performed by hardware (e.g., dedicated circuits) in the interface220, buffer225, or storage controller230. To process a write command received from the host system205, the memory system controller215may first determine if the buffer225has sufficient available space to store the data associated with the command. For example, the memory system controller215may determine, e.g., via firmware (e.g., controller firmware), an amount of space within the buffer225that may be available to store data associated with the write command. In some cases, a buffer queue265may be used to control a flow of commands associated with data stored in the buffer225, including write commands. The buffer queue265may include the access commands associated with data currently stored in the buffer225. In some cases, the commands in the command queue260may be moved to the buffer queue265by the memory system controller215and may remain in the buffer queue265while the associated data is stored in the buffer225. In some cases, each command in the buffer queue265may be associated with an address at the buffer225. That is, pointers may be maintained that indicate where in the buffer225the data associated with each command is stored. Using the buffer queue265, multiple access commands may be received sequentially from the host system205and at least portions of the access commands may be processed concurrently. If the buffer225has sufficient space to store the write data, the memory system controller215may cause the interface220to transmit an indication of availability to the host system205(e.g., a “ready to transfer” indication), e.g., according to a protocol (e.g., a UFS protocol or an eMMC protocol). 
As the interface220subsequently receives from the host system205the data associated with the write command, the interface220may transfer the data to the buffer225for temporary storage using the data path250. In some cases, the interface220may obtain from the buffer225or buffer queue265the location within the buffer225to store the data. The interface220may indicate to the memory system controller215, e.g., via the bus235, if the data transfer to the buffer225has been completed. Once the write data has been stored in the buffer225by the interface220, the data may be transferred out of the buffer225and stored in a memory device240. This may be done using the storage controller230. For example, the memory system controller215may cause the storage controller230to retrieve the data out of the buffer225using the data path250and transfer the data to a memory device240. The storage controller230may be considered a back end of the memory system210. The storage controller230may indicate to the memory system controller215, e.g., via the bus235, that the data transfer to a memory device of the memory devices240has been completed. In some cases, a storage queue270may be used to aid with the transfer of write data. For example, the memory system controller215may push (e.g., via the bus235) write commands from the buffer queue265to the storage queue270for processing. The storage queue270may include entries for each access command. In some examples, the storage queue270may additionally include a buffer pointer (e.g., an address) that may indicate where in the buffer225the data associated with the command is stored and a storage pointer (e.g., an address) that may indicate the location in the memory devices240associated with the data. In some cases, the storage controller230may obtain from the buffer225, buffer queue265, or storage queue270the location within the buffer225from which to obtain the data. The storage controller230may manage the locations within the memory devices240to store the data (e.g., performing wear-leveling, garbage collection, and the like). The entries may be added to the storage queue270, e.g., by the memory system controller215. The entries may be removed from the storage queue270, e.g., by the storage controller230or memory system controller215upon completion of the transfer of the data. To process a read command received from the host system205, the memory system controller215may again first determine if the buffer225has sufficient available space to store the data associated with the command. For example, the memory system controller215may determine, e.g., via firmware (e.g., controller firmware), an amount of space within the buffer225that may be available to store data associated with the read command. In some cases, the buffer queue265may be used to aid with buffer storage of data associated with read commands in a similar manner as discussed above with respect to write commands. For example, if the buffer225has sufficient space to store the read data, the memory system controller215may cause the storage controller230to retrieve the data associated with the read command from a memory device240and store the data in the buffer225for temporary storage using the data path250. The storage controller230may indicate to the memory system controller215, e.g., via the bus235, when the data transfer to the buffer225has been completed. In some cases, the storage queue270may be used to aid with the transfer of read data. 
For example, the memory system controller215may push the read command to the storage queue270for processing. In some cases, the storage controller230may obtain from the buffer225or storage queue270the location within the memory devices240from which to retrieve the data. In some cases, the storage controller230may obtain from the buffer queue265the location within the buffer225to store the data. In some cases, the storage controller230may obtain from the storage queue270the location within the buffer225to store the data. In some cases, the memory system controller215may move the command processed by the storage queue270back to the command queue260. Once the data has been stored in the buffer225by the storage controller230, the data may be transferred out of the buffer225and sent to the host system205. For example, the memory system controller215may cause the interface220to retrieve the data out of the buffer225using the data path250and transmit the data to the host system205, e.g., according to a protocol (e.g., a UFS protocol or an eMMC protocol). For example, the interface220may process the command from the command queue260and may indicate to the memory system controller215, e.g., via the bus235, that the data transmission to the host system205has been completed. The memory system controller215may execute received commands according to an order (e.g., a first-in, first-out order, according to the order of the command queue260). For each command, the memory system controller215may cause data corresponding to the command to be moved into and out of the buffer225, as discussed above. As the data is moved into and stored within the buffer225, the command may remain in the buffer queue265. A command may be removed from the buffer queue265, e.g., by the memory system controller215, if the processing of the command has been completed (e.g., if data corresponding to the access command has been transferred out of the buffer225). If a command is removed from the buffer queue265, the address previously storing the data associated with that command may be available to store data associated with a new command. The memory system controller215may additionally be configured for operations associated with the memory devices240. For example, the memory system controller215may execute or manage operations such as wear-leveling operations, garbage collection operations, error control operations such as error-detecting operations or error-correcting operations, encryption operations, caching operations, media management operations, background refresh, health monitoring, and address translations between logical addresses (e.g., LBAs) associated with commands from the host system205and physical addresses (e.g., physical block addresses) associated with memory cells within the memory devices240. That is, the host system205may issue commands indicating one or more LBAs and the memory system controller215may identify one or more physical block addresses indicated by the LBAs. In some cases, one or more contiguous LBAs may correspond to noncontiguous physical block addresses. In some cases, the storage controller230may be configured to perform one or more of the above operations in conjunction with or instead of the memory system controller215. In some cases, the memory system controller215may perform the functions of the storage controller230and the storage controller230may be omitted. In some examples, the memory system210may store parameter data at one or more memory devices240. 
As described herein, the host system205or the memory system210(e.g., the memory system controller215) may be configured to generate an ECC in order to correct errors in the parameter data. For example, upon booting the memory system210for a first time, the memory system controller215may generate and store an ECC. Thus, upon subsequent boot sequences, the memory system controller215may identify (e.g., using the ECC) one or more errors in the parameter data. Accordingly, the ECC may be used to correct the error(s), which may allow for the memory system210to function properly (e.g., to avoid a boot failure and, subsequently, a system crash). In other examples, upon booting the memory system210for a first time, the memory system controller215may transmit a copy of the parameter data to the host system205. The host system205, upon receiving the copy of the parameter data, may generate an ECC and transmit the ECC to the memory system210for storage. Thus, upon subsequent boot sequences of the memory system210, the host system205may receive a copy of the parameter data and access the ECC (e.g., stored at the memory system210) to identify and correct one or more errors in the parameter data. Accordingly, the host system205may transmit the corrected parameter data to the memory system210, and the memory system controller215may store the corrected parameter data as a new copy or overwrite the existing (e.g., corrupt) copy. By utilizing an ECC to correct errors in parameter data, either by the host system205or by the memory system210, fewer copies of the parameter data may be stored to the memory system210, which may improve its overall storage capabilities. FIG.3illustrates an example of a process flow diagram300that supports parameter table protection for a memory system in accordance with examples as disclosed herein. In some examples, the process flow diagram300may implement aspects of system100, system200, or a combination thereof. Accordingly, the operations described by the process flow diagram300may be performed at or by a host system (e.g., a host system105as described with reference toFIG.1, a host system205as described with reference toFIG.2), a memory system (e.g., a memory system110as described with reference toFIG.1, a memory system210as described with reference toFIG.2), or a combination thereof. The operations described with reference toFIG.3may result in the generation of an ECC, which can be used to identify and correct errors in parameter data. By utilizing an ECC to correct errors in parameter data, either by a host system or by a memory system, fewer copies of the parameter data may be stored to the memory system, which may improve its overall storage capabilities. FIG.3may illustrate operations performed by a host system or a memory system after an ECC is generated. That is, upon booting a memory system for a first time (e.g., a first occurrence after manufacturing, after installation, etc.), the memory system or the host system may generate an original ECC, which may be stored to the memory system and used to identify and correct errors in parameter data (e.g., parameter data stored to the memory system). As used herein, booting a memory system may refer to the process of providing power to one or more hardware components of the memory system or initiating software stored to one or more aspects of the memory system. In some instances, if the memory system stores more than one copy of parameter data, an ECC may be generated for each copy of the stored parameter data. 
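Although the examples herein do not mandate any particular code construction, the following Python sketch illustrates one hypothetical single-error-correcting scheme (an XOR of the 1-based positions of set bits, plus an overall parity bit) that could be generated on a first boot and applied on a later boot; the scheme and every name in it are assumptions made for illustration, not the claimed implementation.

    def compute_ecc(data):
        """Return (position_xor, parity): the XOR of the 1-based positions of
        all set bits, plus the overall parity of the data."""
        position_xor = 0
        parity = 0
        for byte_index, byte in enumerate(data):
            for bit in range(8):
                if (byte >> bit) & 1:
                    position_xor ^= byte_index * 8 + bit + 1
                    parity ^= 1
        return position_xor, parity

    def check_and_correct(data, stored_ecc):
        """Compare a freshly generated code against the stored one; correct a
        single flipped bit in place, or flag an uncorrectable pattern."""
        current = compute_ecc(data)
        syndrome = current[0] ^ stored_ecc[0]
        parity_changed = current[1] ^ stored_ecc[1]
        if syndrome == 0 and not parity_changed:
            return "clean"
        if parity_changed and syndrome:
            position = syndrome - 1           # 0-based position of the flip
            data[position // 8] ^= 1 << (position % 8)
            return f"corrected bit {position}"
        return "uncorrectable pattern"        # e.g., two flips (aliasing risk)

    # First boot: generate and store the code alongside the parameter data.
    parameter_data = bytearray(b"\x12\x34\x56\x78")
    stored_ecc = compute_ecc(parameter_data)

    # A later boot: one bit of the stored copy has inadvertently flipped.
    parameter_data[1] ^= 0b0000_0100
    print(check_and_correct(parameter_data, stored_ecc))  # corrected bit 10

In this sketch, a single flipped bit shifts the position XOR by exactly its own 1-based position, which is why the syndrome locates the error; two flips can alias to a different position, which is one reason the threshold described below may be set below a code's full correction capacity.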
Moreover, the ECC generated may be an example of an error correction code, an error control code, or an error detection code. Accordingly,FIG.3illustrates subsequent boot sequences of the memory system where errors in the parameter data may be identified and corrected using the ECC. At block305, a memory system may initiate a boot up process. As described above, the memory system may have already experienced an initial boot sequence, thus the boot sequence at block305may be a second (or a subsequent) boot sequence (e.g., a second or subsequent occurrence of the memory system booting) where an ECC has already been generated and stored to the memory system. At block310, the parameter data and corresponding ECC may be read from the memory system. As described herein, the parameter data and ECC may be read from the memory system, but either the memory system or the host system may use the ECC to correct errors in the parameter data. Accordingly, for on-die error correction (e.g., error correction performed by the memory system), at block310a memory controller or other component of the memory system may read the parameter data and ECC in order to identify and correct errors in the parameter data. In other examples, for host system error correction (e.g., error correction performed by the host system), at block310the parameter data may be transmitted to the host system and the host system may subsequently access the ECC stored at the memory system to identify and correct errors in the parameter data. At block315, either the memory system or the host system may determine whether the parameter data includes one or more errors (e.g., any bitflips or other types of errors). For on-die error correction, a memory controller or other component of the memory system may generate an ECC based on the current parameter data and may compare the generated ECC to the original ECC (e.g., the ECC generated and stored upon booting the memory system for a first time). In other examples, the host system may generate an ECC based on the received parameter data and may compare the generated ECC to the original ECC (e.g., the ECC generated and stored upon booting the memory system for a first time). In either instance, if the ECCs match, the parameter data is error-free. However, if the ECCs do not match, then the parameter data may contain one or more errors, which may be due to one or more bits “flipping” (e.g., inadvertently changing from a “1” to a “0” or vice versa). If the parameter data is correct (e.g., if the parameter data does not contain any errors), the process flow may proceed to block335, whereas if the parameter data includes one or more errors, the process flow may proceed to block320. At block320, the host system or the memory system may correct one or more errors in the parameter data. For example, if the host system or memory system determines instances of bitflips in the parameter data, the host system or the memory system may use the originally generated ECC to correct the parameter data, which may allow the memory system to boot properly. For on-die error correction, a memory controller or other component of the memory system may correct the parameter data and may store the corrected parameter data to the memory system. For error correction performed by the host system, the host system may correct the parameter data and transmit the corrected parameter data back to the memory system for storage.
In some instances, the corrected parameter data may be stored as a new copy, while in other instances the corrupt parameter data may be overwritten by the corrected parameter data. In some examples (not shown), either the host system or the memory system may increment a counter for each error in the parameter data that is corrected. The counter may be located at and/or managed by either the memory system (e.g., a memory controller of the memory system) or the host system. For example, upon correcting a first error for a first time, the counter may be incremented (e.g., from “0” to “1”) and upon correcting a second error for a first time, the counter may again be incremented (e.g., from “1” to “2”) and so forth. In some instances, the counter may be reset each time the memory system reboots. At block325, after correcting the parameter data at block320and incrementing a value of the counter, the value of the counter may be compared with a threshold value. The threshold value may be less than the total quantity of errors that the ECC is capable of correcting. For example, the ECC may be configured to correct a fixed quantity of errors and the threshold may be based on the fixed quantity (e.g., threshold=¾×maximum quantity of correctable errors). By setting the threshold value less than the total quantity of errors the ECC is configured to correct, the memory system may be less susceptible to aliasing during subsequent error correction operations. Accordingly, either the host system or the memory system may be configured to compare the value of the counter to the threshold. If the value of the counter satisfies the threshold value, the host system or memory system may perform an ECC assessment at block330, whereas if the value of the counter does not satisfy the threshold value, the process flow may proceed to block335. At block330, the memory system or the host system may conduct an ECC assessment based on the value of the counter satisfying the threshold. In a first example of the host system or memory system performing an ECC assessment, the memory system or host system may generate a second ECC using the corrected parameter data. For example, the second ECC may be configured to correct a greater quantity of errors than the originally generated ECC. As such, during subsequent boot operations, the memory system or host system may use the second ECC to identify and correct errors, which may allow the memory system or host system to correct a larger quantity of errors than it was previously capable of. Moreover, upon generating the ECC, the memory system or host system may update the threshold value (e.g., the threshold used at block325) to account for the second ECC being able to correct a larger quantity of errors. In a second example of the host system or the memory system performing an ECC assessment, the memory system or host system may generate multiple ECCs upon the first boot sequence of the memory system. For example, the memory system or the host system may generate a set of ECCs associated with the parameter data, where each ECC is configured to correct a different quantity of errors. Accordingly, when the memory system or host system determines that a value of the counter exceeds the threshold quantity of errors correctable by the ECC, the memory system or the host system may select a new (e.g., a second) ECC for use. The second ECC may be configured to correct a larger quantity of errors than the prior ECC.
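For illustration only, the counter, the threshold based on the ¾ factor, and the selection of a stronger code at block330might be sketched together as follows; the strengths, the update rule, and all names are assumptions rather than a required implementation.

    # Hypothetical codes generated at first boot, ordered by capacity.
    ecc_set = [
        {"strength": 4, "code": "ecc-a"},   # corrects up to 4 errors
        {"strength": 8, "code": "ecc-b"},
        {"strength": 16, "code": "ecc-c"},
    ]

    def make_threshold(max_correctable):
        # threshold = 3/4 x maximum quantity of correctable errors
        return (3 * max_correctable) // 4

    active = 0
    threshold = make_threshold(ecc_set[active]["strength"])  # 4 -> 3
    corrected_error_counter = 0

    for _ in range(3):                  # three corrections during this boot
        corrected_error_counter += 1    # incremented once per corrected error

    # Blocks 325/330: if the counter satisfies the threshold, assess the ECC
    # by selecting the next, stronger code and raising the threshold to match.
    if corrected_error_counter >= threshold:
        active = min(active + 1, len(ecc_set) - 1)
        threshold = make_threshold(ecc_set[active]["strength"])  # 8 -> 6
    print(ecc_set[active]["code"], threshold)   # ecc-b 6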
Upon the value of the counter satisfying the threshold value again, the memory system or the host system may continue to select an ECC capable of correcting a larger quantity of errors than the prior ECC. In a third example of the host system or the memory system performing an ECC assessment, the memory system or host system may store corrected parameter data at a different location than the prior parameter data (e.g., the corrupt parameter data) was stored. For example, the corrupt parameter data may be stored at a first portion of the memory system and, upon correction, the corrected parameter data may be stored to a second portion of the memory system. In any instance, by utilizing an ECC to correct errors in parameter data, either by a host system or by a memory system, fewer copies of the parameter data may be stored to the memory system, which may improve its overall storage capabilities. FIG.4illustrates an example of a process flow diagram400that supports parameter table protection for a memory system in accordance with examples as disclosed herein. In some examples, the process flow diagram400may implement aspects of system100, system200, or a combination thereof. Accordingly, the operations described by the process flow diagram400may be performed at or by a host system405, a memory system410, or a combination thereof. The operations described with reference toFIG.4may result in the generation of an ECC by the memory system410, which can be used to identify and correct errors in parameter data. By utilizing an ECC to correct errors in parameter data, fewer copies of the parameter data may be stored to the memory system410, which may improve its overall storage capabilities. At425, the memory system410may boot up for a first time (e.g., a first boot sequence may occur at the memory system410). As described herein, the first occurrence of a memory system410booting may be after manufacturing the memory system410, during a first testing operation of the memory system410, after installing the memory system410in a product, or a similar situation. As used herein, booting a memory system410may refer to the process of providing power to one or more hardware components of the memory system410or initiating software stored to one or more aspects of the memory system410. In some instances, software stored to one or more aspects of the memory system410may be initiated by signaling received from the host system405. At430, the memory controller415may transmit a request to the memory device420for parameter data stored to the memory device420. In some instances, the parameter data may have been stored to the memory device420during manufacturing (e.g., the parameter data may be hardcoded to the memory device420). In some instances, the parameter data may be stored to a dedicated portion of the memory device420, for example a portion of the memory device420that is dedicated to storing operational data such as the parameter data. The request transmitted at430may initiate reading the parameter data from the memory device420. At435, the parameter data may be communicated from the memory device420to the memory controller415. At440, the memory controller415may generate one or more ECCs that are associated with the parameter data. As described herein, in some examples the memory controller415may generate a single ECC that is configured to correct a fixed quantity of errors in the parameter data during subsequent boot operations.
However, in other examples, the memory controller415may generate multiple ECCs that are each configured to correct a different fixed quantity of errors. Accordingly, when multiple ECCs are generated, the memory controller415may be configured to select a different ECC (e.g., an ECC configured to correct a larger quantity of errors) when a threshold quantity of errors in the parameter data are corrected. Selecting a different ECC when a threshold quantity of errors are corrected may prevent aliasing from occurring. At445, the memory controller415may store the ECC(s) to the memory device420. In some examples, the ECC(s) may be stored to a same portion of the memory device420as the parameter data. In other examples, the ECC(s) may be stored to a portion of the memory device420dedicated to storing error correcting or error detecting codes. At450, the memory system410may boot up for a second time (e.g., a second boot sequence may occur at the memory system410). As described herein, the second occurrence of a memory system410booting may occur when power is provided to one or more hardware components of the memory system410or when software stored to one or more aspects of the memory system410is initiated. In some instances, software stored to one or more aspects of the memory system410may be initiated by signaling received from the host system405. At455, the memory controller415may transmit a request to the memory device420for the parameter data and the ECC stored to the memory device420(e.g., at445). The request transmitted at455may initiate reading the parameter data and the ECC from the memory device420. At460, the parameter data and the ECC may be communicated from the memory device420to the memory controller415. At465, the memory controller415may identify and correct errors included in the parameter data. In some instances, the memory controller415may identify and correct errors in the parameter data by generating an ECC (e.g., a second ECC) based on the current parameter data and comparing the second ECC to the ECC received from the memory device420. If the ECCs match, then the parameter data may not include any errors. However, if the ECCs do not match, one or more errors may exist and the memory controller415may use the ECC received from the memory device420to correct the parameter data. At470, the memory controller415may store the corrected parameter data to the memory device420. In some examples, the memory controller415may overwrite the existing parameter data with the corrected parameter data, whereas in other examples the memory controller415may save the corrected parameter data to a different location of the memory device420. At475, the memory controller415may increment a counter for each error in the parameter data that is corrected. The counter may be located at and/or managed by the memory controller415. For example, upon correcting a first error for a first time, the counter may be incremented (e.g., from “0” to “1”) and upon correcting a second error for a first time, the counter may again be incremented (e.g., from “1” to “2”) and so forth. In some instances, the counter may be reset each time the memory system410reboots. At480, the memory controller415may compare a value of the counter to a threshold value, which may be less than the total quantity of errors that the ECC is capable of correcting.
For example, the ECC may be configured to correct a fixed quantity of errors and the threshold may be based on the fixed quantity (e.g., threshold=¾×maximum quantity of correctable errors). By setting the threshold value less than the total quantity of errors the ECC is configured to correct, the memory system410may be less susceptible to aliasing during subsequent error correction operations. Accordingly, the memory controller415may be configured to compare the value of the counter to the threshold. If the value of the counter satisfies the threshold value, the memory controller415may perform an ECC assessment (e.g., at485), whereas if the value of the counter does not satisfy the threshold value, the memory system410may continue its boot sequence. At485, the memory controller415may conduct an ECC assessment based on the value of the counter satisfying the threshold. In a first example, the memory controller415may generate a second ECC using the corrected parameter data. For example, the second ECC may be configured to correct a greater quantity of errors than the originally generated ECC. As such, during subsequent boot operations, the memory controller415may use the second ECC to identify and correct errors, which may allow the memory controller415to correct a larger quantity of errors than it was previously capable of. Moreover, upon generating the ECC, the memory controller415may update the threshold value (e.g., the threshold used at480) to account for the second ECC being able to correct a larger quantity of errors. In a second example, the memory controller415may generate multiple ECCs upon the first boot sequence of the memory system410. As described above, the memory controller415may have generated multiple ECCs (e.g., at440). Accordingly, when the memory controller415determines that a value of the counter exceeds the threshold quantity of errors correctable by the ECC, the memory controller415may select a new (e.g., a second) ECC for use. The second ECC may be configured to correct a larger quantity of errors than the prior ECC. In a third example, the memory controller415may store corrected parameter data at a different location than the prior parameter data (e.g., the corrupt parameter data) was stored. For example, the corrupt parameter data may be stored at a first portion of the memory device420and, upon correction, the corrected parameter data may be stored to a second portion of the memory device420. In some instances, some aspects of the process flow diagram400may instead be performed by the host system405(e.g., as opposed to the memory controller415) as described below with reference toFIG.5. For example, the ECC (e.g., at440) may instead be generated by the host system405, or the host system405may identify and correct errors in the parameter data (e.g., at465), increment the counter (e.g., at475), compare the counter to a threshold (e.g., at480), and/or perform an ECC assessment (e.g., at485). The operations performed by the host system405and the memory system410may be a matter of design choice selected during manufacturing of the respective components. In any instance, by utilizing an ECC to correct errors in parameter data, fewer copies of the parameter data may be stored to the memory device420, which may improve its overall storage capabilities. FIG.5illustrates an example of a process flow diagram500that supports parameter table protection for a memory system in accordance with examples as disclosed herein. In some examples, the process flow diagram500may implement aspects of system100, system200, or a combination thereof.
Accordingly, the operations described by the process flow diagram500may be performed at or by a host system505, a memory system510, or a combination thereof. The operations described with reference toFIG.5may result in the generation of an ECC by the host system505, which can be used to identify and correct errors in parameter data. By utilizing an ECC to correct errors in parameter data, fewer copies of the parameter data may be stored to the memory system510, which may improve its overall storage capabilities. At521, the memory system510may boot up for a first time (e.g., a first boot sequence may occur at the memory system510). As described herein, the first occurrence of a memory system510booting may be after manufacturing the memory system510, during a first testing operation of the memory system510, after installing the memory system510in a product, or a similar situation. As used herein, booting a memory system510may refer to the process of providing power to one or more hardware components of the memory system510or initiating software stored to one or more aspects of the memory system510. In some instances, software stored to one or more aspects of the memory system510may be initiated by signaling received from the host system505. At522, the host system505may transmit a request to the memory controller515for parameter data stored to the memory device520. In some instances, the parameter data may have been stored to the memory device520during manufacturing (e.g., the parameter data may be hardcoded to the memory device520). In some instances, the parameter data may be stored to a dedicated portion of the memory device520, for example a portion of the memory device520that is dedicated to storing operational data such as the parameter data. At523, the memory controller515may transmit the request from the host system505to the memory device520. In other examples, the memory controller515may receive the request and generate another request (e.g., a second request) that is transmitted to the memory device520. The request (e.g., at523) may initiate reading the parameter data from the memory device520. At524, the parameter data may be communicated from the memory device520to the memory controller515, and at525the memory controller515may transmit the parameter data to the host system505. In other examples, the parameter data may be communicated directly to the host system505. At526, the host system505may generate one or more ECCs that are associated with the parameter data. As described herein, in some examples the host system505may generate a single ECC that is configured to correct a fixed quantity of errors in the parameter data during subsequent boot operations. However, in other examples, the host system505may generate multiple ECCs that are each configured to correct a different fixed quantity of errors. Accordingly, when multiple ECCs are generated, the host system505may be configured to select a different ECC (e.g., an ECC configured to correct a larger quantity of errors) when a threshold quantity of errors in the parameter data are corrected. Selecting a different ECC when a threshold quantity of errors are corrected may prevent aliasing from occurring. At527, the host system505may transmit the generated ECC(s) to the memory controller515and at528, the memory controller515may store the ECC(s) to the memory device520. In some examples, the ECC(s) may be stored to a same portion of the memory device520as the parameter data.
In other examples, the ECC(s) may be stored to a portion of the memory device520dedicated to storing error correcting or error detecting codes. At529, the memory system510may boot up for a second time (e.g., a second boot sequence may occur at the memory system510). As described herein, the second occurrence of a memory system510booting may occur when power is provided to one or more hardware components of the memory system510or when software stored to one or more aspects of the memory system510is initiated. In some instances, software stored to one or more aspects of the memory system510may be initiated by signaling received from the host system505. At530, the host system505may transmit a request to the memory controller515for the parameter data and the ECC stored to the memory device520(e.g., at528). At531, the memory controller515may relay the request transmitted from the host system505, which may initiate reading the parameter data and the ECC from the memory device520. In other examples, the memory controller515may receive the request and generate another request (e.g., a second request) that is transmitted to the memory device520. The request (e.g., at531) may initiate reading the parameter data and the ECC from the memory device520. At532, the parameter data and the ECC may be communicated from the memory device520to the memory controller515and at533, the memory controller515may transmit the parameter data and the ECC to the host system505. In other examples, the parameter data and ECC may be communicated directly to the host system505. At534, the host system505may identify and correct errors included in the parameter data. In some instances, the host system505may identify and correct errors in the parameter data by generating an ECC (e.g., a second ECC) based on the current parameter data and comparing the second ECC to the ECC received from the memory device520(or the memory controller515). If the ECCs match, then the parameter data may not include any errors. However, if the ECCs do not match, one or more errors may exist and the host system505may use the ECC received from the memory device520to correct the parameter data. At535, the host system505may transmit the corrected parameter data to the memory controller515and at536, the memory controller515may store the corrected parameter data to the memory device520. In other examples, the host system505may transmit the corrected parameter data directly to the memory device520. In some examples, the host system505or the memory controller515may overwrite the existing parameter data with the corrected parameter data, whereas in other examples the host system505or the memory controller515may save the corrected parameter data to a different location of the memory device520. At537, the host system505may increment a counter for each error in the parameter data that is corrected. The counter may be located at and/or managed by the host system505. For example, upon correcting a first error for a first time, the counter may be incremented (e.g., from “0” to “1”) and upon correcting a second error for a first time, the counter may again be incremented (e.g., from “1” to “2”) and so forth. In some instances, the counter may be reset each time the memory system510reboots. At538, the host system505may compare a value of the counter to a threshold value, which may be less than the total quantity of errors that the ECC is capable of correcting.
For example, the ECC may be configured to correct a fixed quantity of errors and the threshold may be based on the fixed quantity (e.g., threshold=¾×maximum quantity of correctable errors). By setting the threshold value less than the total quantity of errors the ECC is configured to correct, the memory system510may be less susceptible to aliasing during subsequent error correction operations. Accordingly, the host system505may be configured to compare the value of the counter to the threshold. If the value of the counter satisfies the threshold value, the host system505may perform an ECC assessment (e.g., at539), whereas if the value of the counter does not satisfy the threshold value, the host system505may allow the memory system510to continue its boot sequence. At539, the host system505may conduct an ECC assessment based on the value of the counter satisfying the threshold. In a first example, the host system505may generate a second ECC using the corrected parameter data. For example, the second ECC may be configured to correct a greater quantity of errors than the originally generated ECC. As such, during subsequent boot operations, the host system505may use the second ECC to identify and correct errors, which may allow the host system505to correct a larger quantity of errors than it was previously capable of. Moreover, upon generating the ECC, the host system505may update the threshold value (e.g., the threshold used at538) to account for the second ECC being able to correct a larger quantity of errors. In a second example, the host system505may generate multiple ECCs upon the first boot sequence of the memory system510. As described above, the host system505may have generated multiple ECCs (e.g., at526). Accordingly, when the host system505determines that a value of the counter exceeds the threshold quantity of errors correctable by the ECC, the host system505may select a new (e.g., a second) ECC for use. The second ECC may be configured to correct a larger quantity of errors than the prior ECC. In a third example, the host system505may store corrected parameter data at a different location than the prior parameter data (e.g., the corrupt parameter data) was stored. For example, the corrupt parameter data may be stored at a first portion of the memory device520and, upon correction, the corrected parameter data may be stored to a second portion of the memory device520. In some instances, some aspects of the process flow diagram500may instead be performed by the memory system510(e.g., as opposed to the host system505) as described above with reference toFIG.4. For example, the ECC (e.g., at526) may instead be generated by the memory controller515, or the memory controller515may identify and correct errors in the parameter data (e.g., at534), increment the counter (e.g., at537), compare the counter to a threshold (e.g., at538), and/or perform an ECC assessment (e.g., at539). The operations performed by the host system505and the memory system510may be a matter of design choice selected during manufacturing of the respective components. In any instance, by utilizing an ECC to correct errors in parameter data, fewer copies of the parameter data may be stored to the memory device520, which may improve its overall storage capabilities. FIG.6illustrates an example of a block diagram600that supports parameter table protection for a memory system in accordance with examples as disclosed herein.
In some examples, the memory system610may implement aspects of system100, system200, or a combination thereof. Accordingly, the block diagram600may illustrate a host system605and a memory system610. In some examples, the host system605may include a host system controller615and the memory system610may include parameter data620, which may refer to a portion of the memory system610that is configured to store parameter data. In some examples, the memory system610may not include a memory controller and thus the host system605may have direct access to the memory devices of the memory system610. The host system controller615may be configured to communicate with the memory system610to generate an ECC that can be used to identify and correct errors in parameter data. By utilizing an ECC to correct errors in parameter data, fewer copies of the parameter data may be stored to the memory system610, which may improve its overall storage capabilities. In a first example, the host system controller615may generate one or more ECCs that are associated with the parameter data620upon the memory system610booting for a first time. As described herein, in some examples the host system controller615may generate a single ECC that is configured to correct a fixed quantity of errors in the parameter data during subsequent boot operations. However, in other examples, the host system controller615may generate multiple ECCs that are each configured to correct a different fixed quantity of errors. Accordingly, when multiple ECCs are generated, the host system controller615may be configured to select a different ECC (e.g., an ECC configured to correct a larger quantity of errors) when a threshold quantity of errors in the parameter data are corrected. Selecting a different ECC when a threshold quantity of errors are corrected may prevent aliasing from occurring. The host system controller615may identify and correct errors included in the parameter data620. In some instances, the host system controller615may identify and correct errors in the parameter data620by generating an ECC (e.g., a second ECC) based on the current parameter data620and comparing the second ECC to the ECC received from the memory system610(e.g., that was generated upon the memory system610booting for a first time). If the ECCs match, then the parameter data may not include any errors. However, if the ECCs do not match, one or more errors may exist and the host system controller615may use the ECC received from the memory system610to correct the parameter data as described herein. The host system controller615may store the corrected parameter data620to the memory system610. In some examples, the host system controller615may overwrite the existing parameter data620with the corrected parameter data620, whereas in other examples the host system controller615may save the corrected parameter data to a different location of the memory system610. In another example, the memory system610may include ECC capabilities and thus the host system controller615may initiate the generation of one or more ECCs by the memory system610. That is, upon the memory system610booting for a first time, the host system controller615may initiate the generation of one or more ECCs (e.g., by an ECC engine or other circuitry associated with the memory system610) configured to correct errors in the parameter data620. The host system controller615may then, upon the memory system610booting for a subsequent time, initiate the correction of the parameter data620using the generated ECC(s).
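The overwrite-or-relocate choice described above might, for illustration, reduce to the following sketch; the two-slot layout and all names are assumptions introduced for the example.

    # Two hypothetical parameter-data locations on the memory system.
    device_slots = {0x00: b"corrupt copy", 0x40: None}

    def write_back(corrected, overwrite):
        slot = 0x00 if overwrite else 0x40   # overwrite in place or relocate
        device_slots[slot] = corrected
        return slot

    slot = write_back(b"corrected copy", overwrite=False)
    print(f"corrected parameter data stored at slot {slot:#x}")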
Accordingly, in the examples described herein with reference toFIG.6, the host system605(e.g., the host system controller615) may correct the parameter data620or initiate the correction of the parameter data620by the memory system610. In either instance, by utilizing an ECC to correct errors in parameter data, fewer copies of the parameter data may be stored to the memory system610, which may improve its overall storage capabilities. FIG.7shows a block diagram700of a memory system720that supports parameter table protection for a memory system in accordance with examples as disclosed herein. The memory system720may be an example of aspects of a memory system as described with reference toFIGS.1through6. The memory system720, or various components thereof, may be an example of means for performing various aspects of parameter table protection for a memory system as described herein. For example, the memory system720may include an identification component725, a generation component730, an error control component735, a counter component740, a storing component745, a comparison component750, a determination component755, an increasing component760, or any combination thereof. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses). The identification component725may be configured as or otherwise support a means for identifying, upon booting a memory system for a first occurrence, parameter data for operating the memory system stored to a non-volatile memory device of the memory system. In some examples, the identification component725may be configured as or otherwise support a means for identifying, upon booting the memory system for a second occurrence, a first error associated with the parameter data using the ECC. In some examples, the identification component725may be configured as or otherwise support a means for identifying, upon booting the memory system for a third occurrence, a second error associated with the parameter data using the generated ECC. The generation component730may be configured as or otherwise support a means for generating, at the memory system, an ECC associated with the parameter data based on identifying the parameter data stored to the non-volatile memory device of the memory system. The error control component735may be configured as or otherwise support a means for correcting, at the memory system, the first error associated with the parameter data based on identifying the first error. In some examples, the error control component735may be configured as or otherwise support a means for correcting, at the memory system, the second error associated with the parameter data based on identifying the second error associated with the parameter data and incrementing the counter. In some examples, the error control component735may be configured as or otherwise support a means for selecting a fourth ECC for correcting a fourth quantity of errors associated with the parameter data based on determining that the value of the counter is above the first threshold value and below the second threshold value, where the fourth ECC is configured to correct a quantity of errors that is greater than the first threshold value. 
In some examples, the error control component735may be configured as or otherwise support a means for selecting a third ECC for correcting a third quantity of errors associated with the parameter data based on determining that the value of the counter satisfies the threshold value, where the third ECC is configured to correct a quantity of errors that is greater than the threshold value. In some examples, the error control component735may be configured as or otherwise support a means for generating, at the memory system, a second ECC associated with the parameter data based on determining that the value of the counter satisfies the threshold value, where the second ECC is configured to correct a second quantity of errors greater than the first quantity of errors correctable by the ECC. In some examples, the error control component735may be configured as or otherwise support a means for replacing the ECC with the second ECC based on generating the second ECC associated with the parameter data. In some examples, the counter component740may be configured as or otherwise support a means for incrementing a counter based on correcting the first error associated with the parameter data, where a value of the counter is associated with a quantity of corrected errors of the parameter data. In some examples, the counter component740may be configured as or otherwise support a means for incrementing the counter based on identifying the second error associated with the parameter data. In some examples, the parameter data is stored to a first portion of the non-volatile memory device of the memory system, and the storing component745may be configured as or otherwise support a means for storing the corrected parameter data to a second portion of the non-volatile memory device of the memory system that is different than the first portion of the non-volatile memory device of the memory system. In some examples, the comparison component750may be configured as or otherwise support a means for comparing the value of the counter to a threshold value associated with the ECC based on incrementing the counter, where correcting the first error associated with the parameter data is based on the value of the counter failing to satisfy the threshold value. In some examples, the determination component755may be configured as or otherwise support a means for determining that the value of the counter satisfies a threshold value associated with a first quantity of errors correctable by the ECC based on incrementing the counter. In some examples, the determination component755may be configured as or otherwise support a means for determining that the value of the counter is above a first threshold value associated with a first quantity of errors correctable by the ECC and below a second threshold value associated with a second quantity of errors correctable by the ECC based on incrementing the counter. In some examples, the increasing component760may be configured as or otherwise support a means for increasing the threshold value based on selecting the third ECC for correcting the third quantity of errors associated with the parameter data.
In some examples, the increasing component760may be configured as or otherwise support a means for increasing the threshold value based on generating the second ECC for correcting the second quantity of errors associated with the parameter data. In some examples, the threshold value is based on a quantity of errors that are correctable by the ECC. FIG.8shows a block diagram800of a host system820that supports parameter table protection for a memory system in accordance with examples as disclosed herein. The host system820may be an example of aspects of a host system as described with reference toFIGS.1through6. The host system820, or various components thereof, may be an example of means for performing various aspects of parameter table protection for a memory system as described herein. For example, the host system820may include a reception component825, an error control component830, a transmission component835, a counter component840, a determination component845, an increasing component850, or any combination thereof. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses). The reception component825may be configured as or otherwise support a means for receiving, at a host system, parameter data for operating a memory system based on the memory system being booted for a first occurrence. In some examples, the reception component825may be configured as or otherwise support a means for receiving, at the host system, the parameter data from the memory system based on the memory system being booted for a second occurrence and after transmitting the ECC associated with the parameter data to the memory system. In some examples, the reception component825may be configured as or otherwise support a means for receiving, at the host system, the parameter data from the memory system based on the memory system being booted for a third occurrence. The error control component830may be configured as or otherwise support a means for generating, at the host system, an ECC associated with the parameter data based on receiving the parameter data from the memory system. In some examples, the error control component830may be configured as or otherwise support a means for correcting, at the host system, a first error associated with the parameter data based on receiving the parameter data from the memory system. In some examples, the error control component830may be configured as or otherwise support a means for comparing the value of the counter to a threshold value associated with the ECC based on incrementing the counter, where correcting the first error associated with the parameter data is based on the value of the counter failing to satisfy the threshold value. In some examples, the error control component830may be configured as or otherwise support a means for generating, at the host system, a second ECC associated with the parameter data based at least in part on determining that the value of the counter satisfies the threshold value, where the second ECC is configured to correct a second quantity of errors that is greater than the first quantity of errors correctable by the ECC. In some examples, the error control component830may be configured as or otherwise support a means for identifying a second error associated with the parameter data based on receiving the parameter data upon the memory system being booted for the third occurrence.
In some examples, the error control component830may be configured as or otherwise support a means for correcting, at the host system, the second error associated with the parameter data based on identifying the second error and incrementing the counter. In some examples, the error control component830may be configured as or otherwise support a means for selecting a third ECC for correcting a third quantity of errors associated with the parameter data based on determining that the value of the counter satisfies the threshold value, where the third ECC is configured to correct a quantity of errors that is greater than the threshold value. In some examples, the error control component830may be configured as or otherwise support a means for selecting a fourth ECC for correcting a fourth quantity of errors associated with the parameter data based on determining that the value of the counter is above the first threshold value and below the second threshold value, where the fourth ECC is configured to correct a quantity of errors that is greater than the first threshold value. The transmission component835may be configured as or otherwise support a means for transmitting the ECC associated with the parameter data to the memory system based on generating the ECC associated with the parameter data. In some examples, the transmission component835may be configured as or otherwise support a means for transmitting corrected parameter data to the memory system based on correcting the first error associated with the parameter data. In some examples, the transmission component835may be configured as or otherwise support a means for transmitting the second ECC to the memory system based on generating the second ECC. In some examples, the counter component840may be configured as or otherwise support a means for incrementing a counter based on correcting the first error associated with the parameter data, where a value of the counter is associated with a quantity of corrected errors of the parameter data. In some examples, the counter component840may be configured as or otherwise support a means for incrementing the counter based on identifying the second error associated with the parameter data. In some examples, the counter component840may be configured as or otherwise support a means for determining that the value of the counter is above a first threshold value associated with a first quantity of errors correctable by the ECC and below a second threshold value associated with a second quantity of errors correctable by the ECC based on incrementing the counter. In some examples, the determination component845may be configured as or otherwise support a means for determining that the value of the counter satisfies a threshold value associated with a first quantity of errors correctable by the ECC based on incrementing the counter. In some examples, the increasing component850may be configured as or otherwise support a means for increasing the threshold value based on generating the second ECC for correcting the second quantity of errors associated with the parameter data.
In some examples, the increasing component850may be configured as or otherwise support a means for increasing the threshold value based on selecting the third ECC for correcting the third quantity of errors associated with the parameter data. In some examples, the threshold value is based on a quantity of errors that are correctable by the ECC. FIG.9shows a flowchart illustrating a method900that supports parameter table protection for a memory system in accordance with examples as disclosed herein. The operations of method900may be implemented by a memory system or its components as described herein. For example, the operations of method900may be performed by a memory system as described with reference toFIGS.1through7. In some examples, a memory system may execute a set of instructions to control the functional elements of the device to perform the described functions. Additionally or alternatively, the memory system may perform aspects of the described functions using special-purpose hardware. At905, the method may include identifying, upon booting a memory system for a first occurrence, parameter data for operating the memory system stored to a non-volatile memory device of the memory system. The operations of905may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of905may be performed by an identification component725as described with reference toFIG.7. At910, the method may include generating, at the memory system, an ECC associated with the parameter data based on identifying the parameter data stored to the non-volatile memory device of the memory system. The operations of910may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of910may be performed by a generation component730as described with reference toFIG.7. At915, the method may include identifying, upon booting the memory system for a second occurrence, a first error associated with the parameter data using the ECC. The operations of915may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of915may be performed by an identification component725as described with reference toFIG.7. At920, the method may include correcting, at the memory system, the first error associated with the parameter data based on identifying the first error. The operations of920may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of920may be performed by an error control component735as described with reference toFIG.7. In some examples, an apparatus as described herein may perform a method or methods, such as the method900. 
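For illustration only, the four steps of the method900might be exercised end to end as in the following sketch, which reuses the illustrative position-XOR code from earlier and assumes at most a single flipped bit; it is a hypothetical example, not the claimed implementation.

    def ecc(data):
        """XOR of the 1-based positions of set bits: a single flip at 0-based
        position k shifts this value by exactly k + 1."""
        out = 0
        for i, byte in enumerate(data):
            for b in range(8):
                if (byte >> b) & 1:
                    out ^= i * 8 + b + 1
        return out

    # 905/910: first boot - identify the parameter data, generate the code.
    parameter_data = bytearray(b"\xde\xad\xbe\xef")
    stored = ecc(parameter_data)

    # 915: second boot - a flipped bit yields a nonzero syndrome.
    parameter_data[2] ^= 0b1000_0000
    syndrome = ecc(parameter_data) ^ stored

    # 920: correct the identified error (assuming a single flip).
    if syndrome:
        k = syndrome - 1
        parameter_data[k // 8] ^= 1 << (k % 8)
    assert bytes(parameter_data) == b"\xde\xad\xbe\xef"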
The apparatus may include features, circuitry, logic, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor), or any combination thereof for performing the following aspects of the present disclosure: Aspect 1: The apparatus, including features, circuitry, logic, means, or instructions, or any combination thereof for identifying, upon booting a memory system for a first occurrence, parameter data for operating the memory system stored to a non-volatile memory device of the memory system; generating, at the memory system, an ECC associated with the parameter data based on identifying the parameter data stored to the non-volatile memory device of the memory system; identifying, upon booting the memory system for a second occurrence, a first error associated with the parameter data using the ECC; and correcting, at the memory system, the first error associated with the parameter data based on identifying the first error. Aspect 2: The apparatus of aspect 1, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for incrementing a counter based on correcting the first error associated with the parameter data, where a value of the counter is associated with a quantity of corrected errors of the parameter data. Aspect 3: The apparatus of aspect 2, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for comparing the value of the counter to a threshold value associated with the ECC based on incrementing the counter, where correcting the first error associated with the parameter data is based on the value of the counter failing to satisfy the threshold value. Aspect 4: The apparatus of aspect 3, where the threshold value is based on a quantity of errors that are correctable by the ECC. Aspect 5: The apparatus of any of aspects 2 through 4, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for identifying, upon booting the memory system for a third occurrence, a second error associated with the parameter data using the generated ECC; incrementing the counter based on identifying the second error associated with the parameter data; and correcting, at the memory system, the second error associated with the parameter data based on identifying the second error associated with the parameter data and incrementing the counter. Aspect 6: The apparatus of aspect 5, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for determining that the value of the counter satisfies a threshold value associated with a first quantity of errors correctable by the ECC based on incrementing the counter and selecting a third ECC for correcting a third quantity of errors associated with the parameter data based on determining that the value of the counter satisfies the threshold value, where the third ECC is configured to correct a quantity of errors that is greater than the threshold value. Aspect 7: The apparatus of aspect 6, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for increasing the threshold value based on selecting the third ECC for correcting the third quantity of errors associated with the parameter data.
Aspect 8: The apparatus of any of aspects 5 through 7, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for determining that the value of the counter is above a first threshold value associated with a first quantity of errors correctable by the ECC and below a second threshold value associated with a second quantity of errors correctable by the ECC based on incrementing the counter and selecting a fourth ECC for correcting a fourth quantity of errors associated with the parameter data based on determining that the value of the counter is above the first threshold value and below the second threshold value, where the fourth ECC is configured to correct a quantity of errors that is greater than the first threshold value. Aspect 9: The apparatus of any of aspects 2 through 8, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for determining that the value of the counter satisfies a threshold value associated with a first quantity of errors correctable by the ECC based on incrementing the counter and generating, at the memory system, a second ECC associated with the parameter data based on determining that the value of the counter satisfies the threshold value, where the second ECC is configured to correct a second quantity of errors greater than the first quantity of errors correctable by the ECC. Aspect 10: The apparatus of aspect 9, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for replacing the ECC with the second ECC based on generating the second ECC associated with the parameter data. Aspect 11: The apparatus of any of aspects 9 through 10, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for increasing the threshold value based on generating the second ECC for correcting the second quantity of errors associated with the parameter data. Aspect 12: The apparatus of any of aspects 1 through 11, where the parameter data is stored to a first portion of the non-volatile memory device of the memory system, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for storing the corrected parameter data to a second portion of the non-volatile memory device of the memory system that is different than the first portion of the non-volatile memory device of the memory system. FIG.10shows a flowchart illustrating a method1000that supports parameter table protection for a memory system in accordance with examples as disclosed herein. The operations of method1000may be implemented by a host system or its components as described herein. For example, the operations of method1000may be performed by a host system as described with reference toFIGS.1through6and8. In some examples, a host system may execute a set of instructions to control the functional elements of the device to perform the described functions. Additionally or alternatively, the host system may perform aspects of the described functions using special-purpose hardware. At1005, the method may include receiving, at a host system, parameter data for operating a memory system based on the memory system being booted for a first occurrence. The operations of1005may be performed in accordance with examples as disclosed herein.
In some examples, aspects of the operations of1005may be performed by a reception component825as described with reference toFIG.8. At1010, the method may include generating, at the host system, an ECC associated with the parameter data based on receiving the parameter data from the memory system. The operations of1010may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1010may be performed by an error control component830as described with reference toFIG.8. At1015, the method may include transmitting the ECC associated with the parameter data to the memory system based on generating the ECC associated with the parameter data. The operations of1015may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1015may be performed by a transmission component835as described with reference toFIG.8. At1020, the method may include receiving, at the host system, the parameter data from the memory system based on the memory system being booted for a second occurrence and after transmitting the ECC associated with the parameter data to the memory system. The operations of1020may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1020may be performed by a reception component825as described with reference toFIG.8. At1025, the method may include correcting, at the host system, a first error associated with the parameter data based on receiving the parameter data from the memory system. The operations of1025may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1025may be performed by an error control component830as described with reference toFIG.8. At1030, the method may include transmitting corrected parameter data to the memory system based on correcting the first error associated with the parameter data. The operations of1030may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1030may be performed by a transmission component835as described with reference toFIG.8. In some examples, an apparatus as described herein may perform a method or methods, such as the method1000. 
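To make the flow of method 1000 concrete, the following Python sketch models operations 1005 through 1030 under stated assumptions: the HostEccManager class, the memory-system stub, and all method names are hypothetical illustrations rather than part of the disclosure, and the (3, 1) repetition code is a toy stand-in for whatever production ECC (e.g., a BCH or Reed-Solomon code) an implementation would actually use.

    # Hedged sketch of the host-side flow of method 1000 (operations
    # 1005-1030). All names below are hypothetical; the repetition code
    # is a toy stand-in for a production ECC.

    def generate_ecc(data: bytes) -> bytes:
        """Toy single-error-correcting code: keep two redundant copies."""
        return data + data

    def correct(data: bytes, ecc: bytes) -> tuple[bytes, int]:
        """Majority-vote each byte across the three copies; return the
        corrected data and how many bytes were repaired."""
        n = len(data)
        copy1, copy2 = ecc[:n], ecc[n:]
        out = bytearray(n)
        repaired = 0
        for i in range(n):
            votes = [data[i], copy1[i], copy2[i]]
            best = max(set(votes), key=votes.count)
            repaired += best != data[i]
            out[i] = best
        return bytes(out), repaired

    class FakeMemorySystem:
        """Minimal stub standing in for the memory system."""
        def __init__(self, parameter_data: bytes):
            self.parameter_data = bytearray(parameter_data)
            self.ecc = b""
        def read_parameter_data(self) -> bytes:
            return bytes(self.parameter_data)
        def write_parameter_data(self, data: bytes) -> None:
            self.parameter_data = bytearray(data)
        def store_ecc(self, ecc: bytes) -> None:
            self.ecc = ecc

    class HostEccManager:
        """Hypothetical host-side helper mirroring operations 1005-1030."""
        def __init__(self, memory_system):
            self.ms = memory_system
        def on_first_boot(self) -> None:
            data = self.ms.read_parameter_data()   # 1005: receive the data
            ecc = generate_ecc(data)               # 1010: generate the ECC
            self.ms.store_ecc(ecc)                 # 1015: transmit the ECC
        def on_later_boot(self) -> None:
            data = self.ms.read_parameter_data()   # 1020: receive again
            fixed, repaired = correct(data, self.ms.ecc)  # 1025: correct
            if repaired:
                self.ms.write_parameter_data(fixed)       # 1030: write back

    ms = FakeMemorySystem(b"\x12\x34\x56\x78")
    host = HostEccManager(ms)
    host.on_first_boot()
    ms.parameter_data[1] ^= 0xFF        # simulate a bit-flipped byte
    host.on_later_boot()
    assert ms.read_parameter_data() == b"\x12\x34\x56\x78"

In a fuller version, correcting an error would also increment the counter of aspects 2 and 14, and the counter would be compared against a threshold value tied to the correction capability of the ECC before a stronger code is generated, as the aspects below describe.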
The apparatus may include features, circuitry, logic, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor), or any combination thereof for performing the following aspects of the present disclosure: Aspect 13: The apparatus, including features, circuitry, logic, means, or instructions, or any combination thereof for receiving, at a host system, parameter data for operating a memory system based on the memory system being booted for a first occurrence; generating, at the host system, an ECC associated with the parameter data based on receiving the parameter data from the memory system; transmitting the ECC associated with the parameter data to the memory system based on generating the ECC associated with the parameter data; receiving, at the host system, the parameter data from the memory system based on the memory system being booted for a second occurrence and after transmitting the ECC associated with the parameter data to the memory system; correcting, at the host system, a first error associated with the parameter data based on receiving the parameter data from the memory system; and transmitting corrected parameter data to the memory system based on correcting the first error associated with the parameter data. Aspect 14: The apparatus of aspect 13, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for incrementing a counter based on correcting the first error associated with the parameter data, where a value of the counter is associated with a quantity of corrected errors of the parameter data. Aspect 15: The apparatus of aspect 14, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for determining that the value of the counter satisfies a threshold value associated with a first quantity of errors correctable by the ECC based on incrementing the counter; generating, at the host system, a second ECC associated with the parameter data based at least in part on determining that the value of the counter satisfies the threshold value, where the second ECC is configured to correct a second quantity of errors that is greater than the first quantity of errors correctable by the ECC; and transmitting the second ECC to the memory system based on generating the second ECC. Aspect 16: The apparatus of aspect 15, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for increasing the threshold value based on generating the second ECC for correcting the second quantity of errors associated with the parameter data. Aspect 17: The apparatus of any of aspects 14 through 16, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for comparing the value of the counter to a threshold value associated with the ECC based at least in part on incrementing the counter, where correcting the first error associated with the parameter data is based on the value of the counter failing to satisfy the threshold value. Aspect 18: The apparatus of aspect 17, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof, where the threshold value is based on a quantity of errors that are correctable by the ECC.
Aspect 19: The apparatus of any of aspects 14 through 18, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for receiving, at the host system, the parameter data from the memory system based on the memory system being booted for a third occurrence; identifying a second error associated with the parameter data based on receiving the parameter data upon the memory system being booted for the third occurrence; incrementing the counter based on identifying the second error associated with the parameter data; and correcting, at the host system, the second error associated with the parameter data based on identifying the second error and incrementing the counter. Aspect 20: The apparatus of aspect 19, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for determining that the value of the counter satisfies a threshold value associated with a first quantity of errors correctable by the ECC based on incrementing the counter and selecting a third ECC for correcting a third quantity of errors associated with the parameter data based on determining that the value of the counter satisfies the threshold value, where the third ECC is configured to correct a quantity of errors that is greater than the threshold value. Aspect 21: The apparatus of aspect 20, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for increasing the threshold value based on selecting the third ECC for correcting the third quantity of errors associated with the parameter data. Aspect 22: The apparatus of any of aspects 19 through 21, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for determining that the value of the counter is above a first threshold value associated with a first quantity of errors correctable by the ECC and below a second threshold value associated with a second quantity of errors correctable by the ECC based on incrementing the counter and selecting a fourth ECC for correcting a fourth quantity of errors associated with the parameter data based on determining that the value of the counter is above the first threshold value and below the second threshold value, where the fourth ECC is configured to correct a quantity of errors that is greater than the first threshold value. It should be noted that the methods described above describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Further, portions from two or more of the methods may be combined. Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Some drawings may illustrate signals as a single signal; however, the signal may represent a bus of signals, where the bus may have a variety of bit widths. The terms “electronic communication,” “conductive contact,” “connected,” and “coupled” may refer to a relationship between components that supports the flow of signals between the components. 
Components are considered in electronic communication with (or in conductive contact with or connected with or coupled with) one another if there is any conductive path between the components that can, at any time, support the flow of signals between the components. At any given time, the conductive path between components that are in electronic communication with each other (or in conductive contact with or connected with or coupled with) may be an open circuit or a closed circuit based on the operation of the device that includes the connected components. The conductive path between connected components may be a direct conductive path between the components or the conductive path between connected components may be an indirect conductive path that may include intermediate components, such as switches, transistors, or other components. In some examples, the flow of signals between the connected components may be interrupted for a time, for example, using one or more intermediate components such as switches or transistors. The term “coupling” refers to a condition of moving from an open-circuit relationship between components in which signals are not presently capable of being communicated between the components over a conductive path to a closed-circuit relationship between components in which signals are capable of being communicated between components over the conductive path. If a component, such as a controller, couples other components together, the component initiates a change that allows signals to flow between the other components over a conductive path that previously did not permit signals to flow. The term “isolated” refers to a relationship between components in which signals are not presently capable of flowing between the components. Components are isolated from each other if there is an open circuit between them. For example, two components separated by a switch that is positioned between the components are isolated from each other if the switch is open. If a controller isolates two components, the controller effects a change that prevents signals from flowing between the components using a conductive path that previously permitted signals to flow. The terms “if,” “when,” “based on,” or “based at least in part on” may be used interchangeably. In some examples, if the terms “if,” “when,” “based on,” or “based at least in part on” are used to describe a conditional action, a conditional process, or connection between portions of a process, the terms may be interchangeable. The devices discussed herein, including a memory array, may be formed on a semiconductor substrate, such as silicon, germanium, silicon-germanium alloy, gallium arsenide, gallium nitride, etc. In some examples, the substrate is a semiconductor wafer. In some other examples, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOP), or epitaxial layers of semiconductor materials on another substrate. The conductivity of the substrate, or sub-regions of the substrate, may be controlled through doping using various chemical species including, but not limited to, phosphorous, boron, or arsenic. Doping may be performed during the initial formation or growth of the substrate, by ion-implantation, or by any other doping means. A switching component or a transistor discussed herein may represent a field-effect transistor (FET) and comprise a three-terminal device including a source, drain, and gate.
The terminals may be connected to other electronic elements through conductive materials, e.g., metals. The source and drain may be conductive and may comprise a heavily-doped, e.g., degenerate, semiconductor region. The source and drain may be separated by a lightly-doped semiconductor region or channel. If the channel is n-type (i.e., majority carriers are electrons), then the FET may be referred to as an n-type FET. If the channel is p-type (i.e., majority carriers are holes), then the FET may be referred to as a p-type FET. The channel may be capped by an insulating gate oxide. The channel conductivity may be controlled by applying a voltage to the gate. For example, applying a positive voltage or negative voltage to an n-type FET or a p-type FET, respectively, may result in the channel becoming conductive. A transistor may be “on” or “activated” if a voltage greater than or equal to the transistor's threshold voltage is applied to the transistor gate. The transistor may be “off” or “deactivated” if a voltage less than the transistor's threshold voltage is applied to the transistor gate. The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “exemplary” used herein means “serving as an example, instance, or illustration” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details to provide an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form to avoid obscuring the concepts of the described examples. In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a hyphen and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label. The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. For example, the various illustrative blocks and components described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein.
A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). As used herein, including in the claims, “or” as used in a list of items (for example, a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.” Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, non-transitory computer-readable media can comprise RAM, ROM, electrically erasable programmable read-only memory (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media. The description herein is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein. | 115,930 |
11861192 | DETAILED DESCRIPTION FIG.1is a block diagram of a storage system according to an example embodiment. Referring toFIG.1, a storage system10according to an example embodiment may include a host11and a storage device100. The storage system10may be a computing system, which is configured to process a variety of information, such as a personal computer (PC), a notebook, a laptop, a server, a workstation, a tablet PC, a smartphone, a digital camera, a black box, etc. The host11may control overall operations of the storage system10. For example, the host11may store data in the storage device100or may read data stored in the storage device100. Under control of the host11, the storage device100may store data or may send the stored data to the host11. The storage device100may include a storage controller110and a non-volatile memory device120. The non-volatile memory device120may include a plurality of memory chips MC, which may be, e.g., a plurality of flash memory chips. Each of the plurality of memory chips MC may store data. The storage controller110may store data in the non-volatile memory device120or may read data stored in the non-volatile memory device120. The non-volatile memory device120may operate under control of the storage controller110. The non-volatile memory device120may be, e.g., a NAND flash memory device or one of various storage devices, which may retain data stored therein even when power is turned off, such as a PRAM, an MRAM, an RRAM, and an FRAM. The storage controller110may include a status checker111, a redirection device112, and a monitoring device113. The status checker111may check a status of the non-volatile memory device120and may generate a status information set. The status information set may include a plurality of status information SI. The status information SI may be information indicating a status of a flash memory region (hereinafter referred to as a “memory region”) (e.g., a flash memory based memory chip MC or a flash memory block in the memory chip MC). The status information SI will be described in more detail with reference toFIGS.5A and5B. The status checker111may determine whether each of a plurality of memory regions satisfies available memory conditions, based on the plurality of status information SI. Each of the plurality of memory regions may be a flash memory region. The available memory conditions may be criteria for determining whether a corresponding memory region is available. The available memory conditions will be described in more detail with reference toFIG.9. The status checker111may determine whether each of the plurality of memory regions satisfies the available memory conditions, based on the plurality of status information SI, and may provide a result of the determination to the host11or the redirection device112. The redirection device112may be a device that redirects a write operation according to a write request. Redirection may mean changing the memory region where a write operation will be performed. The redirection device112may communicate with the status checker111. For example, when the status checker111determines that a first memory region corresponding to a write request received from the host11is not available, the redirection device112may, under control of the status checker111, perform the write operation in a second memory region instead of the first memory region. The redirection device112may output redirection result information to the host11.
Workload imbalance between memory regions in the non-volatile memory device120may be reduced, e.g., balanced, by the redirection of the redirection device112. The redirection device112will be described in more detail with reference toFIGS.7and10. The monitoring device113may be a device that monitors a status of the non-volatile memory device120. The monitoring device113may communicate with the status checker111. For example, without a separate request from the host11, the monitoring device113may monitor statuses of the memory regions of the non-volatile memory device120every reference time, i.e., periodically, and may update the plurality of status information SI of the status checker111. The reference time may indicate a period by which the monitoring device113performs monitoring. The status checker111may output monitoring information to the host11based on the plurality of status information SI thus updated. Based on the monitoring information, the host11may refrain from issuing a write request for an unavailable memory region. As such, workload imbalance between memory regions in the non-volatile memory device120may be reduced, e.g., balanced. The monitoring device113will be described in more detail with reference toFIGS.8and11. As described above, according to an example embodiment, the storage controller110may redirect a write request for an unavailable memory region to another memory region. Also, the storage controller110may monitor a status of the non-volatile memory device120periodically without a separate request from the host11, and may provide monitoring information to the host11. FIG.2is a block diagram illustrating a storage controller ofFIG.1in detail, according to an example embodiment. Referring toFIGS.1and2, the storage controller110may communicate with the host11and the non-volatile memory device120. The storage controller110may include a processor114, a static random access memory (SRAM)115, a read only memory (ROM)116, an error correcting code (ECC) engine117, a host interface circuit118, a status management device, and a non-volatile memory interface circuit119. The processor114may control overall operations of the storage controller110. The SRAM115may be used as a buffer memory, a cache memory, or a working memory of the storage controller110. The ROM116may store a variety of information that is used for the storage controller110to operate, e.g., in the form of firmware. The ECC engine117may detect and correct an error of data read from the non-volatile memory device120. In general, as the number of operations (such as write operations and erase operations) performed in the non-volatile memory device120increases, an error level of the non-volatile memory device120may increase. The ECC engine117may have an error correction capacity of a given level. In the case where an error of data read from the non-volatile memory device120exceeds an error correction capacity of the ECC engine117, the error of the data read from the non-volatile memory device120may not be corrected. To minimize the situation where an error is not corrected by the ECC engine117, the status management device may distribute workloads of memory regions in the non-volatile memory device120. The host interface circuit118may provide for communications between the storage controller110and the host11.
The host interface circuit118may be implemented based on, e.g., at least one of various interfaces such as a SATA (Serial ATA) interface, a PCIe (Peripheral Component Interconnect Express) interface, a SAS (Serial Attached SCSI) interface, an NVMe (Nonvolatile Memory express) interface, and an UFS (Universal Flash Storage) interface. The host interface circuit118may output, to the host11, information indicating that a first memory region of a plurality of memory regions in the non-volatile memory device120does not satisfy available memory conditions. The host interface circuit118may output, to the host11, information indicating that a second memory region of a plurality of memory regions in the non-volatile memory device120satisfies the available memory conditions. The status management device may include the status checker111, the redirection device112, and the monitoring device113described with reference toFIG.1. The status management device may be provided in the form of software, hardware, or a combination thereof. In the case where the status management device is provided in the form of software, the status management device may be stored in the SRAM115and may be driven by the processor114. The host interface circuit118may receive redirection result information from the redirection device112. The host interface circuit118may output the redirection result information to the host11. The host interface circuit118may receive monitoring information from the status checker111or the monitoring device113. The host interface circuit118may output the monitoring information to the host11. The non-volatile memory interface circuit119may provide for communications between the storage controller110and the non-volatile memory device120. The non-volatile memory interface circuit119may be implemented based on, e.g., a NAND interface. The status checker111, the redirection device112, the monitoring device113, the processor114, the SRAM115, the ROM116, the ECC engine117, the host interface circuit118, and the non-volatile memory interface circuit119may be interconnected through a bus. FIG.3Ais a block diagram illustrating a memory chip MC ofFIG.1in detail, according to an example embodiment.FIG.3Bis a diagram illustrating one memory block BLK of a plurality of memory blocks in a memory cell array121inFIG.3A. Referring toFIGS.1,3A, and3B, the non-volatile memory device120may include a plurality of memory chips MC. The memory chip MC may communicate with the storage controller110. For example, the memory chip MC may receive an address ADD, a command CMD, and a control signal CTR from the storage controller110. The memory chip MC may exchange data with the storage controller110. The memory chip MC may include the memory cell array121, an address decoder122, a control logic and voltage generating circuit123, a page buffer124, and an input/output (I/O) circuit125. The memory cell array121may include a plurality of memory blocks, e.g., a plurality of flash memory blocks. A structure of each of the plurality of memory blocks may be similar to a structure of the memory block BLK illustrated inFIG.3B. The memory block BLK illustrated inFIG.3Bmay correspond to, e.g., a physical erase unit of the non-volatile memory device120, or a page unit, a word line unit, a sub-block unit, etc. Referring toFIG.3B, the memory block BLK may include a plurality of cell strings CS11, CS12, CS21, and CS22. The plurality of cell strings CS11, CS12, CS21, and CS22may be arranged in a row direction and a column direction. 
For brevity of drawing, four cell strings CS11, CS12, CS21, and CS22are illustrated inFIG.3B, but the number of cell strings may be increased or decreased in the row direction or the column direction. Cell strings placed at the same column from among the plurality of cell strings CS11, CS12, CS21, and CS22may be connected with the same bit line. For example, the cell strings CS11and CS21may be connected with a first bit line BL1, and the cell strings CS12and CS22may be connected with a second bit line BL2. Each of the plurality of cell strings CS11, CS12, CS21, and CS22may include a plurality of cell transistors. In each cell string, the plurality of cell transistors may be implemented with a charge trap flash (CTF) memory cell. The plurality of cell transistors may be stacked in a height direction that is a direction perpendicular to a plane (e.g., a semiconductor substrate (not illustrated)) defined by the row direction and the column direction. In each cell string, the plurality of cell transistors may be connected in series between a corresponding bit line (e.g., BL1or BL2) and a common source line CSL. For example, in each cell string, the plurality of cell transistors may include string selection transistors SSTa and SSTb, dummy memory cells DMC1and DMC2, memory cells MC1to MC4, and ground selection transistors GSTa and GSTb. The serially-connected string selection transistors SSTa and SSTb may be provided between the serially-connected memory cells MC1to MC4and a corresponding bit line (e.g., BL1and BL2). The serially-connected ground selection transistors GSTa and GSTb may be provided between the serially-connected memory cells MC1to MC4and the common source line CSL. The second dummy memory cell DMC2may be provided between the serially-connected string selection transistors SSTa and SSTb and the serially-connected memory cells MC1to MC4, and the first dummy memory cell DMC1may be provided between the serially-connected memory cells MC1to MC4and the serially-connected ground selection transistors GSTa and GSTb. In the plurality of cell strings CS11, CS12, CS21, and CS22, memory cells placed at the same height from among the memory cells MC1to MC4may share the same word line. For example, the first memory cells MC1of the plurality of cell strings CS11, CS12, CS21, and CS22may be placed at the same height from the substrate (not illustrated) and may share a first word line WL1. The second memory cells MC2of the plurality of cell strings CS11, CS12, CS21, and CS22may be placed at the same height from the substrate (not illustrated) and may share a second word line WL2. Likewise, the third memory cells MC3of the plurality of cell strings CS11, CS12, CS21, and CS22may be placed at the same height from the substrate (not illustrated) and may share a third word line WL3, and the fourth memory cells MC4of the plurality of cell strings CS11, CS12, CS21, and CS22may be placed at the same height from the substrate (not illustrated) and may share a fourth word line WL4. Dummy memory cells placed at the same height from among the dummy memory cells DMC1and DMC2of the plurality of cell strings CS11, CS12, CS21, and CS22may share the same dummy word line. For example, the first dummy memory cells DMC1of the plurality of cell strings CS11, CS12, CS21, and CS22may share a first dummy word line DWL1, and the second dummy memory cells DMC2of the plurality of cell strings CS11, CS12, CS21, and CS22may share a second dummy word line DWL2. 
In the plurality of cell strings CS11, CS12, CS21, and CS22, string selection transistors placed at the same height and in the same row from among the string selection transistors SSTa and SSTb of the plurality of cell strings CS11, CS12, CS21, and CS22may be connected with the same string selection line. For example, the string selection transistors SSTb of the cell strings CS11and CS12may share a string selection line SSL1b, and the string selection transistors SSTa of the cell strings CS11and CS12may share a string selection line SSL1a. The string selection transistors SSTb of the cell strings CS21and CS22may share a string selection line SSL2b, and the string selection transistors SSTa of the cell strings CS21and CS22may share a string selection line SSL2a. Ground selection transistors placed at the same height and in the same row from among the ground selection transistors GSTa and GSTb of the plurality of cell strings CS11, CS12, CS21, and CS22may share the same ground selection line. For example, the ground selection transistors GSTb of the cell strings CS11and CS12may be connected with a ground selection line GSL1b, and the ground selection transistors GSTa of the cell strings CS11and CS12may be connected with a ground selection line GSL1a. The ground selection transistors GSTb of the cell strings CS21and CS22may be connected with a ground selection line GSL2b, and the ground selection transistors GSTa of the cell strings CS21and CS22may be connected with a ground selection line GSL2a. The memory block BLK illustrated inFIG.3Bis an example, and it will be understood that the number of cell strings may be increased or decreased, and the number of rows of cell strings and the number of columns of cell strings may be increased or decreased depending on the number of cell strings. Also, in the memory block BLK, the number of cell transistors may be increased or decreased, the height of the memory block BLK may be increased or decreased depending on the number of cell transistors, and the number of lines connected with the cell transistors may be increased or decreased depending on the number of cell transistors. A memory region may correspond to one of the plurality of memory chips MC in one storage device100, may correspond to one of the plurality of memory blocks BLK in one memory chip MC, or may correspond to one page, one word line, one sub memory block, etc., in the memory block BLK. A memory region may refer to any area that is physically separated to store data in the non-volatile memory device120. The status checker111of the storage controller110may determine whether each of a plurality of memory regions satisfies the available memory conditions, based on the plurality of status information SI. Referring again toFIGS.1and3A, the address decoder122may receive the address ADD from the storage controller110. The address decoder122may be connected with the memory cell array121through string selection lines SSL, word lines WL, and ground selection lines GSL. The address decoder122may decode the address ADD and may control voltages to be applied to the string selection lines SSL, the word lines WL, and the ground selection lines GSL based on a decoding result. The control logic and voltage generating circuit123may receive the command CMD and the control signal CTR from the storage controller110. The control logic and voltage generating circuit123may control the address decoder122, the page buffer124, and the I/O circuit125based on the command CMD and the control signal CTR.
The control logic and voltage generating circuit123may generate various voltages (e.g., read voltages, program voltages, verification voltages, and erase voltages) necessary for the non-volatile memory device120to operate. The page buffer124may be connected with the memory cell array121through bit lines BL. The page buffer124may receive data from the I/O circuit125through data lines DL. The page buffer124may control the bit lines BL based on the received data such that data are stored in the memory cell array121. The page buffer124may read data stored in the memory cell array121by sensing voltages of the bit lines BL. The page buffer124may provide the read data to the I/O circuit125through the data lines DL. The I/O circuit125may be connected with the page buffer124through the data lines DL. The I/O circuit125may provide data received from the storage controller110to the page buffer124through the data lines DL. The I/O circuit125may output the data received through the data lines DL to the storage controller110. The address ADD, the command CMD, the control signal CTR, and the data described with reference toFIG.3Amay be transmitted/received through the non-volatile memory interface circuit119of the storage controller110. FIG.4is a block diagram illustrating a storage device operating in a multi-tenancy environment, according to an example embodiment. Referring toFIG.4, the storage device100may communicate with the host11. The storage device100may include the storage controller110and a plurality of memory regions MR1to MRN. For example, the plurality of memory regions MR1to MRN may be physically separated memory regions in the non-volatile memory device120ofFIG.1. The storage controller110may generate the status information set. The status information set may include the plurality of status information SI, which may be, e.g., SI1to SIN. The plurality of status information SI1to SIN may correspond to the plurality of memory regions MR1to MRN, respectively. For example, the first status information SI1may indicate a status of the first memory region MR1, the second status information SI2may indicate a status of the second memory region MR2, etc. The storage device100may operate in the multi-tenancy environment. The multi-tenancy environment may mean an environment in which the same service is provided to multiple users. For example, the storage device100may be used as a storage medium of a server for cloud computing. In the multi-tenancy environment, to reduce the interference between users and to reinforce security, the users may use different memory regions. For example, user A may use the first memory region MR1, user B may use the second memory region MR2, and user C may use the third memory region MR3. In general, an excessive workload may be focused on a specific memory region depending on the inclination of the user, a size of data, etc. For example, in the case where user A frequently uploads or downloads a large amount of data, data processing in the first memory region MR1may fail, the first memory region MR1may be worn out, or data processing in the first memory region MR1may be delayed. According to an example embodiment, the storage device100may allocate an additional memory region (e.g., the fourth memory region MR4) to user A, or may allocate the second memory region MR2, which is allocated to user B but has a low frequency of use, to both user A and user B. 
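As an illustration only, the following Python sketch shows one way such tenant-to-region allocation might be arranged; the function, the usage-ratio thresholds, and the tenant names are assumptions introduced for this example, not part of the disclosure.

    # Hypothetical illustration of multi-tenant region allocation.
    # Thresholds and names are assumptions, not part of the disclosure.
    USAGE_HIGH = 0.80   # a tenant is considered overloaded above this ratio
    USAGE_LOW = 0.20    # a region this lightly used may be shared

    def rebalance(allocations: dict[str, list[str]],
                  usage_ratio: dict[str, float],
                  free_regions: list[str]) -> None:
        """Grant a free region to an overloaded tenant, or share a
        lightly used region, mirroring the user A/B example above."""
        for tenant, regions in allocations.items():
            load = max(usage_ratio[r] for r in regions)
            if load < USAGE_HIGH:
                continue
            if free_regions:                    # e.g., give MR4 to user A
                regions.append(free_regions.pop())
                continue
            # Otherwise share another tenant's lightly used region, e.g.,
            # allocate user B's MR2 to both user A and user B.
            for other, other_regions in allocations.items():
                if other == tenant:
                    continue
                for r in other_regions:
                    if usage_ratio[r] < USAGE_LOW and r not in regions:
                        regions.append(r)
                        return              # one share per pass, for brevity

    # Example mirroring FIG. 4: users A, B, C on MR1, MR2, MR3; MR4 free.
    alloc = {"A": ["MR1"], "B": ["MR2"], "C": ["MR3"]}
    usage = {"MR1": 0.95, "MR2": 0.10, "MR3": 0.40, "MR4": 0.0}
    rebalance(alloc, usage, free_regions=["MR4"])
    print(alloc)  # {'A': ['MR1', 'MR4'], 'B': ['MR2'], 'C': ['MR3']}

Under the FIG. 4 example, this policy would give the free fourth memory region MR4 to the heavily loaded user A, or, with no free region left, would let user A share user B's lightly used second memory region MR2.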
In the case where the storage device100operates in a single-tenancy environment in which all the memory regions MR1to MRN are allocated to a specific user, data for the specific user may be uniformly distributed into the plurality of memory regions MR1to MRN. On the other hand, in the case of the multi-tenancy environment, as the plurality of memory regions MR1to MRN are exclusively allocated to multiple users, workload imbalance may occur. The multi-tenancy environment is described by way of an example with reference toFIG.4for a case in which an unbalanced workload may occur, but the storage device100may also be implemented to operate in the single-tenancy environment or in a personal computing device, and may provide workload balancing, efficient processing, and/or security of associated data, and may allow data to be evenly stored in a plurality of memory regions. FIGS.5A and5Bare diagrams for describing status information of a memory region, according to an example embodiment.FIG.5Ais a diagram for describing status information about a specific memory region by way of example.FIG.5Bis a diagram for describing a plurality of status information respectively corresponding to a plurality of memory regions by way of example. Status information for a specific memory region will be described with reference toFIG.5A. The status information may be used to determine whether a corresponding memory region is available. The case where a memory region is unavailable may be determined by multiple factors. Accordingly, the status information may include multiple sub conditions corresponding to multiple factors. The detailed contents of the status information will be described with reference to an index, contents, and a value. The status information may include information about an average erase count. For example, referring to index 1, the status information may include a value corresponding to an average erase count. The average erase count may indicate an average of values obtained by counting erase operations each performed on a plurality of memory cells in the corresponding memory region. When a program operation and an erase operation are excessively performed in the corresponding memory region, the corresponding memory region may be worn out, and thus, the reliability of data may be reduced. To address this issue, a storage controller may determine a memory region having a high average erase count as unavailable. The status information may include information about a valid page count (VPC) ratio. For example, referring to index 2, the status information may include a value corresponding to a VPC ratio. The VPC ratio may indicate a ratio of used pages to all pages in a corresponding memory region. A page may be a unit by which the page buffer124ofFIG.3Aprocesses data, and may correspond to memory cells of the memory block BLK ofFIG.3B, which are connected with one word line. When a VPC ratio of a corresponding memory region is 100%, the memory region may be in a state where a capacity is full (i.e., the corresponding memory region is unavailable). The status information may include information about a number of bad blocks. For example, referring to index 3, the status information may include a value corresponding to the number of bad blocks. A bad block may be a memory block that is initially damaged in a manufacturing phase or may be a memory block that is damaged later, e.g., after iterations of program and erase operations.
A bad block may be a memory block incapable of normally storing data. In the case where a memory region corresponds to one memory chip, a bad block may correspond to a memory block; in the case where a memory region corresponds to a memory block, a bad block may correspond to a sub block in the memory block. The status information may include information about a write amplification factor (WAF). For example, referring to index 4, the status information may include a value corresponding to a WAF. A WAF may be a value obtained by dividing a size of data written in a memory region by a size of data requested from a host. Due to an additional operation such as a garbage collection operation, a storage device may write a larger amount of data than is requested from a host. As a size of the WAF becomes smaller, a workload of a storage device may decrease. In general, a size of the WAF is not less than “1”. The WAF may be used as one of factors for determining a workload of a memory region. The status information may include information about a memory usage ratio. For example, referring to index 5, the status information may include a value corresponding to a memory usage ratio. A memory usage ratio may indicate how much a corresponding memory region is used depending on a write operation, a read operation, a garbage collection operation, a defense code, etc. A defense code may mean an algorithm for suppressing data damage of a storage device or recovering damaged data. Because the VPC ratio indicates a data storage capacity of a memory region and the memory usage ratio indicates how much an operation associated with data processing is performed, the memory usage ratio is distinguishable from the VPC ratio. A state where a memory usage ratio is high may be referred to as a “busy state”, and a state where a memory usage ratio is low may be referred to as an “idle state”. As described above, according to an example embodiment, a storage controller may include a plurality of status information respectively corresponding to a plurality of memory regions. The plurality of status information may include one or more of the average erase count of the corresponding memory region, the VPC ratio of the corresponding memory region, the number of bad blocks of the corresponding memory region, the WAF of the corresponding memory region, and the memory usage ratio of the corresponding memory region. Referring toFIG.5B, the plurality of status information SI1to SI4respectively corresponding to the plurality of memory regions MR1to MR4are illustrated. The first status information SI1may indicate a status of the first memory region MR1. For example, the first status information SI1may indicate that the average erase count is 200, the VPC ratio is 100%, the number of bad blocks is 6, the WAF is 5, and the memory usage ratio is 65%. The second status information SI2may indicate a status of the second memory region MR2. For example, the second status information SI2may indicate that the average erase count is 300, the VPC ratio is 70%, the number of bad blocks is 10, the WAF is 2, and the memory usage ratio is 85%. The third status information SI3may indicate a status of the third memory region MR3. For example, the third status information SI3may indicate that the average erase count is 200, the VPC ratio is 80%, the number of bad blocks is 5, the WAF is 4, and the memory usage ratio is 30%. The fourth status information SI4may indicate a status of the fourth memory region MR4.
For example, the fourth status information SI4may indicate that the average erase count is 500, the VPC ratio is 70%, the number of bad blocks is 7, the WAF is 1, and the memory usage ratio is 90%. The storage controller110may determine whether the respective memory regions MR1to MR4are available, based on the plurality of status information SI1to SI4. For example, because the VPC ratio in the first status information SI1is 100%, the storage controller110may determine the first memory region MR1as unavailable. Because the number of bad blocks in the second status information SI2is more than the number of bad blocks corresponding to each of the remaining status information SI1, SI3, and SI4, the storage controller110may determine the second memory region MR2as unavailable. Because the average erase count in the fourth status information SI4is greater than the average erase count corresponding to each of the remaining status information SI1, SI2, and SI3, the storage controller110may determine the fourth memory region MR4as unavailable. The storage controller110may determine the third memory region MR3as available, based on the third status information SI3. However, the availability of the memory regions may be variously determined, and the available memory conditions will be described in detail with reference toFIG.9. FIGS.6A,6B, and6Care diagrams for describing a memory region, on which a workload is concentrated, according to an example embodiment. FIG.6Adescribes a memory region where the VPC ratio is concentrated. Referring toFIG.6A, the storage controller110may have the first to fourth status information SI1to SI4stored therein. The first to fourth status information SI1to SI4may indicate VPC ratios of the first to fourth memory regions MR1to MR4, respectively. In an example, the VPC ratios of the first to fourth memory regions MR1to MR4may be 100%, 50%, 90%, and 70%, respectively. The storage controller110may receive a write request for the first memory region MR1from a host. Because the VPC ratio of the first memory region MR1is 100%, a write operation in the first memory region MR1may fail. The storage controller110may thus redirect the write request based on the first to fourth status information SI1to SI4. FIG.6Bdescribes a memory region where the average erase count is concentrated. Referring toFIG.6B, the storage controller110may include the first to fourth status information SI1to SI4. The first to fourth status information SI1to SI4may indicate average erase counts of the first to fourth memory regions MR1to MR4, respectively. In an example, the average erase counts of the first to fourth memory regions MR1to MR4may be 100, 1000, 100, and 100, respectively. The storage controller110may receive a write request for the second memory region MR2from the host. Because the average erase count of the second memory region MR2is relatively greater than the average erase counts of the remaining memory regions MR1, MR3, and MR4, the second memory region MR2may wear out before the remaining memory regions MR1, MR3, and MR4. To prevent this issue, the storage controller110may thus redirect the write request based on the first to fourth status information SI1to SI4. FIG.6Cdescribes a memory region where the memory usage ratio is concentrated. Referring toFIG.6C, the storage controller110may include the first to fourth status information SI1to SI4. The first to fourth status information SI1to SI4may indicate memory usage ratios of the first to fourth memory regions MR1to MR4, respectively. 
In an example, the memory usage ratios of the first to fourth memory regions MR1to MR4may be 10%, 5%, 95%, and 5%, respectively. The storage controller110may receive a write request for the third memory region MR3from the host. Because the memory usage ratio of the third memory region MR3is high (e.g., as a defense code is running), a latency in the third memory region MR3may be longer than latencies of the remaining memory regions MR1, MR2, and MR4. As such, in the case where a write request is processed in the third memory region MR3, data processing may be delayed. To prevent this issue, the storage controller110may thus redirect the write request based on the first to fourth status information SI1to SI4. As described above, cases where data processing may fail, a memory region may be worn out, and data processing may be delayed are described with reference toFIGS.6A,6B, and6C by way of example. The present example embodiment may redirect a write request for an unavailable memory region, and may periodically monitor a status of a memory region and provide monitoring information to a host. FIG.7is a diagram for describing a method for redirecting a write request, according to an example embodiment. Referring toFIG.7, a storage controller may communicate with the host11and may include the status checker111and the redirection device112. The status checker111may receive a write request for the first memory region MR1from the host11. The status checker111may check a status of the non-volatile memory device120and may generate a status information set SIS. The status information set SIS may include the plurality of status information SI1to SIN. The plurality of status information SI1to SIN may correspond to the plurality of memory regions MR1to MRN, respectively. The status checker111may determine whether the first memory region MR1is available, based on the status information set SIS. In an example, the status checker111may determine that the first memory region MR1is unavailable. For example, the first status information SI1may indicate that a capacity of the first memory region MR1is fully occupied (i.e., that the VPC ratio of the first memory region MR1is 100%). The status checker111may generate redirection information indicating that the second memory region MR2is selected instead of the first memory region MR1. The status checker111may output the redirection information to the redirection device112. The redirection device112may communicate with the non-volatile memory device120. The redirection device112may perform a write operation on the second memory region MR2, based on the redirection information from the status checker111. That is, the redirection device112may store data, which are requested from the host11so as to be written in the first memory region MR1, in the second memory region MR2, not the first memory region MR1. The redirection device112may output redirection result information RRI to the host11. The redirection result information RRI may be information indicating that write data according to the write request is processed in the second memory region MR2. For example, the redirection result information RRI may include, e.g., one or more of a logical block address (LBA) corresponding to a redirected memory region of the plurality of memory regions MR1to MRN, information of a memory region corresponding to a write request, information of the redirected memory region, a reason for being redirected, and the status information set SIS. 
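A minimal Python sketch of this redirection flow follows, under stated assumptions: the class and method names are hypothetical, the StatusInfo fields mirror FIG. 5A, and the availability test is reduced to the capacity (VPC ratio) check used in this example, whereas a real status checker would evaluate the full available memory conditions of FIG. 9.

    from dataclasses import dataclass

    @dataclass
    class StatusInfo:
        """Per-region status mirroring FIG. 5A (field names assumed)."""
        avg_erase_count: int
        vpc_ratio: float      # valid page count ratio; 1.0 means full
        bad_blocks: int
        waf: float
        usage_ratio: float

    @dataclass
    class RedirectionResult:
        """A subset of the redirection result information RRI."""
        lba: str
        requested_region: str
        redirected_region: str
        reason: str

    class StatusChecker:
        def __init__(self, sis: dict[str, StatusInfo]):
            self.sis = sis    # the status information set SIS

        def is_available(self, region: str) -> bool:
            # Simplified: only the capacity condition of this example;
            # FIG. 9 adds erase-count, bad-block, and WAF conditions.
            return self.sis[region].vpc_ratio < 1.0

        def pick_alternative(self, requested: str) -> str:
            candidates = [r for r in self.sis
                          if r != requested and self.is_available(r)]
            # Prefer the emptiest available region (one possible policy).
            return min(candidates, key=lambda r: self.sis[r].vpc_ratio)

    def handle_write(checker: StatusChecker, region: str, data: bytes,
                     write_fn) -> RedirectionResult | None:
        """Write in place if possible; otherwise redirect and report."""
        if checker.is_available(region):
            write_fn(region, data)
            return None
        target = checker.pick_alternative(region)
        write_fn(target, data)
        return RedirectionResult("0xABCD",  # illustrative LBA from the text
                                 region, target, "capacity exceeded")

    # Status values taken from the FIG. 5B example for MR1 and MR2.
    sis = {"MR1": StatusInfo(200, 1.00, 6, 5, 0.65),
           "MR2": StatusInfo(300, 0.70, 10, 2, 0.85)}
    rri = handle_write(StatusChecker(sis), "MR1", b"data", lambda r, d: None)
    print(rri.redirected_region)  # MR2

Concrete field values that such a record might carry are illustrated next.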
For example, the redirection result information RRI may indicate that an LBA corresponding to the redirected memory region MR2of the plurality of memory regions MR1to MRN is “0xABCD”. The redirection result information RRI may indicate that the memory region corresponding to the write request is the first memory region MR1. The redirection result information RRI may indicate that the redirected memory region is the second memory region MR2. The redirection result information RRI may indicate that the reason for being redirected is that a capacity of the first memory region MR1is exceeded. The redirection result information RRI may include the status information set SIS including the plurality of status information SI1to SIN. The host11may receive the redirection result information RRI from the storage controller. The host11may recognize that data corresponding to the write request are stored in the second memory region MR2, based on the redirection result information RRI. To read the data redirected to the second memory region MR2, the host11may output, to the storage controller, a read request corresponding to the data of the write request and including information of the second memory region MR2, which is the redirected memory region. FIG.8is a diagram for describing a method for monitoring a status of a memory region, according to an example embodiment. Referring toFIG.8, a storage controller may communicate with the host11and may include the status checker111and the monitoring device113. The monitoring device113may communicate with the non-volatile memory device120. The monitoring device113may monitor the plurality of memory regions MR1to MRN in the non-volatile memory device120. The monitoring device113may monitor the non-volatile memory device120periodically, e.g., every reference time, without a separate request from the host11. The monitoring device113may update the status information set SIS of the status checker111based on monitoring the plurality of memory regions MR1to MRN. The status information set SIS may include the plurality of status information SI1to SIN. The plurality of status information SI1to SIN may correspond to the plurality of memory regions MR1to MRN, respectively. The status checker111may determine whether the plurality of memory regions MR1to MRN are available, based on the updated status information set SIS. The status checker111may generate the monitoring information MI indicating whether the plurality of memory regions MR1to MRN are available. For example, the updated first status information SI1may indicate that a capacity of the first memory region MR1is fully occupied (i.e., that the VPC ratio of the first memory region MR1is 100%). The status checker111may determine the first memory region MR1as unavailable, based on the updated first status information SI1. The status checker111may generate the monitoring information MI indicating that the first memory region MR1of the plurality of memory regions MR1to MRN is unavailable. The status checker111may output the monitoring information MI to the host11. The monitoring information MI may include, e.g., one or more of information of an unavailable memory region of the plurality of memory regions MR1to MRN, a reason why the unavailable memory region is unavailable, information of an available memory region of the plurality of memory regions MR1to MRN, and the updated status information set SIS.
For example, the monitoring information MI may indicate that an unavailable memory region of the plurality of memory regions MR1to MRN is the first memory region MR1. The monitoring information MI may indicate that a reason why the unavailable memory region is unavailable is a capacity exceeded. The monitoring information MI may indicate that an available memory region of the plurality of memory regions MR1to MRN is the second memory region MR2. The monitoring information MI may include the updated status information set SIS including the plurality of status information SI1to SIN. The host11may receive the monitoring information MI from the storage controller. The host11may determine that the first memory region MR1is unavailable, based on the monitoring information MI. The host11may determine that the second memory region MR2is available, based on the monitoring information MI. The host11may output, to the storage controller, a write request for the second memory region MR2available from among the plurality of memory regions MR1to MRN. FIG.9is a diagram illustrating available memory conditions in detail, according to an example embodiment. Referring toFIG.9, the available memory conditions may be a reference for determining whether a corresponding memory region of a plurality of memory regions is available. For example, the status checker111ofFIGS.7and8may determine whether each of the plurality of memory regions is available, based on available memory conditions ofFIG.9. According to an example embodiment, the available memory conditions may include one or more of a condition associated with the average erase count, a condition associated with the VPC ratio, a condition associated with the number of bad blocks, a condition associated with the WAF, and a condition associated with the memory usage ratio. In an example embodiment, a storage controller may determine whether a corresponding memory region is available based on the average erase count of the corresponding memory region. For example, referring to index 1, when a value obtained by subtracting a minimum value of a plurality of average erase counts of the plurality of memory regions from the average erase count of the corresponding memory region is smaller than an erase count threshold value, the storage controller may determine the corresponding memory region as available. When the condition according to index 1 is satisfied, a corresponding bit flag may be determined as a first value (e.g., “1”). When the condition according to index 1 is not satisfied, the corresponding bit flag may be determined as a second value (e.g., “0”). That the condition according to index 1 is not satisfied may mean that an erase count of the corresponding memory region is considerably greater than those of the remaining memory regions of the plurality of memory regions. To prevent a reduction in performance of a storage device due to the concentrated wearing-out of the corresponding memory region, a memory region that does not satisfy the condition according to index 1 may be determined as unavailable. In an example embodiment, the storage controller may determine whether a corresponding memory region is available based on the VPC ratio of the corresponding memory region. For example, referring to index 2, when the VPC ratio of the corresponding memory region is smaller than a maximum value of a plurality of VPC ratios of the plurality of memory regions, the storage controller may determine the corresponding memory region as available. 
When the condition according to index 2 is satisfied, a corresponding bit flag may be determined as the first value (e.g., “1”). When the condition according to index 2 is not satisfied, the corresponding bit flag may be determined as the second value (e.g., “0”). That the condition according to index 2 is not satisfied may mean that a free capacity of the corresponding memory region of the plurality of memory regions is the smallest. To prevent data from being intensively stored in the corresponding memory region, i.e., to prevent a workload from being concentrated in the corresponding memory region in the following read operations, a memory region that does not satisfy the condition according to index 2 may be determined as unavailable. In an example embodiment, the storage controller may determine whether a corresponding memory region is available based on the number of bad blocks in the corresponding memory region. For example, referring to index 3, when the number of bad blocks of the corresponding memory region is smaller than a maximum value of the numbers of bad blocks of the plurality of memory regions, the storage controller may determine the corresponding memory region as available. When the condition according to index 3 is satisfied, a corresponding bit flag may be determined as the first value (e.g., “1”). When the condition according to index 3 is not satisfied, the corresponding bit flag may be determined as the second value (e.g., “0”). That the condition according to index 3 is not satisfied may mean that the number of bad blocks of the corresponding memory region of the plurality of memory regions is the greatest. In the case where write operations are further performed in the corresponding memory region, the number of bad blocks of the corresponding memory region may increase. In this case, an error that exceeds an error correction capability of an ECC engine in the storage controller may frequently occur, thereby causing the reduction in reliability of the storage device. To prevent this issue, a memory region that does not satisfy the condition according to index 3 may be determined as unavailable. In an example embodiment, a storage controller may determine whether a corresponding memory region is available, based on the WAF of the corresponding memory region. For example, referring to index 4, when the WAF of the corresponding memory region is smaller than a WAF threshold value, the storage controller may determine the corresponding memory region as available. When the condition according to index 4 is satisfied, a corresponding bit flag may be determined as the first value (e.g., “1”). When the condition according to index 4 is not satisfied, the corresponding bit flag may be determined as the second value (e.g., “0”). That the condition according to index 4 is not satisfied may mean that the corresponding memory region is inappropriate to process a write request. For example, in the case where a write request for storing data of 4 KB is received from a host, a memory region having a WAF of 10 stores data of 40 KB, but a memory region having a WAF of 2 stores data of 8 KB. To prevent a write operation from being performed in an inappropriate memory region having a high WAF, e.g., to prevent a resource of the storage device from being wasted, a memory region that does not satisfy the condition according to index 4 may be determined as unavailable. 
In an example embodiment, the storage controller may determine whether a corresponding memory region is available based on the memory usage ratio of the corresponding memory region. For example, referring to index 5, when the memory usage ratio of the corresponding memory region is smaller than a memory usage ratio threshold value, the storage controller may determine the corresponding memory region as available. When the condition according to index 5 is satisfied, a corresponding bit flag may be determined as the first value (e.g., “1”). When the condition according to index 5 is not satisfied, the corresponding bit flag may be determined as the second value (e.g., “0”). That the condition according to index 5 is not satisfied may mean that the corresponding memory region fails to process a write request because of processing any other operation (e.g., another write request, another read request, a garbage collection operation, or a defense code). For example, when the memory usage ratio of the corresponding memory region is 100%, because the corresponding memory region is capable of processing a write request after completing at least a portion of a previously requested operation, a speed at which data are processed may decrease. To prevent this issue, a memory region that does not satisfy the condition according to index 5 may be determined as unavailable. FIG.10is a flowchart for describing a method for redirecting a write request, according to an example embodiment. An operating method of a storage controller that redirects a write request will be described with reference toFIG.10. The storage controller may communicate with a host and a plurality of memory regions. In operation S110, the storage controller may receive a write request for a first memory region of the plurality of memory regions from the host. In an example embodiment, the plurality of memory regions may respectively correspond to a plurality of memory chips in one storage device or may respectively correspond to a plurality of memory blocks in one memory chip. In operation S120, the storage controller may determine that the first memory region is unavailable, based on a status information set. The status information set may include a plurality of status information respectively corresponding to the plurality of memory regions. The status information may include one or more of the average erase count of a corresponding memory region, the VPC ratio of the corresponding memory region, the number of bad blocks of the corresponding memory region, the WAF of the corresponding memory region, and the memory usage ratio of the corresponding memory region. In an example embodiment, the storage controller may determine that the first memory region is unavailable based on the available memory conditions ofFIG.9. In operation S130, the storage controller may generate redirection information indicating that a second memory region of the plurality of memory regions is selected instead of the first memory region. In operation S130the storage controller may determine that the second memory region satisfies the available memory conditions, based on the status information set. Operation S130may include selecting the second memory region satisfying the available memory conditions and generating the redirection information. In this case, the available memory conditions may be similar to the available memory conditions ofFIG.9.
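Taken together, the five conditions of FIG.9 amount to a set of bit flags, with a region available only when every flag holds. The helper below is a minimal sketch of that check; the per-region status attributes and the threshold object are assumed names, since the patent does not specify a data layout.

    # Sketch of the five available memory conditions of FIG.9 as bit flags.
    # A value of True corresponds to the first value ("1"), i.e., satisfied.
    def availability_flags(region, regions, thresholds):
        min_erase = min(r.avg_erase_count for r in regions)
        max_vpc = max(r.vpc_ratio for r in regions)
        max_bad = max(r.bad_blocks for r in regions)
        return [
            region.avg_erase_count - min_erase < thresholds.erase_count,  # index 1
            region.vpc_ratio < max_vpc,                                   # index 2
            region.bad_blocks < max_bad,                                  # index 3
            region.waf < thresholds.waf,                                  # index 4
            region.usage_ratio < thresholds.usage_ratio,                  # index 5
        ]

    def is_available(region, regions, thresholds):
        # unavailable if any single condition fails
        return all(availability_flags(region, regions, thresholds))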
In operation S140, the storage controller may perform a write operation in the second memory region, based on the redirection information generated in operation S130. Operation S140may include updating status information of the second memory region, in the status information set, based on the write operation. In operation S145, the storage controller may output, to the host, redirection result information indicating that write data of the write request are processed in the second memory region. In an example embodiment, the redirection result information may include one or more of a logical block address corresponding to a redirected memory region of the plurality of memory regions, information of a memory region corresponding to a write request, information of the redirected memory region, a reason for being redirected, and a status information set. In an example embodiment, the operating method of the storage controller may further include receiving a read request, which corresponds to write data and includes information of a second memory region, from the host after performing operation S145. FIG.11is a flowchart for describing a method for monitoring a status of a memory region, according to an example embodiment. An operating method of a storage controller that monitors a status of a memory region will be described with reference toFIG.11. The storage controller may communicate with a host and a plurality of memory regions. In operation S210, the storage controller may monitor the plurality of memory regions. Operation S210may include determining whether each of the plurality of memory regions satisfies available memory conditions, periodically, e.g., every reference time. In this case, the available memory conditions may be similar to the available memory conditions ofFIG.9. In operation S215, the storage controller may update the status information set based on the monitoring in operation S210. The status information set may include a plurality of status information respectively corresponding to the plurality of memory regions. In an example embodiment, the status information may include one or more of the average erase count of a corresponding memory region, the VPC ratio of the corresponding memory region, the number of bad blocks of the corresponding memory region, the WAF of the corresponding memory region, and the memory usage ratio of the corresponding memory region. In operation S220, the storage controller may determine whether the plurality of memory regions are available, based on the updated status information set. In operation S230, the storage controller may generate monitoring information indicating that a first memory region of the plurality of memory regions is not available. In an example embodiment, the monitoring information may include one or more of information of an unavailable memory region of the plurality of memory regions, a reason why the unavailable memory region is unavailable, information of an available memory region of the plurality of memory regions, and an updated status information set. In operation S240, the storage controller may output the monitoring information to the host. In an example embodiment, in operation S240, the storage controller may output, to the host, the monitoring information including information indicating that the second memory region satisfies the available memory conditions.
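The S210-S240 loop can be sketched as a periodic task, reusing the is_available helper from the FIG.9 sketch above. The sample_status and host.notify calls are hypothetical stand-ins for the controller's internal bookkeeping and its host interface.

    # Hypothetical periodic monitoring loop corresponding to S210-S240.
    import time

    def monitor_loop(regions, thresholds, host, reference_time_s=1.0):
        while True:
            # S210/S215: sample each region and update the status information set
            status_info_set = {r.id: r.sample_status() for r in regions}
            # S220: check every region against the available memory conditions
            unavailable = [rid for rid, si in status_info_set.items()
                           if not is_available(si, list(status_info_set.values()),
                                               thresholds)]
            # S230: build the monitoring information MI
            monitoring_info = {"unavailable_regions": unavailable,
                               "status_info_set": status_info_set}
            host.notify(monitoring_info)        # S240: output MI to the host
            time.sleep(reference_time_s)        # "every reference time"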
The operating method of the storage controller may further include receiving a write request for the second memory region of the plurality of memory regions from the host after performing operation S240. Afterwards, the storage controller may perform a write operation in the second memory region based on the write request, and may update the status information of the second memory region, in the status information set, based on the write operation. The storage controller may receive a read request, which corresponds to the write operation and includes the information of the second memory region, from the host. FIG.12is a block diagram of a solid state drive (SSD) system to which a storage device according to an example embodiment is applied. Referring toFIG.12, an SSD system1000may include a host1100and a storage device1200. The host1100may correspond to the host11ofFIGS.1,4,7, and8. The storage device1200may exchange a signal SIG with the host1100through a signal connector1201, and may receive a power PWR through a power connector1202. The storage device1200may include a plurality of non-volatile memories1211to121N, an SSD controller1220, an auxiliary power supply1230, and a buffer memory1240. The plurality of non-volatile memories1211to121N may correspond to the plurality of memory chips MC ofFIG.1, or may correspond to the plurality of memory regions MR1to MRN ofFIGS.4,7, and8. The plurality of non-volatile memories1211to121N may operate under control of the SSD controller1220. The SSD controller1220may correspond to the storage controller110ofFIGS.1,2,4,7, and8. The SSD controller1220may perform the operating methods ofFIGS.10and11. The SSD controller1220may control the non-volatile memories1211to121N in response to the signal SIG from the host1100. In an example embodiment, as in the storage controller described with reference toFIGS.1to11, the SSD controller1220may redirect a write request for an unavailable memory region (e.g., a non-volatile memory), may periodically monitor a status of a memory region, and may provide monitoring information to the host1100. As such, the storage device1200may help to suppress a failure of data processing and wearing-out of a memory region, and may enable high data processing speed. The auxiliary power supply1230may be connected with the host1100through the power connector1202. The auxiliary power supply1230may be charged by the power PWR from the host1100. When the power is not smoothly supplied from the host1100, the auxiliary power supply1230may provide a power for driving the storage device1200. The buffer memory1240may be used as a buffer memory of the storage device1200. By way of summation and review, a flash memory device may be used as a high-capacity storage medium. For example, the flash memory device may be used as a storage medium of a server for cloud computing. In a multi-tenancy environment where the same service is provided to multiple users, to reduce the interference between users and reinforce security, the flash memory device may be configured to independently manage data for each user in physically separated memory regions. However, an excessive workload may be focused on a specific memory region depending on the inclination of the user, a size of data, etc., which may cause data processing failure, a specific memory region becoming worn out, or data processing being delayed. As described above, embodiments relate to a storage controller redirecting a write operation and an operating method thereof.
According to an example embodiment, a storage controller may redirect a write request for an unavailable memory region, periodically monitor a status of a memory region, and/or provide monitoring information to a host, which may help avoid a failure of data processing and wearing-out of a memory region, and may provide a high data processing speed. Another example embodiment is directed to a corresponding operating method. Example embodiments have been disclosed herein, and although specific terms are employed, they are used and are to be interpreted in a generic and descriptive sense only and not for purpose of limitation. In some instances, as would be apparent to one of ordinary skill in the art as of the filing of the present application, features, characteristics, and/or elements described in connection with a particular embodiment may be used singly or in combination with features, characteristics, and/or elements described in connection with other embodiments unless otherwise specifically indicated. Accordingly, it will be understood by those of skill in the art that various changes in form and details may be made without departing from the spirit and scope of the present invention as set forth in the following claims.
11861193 | DETAILED DESCRIPTION Aspects of the present disclosure are directed to emulating memory sub-systems that have different performance characteristics. A memory sub-system can be a storage device, a memory module, or a combination of a storage device and memory module. Examples of storage devices and memory modules are described below in conjunction withFIG.1. In general, a host system can utilize a memory sub-system that includes one or more components, such as memory devices that store data. The host system can provide data to be stored at the memory sub-system and can request data to be retrieved from the memory sub-system. The memory sub-systems are continuously evolving to incorporate changes that can both benefit and harm the performance of host systems. Even a small change to the performance of a memory sub-system can have adverse effects on the performance of a host system. For example, a negative change that increases the memory latency or decreases the memory bandwidth can have a disproportionate effect on the host system. The disproportionate effect can be an exponential decrease in the performance of the host system and can even cause the host system to fail (e.g., buffer overflows, race conditions, etc.). To detect and address the adverse effects, the host systems are often tested with the changed memory sub-system. The changes can include hardware changes or software changes that can change any part of the memory sub-system, such as the memory devices (e.g., DRAM memory cells), the memory controllers (e.g., memory sub-system controller, local media controller), the host controller interface, other portion of memory sub-system, or a combination thereof. Testing the changes to the memory sub-system can be challenging because the hardware or software can be delayed, expensive, defective, or otherwise unavailable. Aspects of the present disclosure address the above and other deficiencies by providing technology that enables a host system to use its existing memory sub-system to emulate the characteristics of a target memory sub-system. The characteristics of the target memory sub-system (e.g., target characteristics) can include performance characteristics related to the latency or bandwidth of reading data, writing data, copying data, moving data, other data storage operation, or a combination thereof. The target memory sub-system can include features that adversely affect the performance characteristics (e.g., slower media, slower interface, slower interconnects, slower controller, additional overhead, etc.). The technology can determine the target characteristics and update the configuration of the host system so that the existing memory sub-system exhibits the performance characteristics that are the same or similar to the target characteristics. The updates can include introducing interconnect hops to the memory data path, decreasing the bus speed, partitioning the bandwidth, loading the memory controllers using memory intensive programs, other configuration change, or a combination thereof. Each of the configuration changes can alter the characteristics in different ways and the technology can evaluate the different changes and identify a configuration that can successfully emulate the performance of the target memory sub-system. The configured host system can then be tested to approximate how the host system would operate if it included the target memory sub-system.
Advantages of the technology disclosed herein include, but are not limited to, emulating memory sub-systems. The technology can use the emulation to provide a proof of concept or prototype of a host system that uses the target memory sub-system. In one example, the host system can include DRAM as main memory and can configure the host system so that the DRAM emulates the performance characteristics of another type of volatile memory, non-volatile memory (e.g., Persistent Memory (PMEM)), or other memory type. The host system with the emulated memory sub-system can be tested without using the actual target memory sub-system. This can enable testing before the target memory sub-system is available and can avoid or reduce the cost (e.g., time and money) associated with acquiring the target memory sub-system. This can also avoid the cost to install the target memory sub-system and reconfigure the host system, which can include other dependencies (e.g., hardware or software development costs). The testing of the host system can include performance testing (e.g., benchmarking), failure testing (e.g., functional testing), other testing, or a combination thereof. The technology can also enable end users to emulate different memory sub-systems to profile how hardware and software of the host system is affected by the performance of the memory sub-system. FIG.1illustrates an example computing system100that includes a memory sub-system110and a host system120in accordance with some embodiments of the present disclosure. The memory sub-system110can include media, such as one or more non-volatile memory devices (e.g., memory device130), one or more volatile memory devices (e.g., memory device140), or a combination of such. Each memory device130or140can be one or more memory component(s). A memory sub-system110can be a storage device, a memory module, or a combination of a storage device and memory module. Examples of a storage device include a solid-state drive (SSD), a flash drive, a universal serial bus (USB) flash drive, an embedded Multi-Media Controller (eMMC) drive, a Universal Flash Storage (UFS) drive, a secure digital (SD) card, and a hard disk drive (HDD). Examples of memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), and various types of non-volatile dual in-line memory module (NVDIMM). The computing system100can be a computing device such as a desktop computer, laptop computer, network server, mobile device, a vehicle (e.g., airplane, drone, train, automobile, or other conveyance), Internet of Things (IoT) enabled device, embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or such computing device that includes memory and a processing device. The computing system100can include a host system120that is coupled to one or more memory sub-systems110. In some embodiments, the host system120is coupled to different types of memory sub-system110.FIG.1illustrates one example of a host system120coupled to one memory sub-system110. As used herein, “coupled to” or “coupled with” generally refers to a connection between components or devices, which can be an indirect communicative connection or direct communicative connection (e.g., without intervening components or devices), whether wired or wireless, including connections such as electrical, optical, magnetic, and the like. The host system120can include a processor chipset and a software stack executed by the processor chipset. 
The processor chipset can include one or more cores, one or more caches, a memory controller (e.g., NVDIMM controller), and a storage protocol controller (e.g., PCIe controller, SATA controller). The host system120uses the memory sub-system110, for example, to write data to the memory sub-system110and read data from the memory sub-system110. The host system120can be coupled to the memory sub-system110via a physical host interface, which can communicate over a system bus. Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), a double data rate (DDR) memory bus, Small Computer System Interface (SCSI), a dual in-line memory module (DIMM) interface (e.g., DIMM socket interface that supports Double Data Rate (DDR)), Open NAND Flash Interface (ONFI), Double Data Rate (DDR), Low Power Double Data Rate (LPDDR), or any other interface. The physical host interface can be used to transmit data between the host system120and the memory sub-system110. The host system120can further utilize an NVM Express (NVMe) interface to access components (e.g., memory devices130) when the memory sub-system110is coupled with the host system120by the physical host interface (e.g., PCIe bus). The physical host interface can provide an interface for passing control, address, data, and other signals between the memory sub-system110and the host system120.FIG.1illustrates a memory sub-system110as an example. In general, the host system120can access multiple memory sub-systems via a same communication connection, multiple separate communication connections, and/or a combination of communication connections. The memory devices130,140can include any combination of the different types of non-volatile memory devices and/or volatile memory devices. The volatile memory devices (e.g., memory device140) can be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM). Some examples of non-volatile memory devices (e.g., memory device130) include negative-and (NAND) type flash memory and write-in-place memory, such as three-dimensional cross-point (“3D cross-point”) memory. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. NAND type flash memory includes, for example, two-dimensional NAND (2D NAND) and three-dimensional NAND (3D NAND). Each of the memory devices130can include one or more arrays of memory cells. One type of memory cell, for example, single level cells (SLC) can store one bit per cell. Other types of memory cells, such as multi-level cells (MLCs), triple level cells (TLCs), quad-level cells (QLCs), and penta-level cells (PLCs) can store multiple bits per cell. In some embodiments, each of the memory devices130can include one or more arrays of memory cells such as SLCs, MLCs, TLCs, QLCs, PLCs or any combination of such. 
In some embodiments, a particular memory device can include an SLC portion, an MLC portion, a TLC portion, a QLC portion, or a PLC portion of memory cells. The memory cells of the memory devices130can be grouped as pages that can refer to a logical unit of the memory device used to store data. With some types of memory (e.g., NAND), pages can be grouped to form blocks. Although non-volatile memory components such as NAND type flash memory (e.g., 2D NAND, 3D NAND) and 3D cross-point array of non-volatile memory cells are described, the memory device130can be based on any other type of non-volatile memory, such as read-only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide based memories, ferroelectric transistor random-access memory (FeTRAM), ferroelectric random access memory (FeRAM), magneto random access memory (MRAM), Spin Transfer Torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), negative-or (NOR) flash memory, and electrically erasable programmable read-only memory (EEPROM). A memory sub-system controller115(or controller115for simplicity) can communicate with the memory devices130to perform operations such as reading data, writing data, or erasing data at the memory devices130and other such operations. The memory sub-system controller115can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The hardware can include a digital circuitry with dedicated (i.e., hard-coded) logic to perform the operations described herein. The memory sub-system controller115can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or other suitable processor. The memory sub-system controller115can include a processing device, which includes one or more processors (e.g., processor117) configured to execute instructions stored in a local memory119. In the illustrated example, the local memory119of the memory sub-system controller115includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory sub-system110, including handling communications between the memory sub-system110and the host system120. In some embodiments, the local memory119can include memory registers storing memory pointers, fetched data, etc. The local memory119can also include read-only memory (ROM) for storing micro-code. While the example memory sub-system110inFIG.1has been illustrated as including the memory sub-system controller115, in another embodiment of the present disclosure, a memory sub-system110does not include a memory sub-system controller115, and can instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory sub-system). In general, the memory sub-system controller115can receive commands or operations from the host system120and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory devices130.
The memory sub-system controller115can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical block address (e.g., logical block address (LBA), namespace) and a physical address (e.g., physical block address) that are associated with the memory devices130. The memory sub-system controller115can further include host interface circuitry to communicate with the host system120via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access the memory devices130as well as convert responses associated with the memory devices130into information for the host system120. The memory sub-system110can also include additional circuitry or components that are not illustrated. In some embodiments, the memory sub-system110can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the memory sub-system controller115and decode the address to access the memory devices130. In some embodiments, the memory devices130include local media controllers135that operate in conjunction with memory sub-system controller115to execute operations on one or more memory cells of the memory devices130. An external controller (e.g., memory sub-system controller115) can externally manage the memory device130(e.g., perform media management operations on the memory device130). In some embodiments, memory sub-system110is a managed memory device, which is a raw memory device130having control logic (e.g., local media controller135) on the die and a controller (e.g., memory sub-system controller115) for memory management within the same memory device package. An example of a managed memory device is a managed NAND (MNAND) device. The host system120can include an emulation component224that enables host system120to modify the configuration of host system120and one or more memory sub-systems110to emulate the use of a different memory sub-system. These and other features of emulation component224are discussed below. FIG.2is a detailed block diagram of host system120that can use a memory sub-system110to emulate a memory sub-system that is absent from host system120, in accordance with examples of the disclosure. Host system120can be the same or similar to host system120ofFIG.1and can be referred to as a host machine, host device, or other term. Host system120can be or include one or more servers (e.g., on-premise server, cloud server, edge server), personal computers (e.g., laptop, desktop, workstation), mobile phones (e.g., smart phone), vehicles (e.g., autonomous vehicle (AV), electric vehicle (EV), aerial vehicle), Internet of Things (IoT) devices (e.g., smart speaker, kitchen appliance), industrial control systems (e.g., traffic light, gas meter), other devices, or a combination thereof. In the example illustrated inFIG.2, host system120may include computing resources210and an operating system220. Computing resources210include one or more CPUs211A-C, memory controllers215A-C, and memory nodes240A-C that are arranged in a computing topology250. Each of the CPUs211A-C can be associated with memory sub-system110, which can include a plurality of memory controllers215A-C and a plurality of memory nodes240A-C that are interconnected using interconnects230A-Z.
Each of CPUs211A-C can have a local memory controller that controls access to one or more of the memory nodes240A-C. The CPU can use the local memory controller to access a local memory node and can use a remote memory controller to access a remote memory node. In the example shown inFIG.2, each of memory controllers215A-C can be a part of the package or die of the respective CPUs211A-C (e.g., integrated memory controllers). In other examples, memory controllers215A-C can be separate from CPUs211A-C (e.g., discrete memory controllers). Interconnects230A-Z can provide communication channels between computing resources210. Interconnects230A-C can be CPU-to-Memory interconnects that connect CPUs211A-C to their respective local memory nodes240A-C. Interconnects230Y-Z can be CPU-to-CPU interconnects that connect CPUs211A-C to one another. There can also or alternatively be interconnects between non-adjacent hardware resources (not shown), such as an interconnect between CPU211A and CPU211C or between a CPU and remote memory nodes240B and240C. Interconnects230A-Z can include one or more interfaces, connectors, adapters, other piece of hardware or software, or a combination thereof. Interconnects230A-Z can implement a standard or proprietary communication protocol that includes or is based on Compute Express Link™ (CXL), Peripheral Component Interconnect™ (e.g., PCI, PCIe), Non-Volatile Memory Express™ (NVMe), Advanced Host Controller Interface™ (AHCI), Serial Advanced Technology Attachment Interface™ (e.g., SATA, mSATA), Small Computer System Interface™ (SCSI, iSCSI), Integrated Drive Electronics™ (e.g., IDE, EIDE), InfiniBand™, other communication technology, or a combination thereof. Memory nodes240A-C can each include one or more memory devices140. Memory device140can be made up of bits arranged in a two-dimensional grid of memory cells. Memory cells are etched onto a silicon wafer in an array of columns (also hereinafter referred to as bitlines) and rows (also hereinafter referred to as wordlines). A wordline can refer to one or more rows of memory cells of a memory device that are used with one or more bitlines to generate the address of each of the memory cells. The intersection of a bitline and wordline can constitute the address of the memory cell. A block can refer to a unit of the memory device140used to store data and can include a group of memory cells, a wordline group, a wordline, or individual memory cells. As discussed above, each memory device can be a memory module and include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), or various types of non-volatile dual in-line memory module (NVDIMM). In one example, memory nodes240A-C can implement Non-Uniform Memory Access (NUMA) and be referred to as NUMA nodes. NUMA is a computer memory design used in multiprocessing and the memory access time can depend on the memory location relative to the memory controller. Under NUMA, a CPU can access its own local memory faster than non-local memory. This can result in the CPU having a lower latency and higher bandwidth when accessing a local memory node and having a higher latency and lower bandwidth when accessing a remote memory node. This may occur because CPU211A can use a single interconnect to access a local memory node and can use multiple interconnects to access a remote memory node.
For example, CPU211A can access local memory node240A using memory controller215A and interconnect230A that has a combined latency of 80 nanoseconds (ns) and a bandwidth of 107 Gigabytes per second (GiB/s). CPU211A can access remote memory node240B using memory controller215B and interconnects230B and230Y that has a combined latency of 130 ns (larger latency of 80 ns+50 ns) and a bandwidth of approximately 35 GiB/s (e.g., smaller bandwidth that is lesser of 35 and 107 GiB/s). Computing topology250is the arrangement of computing resources210of host system120. Computing topology250can define the arrangement of CPUs (e.g., CPU topology), memory nodes (e.g., Memory topology, NUMA topology), interconnects (e.g., interconnect topology, bus topology), or a combination thereof. In one example, computing topology250may be specific to hardware devices and be referred to as a hardware topology or hardware layout. Computing topology250can be based on the layout of sockets on one or more printed circuit boards. A socket is a hardware interface that contains one or more mechanical components for providing mechanical and electrical connections between the printed circuit board (PCB) and one of the computing resources210(e.g., CPU). Each socket can receive the computing resource and communicably couple it with one or more other computing resources. A socket that receives a CPU is referred to as a CPU socket. Host system120can include more than one socket and can be referred to as a multi-socket host system (e.g., multi-socket server). For illustration purposes, host system120is a three socket server (CPU211A-C) but in other examples it may be an N socket server, wherein N is any positive integer (e.g., 2, 4, 8, 16). The computing resources210in computing topology250can be managed by an operating system220. Operating system220may be any program or combination of programs that are capable of managing computing resources of host system120. Operating system220may include a kernel comprising one or more kernel space programs (e.g., physical device driver, virtual device driver) for interacting with virtual hardware devices or physical hardware devices. In one example, operating system220may include Linux™ (e.g., Fedora™, Ubuntu™), Unix (e.g., Solaris™), Microsoft Windows™, Apple Macintosh™, other operating system, or a combination thereof. Operating system220can manage the execution of configuration analysis component222, emulation component224, and performance testing component226. Configuration analysis component222can determine the configuration of the host system120and discover parameters that are available to change the configuration. Emulation component224can determine characteristics of a target memory sub-system that is absent from host system120and update the configuration to emulate the target memory sub-system. Emulation component224can evaluate multiple candidate configurations in order to identify the configuration that most closely emulates the performance characteristics of the target memory sub-system. Performance testing component226can run one or more tests of host system120after it is configured to emulate the targeted memory sub-system. The tests may be benchmark tests, functional tests, real world workloads, other tests, or a combination thereof. Components222,224, and226are discussed in more detail below in regards toFIG.4and can be implemented as one or more application programs, kernel programs (e.g., device drivers), other programs, or a combination thereof.
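The local-versus-remote numbers above can be captured in a small model, assuming latencies add along the data path and the bandwidth of a path is limited by its slowest link (consistent with the 80 ns + 50 ns = 130 ns and min(107, 35) GiB/s example):

    # Worked model of the NUMA example: latency accumulates per hop,
    # bandwidth is bounded by the narrowest link on the path.
    def path_characteristics(hops):
        latency_ns = sum(h["latency_ns"] for h in hops)
        bandwidth_gibs = min(h["bandwidth_gibs"] for h in hops)
        return latency_ns, bandwidth_gibs

    local = [{"latency_ns": 80, "bandwidth_gibs": 107}]
    remote = local + [{"latency_ns": 50, "bandwidth_gibs": 35}]
    print(path_characteristics(local))    # (80, 107)
    print(path_characteristics(remote))   # (130, 35)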
FIG.3is a block diagram illustrating an example of how a configuration300of host system120can be updated to make the memory sub-system110emulate a target memory sub-system with different characteristics. In the example shown inFIG.3, host system120can include multiple CPUs211A-B and a first CPU211A can provide a load that makes the memory sub-system emulate a target memory sub-system and the second CPU211B can run a performance test using the emulated memory sub-system. In other examples, host system120can use multiple CPUs to provide the load and a different CPU to perform the test. In either example, host system120can be updated as illustrated by configurations300A-Z to change the characteristics305A-Z of memory sub-system110. Characteristics305A-Z include one or more physical operating characteristics of memory sub-system110of host system120. Characteristics305A-Z can relate to how the memory sub-system operates on data, which can include transmitting data, processing data, storing data, accessing data, reading data, writing data, transforming data, formatting data, encoding/decoding data, encrypting/decrypting data, other operation, or a combination thereof. Characteristics305A-Z can include one or more characteristics that represent the performance (e.g., performance characteristics), features (e.g., feature characteristics), or functions (e.g., functional characteristics) of the memory sub-system. Each of characteristics305A-Z can relate to one or more measurements of time, quantity, capacity, speed, clock cycles, misses, faults, errors, failures, successes, other property, or combination thereof. The measurements can correspond to particular data operations on a particular quantity of data. The data operations can include one or more load operations (e.g., reads), store operations (e.g., writes), copy operations (e.g., copy-on-writes), modify operations (e.g., read-modify-writes), other operation, or a combination thereof. The data operation can be initiated or submitted by a requestor and fulfilled or completed by a provider. The requestor can be the memory controller, CPU, processor core, interconnect, other computing resource, or a combination thereof. The provider can be one or more memory devices (e.g., memory cells), memory nodes, local media controllers, other computing resource, or a combination thereof. The quantity of data for the data operation can be based on one or more storage units and the storage units can be the same or similar to a bit, byte, word, block, cache line, stripe, frame, page, other storage unit or portion of a storage unit, or a combination thereof. The measurement can be represented as one or more values and each value can be a count, an average, a frequency, a minimum, a maximum, other value, or a combination thereof. In the example illustrated inFIG.3, characteristic305A can correspond to a latency and characteristic305B can correspond to a bandwidth. Latency can be based on an interval of time it takes the memory sub-system110to perform one or more data operations (e.g., memory latency). The latency can be a measurement of time (e.g., duration, period, or interval) that begins when the data operation is initiated and ends when the data operation is completed. For example, the latency of a read operation (e.g., access delay) can be an interval of time that begins when data is requested by the requestor and ends when some or all of the requested data is received from the provider. 
The latency of a store operation (e.g., store delay) can be the interval of time that begins when data is transmitted by the requestor and ends when either the data is stored in one or more media devices or when a response (e.g., acknowledgement) is received from the provider. In one example, the latency can be a Column Address Strobe latency (CL) and can be based on the delay in clock cycles between a read command and the instant the data becomes available. The interval measurement can be specified in clock cycles (e.g., clock ticks) or can be converted to a time duration (absolute time, wall clock time). Bandwidth can be based on a rate that the memory sub-system110can perform one or more data operations (e.g., memory bandwidth). The bandwidth can be the same or similar to the throughput (e.g., user data and overhead), goodput (user data without overhead), other transfer rate, or a combination thereof. The overhead can include management data for error detection, error correction, recovery, acknowledgement, etc. The bandwidth can be expressed in units of data quantity per time duration (e.g., bytes per second) and can be an average bandwidth, sustained bandwidth, minimum bandwidth, maximum bandwidth, observed bandwidth, other bandwidth, or a combination thereof. In one example, measuring bandwidth can be done by counting the amount of data copied from one location in memory to another location per unit time. For example, copying 1 million bytes from one location in memory to another location in memory in one second would be counted as 1 million bytes per second (1 MB/sec). Configuration300can be the system configuration of host system120and can be based on or include one or more hardware configurations, software configurations, or a combination thereof. The hardware configurations can include the configuration of one or more computing resources210and include the configuration of CPUs211A-B, memory controllers215A-B, memory nodes240A-B, and interconnects230A-Z, or a combination thereof. The software configurations can include the installation, execution, or configuration of one or more computer programs that include device firmware, device drivers, kernel, applications, other portion of operating system220, or a combination thereof. In the example shown inFIG.3, configuration300is a system configuration that includes one or more optional configurations300A-Z. Configurations300A-Z can each correspond to a configuration parameter that can be changed to modify the characteristics of the memory subs-system. The configuration parameters and their alternate values (e.g., parameter values) are discussed in more detail in regards to parameter discovery module412ofFIG.4. Configurations300A-Z are example configurations that can be used individually or in combination to cause the existing memory sub-system to have characteristics that emulate a target memory sub-system. Configuration300A can change the data path from using local memory to using remote memory and can be colloquially referred to as introducing a data path detour (e.g., memory detour). Configuration300A can modify the data path and add one or more hops to the data path. Each extra hop can add an extra memory controller, interconnect, or a combination thereof. 
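Before turning to the example data paths, the copy-based bandwidth measurement described above can be sketched as follows. Note that a pure-Python copy understates raw hardware bandwidth, so this only illustrates the counting method (bytes moved divided by elapsed time); real measurements would use a tool such as the ones named later in this description.

    # Minimal bandwidth measurement in the spirit of the 1 MB/sec example.
    import time

    def measure_copy_bandwidth(size_bytes=256 * 1024 * 1024):
        src = bytearray(size_bytes)
        start = time.perf_counter()
        dst = bytes(src)               # one memory-to-memory copy
        elapsed = time.perf_counter() - start
        return len(dst) / elapsed      # bytes per second

    print(f"{measure_copy_bandwidth() / 1e9:.2f} GB/s")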
As shown inFIG.3, process310can execute a benchmarking program312and would normally be associated with memory on local memory node240B, and configuration300A can cause process310to be associated with memory on remote memory node240A (e.g., allocated, designated, or assigned remote memory). The default data path for process310would have been from CPU211B to memory controller215B to interconnect230B to memory node240B. However, configuration300A causes the data path to be from CPU211B to interconnect230Z to CPU211A (e.g., CPU Cache to CPU Cache) to memory controller215A to interconnect230A to memory node240A. This adds an extra interconnect to the data path, which increases the latency and decreases the bandwidth. Configuration300A can be implemented by using an affinity parameter to cause process310to remain on CPU211B and a pinning parameter to cause the process to use remote memory that is one or more hops away from CPU211B (e.g., a remote NUMA node). Configuration300B can change the speed of one or more of the computing resources of host system120. Changing the speed can involve increasing or decreasing the speed of one or more of the memory controllers, interconnects, CPUs, memory nodes, memory devices, or a combination thereof. In one example, configuration300B can decrease the speed of memory controller215A. In another example, configuration300B can decrease the speed of interconnect230A, interconnect230Z, or a combination thereof. In either example, this may involve overclocking or underclocking the computing resources by modifying one or more parameters related to clock rates, clock multipliers, other feature, or a combination thereof. The clock rate can be the frequency at which the clock generator can generate pulses that are used to synchronize the data operations of computing resources. The clock rate can be provided by a motherboard, CPU, memory controller, memory node, memory device, other computing resource, or a combination thereof. The clock rate can be an indicator of the speed or frequency of the computing resource and therefore clock rate, clock speed, and clock frequency can be used interchangeably. The clock rate can be measured in clock cycles per second (cycles/sec) or the unit hertz (Hz). The clock multiplier can be the ratio of an internal clock rate to an externally supplied clock rate. There can be a clock multiplier for CPUs (e.g., CPU multipliers), interconnects (e.g., bus multipliers), memory controllers (e.g., controller multipliers), other device, or a combination thereof. The clock multiplier can modify the interconnect to processor ratio (e.g., bus/core ratio). For example, CPUs211A and211B can have a 36× clock multiplier and for every external clock cycle (e.g., 100 MHz) there can be 36 internal cycles (e.g., 3.6 GHz). Configuration300C can involve partitioning the bandwidth into multiple partitions. This decreases the portion of bandwidth available along data path330to a fraction of the original bandwidth (e.g., ½, ⅓, ¼, etc.). The multiple partitions can each have the same portion of bandwidth (e.g., ½, ½) or have different portions of bandwidth (½, ¼, ¼). In one example, configuration300C can change parameters that cause bandwidth bisection and result in the bandwidth being split into two partitions, but other numbers of partitions are also possible and would further decrease the available bandwidth for data path330.
In the example shown inFIG.3, the bandwidth of interconnect230A can be partitioned into two portions and a portion can be used by process310that is testing the emulated memory sub-system and the remaining portions can be used by one or more other processes. Configurations300D and300E can involve making the memory nodes less efficient with loading and storing data. Configuration300D can involve spreading the data within the memory node so that the data is less efficiently accessed. This can be done before, during, or after writing or reading the data and involve storing the data across different memory cells, stripes, planes, zones, dies, or memory devices. Configuration300E can involve making a physical change to the memory node so that it has an unbalanced memory configuration. This can involve populating memory devices (DIMMs) to reduce the efficiency of memory interleaving, which results in increasing latency. This can also or alternatively involve having memory devices (e.g., DIMMs) at different speeds, different capacities, or missing a corresponding match (e.g., missing a matching 8 GB DIMM). As shown inFIG.2, memory node240A can include a 5 GB DIMM and a 3 GB DIMM instead of a pair of 4 GB DIMMs. Configuration300F can involve using a CPU to generate a computing workload that approximately loads the memory sub-system110. The load can be a workload that predictably and consistently loads memory controller215A, interconnect230A, memory node240A, other portion of memory sub-system110, or a combination thereof. The load can be generated using programs340A-Z and one or more threads342A-Z. Programs340A-Z can be designed to generate a workload that is memory intensive and precisely affects the characteristics305A-Z (e.g., increase latency, decrease available bandwidth). Programs340A-Z can generate a sequence of memory operations and use an arrangement of binary data that increases or maximizes the consumption of memory sub-system110. Programs340A-Z can be memory intensive and include code (e.g., executable data), information (non-executable data), or a combination thereof. The code can include one or more function calls (e.g., API calls, system calls, hypercalls), commands (e.g., Command Line Interface (CLI) commands), instructions (e.g., CPU instructions), other operation, or a combination thereof. In one example, programs340A-Z can include third party utilities (e.g., LikWid), OS utilities (e.g., Linux memhog), CPU manufacturer utilities (e.g., Intel Memory Latency Checker (MLC) load generator functions), other programs, or a combination thereof. Threads342A-Z can be computing threads that are used to execute programs340A-Z. Threads342A-Z can each execute an instance of a computer program and programs340A-Z may be instances of the same computer program or instances of different computer programs. Each of the threads342A-Z can have a limit to the amount of load it can generate and the more threads the larger the total load. Configuration300F can include selecting the number of threads to precisely control the size of the load on memory sub-system110. Threads342A-Z can execute in parallel (e.g., concurrently) and can be sibling threads that are all part of the same computing process or can be threads of one or more different computing processes. In the example ofFIG.3, threads342A-Z can execute on CPU211A and apply the load to the local memory node240A and local memory controller215A.
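A bare-bones sketch of configuration300F follows: a tunable number of threads running a memory-intensive loop. In CPython the global interpreter lock limits true parallelism, so a production load generator would use one of the utilities named above (e.g., Intel MLC or memhog) or multiple processes; the sketch only shows how the thread count scales the load.

    # Illustrative load generator: each thread touches one byte per 4 KiB page.
    import threading

    def memory_intensive(stop, buf_len=64 * 1024 * 1024):
        buf = bytearray(buf_len)
        page_bytes = b"x" * (buf_len // 4096)
        while not stop.is_set():
            buf[::4096] = page_bytes      # stride writes defeat cache reuse

    stop = threading.Event()
    threads = [threading.Thread(target=memory_intensive, args=(stop,), daemon=True)
               for _ in range(8)]          # the thread count tunes the load size
    for t in threads:
        t.start()
    # ... run the evaluation workload, then call stop.set() to end the load.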
In other examples, threads342A-Z can execute on another CPU or on multiple CPUs and apply the load to the same memory node240A and memory controller215A or to multiple memory nodes and memory controllers. Configuration300Z can involve introducing additional overhead on read operations, write operations, or a combination thereof. This can be done so that the performance characteristics between read and write are more or less symmetric (e.g., increase or decrease the read-to-write ratio). For example, overhead can be added to the write operations to make them slower without changing the speed of the read operations. FIG.4is a block diagram illustrating an exemplary host system120that can update its configuration to emulate a memory sub-system that is absent from host system120. The features discussed in regards to the components and modules ofFIG.4can be implemented in software (e.g., program code) or hardware (e.g., circuitry) of host system120. More or fewer components or modules can be included without loss of generality. For example, two or more of the components can be combined into a single component, or features of a component can be divided into two or more components. In the example illustrated, host system120can include a configuration analysis component222, an emulation component224, and a performance testing component226. Configuration analysis component222can be used to determine the configuration of host system120and to discover parameters that are available to modify the configuration. In one example, configuration analysis component222can include a configuration determination module410, a parameter discovery module412, a candidate evaluation module414, and a modeling module416. Configuration determination module410can enable host system120to determine the configuration of host system120. As discussed above, the configuration can be a system configuration that includes software configurations and/or hardware configurations of one or more of the computing resources. Determining the configuration of host system120can involve accessing data about the configuration of host system120from the operating system, computing resources, or a combination thereof and storing it as configuration data442in data store440. The operating system is responsible for managing computing resources and often stores configuration data in one or more storage objects, such as files (e.g., configuration files, settings files), registries (e.g., hives), databases (e.g., configuration records), other storage object, or a combination thereof. Configuration determination module410can access configuration data from the operating system (OS) by making one or more requests (e.g., system calls) and receiving responses with the configuration data. Configuration determination module410can also or alternatively request configuration data directly from the computing resources by transmitting requests to the computing resource (e.g., CPU, memory controller) and receiving a response with the configuration data. Configuration data442can include data that represents information about the computing resources210, computing topology250, configuration300, and optional configurations300A-Z discussed above in regards toFIGS.2-3. The configuration data received from the operating system or computing resources can be transformed, aggregated, filtered, or supplemented before, during, or after being stored in data store440as configuration data442.
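On a Linux host, one plausible form of the discovery step performed by configuration determination module410is to enumerate the NUMA nodes exposed under sysfs; the paths below are standard Linux sysfs locations, but treating this as the module's implementation is an assumption, since the patent leaves the mechanism OS-agnostic.

    # Hypothetical Linux-specific discovery of the NUMA topology via sysfs.
    import glob
    import pathlib

    def discover_numa_topology():
        topology = {}
        for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
            path = pathlib.Path(node)
            topology[path.name] = {
                "cpus": (path / "cpulist").read_text().strip(),        # e.g. "0-7"
                "mem": (path / "meminfo").read_text().splitlines()[0],  # total memory
            }
        return topology   # would be stored as part of configuration data 442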
The stored configuration data 442 can include information about the quantity, location, types, versions, identifiers, or other information for the hardware and software of some or all of the computing resources of host system 120. In one example, configuration data 442 can indicate that host system 120 is a multi-socket server with a particular number of CPUs and memory nodes. The configuration data can also indicate the relative locations (e.g., hop count) of the CPUs and memory nodes and indicate the memory nodes that are local to and remote from each CPU.

Parameter discovery module 412 can enable host system 120 to analyze configuration data 442 to identify parameters that are available to update configuration 300. Parameter discovery module 412 can be aware of a global set of parameters and can analyze the configuration data to detect which parameters in the global parameter set are available on host system 120. In one example, parameter discovery module 412 can determine available parameter values corresponding to: bus speed options (e.g., 133 MHz, 266 MHz), the number of parallel threads per CPU (e.g., 4, 8, 56 concurrent threads), options for thread affinity (e.g., thread 1 bound to core 1), pinning the memory of a thread (e.g., remote or local), and options to detour the data path (e.g., 0, 1, or 3 hops). Each of the available parameters can correspond to one or more alternate parameter values. The resulting set of available parameters and their corresponding set of available parameter values (e.g., parameter value data 444) can define a configuration parameter space for host system 120.

The configuration parameter space can represent different options for configuring host system 120. The configuration parameter space can be an n-dimensional space where each dimension corresponds to a parameter in the set of available parameters and the locations along the dimension correspond to the alternate parameter values for that parameter. For example, the set of available parameters can include a first parameter with two options (e.g., data path with zero hops or one hop), a second parameter with three options (e.g., bus speed of underclocked, normal, or overclocked), and a third parameter with nine options (e.g., 0-8 loading threads). In this simplified example, the set of available parameters is 3 and the set of available parameter values is 14 (2+3+9).

Candidate evaluation module 414 can enable host system 120 to select and evaluate one or more candidate configurations for host system 120. Each of the candidate configurations can be a particular combination of parameter values and can correspond to a single point in the configuration parameter space. The configuration parameter space can have n dimensions, and therefore each point in the configuration parameter space can correspond to n coordinate values (e.g., a value along the first dimension, second dimension, and third dimension). The combination of coordinate values that identifies the point maps to the combination of available parameter values that makes up a single candidate configuration. In the simplified example discussed above, the configuration parameter space is based on the set of 3 available parameters that have a total of 14 available parameter values. This results in 54 potential combinations (2*3*9), and each of the potential combinations can be a potential configuration of host system 120.
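The counting in this simplified example can be reproduced in a few lines of code. The sketch below is illustrative only; the parameter names and values are hypothetical stand-ins for the parameter value data 444 described above.

```python
from itertools import product

# Hypothetical available parameters and their alternate values.
parameter_space = {
    "data_path_hops": [0, 1],                                # 2 options
    "bus_speed": ["underclocked", "normal", "overclocked"],  # 3 options
    "loading_threads": list(range(9)),                       # 9 options (0-8)
}

# Size of the set of available parameter values: 2 + 3 + 9 = 14.
num_values = sum(len(values) for values in parameter_space.values())

# Every point in the space is one potential configuration: 2 * 3 * 9 = 54.
candidates = [dict(zip(parameter_space, combo))
              for combo in product(*parameter_space.values())]

print(num_values, len(candidates))  # -> 14 54
```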
Candidate evaluation module 414 can explore the configuration parameter space by selecting which of the potential configurations should be a candidate configuration that gets evaluated. In one example, candidate evaluation module 414 can select every potential combination as a candidate configuration. In another example, candidate evaluation module 414 can select a subset of the potential combinations as candidate configurations. In yet another example, candidate evaluation could start with a candidate set and stop when a sufficiently good candidate set is found. In any example, candidate evaluation module 414 can evaluate each of the selected candidate configurations by measuring the characteristics of the memory sub-system while host system 120 is using the candidate configuration. The measurement can take place during normal use of host system 120, or candidate evaluation module 414 can run an evaluation workload that includes a particular program and data set that is used to evaluate the candidate configuration. The evaluating can be the same as or similar to experimenting, testing, executing, running, another term, or a combination thereof.

Candidate evaluation module 414 can use one or more programs to test and measure the characteristics of the candidate configuration. The programs can be the same as or similar to tools, utilities, or features and can include CPU manufacturer utilities (e.g., Intel Memory Latency Checker (MLC), Intel Processor Counter Monitor (PCM)), third-party tools (e.g., Likwid, likwid-bench, sysinternals, ProcMon), OS utilities (e.g., Task Manager), other programs, or a combination thereof. Before, during, or after evaluating the candidate configurations, candidate evaluation module 414 can store the resulting characteristics of each candidate configuration. The resulting characteristics (e.g., latency, bandwidth, etc.) can be stored in data store 440 and can also or alternatively be used to update the parameter space (e.g., adding results to the points).

Modeling module 416 can enable host system 120 to evaluate characteristic data 446 and generate a model to represent the effects of configuration updates on the characteristics of the memory sub-system 110. In one example, the model can be a mathematical model that represents the characteristics of the memory sub-system as a function of the available parameter values. In another example, the model can be a data structure that maps characteristics of the memory sub-system to the corresponding parameter values or candidate configurations. In either example, modeling module 416 can model all of the evaluated candidate combinations, which can include all of the potential combinations in the parameter space or a subset of the potential combinations (e.g., modeling data 448). The results of the modeling can be displayed to the user to enable the user to determine the range of characteristics that can be achieved by re-configuring host system 120. In one example, host system 120 can avoid a combinatorial explosion by evaluating an initial set of the potential combinations before determining the target characteristics and a subsequent set of potential combinations after determining the target characteristics, as discussed below in regards to calibration module 424. The initial set can be simple candidate configurations that include a change to a single parameter or a small subset of the parameters.
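A sketch of this explore-and-record loop might look as follows. It is a hypothetical outline, assuming an apply_configuration() that reconfigures the host and a measure_characteristics() that wraps a measurement utility such as those named above; neither function is defined by the source.

```python
from typing import Callable, Dict, List, Tuple

Characteristics = Dict[str, float]  # e.g., {"latency_ns": ..., "bandwidth_gib_s": ...}

def evaluate_candidates(
    candidates: List[dict],
    apply_configuration: Callable[[dict], None],             # assumed: updates the host
    measure_characteristics: Callable[[], Characteristics],  # assumed: runs a probe
) -> Dict[Tuple, Characteristics]:
    """Evaluate each candidate configuration and record the results.

    The returned mapping plays the role of a simple model: it maps each
    point in the parameter space to the characteristics measured there.
    """
    results: Dict[Tuple, Characteristics] = {}
    for candidate in candidates:
        apply_configuration(candidate)
        key = tuple(sorted(candidate.items()))  # one point in the parameter space
        results[key] = measure_characteristics()
    return results
```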
For example, a first candidate configuration can include a change to a first parameter, a second candidate configuration can include a change to a second parameter, and both candidate configurations can be absent changes to any of the other parameters. All of the candidate configurations can be defined by the parameter values that are different from another configuration, and therefore a candidate configuration can be represented by a single parameter value. The other configuration can be a base configuration, a default configuration, a prior configuration, a current configuration, a future configuration, or a combination thereof. This is advantageous because a particular configuration of host system 120 can correspond to hundreds or thousands of different parameter values.

Emulation component 224 can determine characteristics of a target memory sub-system that is absent from host system 120 and update the configuration of host system 120 to emulate the target memory sub-system. Emulation component 224 can evaluate multiple candidate configurations in order to identify the configuration that most closely emulates the performance characteristics of the target memory sub-system. In one example, emulation component 224 can include a target characteristics module 420, a configuration updating module 422, and a calibration module 424.

Target characteristics module 420 can determine the target characteristics that the memory sub-system of the host system will emulate. Target characteristics module 420 can receive a request from a user to emulate a characteristic of a target memory sub-system. The request can include user input or can initiate a prompt by the host system to receive user input. The target characteristics can be determined based on the user input. In one example, the user can provide input that identifies a target memory sub-system, and target characteristics module 420 can look up the target characteristics for the target memory sub-system. The look-up can be a local look-up using a table with specifications of different memory sub-systems or can be a remote look-up that uses a service available over the internet. In another example, the user can provide the target characteristics by selecting them from an interface or typing them into an interface. The interface can be based on a Graphical User Interface (GUI), a Command Line Interface (CLI), a Web Interface, an Application Programming Interface (API), another interface, or a combination thereof. In one example, the memory sub-system of the host system can include Dynamic Random Access Memory (DRAM) and the target memory sub-system can include Non-Volatile Memory (NVRAM) that is absent from the host system. The target characteristics of the target memory sub-system can include a read latency (e.g., X nanoseconds), a write latency (e.g., Y nanoseconds), a transfer bandwidth (e.g., Z GiB/s), another characteristic, or a combination thereof.

Configuration updating module 422 can update the configuration of host system 120 based on the plurality of candidate configurations. The updated configuration can change the memory sub-system to emulate the characteristic of the target memory sub-system. In one example, updating the configuration based on the plurality of candidate configurations can involve selecting one of the candidate configurations.
In another example, updating the configuration based on the plurality of candidate configurations can involve identifying a new configuration that is different from the candidate configurations, as discussed below in regards to the calibration module. In either example, configuration updating module 422 can update the configuration of host system 120 by starting one or more threads on a CPU associated with a local memory node of the memory sub-system. The one or more threads can each comprise a memory-intensive program. Configuration updating module 422 can cause memory allocated to the multiple threads to be located on a remote memory node of the memory sub-system and not on the local memory node (e.g., extend the data path one hop to remote memory). Configuration updating module 422 can reduce a bus speed of a remote memory controller that provides the CPU access to the remote memory node and activate processor affinity to cause the one or more threads to stay on the CPU and continue using the extended data path.

Calibration module 424 can enable host system 120 to measure the characteristics of the current configuration and to adjust the current configuration to more closely align with the target characteristics. Calibration module 424 can perform adjustments after determining the target characteristic and can be the same as or similar to candidate evaluation module 414, which can perform evaluations of the candidate configurations in the absence of the target characteristics (e.g., before determining the target characteristic or without checking it). Calibration module 424 can execute as a single iteration or as multiple iterations that may or may not use feedback from a prior iteration. Each iteration can identify a new configuration and use one or more of the modules discussed above to update the existing configuration and evaluate the new configuration. In one example, updating the configuration based on the plurality of candidate configurations can involve selecting a configuration that is based on the resulting characteristics of the plurality of candidate configurations but is different from the candidate configurations. For example, two of the candidate configurations can be close to the target characteristic, but one can be slightly higher and the other slightly lower. Configuration updating module 422 can identify a new configuration based on the two candidate configurations (e.g., a new configuration between the two) and update the host based on the identified new configuration. In either example, configuration updating module 422 can configure host system 120 so that the characteristics of the memory sub-system are substantially similar (e.g., plus or minus 10%) to the target characteristics.

Performance testing component 226 can run one or more tests on the host system after host system 120 is configured to emulate the targeted memory sub-system. In one example, performance testing component 226 can include a test selection module 430, an execution module 432, and a results module 434.

Test selection module 430 can select the one or more tests to run on host system 120. The tests can include one or more benchmark tests, performance tests, use case tests, system tests, functional tests, regression tests, other tests, or a combination thereof. The test can be selected based on user input, computing resources, another aspect of host system 120, or a combination thereof. The test can include executable data (e.g., code, programs), non-executable data (e.g., workload, settings), other data, or a combination thereof.
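The bracketing step attributed to calibration module 424 above can be sketched as a simple interpolation. This is an illustrative outline only: it assumes each candidate has already been evaluated for a single scalar characteristic (e.g., read latency) and that the tunable parameter is numeric, neither of which the source requires.

```python
def calibrate(evaluated: dict, target: float) -> float:
    """Pick a new parameter value between the two candidates that bracket
    the target characteristic.

    `evaluated` maps a numeric parameter value (e.g., a thread count) to
    the characteristic measured for that candidate (e.g., latency in ns).
    """
    below = max((p for p, c in evaluated.items() if c <= target),
                key=lambda p: evaluated[p], default=None)
    above = min((p for p, c in evaluated.items() if c >= target),
                key=lambda p: evaluated[p], default=None)
    if below is None and above is None:
        raise ValueError("no evaluated candidates")
    if below is None:
        return above
    if above is None or below == above:
        return below
    # Linear interpolation between the two bracketing candidates.
    c_lo, c_hi = evaluated[below], evaluated[above]
    fraction = (target - c_lo) / (c_hi - c_lo)
    return below + fraction * (above - below)

# Example: thread counts 2 and 4 yield 180 ns and 260 ns; target is 200 ns.
print(calibrate({2: 180.0, 4: 260.0}, 200.0))  # -> 2.5 (round to a valid count)
```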
The tests can include one or more tests that are sensitive to memory bandwidth, memory latency, another characteristic, or a combination thereof. A test that is sensitive to memory bandwidth can be a stream test (e.g., STREAM Benchmark). The stream test can involve data streaming using sequential memory copy operations that consume all of the available bandwidth of the emulated memory sub-system. The stream test can use cache read-ahead to make the test less sensitive to memory latency. A test that is sensitive to memory latency can be a graph test (e.g., Graph500 Benchmark). The graph test can involve traversing a graph data structure and can involve many dependent loads (e.g., pointer chasing) that make it more sensitive to memory latency and less sensitive to memory bandwidth.

Test execution module 432 can enable host system 120 to run the one or more tests of the host system after updating the configuration to emulate the characteristic of the target memory sub-system. As discussed above, one or more of the CPUs of host system 120 can be loading CPUs that are running loading threads to generate a precision workload to emulate the target characteristics. Test execution module 432 can run the test using different CPUs (e.g., a testing CPU) or on different cores of the loaded CPU (e.g., testing cores). In either example, the updated configuration applies a load on the memory sub-system using a first CPU core and the performance test applies an additional load on the memory sub-system using a second CPU core (e.g., on the same or a different CPU).

Results module 434 can enable host system 120 to determine the results of the one or more tests. The tests can include techniques to measure the performance of the test before, during, or after it runs. The test results can then be stored, transmitted, displayed, subjected to another action, or a combination thereof.

FIG. 5 is a flow chart of a method 500 for updating the configuration of the host system to emulate the performance characteristics of a target memory sub-system, in accordance with some embodiments of the present disclosure. Method 500 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, method 500 is performed by host system 120 of FIGS. 1-4. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.

At operation 510, the processing logic can determine a configuration of a host system that includes a memory sub-system. In one example, the host system is a multi-socket server with multiple CPUs, and the memory sub-system includes multiple memory nodes. Each of the CPUs can use a local memory controller to access a local memory node and can use a remote memory controller to access a remote memory node. At operation 520, the processing logic can receive a request to emulate a characteristic of a target memory sub-system.
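Returning to the bandwidth- and latency-sensitive tests described at the start of this passage, the following toy sketch contrasts a sequential-copy kernel (bandwidth-bound and prefetch-friendly) with a pointer-chasing kernel (latency-bound, because each load depends on the previous one). It is a schematic model, not the STREAM or Graph500 code.

```python
import random

def sequential_copy(src: list, dst: list) -> None:
    """Bandwidth-style kernel: independent accesses that read-ahead can hide."""
    for i in range(len(src)):
        dst[i] = src[i]

def pointer_chase(next_index: list, steps: int) -> int:
    """Latency-style kernel: each load depends on the result of the last."""
    i = 0
    for _ in range(steps):
        i = next_index[i]  # dependent load; cannot be prefetched ahead
    return i

n = 1 << 20
perm = list(range(n))
random.shuffle(perm)          # random order defeats cache read-ahead
pointer_chase(perm, steps=n)  # dominated by access latency
sequential_copy(perm, [0] * n)  # dominated by transfer bandwidth
```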
The characteristic of the target memory sub-system can be a set of performance characteristics that includes at least one of a read latency (e.g., 200 ns), a write latency (e.g., 400 ns), and a transfer bandwidth (e.g., 35 GiB/s). The individual performance characteristics of the target memory sub-system can be worse than, better than, or equal to the corresponding performance characteristics of the memory sub-system of the host system. In one example, the memory sub-system of the host system comprises Dynamic Random Access Memory (DRAM) and the target memory sub-system comprises Non-Volatile Memory (NVRAM) that is absent from the host system.

At operation 530, the processing logic can analyze a plurality of candidate configurations for the host system. The plurality of candidate configurations can include one or more candidate configurations that generate a load on the memory sub-system to make the memory sub-system emulate the target memory sub-system (e.g., mimic, simulate, or emulate one or more of the target characteristics). Each of the plurality of candidate configurations corresponds to a point in a configuration parameter space and includes a combination of parameter values. The configuration parameter space represents alternate parameter values of a set of one or more parameters available on the host system. The processing logic can determine the plurality of candidate configurations based on available parameter values of the host system. The determination can involve determining available parameter values corresponding to bus speeds for the memory sub-system, determining available parameter values corresponding to a number of parallel threads that can be executed by a CPU of the host system, determining available parameter values corresponding to an affinity of a thread to a core of the CPU, and/or determining available parameter values corresponding to pinning memory of a thread to a remote memory node.

The processing logic can analyze the plurality of candidate configurations by exploring the parameter space of the host system. The exploring can involve selecting a candidate configuration based on a set of available parameter values, updating the configuration of the host system based on the candidate configuration, and storing characteristic data of the candidate configuration. The characteristic data can indicate one or more characteristics of the memory sub-system while using the candidate configuration. The processing logic can evaluate the characteristic data and generate one or more mathematical models that represent the characteristics of the memory sub-system as a function of the available parameter values.

At operation 540, the processing logic can update the configuration of the host system based on the plurality of candidate configurations. The updated configuration can change the memory sub-system to emulate the characteristic of the target memory sub-system. In one example, updating the configuration of the host system can involve starting a plurality of threads on a CPU associated with a local memory node of the memory sub-system. The plurality of threads can include a memory-intensive program. The processing logic can allocate memory for the plurality of threads on a remote memory node of the memory sub-system and reduce a bus speed of a remote memory controller that provides the CPU access to the remote memory node. The processing logic can activate processor affinity to cause the plurality of threads to stay on the CPU.
In an alternate example of method 500, the processing logic can run a performance test of the host system after updating the configuration to emulate the characteristic of the target memory sub-system. The updated configuration can apply a load on the memory sub-system using a first CPU and the performance test can apply an additional load on the memory sub-system using a second CPU. The results of the performance test can indicate the effects the target memory sub-system would have on the host system.

FIG. 6 illustrates an example machine of a computer system 600 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed. In some embodiments, the computer system 600 can correspond to a memory controller (e.g., the memory controller 215A-Z of FIGS. 2-3) that includes, is coupled to, or utilizes a memory sub-system (e.g., memory sub-system 110 of FIGS. 1-3). In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment. The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

The example computer system 600 includes a processing device 602, a main memory 604 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 606 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system 618, which communicate with each other via a bus 630. Processing device 602 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 602 can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. The processing device 602 is configured to execute instructions 626 for performing the operations and steps discussed herein. The computer system 600 can further include a network interface device 608 to communicate over the network 620.
The data storage system 618 can include a machine-readable storage medium 624 (also known as a non-transitory computer-readable medium) on which is stored one or more sets of instructions 626 or software embodying any one or more of the methodologies or functions described herein. The instructions 626 can also reside, completely or at least partially, within the main memory 604 and/or within the processing device 602 during execution thereof by the computer system 600, the main memory 604 and the processing device 602 also constituting machine-readable storage media. The machine-readable storage medium 624, data storage system 618, and/or main memory 604 can correspond to the memory sub-system 110 of FIG. 1. In one embodiment, the instructions 626 include instructions to implement functionality corresponding to the emulation component 224 of FIGS. 1-2 and 4.

While the machine-readable storage medium 624 is shown in an example embodiment to be a single medium, the term "non-transitory machine-readable storage medium" should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term "machine-readable storage medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term "machine-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.

Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.

The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems.

The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general purpose computer selectively activated or reconfigured by a computer program stored in the computer.
Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.

The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein.

The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., non-transitory computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory components, etc.

In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. | 70,718
11861194 | DETAILED DESCRIPTION Embodiments will be described hereinafter with reference to the accompanying drawings. The following explanations disclose examples of devices and methods to embody the technical idea of the embodiments, and the technical idea of the embodiments is not limited to the structures, layout, etc., of the components explained below. Modifications which are easily conceivable by a person of ordinary skill in the art come within the scope of the disclosure as a matter of course. To make the description clearer, in a plurality of drawings, constituent elements comprising substantially the same function and structure are denoted by like reference numbers and their detailed descriptions may be omitted unless necessary.

In general, according to one embodiment, a storage device is configured to store unencrypted user data. The user data is erased according to at least one data erasure mechanism. The storage device includes a receiver configured to receive an inquiry from a host device, and a transmitter configured to transfer response information indicating the at least one data erasure mechanism to the host device.

First Embodiment

FIG. 1 shows examples of connection between a storage device 12 and a host device 14 according to a first embodiment. The storage device 12 is connected to the host device 14, and writes user data transferred from the host device 14 to a storage medium of the storage device 12 or transfers user data read from the storage medium to the host device 14. The interface between the storage device 12 and the host device 14 is, for example, SCSI (registered trademark), ATA (registered trademark), NVM Express (registered trademark), or eMMC (registered trademark). A single storage device 12 may be connected to a single host device 14 in a one-to-one relationship as shown in FIG. 1A. Alternatively, a plurality of storage devices 12 may be connected to a single host device 14 via a network 16 as shown in FIG. 1B. The host device 14 may be an electronic device such as a personal computer (PC) in FIG. 1A. The host device 14 may be, for example, a server in FIG. 1B. The storage device 12 shown in FIG. 1A may be installed in the PC by a PC vendor. The number of users of a single storage device 12 is not limited to one. A single storage device 12 may be used by a plurality of users. For example, as shown in FIG. 1B, when the host device 14 offers a service for providing a virtual machine to a large number of users, a single storage device 12 may be divided into a plurality of areas (for example, namespaces, ranges, or partitions) such that each area can be the virtual machine of a corresponding user.

[General Configuration]

FIG. 2 shows an example of the configuration of the storage device 12. The storage device 12 includes an interface (I/F) processor 22 which is connected to the host device 14 via a host I/F (not shown). An authentication processor 102, an authorization processor 104, a data erasure mechanism indicating module 106, and a data erasure mechanism indicator request reception module 108 are connected to the I/F processor 22. The authentication processor 102 performs a user authentication process with a personal identification number (PIN) to control access to the storage device 12. A PIN manager 112 is connected to the authentication processor 102. The PIN manager 112 manages a plurality of PINs, for example, Owner PIN (security identifier: SID) 112a, Administrator PIN (Admin PIN) 112b, Label PIN (PSID) 112c, and User PIN 112d. To classify the authority into a hierarchy, Administrator PIN and User PIN are set.
The user may want to revert the storage device 12 to the shipping state for some reason. For example, when the storage device 12 is disposed of, the user wants to prevent the leakage of the user data stored in a storage module 34 from the storage device 12. In this specification, the process of reverting the storage device 12 to the shipping state is referred to as "reset." Reset includes both the erasure of user data (in other words, an operation for preventing user data from being read) and the initialization of PINs set after shipping. Here, it is assumed that a specific PIN, for example, Owner PIN or Label PIN, is required for reset. Further, it is assumed that the storage device 12 includes a lock function, and a specific PIN is needed to lock the storage device 12 (in other words, to cause the storage device 12 to transition from an unlocked state to a locked state) or to unlock the storage device 12 (in other words, to cause the storage device 12 to transition from a locked state to an unlocked state).

The authentication processor 102, a lock manager 110, an area information manager 114, an erase processor 118, and a read/write processor 122 are connected to the authorization processor 104. When a command for reset is received from the host device 14, the authentication processor 102 performs a user authentication process relating to the user who requests the command. Specifically, the authentication processor 102 checks whether or not the value of an input PIN matches the value of the PIN stored in the PIN manager 112. For example, when a request for authentication relating to the owner is received from the host device 14, the authentication processor 102 examines whether or not the value of the PIN included in the request for authentication matches the value of Owner PIN 112a stored in the PIN manager 112. When the values of the PINs match, the authentication processor 102 determines that the authentication is successful. When the values of the PINs do not match, the authentication processor 102 determines that the authentication fails.

The authorization processor 104 determines whether or not the user who issues a command (in other words, the user of the host device 14) has the authority to issue the command. The authorization processor 104 notifies the lock manager 110, the read/write processor 122, the erase processor 118, etc., of the determination result. That is, the authorization processor 104 manages a table for determining which command can be executed by which execution authority. When a command is received, the authorization processor 104 determines, based on the table, whether or not the command can be executed. For example, it is assumed that the table of the authorization processor 104 indicates that Revert command to reset the storage device 12 can be executed only when the authentication is successful with Owner PIN or Label PIN. It is further assumed that the user who succeeded in authentication with Owner PIN issues Revert command to reset the storage device 12 from the host device 14. The authorization processor 104 determines whether or not the user who issues Revert command has the authority to issue Revert command. In this example, the execution of Revert command is permitted when authentication succeeds with Owner PIN. Thus, the authorization processor 104 determines that the user who issues Revert command has the authority. If the user who succeeded in authentication with User PIN tries to execute Revert command, the authorization processor 104 determines that the user who issues Revert command does not have the authority.
When the authorization processor 104 determines that the user who issues Revert command has the authority, the authorization processor 104 transfers Revert command to the erase processor 118 to reset the storage device 12, causes the erase processor 118 to erase data, and resets the PINs to their initial values. If the user who issues the unlock command succeeded in authentication with User PIN or Administrator PIN, the authorization processor 104 transfers the unlock command to the lock manager 110. The lock manager 110 unlocks the storage device 12. The lock manager 110 may be configured to lock or unlock the entire user area of the storage module 34 managed by the area information manager 114. Alternatively, the lock manager 110 may be configured to lock or unlock a specific area of the storage module 34. Even if the user who issues the unlock command succeeded in authentication with Label PIN, the authorization processor 104 does not transfer the unlock command to the lock manager 110. Thus, the storage device 12 is not unlocked.

The data erasure mechanism indicator request reception module 108 and an erase information manager 124 are connected to the data erasure mechanism indicating module 106. The data erasure mechanism indicator request reception module 108 receives an inquiry about data erasure mechanisms from the host device 14 and transfers it to the data erasure mechanism indicating module 106. The data erasure mechanism indicating module 106 indicates the data erasure mechanisms supported by the storage device 12 to the host device 14. For example, the data erasure mechanisms include overwrite data erasure, block erasure, unmap, reset write pointers, and crypto erasure (encryption key updating). In overwrite data erasure, the area in which the data to be erased is stored is overwritten with "0" or with data generated by random numbers. Block erasure renders the original written data of the entire block, including the user data to be erased, unreadable. In unmap, a mapping table indicating in which block of the storage medium user data is stored is reset with regard to the user data. In reset write pointers, a pointer indicating in which block of the storage medium user data is stored is reset. In crypto erasure, where input user data is encrypted with a key provided in the storage device 12 and the encrypted data is stored in the storage module 34, the key used for the data encryption is eradicated. In this way, the encrypted data cannot be decrypted, and thus the input data is invalidated.

The erase information manager 124 is connected to the erase processor 118. The erase information manager 124 may not accept a read/write command while data is being erased, may manage the status of a data erasing process in preparation for power discontinuity while data is being erased, and may supply information indicating to what extent data has been erased to the host device 14 after restart at the time of power discontinuity.

The erase processor 118 includes an area erase module 118a and an areas erase module 118b. The erase processor 118 receives Revert command and RevertSP command, which are for resetting the storage device 12, from the host device 14. The erase processor 118 erases the data in the storage module 34 by a particular data erasure mechanism according to information specifying the data erasure mechanism and initializes PINs.
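To make the contrast between the listed mechanisms concrete, here is a toy Python model of two of them. The "encryption" is a stand-in XOR chosen only so the example is self-contained; real devices use hardware ciphers, and this sketch is in no way the implementation described here.

```python
import os

class ToyStorage:
    """Toy model contrasting overwrite erasure with crypto erasure."""

    def __init__(self, size: int) -> None:
        self.key = os.urandom(1)[0]   # stand-in for a media encryption key
        self.cells = bytearray(size)  # stores "encrypted" bytes

    def write(self, offset: int, data: bytes) -> None:
        for i, b in enumerate(data):
            self.cells[offset + i] = b ^ self.key  # toy cipher, not real crypto

    def read(self, offset: int, length: int) -> bytes:
        return bytes(c ^ self.key for c in self.cells[offset:offset + length])

    def overwrite_erase(self) -> None:
        self.cells[:] = bytes(len(self.cells))  # physically replace the data

    def crypto_erase(self) -> None:
        old = self.key
        while self.key == old:        # ensure the key actually changes
            self.key = os.urandom(1)[0]  # old ciphertext can no longer decrypt

dev = ToyStorage(16)
dev.write(0, b"secret")
dev.crypto_erase()
assert dev.read(0, 6) != b"secret"  # data invalidated without overwriting it
```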
Revert command and RevertSP command correspond to the Revert method and the RevertSP method defined in the specifications by the "Trusted Computing Group", for example, "TCG Storage Security Subsystem Class: Pyrite", Specification Version 1.00, Revision 1.00, Aug. 5, 2015. The area erase module 118a erases data in a specified area of a memory space of the storage module 34. When data in one area of the storage area of the storage device 12 is erased, erasure of data in other areas may be suspended. The areas erase module 118b collectively erases data in plural areas. Examples of the plural areas include partitions assigned the same Namespace ID.

The erase processor 118 and the read/write processor 122 are connected to the storage module 34. The storage module 34 includes a large-capacity nonvolatile storage medium such as a flash memory or a hard disk. The storage module 34 receives a read command or a write command from the host device 14, and writes or reads data accordingly.

Each part (module, processor, manager, etc.) of the storage device 12 can be implemented as software applications, hardware and/or software modules, or components on one or more computers or processors (CPUs). In this description, a module may be called a processor or a manager, a processor may be called a module or a manager, and a manager may be called a module or a processor.

[PIN]

PINs are described with reference to FIGS. 3A, 3B, and 3C. FIG. 3A shows commands which can be issued in accordance with the types of PINs. Owner PIN (SID) has the authority to issue Activate command and Revert command. The authorization processor 104 manages PINs and commands in terms of which PIN is able to issue which command. Activate command is a command to enable the lock function. Revert command is a command to set PINs to the initial values, disable the lock function, and forcibly erase data. Administrator PIN (Admin PIN) has the authority to issue RevertSP command. RevertSP command is a command to set PINs to the initial values and disable the lock function. With regard to forced data erasing, RevertSP command is able to specify with a parameter whether or not data should be erased. Label PIN (PSID) has the authority to issue Revert command. User PIN does not have the authority to issue Revert and RevertSP commands. However, User PIN is able to unlock the area assigned to the user.

FIG. 3B shows the types of PINs to be initialized and the types of data to be erased by the commands. Activate command does not relate to reset. To the contrary, Activate command relates to activation. Neither the initialization of PINs nor data erasing is performed by Activate command. By Revert command, data is erased, and Owner PIN and Administrator PIN are initialized. By RevertSP command, data is erased, and Administrator PIN is initialized. RevertSP command is able to specify with a parameter, when the command is issued, whether data should be erased or should be maintained without erasing. Revert command does not include a parameter for specifying whether or not data should be erased. By Revert command, data is always erased.

In addition to the above commands, Set command to set PINs is provided. Set command includes a parameter indicating the type of PIN to be set. The authority to issue Set command varies depending on the value of the parameter, in other words, the type of PIN to be set. For example, Set command to set User PIN can be issued by the administrator and the user. The owner does not have the authority to set User PIN.
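The issue-authority table of FIG. 3A can be captured as a small lookup structure, and the authorization check then reduces to a set-membership test. The sketch below is a schematic reading of the figure, assuming exactly the authorities stated in the text above.

```python
# Which authenticated PIN type may issue which command (per FIG. 3A).
COMMAND_AUTHORITY = {
    "Activate": {"Owner"},
    "Revert": {"Owner", "Label"},
    "RevertSP": {"Administrator"},
}

def authorize(command: str, authenticated_pin: str) -> bool:
    """Return True if the authenticated PIN type may issue the command."""
    return authenticated_pin in COMMAND_AUTHORITY.get(command, set())

assert authorize("Revert", "Label")        # Label PIN (PSID) may force a reset
assert not authorize("Revert", "User")     # User PIN may not
assert not authorize("RevertSP", "Owner")  # only the administrator may
```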
Thus, Label PIN fails in authentication for Set command to set User PIN. The authority to issue Activate command, Revert command, and RevertSP command is determined regardless of parameters.

According to the first embodiment, two types of Label PINs (PSIDs) can be set. As shown in FIG. 3C, a first-type Label PIN (PSID1) is the above Label PIN to reset the storage device 12; by PSID1, the data of the entire storage area is erased and Owner PIN and Administrator PIN are initialized. A second-type Label PIN is a Label PIN (PSID2, PSID3, . . . ) for each user. It is assumed that the storage area is allocated to a plurality of users (here, user 1 and user 2). By Revert command issued with Label PIN (PSID2) of user 1, the data of the area allocated to user 1 is erased, and User PIN of user 1 is initialized. By Revert command issued with Label PIN (PSID3) of user 2, the data of the area allocated to user 2 is erased, and User PIN of user 2 is initialized. In other words, by Revert command issued with Label PIN (PSID2) of user 1, the data of the area allocated to user 2 cannot be erased, and further, User PIN of user 2 cannot be initialized. By Revert command issued with Label PIN (PSID3) of user 2, the data of the area allocated to user 1 cannot be erased, and further, User PIN of user 1 cannot be initialized. In this manner, security can be improved for each user.

Administrator PIN is able to reset the storage device 12 to the shipping state. However, in preparation for the loss of Administrator PIN, Label PIN for reset may be printed somewhere on the storage device 12, for example, on the name plate label attached to the chassis of the storage device. For example, when the storage area is allocated to a plurality of users (user 1 and user 2), PSID1, PSID2, and PSID3 may be printed on the name plate label. A method for notifying the PC vendor or the user of Label PIN without printing Label PIN on the storage device 12 may be employed. For example, the PC vendor may provide the user with a website such that Label PIN is displayed when the user inputs the serial number of the PC. Similarly, the vendor of the storage device may provide the user with a website such that Label PIN is displayed when the serial number of the storage device is input. As shown in FIG. 1B, in preparation for a case where a plurality of storage devices 12 are connected to the host device 14, such as a server, and the storage devices 12 in the server should be simultaneously reset, the vendor of the storage devices 12 may set Label PIN having the same value for the storage devices 12 and notify the server vendor of the value of Label PIN by e-mail, etc.

[State Transition of Storage Device]

FIG. 4 shows an example of the state transition of the storage device 12. When the storage device 12 is shipped out, the storage device 12 is in an inactive state 40A. In the inactive state 40A, neither Administrator PIN nor User PIN can be set, and the lock function is disabled. SID (Owner PIN) is set to an initial value. As the initial value of SID, MSID PIN may be defined. Anybody can obtain MSID PIN using Get command. The method for notifying the user of the initial value of SID is not limited to the use of a command. The initial value of SID may be described in the manual or printed on the name plate label. When the storage device 12 is in the same status as when it was shipped (that is, the storage device 12 has not yet been configured), the host authenticates with the SID authority by using the initial value of SID, such as MSID PIN. Since the initial value of SID is MSID PIN, the authentication succeeds.
Subsequently, SID can be changed from the initial value to an arbitrary value (a PIN desired by the owner). It is assumed that the storage device 12 is shipped to, for example, the PC vendor in the inactive state 40A, and the PC vendor sets SID by the above method. When the storage device 12 in the inactive state 40A receives Set command to set SID from the host device 14, the authority of the user who sends Set command is checked. Set command includes a parameter including the SID to be set. The authority to set SID is the owner. When Set command is issued from the owner, SID is set. In an inactive state 40B, the value of SID is the value set by the owner with Set command (this value is not the initial value).

When the storage device 12 in the inactive state 40B receives Activate command from the host device 14, the authority of the user who sends Activate command is checked. Activate command is a command for causing the storage device 12 to transition to an active state. The authority to issue Activate command is the owner, as shown in FIG. 3A. When the owner issues Activate command, the storage device 12 transitions to an active state 40C. In the active state 40C, Administrator PIN and User PIN are set to the initial values, and the lock function is enabled. For example, it is assumed that the storage device 12 installed in a PC is shipped to the end user in the active state 40C, and Administrator PIN or User PIN is set on the end user side.

When the storage device 12 in the active state 40C receives Set command to set Administrator PIN or Set command to set User PIN from the host device 14, the authority of the user who sends Set command is checked. Set command includes a parameter including the Administrator PIN or User PIN to be set. The authority to set Administrator PIN is the administrator. The authority to set User PIN is the administrator and the user. When Set command is issued from a user who has the authority to issue Set command, the value of Administrator PIN or User PIN is set to the value specified by the end user with Set command (this value is not the initial value). Thus, the storage device 12 transitions to an active state 40D.

When the storage device 12 in the active state 40D receives Revert command to reset the storage device 12 from the host device 14, the authority of the user of the issuance source of Revert command is checked. The authority to issue Revert command is a user who knows Owner PIN or Label PIN. When Revert command is issued by a user who has the authority to issue Revert command, data is erased, and Owner PIN, Administrator PIN, and User PIN are initialized. Thus, the storage device 12 transitions to the inactive state (shipping state) 40A.

When the storage device 12 in the active state 40D receives RevertSP command to reset the storage device 12 from the host device 14, the authority of the user who sends RevertSP command is checked. The authority to issue RevertSP command is the administrator. When RevertSP command is issued by a user who has the authority to issue RevertSP command, data is erased, and Administrator PIN and User PIN are initialized. The storage device 12 transitions to the inactive state 40B. Even after the storage device 12 is reset by RevertSP command, the storage device 12 may remain in an active state instead of an inactive state.

When the PINs are initialized, the storage device 12 is automatically unlocked. As Owner PIN can be initialized by Revert command, the storage device 12 can be unlocked. However, data is erased by Revert command.
Thus, after the storage device 12 is unlocked, the data stored by the user does not remain in the storage device 12. Since Administrator PIN can also be initialized by RevertSP command, the storage device 12 can be unlocked. However, the storage device 12 can be unlocked by Administrator PIN without initializing Administrator PIN (without issuing RevertSP command). The lock manager 110 is provided with a flag for managing whether the storage device 12 is locked or not. The storage device 12 is locked when the flag is set and unlocked when the flag is reset. The flag can be set by Set command. Therefore, the storage device 12 can be unlocked without issuing RevertSP command. The authority to reset the flag is Administrator PIN. The area information manager 114 is able to set the flag for a specific area (range) of a storage area. To unlock a range 1, the flag for the range 1 is reset. The authority to reset the flag for the range 1 is confirmed by using User PIN1. A user who knows User PIN1 is able to lock or unlock the range 1 but is unable to lock or unlock a range 2. Thus, the storage device 12 can be locked in range units. A minimal sketch of such per-range lock flags follows this passage.

Since Label PIN has the authority to issue Revert command for initialization, the storage device 12 can be unlocked. However, data is erased by Revert command. Thus, after the storage device 12 is unlocked, the data stored by the user does not remain. When the storage device 12 receives Revert command, the storage device 12 erases data by an internal process and is also unlocked. Strictly speaking, one of data erasing and unlocking is performed, and subsequently, the other is performed. In consideration of security at the time of power discontinuity, unlocking should preferably be performed after data erasing. When power discontinuity occurs after unlocking and immediately before data erasing, the storage device 12 may be unlocked without the data being erased. However, when measures are taken to prevent such a situation at the time of power discontinuity, data erasing may be performed after unlocking.

[Sequence of Data Erasing]

FIG. 5A and FIG. 5B show an example of the sequence of data erasing for resetting the storage device 12. Prior to reset, the host device 14 transfers, to the storage device 12, a data erasure mechanism indicator request to inquire about the data erasure mechanism(s) supported by the storage device 12. For example, this request is transferred when the host device 14 is booted. The data erasure mechanism indicator request received in the data erasure mechanism indicator request reception module 108 is transferred to the data erasure mechanism indicating module 106. In step 50A, the data erasure mechanism indicating module 106 obtains, from the erase information manager 124, information indicating one or more erasure mechanisms supported by the storage device 12. The data erasure mechanism indicating module 106 sends back data erasure mechanism response information indicating the obtained data erasure mechanism(s) to the host device 14.

Now, examples of the data erasure mechanism indicator request and the data erasure mechanism response information are described with reference to FIGS. 6A and 6B. Here, the Level 0 Discovery Header and Level 0 Discovery Response Data Format are applied, which are defined in TCG Storage, Security Subsystem Class: Opal, Specification Version 2.01, Revision 1.00. FIG. 6A shows an example of a data erasure mechanism indicator request using Level 0 Discovery Header. Level 0 Discovery Header only instructs the storage device 12 to send back a Level 0 Discovery Response.
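The per-range lock flags and their PIN-based authority can be sketched as follows. This is an illustration, assuming the range-to-PIN assignment described above, not the device firmware.

```python
class RangeLocks:
    """Toy per-range lock state: each range has a flag and an owning User PIN."""

    def __init__(self, range_pins: dict) -> None:
        self.range_pins = range_pins                   # range id -> User PIN value
        self.locked = {r: True for r in range_pins}    # flag set == locked

    def unlock(self, range_id: int, pin: str) -> bool:
        """Reset the flag for one range if the caller knows its User PIN."""
        if self.range_pins.get(range_id) == pin:
            self.locked[range_id] = False
            return True
        return False                                   # wrong PIN: stays locked

locks = RangeLocks({1: "user-pin-1", 2: "user-pin-2"})
assert locks.unlock(1, "user-pin-1")       # user 1 unlocks range 1
assert not locks.unlock(2, "user-pin-1")   # ...but cannot unlock range 2
```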
Level 0 Discovery Header includes only a header portion and does not include a body portion. FIG. 6B shows an example of erasure mechanism response information to which Level 0 Discovery Response is applied. Level 0 Discovery Response includes the Level 0 Discovery Header shown in FIG. 6A and a body portion. In the Feature Descriptor, which is the body portion, plural feature descriptors are defined. As shown in FIG. 7A, one of the Feature Descriptors corresponds to erasure mechanisms. FIG. 7B shows an example of the data structure of the Feature Descriptor. The Feature Descriptor includes a header portion and a body portion. Bytes 0-3 are the header portion. Bytes 4-n are the body portion. The header portion includes a Feature code. Feature Descriptor data of the data erasure mechanisms supported by the storage device 12 is described in Byte 4. The data erasure mechanisms are allocated to the respective bits of Byte 4 as shown in FIG. 7C. When a bit is "1", its data erasure mechanism is supported. When a bit is "0", its data erasure mechanism is not supported. For example, when bit 0 of the Feature Descriptor data is "1", overwrite data erasure is supported. When bit 1 is "1", block erasure is supported. When bit 2 is "1", unmap is supported. When bit 3 is "1", reset write pointers is supported. When bit 4 is "1", encryption key updating (crypto erasure) is supported.

Returning to the explanation of the sequence of data erasing shown in FIG. 5A and FIG. 5B, the following situation is assumed. When the host device 14 receives information indicating a single data erasure mechanism from the storage device 12, the host device 14 specifies that data erasure mechanism. When the host device 14 receives information indicating a plurality of data erasure mechanisms, the host device 14 selects one of them and notifies the storage device 12 of information indicating the selected data erasure mechanism. However, there is a possibility that the host device 14 specifies a data erasure mechanism other than the above data erasure mechanisms. For example, the host device 14 may set a data erasure mechanism in the storage device 12 by Set command including a parameter indicating data erasure mechanism information.

The data erasure mechanism specification information received in the storage device 12 is transferred to the authentication processor 102. The authentication processor 102 performs an authentication process of the user who issues Set command specifying the data erasure mechanism in step 50B. In step 50C, the authorization processor 104 checks which PIN is used to authenticate the user who issues Set command to check whether or not the user has the authority to issue Set command. When Set command is issued by the user authenticated with Label PIN or User PIN, the authorization processor 104 determines that the authorization fails and transfers information indicating that the authorization fails to the host device 14 in step 50D. When Set command is issued by the user authenticated with Owner PIN or Administrator PIN, the authorization processor 104 determines that the authorization is successful. When the authorization is successful, the erase information manager 124 checks whether or not the storage device 12 supports the data erasure mechanism specified by the host device 14 in step 50C-1. When the data erasure mechanism specified by the host device 14 is not supported by the storage device 12 (NO in step 50C-1), the erase information manager 124 transfers information indicating a specification error to the host device 14 in step 50D-1.
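Decoding the Byte 4 bit assignments of FIG. 7C on the host side could look like the following sketch. The bit positions follow the text above; everything else (the function name, the framing) is illustrative.

```python
# Bit assignments of Feature Descriptor Byte 4, per FIG. 7C.
ERASURE_MECHANISM_BITS = {
    0: "overwrite data erasure",
    1: "block erasure",
    2: "unmap",
    3: "reset write pointers",
    4: "encryption key updating (crypto erasure)",
}

def decode_supported_mechanisms(byte4: int) -> list:
    """Return the mechanisms whose bit in Byte 4 is set to 1."""
    return [name for bit, name in ERASURE_MECHANISM_BITS.items()
            if byte4 & (1 << bit)]

# Example: 0b00101 -> overwrite data erasure and unmap are supported.
print(decode_supported_mechanisms(0b00101))
```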
Returning to the explanation of the sequence of data erasing shown in FIG. 5A and FIG. 5B, the following situation is assumed. When the host device 14 receives information indicating a single data erasure mechanism from the storage device 12, the host device 14 specifies that data erasure mechanism. When the host device 14 receives information indicating a plurality of data erasure mechanisms, the host device 14 selects one of them and notifies the storage device 12 of information indicating the selected data erasure mechanism. The host device 14 may also specify a data erasure mechanism by a method other than the above. For example, the host device 14 may set a data erasure mechanism in the storage device 12 by Set command including a parameter indicating data erasure mechanism information.

The data erasure mechanism specification information received in the storage device 12 is transferred to the authentication processor 102. The authentication processor 102 performs an authentication process of the user who issues Set command specifying the data erasure mechanism in step 50B. In step 50C, the authorization processor 104 checks which PIN is used to authenticate the user who issues Set command, to check whether or not the user has the authority to issue Set command. When Set command is issued by a user authenticated with Label PIN or User PIN, the authorization processor 104 determines that the authorization fails and transfers information indicating that the authorization fails to the host device 14 in step 50D. When Set command is issued by a user authenticated with Owner PIN or Administrator PIN, the authorization processor 104 determines that the authorization is successful. When the authorization is successful, the erase information manager 124 checks whether or not the storage device 12 supports the data erasure mechanism specified by the host device 14 in step 50C-1. When the data erasure mechanism specified by the host device 14 is not supported by the storage device 12 (NO in step 50C-1), the erase information manager 124 transfers information indicating a specification error to the host device 14 in step 50D-1. When the data erasure mechanism specified by the host device 14 is supported by the storage device 12 (YES in step 50C-1), the erase information manager 124 sets the data erasure mechanism specified by the host device 14 in the erase processor 118 in step 50E.

Subsequently, when there is a need to reset the storage device 12, the host device 14 notifies the storage device 12 of a reset command (data erase command). The host device 14 may notify the storage device 12 of a data erase command with, for example, Revert command or RevertSP command. The data erase command received in the storage device 12 is transferred to the authentication processor 102. The authentication processor 102 performs an authentication process of the user who issues Revert command or RevertSP command, which is the erase command, in step 50F. The authorization processor 104 checks whether the received command is Revert command or RevertSP command in step 50G.

When Revert command is received, the authorization processor 104 checks which PIN is used to authenticate the user who issues Revert command in step 50H, to check whether or not the user has the authority to issue Revert command. When Revert command is issued by a user authenticated with Administrator PIN or User PIN, the authorization processor 104 determines that the authorization fails in step 50I. Neither data erasing nor the initialization of PINs is performed. When Revert command is issued by a user authenticated with Owner PIN or Label PIN, the authorization processor 104 determines that the authorization is successful. In step 50J, the erase processor 118 erases data by the specified data erasure mechanism, and the PIN manager 112 initializes Owner PIN, Administrator PIN and User PIN. In this way, the storage device 12 transitions to the inactive state (shipping state) 40A shown in FIG. 4.

When RevertSP command is received, the authorization processor 104 checks which PIN is used to authenticate the user who issues RevertSP command in step 50K, to check whether or not the user has the authority to issue RevertSP command. When RevertSP command is issued by a user authenticated with Owner PIN, Label PIN or User PIN, the authorization processor 104 determines that the authorization fails in step 50L. Neither data erasing nor the initialization of PINs is performed. When RevertSP command is issued by a user authenticated with Administrator PIN, the authorization processor 104 determines that the authorization is successful. Whether or not data erasure is specified by a parameter in RevertSP command is checked in step 50M. When data erasure is specified (YES in step 50M), in step 50J, the erase processor 118 erases data by the specified data erasure mechanism, and the PIN manager 112 initializes Administrator PIN and User PIN. When data erasing is not specified (NO in step 50M), in step 50N, the PIN manager 112 initializes Administrator PIN and User PIN. In this way, the storage device 12 transitions to the inactive state 40B shown in FIG. 4.

As explained above, the storage device 12 notifies the host device 14 of the data erasure mechanism(s) the storage device 12 supports. The host device 14 is capable of specifying a data erasure mechanism for the storage device 12 based on this information. The storage device 12 checks the authority of the user who specified the data erasure mechanism. When the user has the authority, the storage device 12 sets the specified data erasure mechanism. When reset is actually performed, the host device 14 supplies a reset command to the storage device 12. The storage device 12 checks the authority of the user who issues the reset command. When the user has the authority, the storage device 12 erases data in accordance with the set data erasure mechanism and initializes the PINs. In this manner, even when unencrypted data is stored in the storage device 12, the storage device 12 can be reset. Data does not leak out from the reset storage device 12 after disposal. Security can be ensured. As encrypted data is not stored in the storage device 12, the host device 14 does not need to have an encryption application program. The processing load of the host device 14 is reduced. Since an encryption circuit is unnecessary, the manufacturing cost of the storage device 12 can be reduced.
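The authorization rules of steps 50C, 50H and 50K can be summarized as a small table. The sketch below is one way to express them; the names are hypothetical and the PIN model is simplified to the four authorities discussed above:

    # Which PINs may issue which command, per steps 50C, 50H and 50K.
    COMMAND_AUTHORITIES = {
        "Set":      {"Owner", "Administrator"},   # data erasure mechanism selection
        "Revert":   {"Owner", "Label"},
        "RevertSP": {"Administrator"},
    }

    def authorize(command, authenticated_pin):
        """Return True when the authenticated PIN has authority for the command."""
        return authenticated_pin in COMMAND_AUTHORITIES[command]

    assert authorize("RevertSP", "Administrator")
    assert not authorize("Revert", "Administrator")   # step 50I: authorization fails
    assert not authorize("Set", "User")               # step 50D: authorization fails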
The storage device 12 does not set a single access authority (unlock) for the entire storage area. The storage device 12 is capable of dividing the storage area into a plurality of areas (ranges) based on LBA ranges and setting an access authority (in other words, a PIN necessary for unlocking) for each range. The concept of ranges is described later with reference to FIG. 13. For example, as shown in FIG. 8, a range 2 may be set to an unlocked state where anybody can access the range. A range 1 may be set to a locked state where only the user and the administrator who normally use the storage device can unlock the range. A range 3 may be set to a locked state where only the administrator can unlock the range. By dividing the storage area of the storage device 12 into a plurality of ranges, a plurality of users can share the storage device 12 while maintaining security with respect to one another.

[Management of Erasing Process]

FIG. 9A is a flowchart showing an example of an erasing process in preparation for power discontinuity. The storage device 12 receives Revert/RevertSP command ("/" means "or") in step 222. The authentication processor 102 performs an authentication process of the user who issues Revert command or RevertSP command in step 224. The authorization processor 104 checks the authority of the user who issues Revert/RevertSP command and determines whether or not the command is issued by a user having the authority in step 226. When Revert/RevertSP command is not issued by a user having the authority to issue the command, the authorization fails in step 228. When Revert/RevertSP command is issued by a user having the authority to issue the command, the authorization processor 104 transfers Revert/RevertSP command to the erase processor 118 in step 230. In step 232, the erase processor 118 analyzes Revert/RevertSP command and determines to which range Revert/RevertSP command is related. The erase processor 118 (area erase module 118a) obtains the LBA range corresponding to the range of the result of determination, for example, LBA X-Y, and starts erasing data from the initial LBA X. While data is erased, the erase information manager 124 writes the erased LBAs to a nonvolatile memory in step 234. The nonvolatile memory may be realized by a flash memory provided in the erase information manager 124 or by a part of the storage module 34. The erase processor 118 determines whether or not data erasing in the LBA range corresponding to the range of the result of determination is completed in step 236. When data erasing is not completed, the erase processor 118 continues to erase data. When data erasing is completed, the erase processor 118 causes the erase information manager 124 to write a completion flag, indicating that the process of Revert/RevertSP command is completed, to the nonvolatile memory in step 238.
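A minimal sketch of the progress journaling of FIG. 9A follows. The EraseJournal object is a hypothetical stand-in for the nonvolatile memory written by the erase information manager 124; on a real device both fields would be persisted:

    class EraseJournal:
        def __init__(self):
            self.last_erased_lba = None    # progress record (step 234)
            self.completed = False         # completion flag (step 238)

    def erase_range(first_lba, last_lba, journal, erase_lba=lambda lba: None):
        for lba in range(first_lba, last_lba + 1):
            erase_lba(lba)                 # erase one LBA of the range (step 232)
            journal.last_erased_lba = lba  # journal the erased LBA (step 234)
        journal.completed = True           # write the completion flag (step 238)

    journal = EraseJournal()
    erase_range(0, 999, journal)
    assert journal.completed and journal.last_erased_lba == 999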
Even when power discontinuity occurs in the process of Revert/RevertSP command, information indicating that the process of Revert/RevertSP command is uncompleted, together with the erased LBAs, is stored in the nonvolatile memory. Thus, when power is restored, the storage device 12 is capable of efficiently restarting the uncompleted Revert/RevertSP command from the LBA whose data is not erased. There is no need to erase data from the beginning of the LBA range after the restart. Thus, the time required to erase data is not needlessly lengthened.

FIG. 9B is a flowchart showing an example of a process for restarting Revert/RevertSP command when power is restored. When power is turned on, the erase information manager 124 determines whether or not an uncompleted Revert/RevertSP command is present in step 242. When an uncompleted Revert/RevertSP command is not present, a normal process is performed in step 246. When an uncompleted Revert/RevertSP command is present, in step 244, the erase information manager 124 reads the erased LBAs from the nonvolatile memory, sets the erased LBAs as the erase restarting address of the erase processor 118, and causes the erase processor 118 to start erasing data from an unerased LBA (following the erase restarting address). While data is erased, the erase information manager 124 writes the erased LBAs to the nonvolatile memory in step 248. The erase processor 118 determines whether or not the data erasing of the range being erased is completed in step 250. When the data erasing is not completed, the erase processor 118 continues to erase data. When the data erasing is completed, in step 252, the erase processor 118 causes the erase information manager 124 to write a completion flag, indicating that the process of Revert/RevertSP command is completed, to the nonvolatile memory. Subsequently, a normal process is performed in step 254.

As shown in FIGS. 5A and 5B, the erasing process shown in FIGS. 9A and 9B may be performed after the storage device 12 notifies the host device 14 of the data erasure mechanism(s) the storage device 12 supports and the host device 14 specifies a data erasure mechanism. Alternatively, the erasing process shown in FIGS. 9A and 9B may be performed independently, regardless of the procedure shown in FIGS. 5A and 5B.
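A companion sketch of the power-on restart path of FIG. 9B, under the same assumptions as the previous sketch (the hypothetical journal holds the last erased LBA and a completion flag):

    class EraseJournal:  # same hypothetical journal shape as above
        def __init__(self, last_erased_lba=None, completed=False):
            self.last_erased_lba = last_erased_lba
            self.completed = completed

    def resume_after_power_on(first_lba, last_lba, journal,
                              erase_lba=lambda lba: None):
        if journal.completed:                      # step 242: nothing uncompleted
            return "normal process"                # step 246
        # Step 244: restart from the LBA following the last journaled one.
        start = (first_lba if journal.last_erased_lba is None
                 else journal.last_erased_lba + 1)
        for lba in range(start, last_lba + 1):
            erase_lba(lba)
            journal.last_erased_lba = lba          # step 248
        journal.completed = True                   # step 252
        return "normal process"                    # step 254

    # Power failed after erasing LBAs 0-499 of the range 0-999:
    journal = EraseJournal(last_erased_lba=499)
    assert resume_after_power_on(0, 999, journal) == "normal process"
    assert journal.completed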
[Exclusive Control of Data Erasing]

Exclusive control for prioritizing a data erasing process which is in progress is explained with reference to FIG. 10 to FIG. 12. FIG. 10 is a flowchart showing an example of a process for rejecting access while data is erased and prioritizing the erasing process. The storage device 12 receives Revert/RevertSP command in step 262. The authentication processor 102 performs an authentication process of the user who issues Revert command or RevertSP command in step 264. The authorization processor 104 checks the authority to issue Revert/RevertSP command and determines whether or not Revert/RevertSP command is issued by a user having the authority to issue the command in step 266. When Revert/RevertSP command is not issued by a user having the authority to issue the command, the authorization fails in step 268. When Revert/RevertSP command is issued by a user having the authority to issue the command, the storage device 12 determines whether or not a read/write command is received from the host device 14 while Revert/RevertSP command is executed in step 270. When a read/write command is received, the storage device 12 pushes a job for the read/write command into a queue, or sends back an error to the host device 14, in step 272. The queue may be provided in, for example, the read/write processor 122. When a read/write command is not received, the storage device 12 continues to execute Revert/RevertSP command in step 274.

The user does not know up to which LBA the data erase operation has been completed at any given moment. Therefore, when data is written while data is being erased, the user does not recognize whether the data is written into an area in which the data erase operation is completed or into an area in which the data erase operation is not completed. If data is written into an area in which the data erase operation is completed, the written data remains in the area. If data is written into an area in which the data erase operation is not completed, the written data is erased. The user, however, cannot apply the respective controls depending on whether the data is to be written into an area in which the data erase operation is completed or into an area in which it is not, and thus the user may be confused. The user may issue a write command assuming that the written data remains in the area, or assuming that the written data is erased. Under either assumption, the written data may remain or be erased depending on the situation, so that the expected result may not be obtained. According to the embodiment, the storage device 12 does not perform read/write operations during a data erase operation, thereby preventing such user confusion.

As shown in FIGS. 5A and 5B, the erasing process shown in FIG. 10 may be performed after the storage device 12 notifies the host device 14 of the data erasure mechanism(s) the storage device 12 supports and the host device 14 specifies a data erasure mechanism. Alternatively, the erasing process shown in FIG. 10 may be performed independently, regardless of the procedure shown in FIGS. 5A and 5B.
FIG. 11 is a flowchart showing an example of the rejection of access to a range whose data is being erased, as the second example of the exclusive control of a data erasing process. Access to a range other than the range whose data is being erased is permitted. In the process of FIG. 10, plural areas are not defined in the storage area. In the process of FIG. 11, however, plural areas (ranges) are defined in the storage area. The storage device 12 receives RevertSP command in step 282. The authentication processor 102 performs an authentication process of the user who issues RevertSP command in step 284. The authorization processor 104 checks the authority to issue RevertSP command and determines whether or not RevertSP command is issued by a user having the authority to issue the command in step 286. When RevertSP command is not issued by a user having the authority to issue the command, the authorization fails in step 288. When RevertSP command is issued by a user having the authority to issue the command, the authorization processor 104 transfers RevertSP command to the erase processor 118 in step 290. In step 292, the erase processor 118 analyzes RevertSP command and determines to which range RevertSP command is related. The erase processor 118 (area erase module 118a) starts erasing the data of the LBA range corresponding to the range of the result of determination. The storage device 12 determines whether or not a read/write command is received from the host device 14 while data is erased in step 294. When a read/write command is not received, the storage device 12 continues to execute RevertSP command in step 296.

When a read/write command is received, the storage device 12 determines whether or not the received read/write command relates to the range whose data is being erased. When the received read/write command relates to the range whose data is being erased, the storage device 12 pushes a job for the command into a queue, or sends back an error to the host device 14, in step 300. When the received read/write command relates to a range other than the range whose data is being erased, the storage device 12 executes the read/write command regarding that other range in step 302. In this way, even when writing to a different area is performed while data erasing is performed for one area, normal data writing and reading can be performed in the different area, as data erasing is not performed there. In the area in which data erasing is performed, as described above, exclusive control which does not execute any access other than erasing is performed. This configuration prevents data from remaining or being erased contrary to the user's expectation.

As shown in FIGS. 5A and 5B, the erasing process shown in FIG. 11 may be performed after the storage device 12 notifies the host device 14 of the data erasure mechanism(s) the storage device 12 supports and the host device 14 specifies a data erasure mechanism. Alternatively, the erasing process shown in FIG. 11 may be performed independently, regardless of the procedure shown in FIGS. 5A and 5B.
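The exclusive control of FIG. 10 and FIG. 11 can be sketched as follows. The names are hypothetical, and FIG. 10 corresponds to the special case in which the range being erased spans the entire storage area:

    from collections import deque

    class ExclusiveEraseControl:
        def __init__(self):
            self.erasing = None          # (first_lba, last_lba) or None
            self.pending = deque()       # queued read/write jobs

        def start_erase(self, first_lba, last_lba):
            self.erasing = (first_lba, last_lba)

        def handle_read_write(self, first_lba, last_lba, job):
            if self.erasing is None:
                return "execute"
            lo, hi = self.erasing
            if first_lba <= hi and last_lba >= lo:   # overlaps the erased range
                self.pending.append(job)             # or send back an error
                return "queued"                      # steps 272 / 300
            return "execute"                         # step 302: other range

    ctrl = ExclusiveEraseControl()
    ctrl.start_erase(0, 999)
    assert ctrl.handle_read_write(500, 600, "job-1") == "queued"
    assert ctrl.handle_read_write(2000, 2100, "job-2") == "execute"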
FIG. 12 is a flowchart showing an example of the execution control of a plurality of Revert/RevertSP commands, as the third example of the exclusive control of a data erasing process. The storage device 12 receives a first Revert/RevertSP command in step 312. The authentication processor 102 performs an authentication process of the user who issues the first Revert/RevertSP command in step 314. The authorization processor 104 checks the authority to issue the first Revert/RevertSP command and determines whether or not the first Revert/RevertSP command is issued by a user having the authority to issue the command in step 316. When the first Revert/RevertSP command is not issued by a user having the authority to issue the command, the authorization fails in step 318. When the first Revert/RevertSP command is issued by a user having the authority to issue the command, the authorization processor 104 transfers the first Revert/RevertSP command to the erase processor 118 in step 320. In step 322, the erase processor 118 analyzes the first Revert/RevertSP command and determines to which range the first Revert/RevertSP command is related. The erase processor 118 (area erase module 118a) starts erasing the data of the LBA range corresponding to the range of the result of determination. The storage device 12 determines whether or not a second Revert/RevertSP command is received while data is erased in step 324. Step 324 may be performed before step 322. When the second Revert/RevertSP command is not received, the storage device 12 continues to execute the first Revert/RevertSP command in step 328. When the second Revert/RevertSP command is received, the authentication processor 102 performs an authentication process of the user who issues the second Revert/RevertSP command in step 332. The authorization processor 104 checks the authority to issue the second Revert/RevertSP command and determines whether or not the second Revert/RevertSP command is issued by a user having the authority to issue the command in step 334.

When the second Revert/RevertSP command is not issued by a user having the authority to issue the command, the authorization fails in step 336. When the second Revert/RevertSP command is issued by a user having the authority to issue the command, the authorization processor 104 transfers the second Revert/RevertSP command to the erase processor 118 in step 338. In step 342, the erase processor 118 analyzes the second Revert/RevertSP command, determines to which range the second Revert/RevertSP command is related, and determines whether or not the range of the first Revert/RevertSP command is different from the range of the second Revert/RevertSP command. When the range of the second Revert/RevertSP command is different from the range of the first Revert/RevertSP command, the erase processor 118 determines whether or not the two received commands are a first RevertSP command and a second RevertSP command in step 344. When the two commands are a first RevertSP command and a second RevertSP command, the ranges of the two received RevertSP commands are different from each other. Thus, the erase processor 118 also executes the second RevertSP command in step 346. Instead of the execution of the second RevertSP command in step 346, a job for the second RevertSP command may be pushed into a queue. When it is determined that the range of the second Revert/RevertSP command is the same as the range of the first Revert/RevertSP command in step 342, or when it is determined that the combination of the two commands is not the combination of a first RevertSP command and a second RevertSP command in step 344, the erase processor 118 pushes a job for the second Revert/RevertSP command into a queue, or sends back an error to the host device 14, in step 348.

The combinations of the first and second commands are (i) a first Revert command and a second Revert command, (ii) a first Revert command and a second RevertSP command, (iii) a first RevertSP command and a second Revert command, and (iv) a first RevertSP command and a second RevertSP command. With regard to the combination of (iv) the first RevertSP command and the second RevertSP command, when the range of the first command is different from the range of the second command, the second RevertSP command is executed in addition to the first RevertSP command, as shown in step 346. With regard to the other combinations of (i) the first Revert command and the second Revert command, (ii) the first Revert command and the second RevertSP command, and (iii) the first RevertSP command and the second Revert command, the second command is not performed, as shown in step 348, regardless of whether the range of the first command is the same as or different from the range of the second command. In this way, the storage device 12 is capable of concentrating on executing each Revert/RevertSP command. Thus, the time required to erase data is not lengthened.

As shown in FIGS. 5A and 5B, the erasing process shown in FIG. 12 may be performed after the storage device 12 notifies the host device 14 of the data erasure mechanism(s) the storage device 12 supports and the host device 14 specifies a data erasure mechanism. Alternatively, the erasing process shown in FIG. 12 may be performed independently, regardless of the procedure shown in FIGS. 5A and 5B.
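The combination rule of steps 342 to 348 reduces to a small predicate; the following sketch (hypothetical names) expresses it:

    # Only a second RevertSP command whose range differs from the range of a
    # first RevertSP command is executed concurrently (step 346); every other
    # combination is queued or rejected (step 348).
    def second_command_action(first_cmd, first_range, second_cmd, second_range):
        if (first_cmd == "RevertSP" and second_cmd == "RevertSP"
                and first_range != second_range):
            return "execute"             # step 346 (or push into a queue)
        return "queue or error"          # step 348

    assert second_command_action("RevertSP", "range1", "RevertSP", "range2") == "execute"
    assert second_command_action("RevertSP", "range1", "RevertSP", "range1") == "queue or error"
    assert second_command_action("Revert", "global", "RevertSP", "range1") == "queue or error"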
[Erasing Areas by Namespaces]

FIG. 13 schematically shows the storage area of the storage device 12. Namespaces are defined in NVM Express, Revision 1.3, May 1, 2017. A namespace is a quantity of nonvolatile memory that may be formatted into logical blocks. A namespace is each of the partial areas into which the entire storage area of the storage device 12 is divided, and is, specifically, a collection of logical blocks. At least one namespace, identified by a namespace ID, can be defined for a single storage device. A namespace of size n includes logical blocks with logical block addresses 0 to n−1. A namespace global range is provided in each namespace. Each namespace global range includes a plurality of ranges. As described above, different PINs can be set for the respective ranges. A global range spans a plurality of namespaces. The partial areas into which the entire storage area is divided may also be partitions. Partitions are partial areas managed by the host device 14. Namespaces are partial areas managed by the host device 14 and the storage device 12. When the host device 14 accesses a partition, the host device 14 specifies the logical address included in the partition to be accessed. When the host device 14 accesses a namespace, the host device 14 specifies the namespace to be accessed. The area information manager 114 of the storage device 12 manages the relationship between namespaces and ranges as shown in FIG. 13.

FIG. 14A is a flowchart showing an example of erasing data in namespace units by the erase processor 118 (area erase module 118b). In step 402, the storage device 12 receives a data erase command in namespace units from the host device 14. Parameters can be added to each data erase command. Thus, a data erase command in namespace units may be realized by adding the parameter of a namespace. Alternatively, since parameters can be added to Revert/RevertSP command, the parameter of a namespace may be added such that Revert/RevertSP command serves as an erase command in namespace units. Data in all the ranges in the specified namespace are collectively erased in step 404. Further, as shown in FIG. 14B, a namespace table indicating for which namespace an erasing process should be performed by Revert/RevertSP command may be defined. A namespace can be registered with the namespace table by using Set command and specifying the namespace ID with a parameter of Set command. Erasing in namespace units includes both erasing data in a single namespace and erasing data in all the namespaces (in other words, the global range). When 00h or FFh is specified by the parameter, the parameter may be regarded as specifying all the namespaces. FIG. 14C shows an example in which the namespace table is set. When the storage device 12 receives Set command in step 412, the storage device 12 sets the namespace ID specified by the parameter in the namespace table. When the namespace ID is 00h or FFh, all the namespace IDs are set in the namespace table. Subsequently, when the storage device 12 receives Revert/RevertSP command from the host device (step 414), in step 416, the storage device 12 refers to the namespace table, obtains the namespace ID(s), and erases the data of all the ranges included in the namespace(s) corresponding to the obtained namespace ID(s). In this way, in a case where the storage device 12 includes a plurality of namespaces and each namespace includes a plurality of ranges, when the host device 14 merely gives a data erase command in namespace units, the storage device 12 is capable of easily erasing the data of all the ranges included in the specified namespace(s). As the host device 14 does not need to manage the relationship between namespaces and ranges, the structure of the application program of the host device 14 is simplified. Thus, the cost can be reduced.

As shown in FIGS. 5A and 5B, the erasing process shown in FIG. 14A may be performed after the storage device 12 notifies the host device 14 of the data erasure mechanism(s) the storage device 12 supports and the host device 14 specifies a data erasure mechanism. Alternatively, the erasing process shown in FIG. 14A may be performed independently, regardless of the procedure shown in FIGS. 5A and 5B.
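A minimal sketch of the namespace table behavior of FIGS. 14B and 14C, assuming hypothetical names and an in-memory mapping in place of the area information manager 114:

    ALL_NAMESPACES = {0x00, 0xFF}   # either value means "all namespaces"

    class NamespaceEraser:
        def __init__(self, ns_to_ranges):
            self.ns_to_ranges = ns_to_ranges   # namespace ID -> list of ranges
            self.table = set()                 # namespace table of FIG. 14B

        def set_namespace(self, ns_id):        # step 412
            if ns_id in ALL_NAMESPACES:
                self.table |= set(self.ns_to_ranges)
            else:
                self.table.add(ns_id)

        def revert(self, erase_range=lambda r: None):   # steps 414 and 416
            for ns_id in sorted(self.table):
                for rng in self.ns_to_ranges[ns_id]:
                    erase_range(rng)           # erase every range of the namespace

    eraser = NamespaceEraser({1: ["range1", "range2"], 2: ["range3"]})
    eraser.set_namespace(0xFF)   # register all namespaces
    eraser.revert()              # erases range1, range2 and range3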
Some erasing operations are explained above with reference to FIG. 9A to FIG. 14A. These erasing operations may be freely combined with each other. According to the first embodiment, the storage device 12 notifies the host device 14 of the data erasure mechanism(s) the storage device 12 supports. The host device 14 specifies a data erasure mechanism. The storage device 12 erases data by the specified data erasure mechanism. Thus, even when unencrypted data is stored in the storage device 12, the storage device 12 can be reset. Data does not leak out from the reset storage device 12 after disposal. Security can be ensured. As encrypted data is not stored in the storage device 12, the host device 14 does not need to incorporate an encryption application program. The processing load of the host device 14 is reduced. Since an encryption circuit is unnecessary, the manufacturing cost of the storage device 12 can be reduced.

Other embodiments are explained below. In the following embodiments, only the portions different from those of the first embodiment are explained, and overlapping descriptions are omitted.

Second Embodiment

The probability that data leaks out from a storage device 12 which has been reset and discarded is not zero. When this probability should be as close to zero as possible, the storage device 12 may be physically destroyed and shredded such that the storage device 12 is not physically present. However, it takes time and effort to mechanically destroy and shred the storage device 12. A second embodiment allows the storage device 12 to be electrically destroyed. In the second embodiment, as shown in FIG. 15, a destroy state is defined as a state of the storage device 12, in addition to the active state and the inactive state. When the host device 14 issues a destroy command to the storage device 12 in the active state, the storage device 12 transitions to the destroy state.

FIG. 17 shows the outline of a storage device 12A which may be set to the destroy state according to the second embodiment. The storage device 12A is different from the storage device 12 shown in FIG. 2 in that a destroy processor 116 is added. The destroy processor 116 is connected to the I/F processor 22, the authorization processor 104, the read/write processor 122, and the erase processor 118. When the storage device 12A is in the destroy state, the destroy processor 116 instructs the read/write processor 122 to reject commands except a confirmation command.
FIG. 16A shows an example of the operations of setting the storage device 12A to the destroy state. The storage device 12A receives a destroy command in step 422. It is assumed that, in a manner similar to that of Revert/RevertSP command, a destroy command requires Owner PIN or Administrator PIN to be issued. Further, it is assumed that users other than the owner or the administrator cannot issue a destroy command and therefore cannot destroy the storage device 12A. The authentication processor 102 performs an authentication process of the user who issues the destroy command in step 424. The authorization processor 104 checks the authority to issue the destroy command and determines whether or not the destroy command is issued by a user having the authority in step 426. When the destroy command is not issued by a user having the authority, the authorization fails in step 428. When the destroy command is issued by a user having the authority to issue the command, in step 430, the storage device 12A accepts, of the commands from the host device 14, only a confirmation command which inquires whether or not the current state is the destroy state, and rejects the other commands, for example, read/write commands. Further, the storage device 12A erases data and initializes the PINs. Subsequently, the storage device 12A transitions to the destroy state. Thus, in a manner similar to that of the inactive state (shipping state) 40A, in the destroy state, neither Administrator PIN nor User PIN can be set, and further, both unlocking and locking are impossible. Commands other than the confirmation command are rejected. Thus, the storage device 12A in the destroy state is not able to transition to the other states such as the active state or the inactive state.

FIG. 16B is a flowchart showing an example of the operation of the storage device 12A in the destroy state. When the storage device 12A receives a command (YES in step 434), the storage device 12A determines whether or not the received command is the confirmation command in step 436. When the received command is the confirmation command, the storage device 12A sends back destroy state information, indicating that the current state is the destroy state, to the host device 14 in step 440. When the received command is a command other than the confirmation command, the storage device 12A sends back error information indicating an error to the host device 14 in step 438.

In this way, when the storage device 12A receives a command for transition to the destroy state, the storage device 12A erases the data of the storage module 34 and sets the PINs to their initial values. In the destroy state, the storage device 12A is not able to access the storage module 34, and the probability that data leaks out is as close to zero as possible. In the destroy state, the storage device 12A sends back a response indicating that the current state is the destroy state to the host device 14 in reply to the confirmation command from the host device 14. Thus, the host device 14 is able to confirm whether the storage device 12A is in the destroy state, a failure state, or a reset state (the initial state of an inactive state or an active state). No destruction or scrapping device is required. Thus, the operational cost of the storage device 12A is low. The destroyed devices can be distinguished from faulty devices. Since faulty devices are not mistakenly disposed of, data leakage from the devices to be disposed of can be prevented. In the above description, read/write commands are rejected in the destroy state. However, read access to data allowed to leak out may be permitted. In other words, the storage area of data allowed to leak out may be a read-only area.
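The destroy-state command handling of FIG. 16B can be sketched as follows; the names are hypothetical, and the data erasing and PIN initialization of FIG. 16A are abbreviated to a comment:

    class DestroyableDevice:
        def __init__(self):
            self.state = "active"

        def destroy(self):
            # FIG. 16A, step 430: after successful authorization, erase data,
            # initialize the PINs, then transition to the destroy state.
            self.state = "destroy"

        def handle_command(self, command):
            if self.state == "destroy":
                if command == "confirm_state":
                    return "destroy state"      # step 440
                return "error"                  # step 438
            return "processed " + command

    dev = DestroyableDevice()
    dev.destroy()
    assert dev.handle_command("confirm_state") == "destroy state"
    assert dev.handle_command("read") == "error"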
Third Embodiment

In the above explanation, the storage device 12 does not include a data encryption module and stores plaintext data. Now, a third embodiment of a storage device 12B, which stores encrypted data, is explained. FIG. 18 shows the outline of the storage device 12B according to the third embodiment. The storage device 12B is different from the storage device 12 shown in FIG. 2 in that an encryption processor 142 and a key manager 140 are added. The key manager 140 generates a key to encrypt data and stores the key in itself. The key is a random value generated by a random number generator. The encryption processor 142 encrypts the plaintext data input to the storage module 34, using the key. One example of the encryption algorithm is a well-known common key algorithm such as the Advanced Encryption Standard (AES). The encryption processor 142 performs a decrypting process for the encrypted data output from the storage module 34 with the same key as the key used for encryption, and returns the encrypted data to plaintext data. Data is always encrypted when it is written to the storage module 34. Data is always decrypted when it is read from the storage module 34.

As described above, the host device 14 includes a function for specifying the data erasure mechanism to be performed by the storage device 12B. As shown in FIG. 7C, the storage device 12B is capable of implementing crypto erasure as a data erasure mechanism. When the host device 14 specifies crypto erasure as the data erasure mechanism, and further when a command for data erasing is given by Revert command or RevertSP command, the erase processor 118 instructs the key manager 140 to update the key. When the key manager 140 is instructed to update the key, the key manager 140 generates the value of a new key by generating new random numbers, discards the value of the old key, and stores the value of the new key in itself. Thereafter, the encryption processor 142 performs the encrypting process and the decrypting process using the key with the new value. The value of the key is updated in this manner. In this regard, the data stored in the storage module 34 has been encrypted using the key with the old value. Thus, even when a decrypting process is performed for the stored data using the key with the new value, it is impossible to decrypt (restore) the stored data into correct plaintext data. After the key is updated, the encrypting process and the decrypting process of the encryption processor 142 are meaningless. Thus, the execution of the encrypting process and the decrypting process may be stopped. The key manager 140 shown in FIG. 18 instructs the encryption processor 142 to stop the encrypting process and the decrypting process after the key is updated. Thus, even when encrypted data is stored in the storage device 12B, the host device 14 is capable of specifying a data erasure mechanism, and the storage device 12B is reset by the specified data erasure mechanism.
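A toy sketch of crypto erasure follows. The names are hypothetical, and a deliberately weak XOR keystream stands in for a real common key algorithm such as AES, purely to make the effect of discarding the old key visible:

    import os

    class ToyCryptoStore:
        def __init__(self):
            self.key = os.urandom(16)       # random key from the key manager
            self.cells = {}                 # LBA -> ciphertext

        def _xor(self, data):
            return bytes(b ^ self.key[i % len(self.key)]
                         for i, b in enumerate(data))

        def write(self, lba, data):
            self.cells[lba] = self._xor(data)    # always encrypted on write

        def read(self, lba):
            return self._xor(self.cells[lba])    # always decrypted on read

        def crypto_erase(self):
            self.key = os.urandom(16)   # discard the old key, keep a new one

    store = ToyCryptoStore()
    store.write(0, b"secret")
    assert store.read(0) == b"secret"
    store.crypto_erase()
    assert store.read(0) != b"secret"   # old data can no longer be restored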
Fourth Embodiment

FIG. 19 shows the outline of a storage device 12C in which encrypted data is stored, according to a fourth embodiment. The storage device 12C is different from the storage device 12B provided with the key manager 140 and the encryption processor 142 in FIG. 18 in that a destroy processor 116 is added. The destroy processor 116 is connected to the I/F processor 22, the authorization processor 104, the read/write processor 122, the erase processor 118 and the encryption processor 142. When the storage device 12C is in the destroy state, the destroy processor 116 instructs the encryption processor 142 to disable the encryption function, and instructs the read/write processor 122 to reject commands except the confirmation command. Further, when the storage device 12C is in the destroy state, the key manager 140 may generate the value of a new key by generating new random numbers, discard the value of the old key, and store the value of the new key in itself. After the value of the key is updated, it is impossible to decrypt the data encrypted with the key with the old value. When the storage device 12C is in the destroy state, the key manager 140 may instruct the encryption processor 142 to stop the encrypting process and the decrypting process. According to the fourth embodiment, the effects of both the second embodiment and the third embodiment can be obtained.

Fifth Embodiment

In the first to fourth embodiments, the storage device 12, 12A, 12B or 12C is shipped to, for example, a PC vendor, in the inactive state 40A. The PC vendor sets SID, changes the storage device to the active state and ships it to an end user. The end user sets Administrator PIN and User PIN. In the first to fourth embodiments, the PC vendor needs to notify the end user of at least the initial value of Administrator PIN by, for example, describing it in the manual. A fifth embodiment shows a method for restoring the value of the Administrator PIN indicated to the end user to its initial value, without erasing data, even when the end user has lost the Administrator PIN.

In the fifth embodiment, as shown in FIG. 20, the PIN manager 112 of a storage device 12D includes a plurality of Administrator PINs. For example, Administrator PIN 1 and Administrator PIN 2 are defined. FIG. 20 shows the storage device 12D wherein a plurality of Administrator PINs are provided in the storage device 12 of the first embodiment. However, the storage device 12D may be structured such that a plurality of Administrator PINs are provided in the storage devices 12A, 12B and 12C of the second to fourth embodiments. When the PC vendor causes the storage device 12D to transition to the active state by Activate command, Administrator PIN 1 and Administrator PIN 2 are set to their initial values. The PC vendor then sets the values of Administrator PIN 1 and Administrator PIN 2. The value of Administrator PIN 1 is indicated to the end user by, for example, describing it in the manual. However, the value of Administrator PIN 2 is not disclosed to the end user. The PC vendor appropriately manages the value of Administrator PIN 2 such that the value does not leak out.

The authority given by Administrator PIN 1 is separated from the authority given by Administrator PIN 2, as shown in FIG. 21. The authority of Administrator PIN 1 is allowed to view the value of Administrator PIN 1 by Get command and set the value of Administrator PIN 1 by Set command. However, the authority of Administrator PIN 1 is allowed neither to view the value of Administrator PIN 2 by Get command nor to set the value of Administrator PIN 2 by Set command. Similarly, the authority of Administrator PIN 2 is allowed to view the value of Administrator PIN 2 by Get command and set the value of Administrator PIN 2 by Set command. However, the authority of Administrator PIN 2 is allowed neither to view the value of Administrator PIN 1 by Get command nor to set the value of Administrator PIN 1 by Set command. Administrator PIN 1 is managed by the user. Thus, the end user may know the value of Administrator PIN 1. If the value of Administrator PIN 2 could be changed (set) by the authority of Administrator PIN 1, the value of Administrator PIN 2 would differ from the value of Administrator PIN 2 set in the factory of the PC vendor.
To prevent this situation, the authority is separated such that the authority of Administrator PIN 1 is allowed neither to view the value of Administrator PIN 2 (Get command) nor to set the value of Administrator PIN 2 (Set command). In FIG. 21, the authority of Administrator PIN 2 is allowed neither to set nor to view Administrator PIN 1. However, it is at least the authority of Administrator PIN 1 that should not be allowed to change Administrator PIN 2. Thus, the access control may alternatively be set such that the authority of Administrator PIN 2 is allowed to set and view Administrator PIN 1.

As shown in FIG. 3A of the first embodiment, RevertSP command can be issued by Administrator PIN. The PC vendor performs a reset process with RevertSP command by Administrator PIN 2, based on a request from an end user. A command to perform the reset process, in other words, a command to execute RevertSP command, may be remotely transferred from the server managed by the PC vendor to the PC of the end user, for example, through the Internet. A parameter indicating whether data should be erased or maintained can be specified in RevertSP command. When the option for maintaining data is specified in RevertSP command by Administrator PIN 2, the data is maintained as it is. However, Administrator PIN 1 is initialized. Even after the storage device 12D is reset by RevertSP command, the storage device 12D may remain in the active state instead of the inactive state. In this way, the storage device 12D is structured so as to define a plurality of Administrator PINs. In this structure, even when the end user has lost Administrator PIN 1, Administrator PIN 1 can be initialized.
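The authority separation of FIG. 21 amounts to an access-control table such as the following sketch (hypothetical names; the commented entry corresponds to the variant in which Administrator PIN 2 may set and view Administrator PIN 1):

    ACCESS = {
        # (authority, target PIN) -> set of permitted operations
        ("AdminPIN1", "AdminPIN1"): {"Get", "Set"},
        ("AdminPIN1", "AdminPIN2"): set(),
        ("AdminPIN2", "AdminPIN2"): {"Get", "Set"},
        ("AdminPIN2", "AdminPIN1"): set(),   # may be {"Get", "Set"} in the variant
    }

    def allowed(authority, operation, target):
        return operation in ACCESS[(authority, target)]

    assert allowed("AdminPIN1", "Set", "AdminPIN1")
    assert not allowed("AdminPIN1", "Get", "AdminPIN2")   # vendor PIN stays hidden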
Sixth Embodiment

In the first to fifth embodiments, the storage device 12, 12A, 12B, 12C or 12D is shipped to, for example, a PC vendor, in the inactive state 40A. The PC vendor sets SID, changes the storage device to the active state and ships it to an end user. The end user sets Administrator PIN and User PIN. In a sixth embodiment, the following situation is assumed. The PC vendor sets SID. However, the PC vendor does not change the storage device to the active state, and ships it to an end user in the inactive state. The end user changes the storage device to the active state and sets Administrator PIN and User PIN. As shown in FIG. 3A, to cause the storage device to transition from the inactive state to the active state, Owner PIN is required. In the above assumption, the PC vendor changes the initial value of Administrator PIN to a value known to the PC vendor. As Activate command is executed on the end user side, Owner PIN needs to be input to the storage device on the end user side. To realize this situation, the following two methods are considered:
(1) the value of Owner PIN is provided in the PC; and
(2) the value of Owner PIN is not provided in the PC, and Owner PIN is calculated on the end user side.

(1) Value of Owner PIN is provided in PC

The PC vendor stores SID in an area which cannot be easily viewed (read) by an end user, such as the BIOS, and ships the storage device to the end user. As the storage device is shipped to the end user in the inactive state, the lock function is disabled in the initial state. When the end user enables the lock function, the SID stored in the area which cannot be easily viewed by the end user is read by the program stored in the BIOS, etc. The storage device is then caused to transition from the inactive state to the active state by Activate command.

(2) Value of Owner PIN is not provided in PC

The PC vendor generates SID from the label information of SID, the serial number of the storage device, etc., in advance, and sets the value of SID in the storage device. It is assumed that only the PC vendor knows this generation algorithm, including its parameters. The label information refers to the ID of SID. The value of SID (for example, XXX, YYY, ZZZ) corresponds to the label information (for example, 0001, 0002, 0003). The value of SID is not distributed to the end user. However, the label information is distributed to the end user. The label information of SID is stored in an area which cannot be easily set by the end user of the storage device (in other words, an area to which the end user cannot easily write data). The program stored in the BIOS, etc., on the end user side generates SID using the label information, the serial number of the storage device, etc. The end user causes the storage device to transition from the inactive state to the active state by Activate command with SID. When the PC is connected to the Internet, the PC may communicate with the server of the PC vendor, and the label information of SID and the serial number of the storage device may be transferred from the PC to the server of the PC vendor. The PC vendor may generate SID based on the received information. In this way, the authorization of Owner PIN can be performed without providing Owner PIN on the end user side.

Alternatively, the PC vendor may store the combination of the value of SID (for example, XXX, YYY, ZZZ) and the label information (for example, 0001, 0002, 0003) in a table. The value of SID (for example, XXX) is set in an SID storage area which cannot be easily viewed by the end user of the storage device and cannot be set by the end user. The PC notifies the server of the PC vendor of the label information of SID, in other words, which SID is set. Thus, it is preferable that the label information not be changed by the end user without authorization. The PC vendor is capable of obtaining SID from the label information with reference to the above table. The PC vendor is capable of authorizing Owner PIN by transmitting the obtained SID to the program of the PC via a network, without providing Owner PIN on the end user side. In either case, the storage device needs to secure an area for storing the label information of SID. This area should preferably be defined in an area other than the LBA area to prevent the end user from accessing the area by a normal read/write command. Further, the authority to write data needs to be limited to SID such that the label information of SID cannot be changed by the end user without authorization.
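The embodiments do not disclose the generation algorithm itself. As one plausible instantiation only, the sketch below derives SID from the label information and the serial number with a vendor-secret key, which in method (2) would be embedded in the vendor's BIOS program; all names and the HMAC construction are assumptions, not part of the embodiments:

    import hashlib
    import hmac

    VENDOR_SECRET = b"vendor-only-secret"   # known solely to the PC vendor

    def derive_sid(label_info, serial_number):
        """Derive a SID value from public label information and serial number."""
        msg = (label_info + serial_number).encode()
        return hmac.new(VENDOR_SECRET, msg, hashlib.sha256).hexdigest()[:32]

    # The PC stores only the label information (e.g., "0001"); the program in
    # the BIOS recomputes SID at activation time.
    sid = derive_sid("0001", "SN123456")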
FIG. 22 shows the configuration of a storage device 12E according to the present embodiment. The storage device 12E further includes a label store manager 150. The label store manager 150 sets the label information of SID in a label store table 152 and views the label information from the label store table 152. The label store table 152 is accessed by Set command and Get command instead of a read/write command for an LBA. The label store manager 150 sets a value in the label store table 152 by Set command, and obtains the set value from the label store table 152 by Get command. FIG. 22 shows the storage device 12E wherein the label store manager 150 is provided in the storage device 12 of the first embodiment. However, the storage device 12E may be structured such that the label store manager 150 is provided in the storage device 12A, 12B, 12C or 12D of the second to fifth embodiments.

Further, as shown in FIG. 23, the authority to access the label store table 152 is restricted based on the user who issues the command. The value of the label store table 152 can be read by any type of authority by means of Get command. However, writing to the label store table 152 by means of Set command is limited to the authority of Owner PIN (SID). For example, when Set command regarding the label store table 152 is supplied to the storage device 12E, the authentication processor 102 performs a user authentication process using the PIN, and the authorization processor 104 determines whether or not the user who issues Set command is the owner. When the user who issues Set command is the owner, the label information is set in the label store table 152. In the factory of the PC vendor, SID is set, and the label information of SID is stored in the label store table 152 using the authority of SID. In this structure, the authority to write the label information of SID can be limited to SID.

According to the sixth embodiment, the storage device can be caused to transition from the inactive state to the active state by Activate command on the end user side. When the lock function is not used, a PC which is shipped in the inactive state can be used as it is. When the end user wants to use the lock function, the lock function can be enabled by causing the storage device to transition from the inactive state to the active state by Activate command on the end user side. In this manner, it is possible to accommodate both an end user who uses the lock function and an end user who does not want to use the lock function. The usability can be improved.

The present invention is not limited to the embodiments described above, and the constituent elements can be modified in various ways without departing from the spirit and scope of the invention. Various aspects of the invention can also be extracted from any appropriate combination of the constituent elements disclosed in the embodiments. For example, some of the constituent elements disclosed in the embodiments may be deleted. Furthermore, the constituent elements described in different embodiments may be arbitrarily combined.
11861195

To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one embodiment may be beneficially utilized in other embodiments without specific recitation.

DETAILED DESCRIPTION

In the following, reference is made to embodiments of the disclosure. However, it should be understood that the disclosure is not limited to the specific described embodiments. Instead, any combination of the following features and elements, whether related to different embodiments or not, is contemplated to implement and practice the disclosure. Furthermore, although embodiments of the disclosure may achieve advantages over other possible solutions and/or over the prior art, whether or not a particular advantage is achieved by a given embodiment is not limiting of the disclosure. Thus, the following aspects, features, embodiments and advantages are merely illustrative and are not considered elements or limitations of the appended claims except where explicitly recited in a claim(s). Likewise, reference to "the disclosure" shall not be construed as a generalization of any inventive subject matter disclosed herein and shall not be considered to be an element or limitation of the appended claims except where explicitly recited in a claim(s).

The present disclosure generally relates to improving programming to data storage devices, such as solid state drives (SSDs). A first memory device has a first XOR element and a second memory device has a second XOR element. The ratio of the first XOR element to the capacity of the first memory device is substantially smaller than the ratio of the second XOR element to the capacity of the second memory device. A read verify operation to find program failures is executed on either a wordline to wordline basis, an erase block to erase block basis, or both a wordline to wordline basis and an erase block to erase block basis. Because the program failures are found and fixed prior to programming to the second memory device, the second XOR element may be decreased substantially.

FIG. 1 is a schematic block diagram illustrating a storage system 100 in which a data storage device 106 may function as a storage device for a host device 104, according to disclosed embodiments. For instance, the host device 104 may utilize a non-volatile memory (NVM) 110 included in the data storage device 106 to store and retrieve data. The host device 104 comprises a host DRAM 138. In some examples, the storage system 100 may include a plurality of storage devices, such as the data storage device 106, which may operate as a storage array. For instance, the storage system 100 may include a plurality of data storage devices 106 configured as a redundant array of inexpensive/independent disks (RAID) that collectively function as a mass storage device for the host device 104. The storage system 100 includes the host device 104, which may store and/or retrieve data to and/or from one or more storage devices, such as the data storage device 106. As illustrated in FIG. 1, the host device 104 may communicate with the data storage device 106 via an interface 114.
The host device 104 may comprise any of a wide range of devices, including computer servers, network attached storage (NAS) units, desktop computers, notebook (i.e., laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called "smart" phones, so-called "smart" pads, televisions, cameras, display devices, digital media players, video gaming consoles, video streaming devices, or other devices capable of sending or receiving data from a data storage device.

The data storage device 106 includes a controller 108, NVM 110, a power supply 111, volatile memory 112, an interface 114, and a write buffer 116. In some examples, the data storage device 106 may include additional components not shown in FIG. 1 for the sake of clarity. For example, the data storage device 106 may include a printed circuit board (PCB) to which components of the data storage device 106 are mechanically attached and which includes electrically conductive traces that electrically interconnect components of the data storage device 106, or the like. In some examples, the physical dimensions and connector configurations of the data storage device 106 may conform to one or more standard form factors. Some example standard form factors include, but are not limited to, 3.5″ data storage device (e.g., an HDD or SSD), 2.5″ data storage device, 1.8″ data storage device, peripheral component interconnect (PCI), PCI-extended (PCI-X), PCI Express (PCIe) (e.g., PCIe x1, x4, x8, x16, PCIe Mini Card, MiniPCI, etc.). In some examples, the data storage device 106 may be directly coupled (e.g., directly soldered) to a motherboard of the host device 104.

The interface 114 of the data storage device 106 may include one or both of a data bus for exchanging data with the host device 104 and a control bus for exchanging commands with the host device 104. The interface 114 may operate in accordance with any suitable protocol. For example, the interface 114 may operate in accordance with one or more of the following protocols: advanced technology attachment (ATA) (e.g., serial-ATA (SATA) and parallel-ATA (PATA)), Fibre Channel Protocol (FCP), small computer system interface (SCSI), serially attached SCSI (SAS), PCI, PCIe, non-volatile memory express (NVMe), OpenCAPI, GenZ, Cache Coherent Interface Accelerator (CCIX), Open Channel SSD (OCSSD), or the like. The electrical connection of the interface 114 (e.g., the data bus, the control bus, or both) is electrically connected to the controller 108, providing an electrical connection between the host device 104 and the controller 108, allowing data to be exchanged between the host device 104 and the controller 108. In some examples, the electrical connection of the interface 114 may also permit the data storage device 106 to receive power from the host device 104. For example, as illustrated in FIG. 1, the power supply 111 may receive power from the host device 104 via the interface 114.

The NVM 110 may include a plurality of memory devices or memory units. The NVM 110 may be configured to store and/or retrieve data. For instance, a memory unit of the NVM 110 may receive data and a message from the controller 108 that instructs the memory unit to store the data. Similarly, the memory unit of the NVM 110 may receive a message from the controller 108 that instructs the memory unit to retrieve data. In some examples, each of the memory units may be referred to as a die. In some examples, a single physical chip may include a plurality of dies (i.e., a plurality of memory units).
In some examples, each memory unit may be configured to store relatively large amounts of data (e.g., 128 MB, 256 MB, 512 MB, 1 GB, 2 GB, 4 GB, 8 GB, 16 GB, 32 GB, 64 GB, 128 GB, 256 GB, 512 GB, 1 TB, etc.). In some examples, each memory unit of NVM 110 may include any type of non-volatile memory devices, such as flash memory devices, phase-change memory (PCM) devices, resistive random-access memory (ReRAM) devices, magnetoresistive random-access memory (MRAM) devices, ferroelectric random-access memory (F-RAM), holographic memory devices, and any other type of non-volatile memory devices.

The NVM 110 may comprise a plurality of flash memory devices or memory units. NVM flash memory devices may include NAND or NOR based flash memory devices and may store data based on a charge contained in a floating gate of a transistor for each flash memory cell. In NVM flash memory devices, the flash memory device may be divided into a plurality of dies, where each die of the plurality of dies includes a plurality of blocks, which may be further divided into a plurality of pages. Each block of the plurality of blocks within a particular memory device may include a plurality of NVM cells. Rows of NVM cells may be electrically connected using a word line to define a page of a plurality of pages. Respective cells in each of the plurality of pages may be electrically connected to respective bit lines. Furthermore, NVM flash memory devices may be 2D or 3D devices and may be single level cell (SLC), multi-level cell (MLC), triple level cell (TLC), or quad level cell (QLC). The controller 108 may write data to and read data from NVM flash memory devices at the page level and erase data from NVM flash memory devices at the block level.

The data storage device 106 includes a power supply 111, which may provide power to one or more components of the data storage device 106. When operating in a standard mode, the power supply 111 may provide power to one or more components using power provided by an external device, such as the host device 104. For instance, the power supply 111 may provide power to the one or more components using power received from the host device 104 via the interface 114. In some examples, the power supply 111 may include one or more power storage components configured to provide power to the one or more components when operating in a shutdown mode, such as where power ceases to be received from the external device. In this way, the power supply 111 may function as an onboard backup power source. Some examples of the one or more power storage components include, but are not limited to, capacitors, supercapacitors, batteries, and the like. In some examples, the amount of power that may be stored by the one or more power storage components may be a function of the cost and/or the size (e.g., area/volume) of the one or more power storage components. In other words, as the amount of power stored by the one or more power storage components increases, the cost and/or the size of the one or more power storage components also increases.

The data storage device 106 also includes volatile memory 112, which may be used by the controller 108 to store information. Volatile memory 112 may include one or more volatile memory devices. In some examples, the controller 108 may use volatile memory 112 as a cache. For instance, the controller 108 may store cached information in volatile memory 112 until the cached information is written to the non-volatile memory 110. As illustrated in FIG. 1, volatile memory 112 may consume power received from the power supply 111.
Examples of volatile memory112include, but are not limited to, random-access memory (RAM), dynamic random access memory (DRAM), static RAM (SRAM), and synchronous dynamic RAM (SDRAM (e.g., DDR1, DDR2, DDR3, DDR3L, LPDDR3, DDR4, LPDDR4, and the like)). The data storage device106includes a controller108, which may manage one or more operations of the data storage device106. For instance, the controller108may manage the reading of data from and/or the writing of data to the NVM110. In some embodiments, when the data storage device106receives a write command from the host device104, the controller108may initiate a data storage command to store data to the NVM110and monitor the progress of the data storage command. The controller108may determine at least one operational characteristic of the storage system100and store the at least one operational characteristic to the NVM110. In some embodiments, when the data storage device106receives a write command from the host device104, the controller108temporarily stores the data associated with the write command in the internal memory or write buffer116before sending the data to the NVM110. FIG.2Ais a schematic illustration of scheduling foggy-fine programming, according to disclosed embodiments. The Front End (FE) module202comprises a first XOR engine204and a first static random-access memory (SRAM)206. Host data may be initially delivered to the FE module202. The data passes through the first XOR engine204and is written to the first SRAM206. The first XOR engine204generates XOR parity information prior to writing to the first SRAM206. Exclusive OR (XOR) parity information is used to improve the reliability of a storage device for storing data, such as enabling data recovery of failed writes or failed reads of data to and from the NVM or enabling data recovery in case of power loss. The storage device may be the data storage device106ofFIG.1. The reliability may be provided by using XOR parity information generated or computed based on data stored to the storage device. The first XOR engine204may generate a first parity stream to be written to the first SRAM206. The first SRAM206may contain a plurality of dies to which data may be written. The Second Flash Manager (FM2) module210comprises an encoder212, a second SRAM216, a decoder214, and a second XOR engine232, where the second XOR engine232is configured to generate a second parity stream to be written to the second SRAM216. The decoder214may comprise a low gear (LG) decoder and a high gear (HG) decoder. The LG decoder can implement low power bit flipping algorithms, such as a low density parity check (LDPC) algorithm. The LG decoder may be operable to decode data and correct bit flips where such data has a low bit error rate (BER). The HG decoder can implement full power decoding and error correction algorithms, which may be initiated upon a failure of the LG decoder to decode and correct bit flips in data. The HG decoder can be operable to correct bit flips where such data has a high BER. Alternatively, FM2may be replaced with a combined FE-FM monochip. The encoder212and decoder214(including the LG decoder and HG decoder) can include processing circuitry or a processor (with a computer-readable medium that stores computer-readable program code (e.g., firmware) executable by the processor), logic circuitry, an application specific integrated circuit (ASIC), a programmable logic controller, an embedded microcontroller, a combination thereof, or the like, for example.
In some examples, the encoder212and the decoder214are separate from the storage controller, and in other examples, the encoder212and the decoder214are embedded in or part of the storage controller. In some examples, the LG decoder is a hardened circuit, such as logic circuitry, an ASIC, or the like. In some examples, the HG decoder can be a soft decoder (e.g., implemented by a processor). Data may be written to the second SRAM216after being decoded at the decoder214. The data at the second SRAM216may be further delivered to the encoder212, as discussed below. The memory device220may be a NAND memory device. The memory device220may comprise TLC memory222. It is to be understood that the embodiments discussed herein are not limited to TLC memory and may be applicable to any multilevel cell memory such as MLC memory, QLC memory, or the like. SLC memory, MLC memory, TLC memory, QLC memory, and PLC memory are named according to the number of bits that a memory cell may accept. For example, SLC memory may accept one bit per memory cell and QLC memory may accept four bits per memory cell. Each bit is registered on the storage device as a 1 or a 0. Furthermore, the TLC memory222includes a TLC exclusive or (XOR) partition226, where the TLC XOR partition226stores parity or XOR data. Host data is written to the first SRAM206of the FE module202. First XOR parity data may be generated, concurrently, at the first XOR engine204of the FE module202as the host data is written to the first SRAM206. The host data and the generated first XOR parity data pass from the first SRAM206to the encoder212to be encoded along stream1. The host data is encoded and foggy written to the TLC memory222. Likewise, the generated first XOR parity data is encoded and foggy written to the TLC XOR partition226of the memory device220along stream2. During the foggy write, the controller may selectively choose data to read in order to allow for data sorting into the relevant one or more streams. The TLC memory222may be a region of the memory device220dedicated to protecting data in case of a power loss event. At stream3, the host data is read from the TLC memory222at the decoder214. After the host data is decoded at the decoder214, the host data is written to the second SRAM216of the FM2210along stream4, where second XOR parity data is further generated for the host data at the second XOR engine232of the FM2210. The host data and second XOR parity data are passed through the encoder212to be encoded along stream5and are fine written to the respective locations of the TLC memory222and the TLC XOR partition226along stream6. FIG.2Bis a schematic illustration of scheduling foggy-fine programming, according to disclosed embodiments. The TLC memory222may be further partitioned into a first TLC memory partition222A and a second TLC memory partition222B. The second TLC memory partition222B may be larger than the first TLC memory partition222A. Furthermore, the TLC XOR partition226may be further partitioned into a first TLC XOR partition226A and a second TLC XOR partition226B. The second TLC XOR partition226B may be smaller than the first TLC XOR partition226A. In one embodiment, the size of the second TLC XOR partition226B is about 50% of the size of the first TLC XOR partition226A. In another example, the size of the second TLC XOR partition226B is less than about 50% of the size of the first TLC XOR partition226A.
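The six data streams of FIG.2A described above form a write-read-rewrite pipeline. The following sketch walks through the streams in order; every name in it (the xor_engine helper, the identity encode/decode stand-ins, and the dictionaries standing in for the TLC memory and its XOR partition) is a hypothetical placeholder for the corresponding hardware block, not an interface defined by the embodiments.

```python
# Identity stand-ins for the encode/decode hardware blocks keep the data
# flow visible; a real controller would apply ECC (e.g., LDPC) here.
encode = decode = lambda buf: bytes(buf)

def xor_engine(chunks):
    """Byte-wise XOR of equal-length chunks, acting as a parity stream generator."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

def foggy_fine_program(host_chunks, tlc, tlc_xor):
    parity1 = xor_engine(host_chunks)                  # first XOR engine (FE)
    tlc["foggy"] = [encode(c) for c in host_chunks]    # stream 1: foggy write
    tlc_xor["foggy"] = encode(parity1)                 # stream 2: foggy parity

    decoded = [decode(c) for c in tlc["foggy"]]        # stream 3: read and decode
    parity2 = xor_engine(decoded)                      # stream 4: second XOR (FM2)

    tlc["fine"] = [encode(c) for c in decoded]         # stream 5: fine write
    tlc_xor["fine"] = encode(parity2)                  # stream 6: fine parity

tlc, tlc_xor = {}, {}
foggy_fine_program([b"\x01\x02", b"\x03\x04"], tlc, tlc_xor)
```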
In some examples, the second TLC XOR partition226B may be a configurable size, such that the controller108may determine the size of the second TLC XOR partition226B during the life of the data storage device106depending on factors such as a threshold value, TLC memory222health, such as the failed bit count (FBC) of the TLC memory, and the like. The first TLC memory partition222A and the first TLC XOR partition226A may be a first TLC superblock and the second TLC memory partition222B and the second TLC XOR partition226B may be a second superblock. When foggy programming to the TLC memory222, the host data is programmed to the first TLC memory partition222A. Likewise, the XOR parity data is programmed to the first TLC XOR partition226A. The first TLC memory partition222A and the first TLC XOR partition226A may be considered as a “Host Write” area, where data programmed in the “Host Write” area may be lost or corrupted. However, when fine programming to the TLC memory222, the host data that is still valid (i.e., has not been re-written or trimmed by another host command) is copied (i.e., a reclaimed copy) from the first TLC memory partition222A, re-encoded, and programmed to the second TLC memory partition222B, and the XOR parity data is programmed to the second TLC XOR partition226B. After fine programming to the second TLC memory partition222B and the second TLC XOR partition226B, the data and XOR parity data are protected because a first copy of the data is stored in the first TLC memory partition222A. The second TLC memory partition222B and the second TLC XOR partition226B may be considered as a “Reclaim Copy” area for the reasons stated above. Because the fine programming may utilize a decreased XOR parity scheme, such as the schemes described below, the required XOR parity partition size may be decreased. For example, a reduction of the XOR parity data of about 50% may release about 100 GB of capacity in a 7680 GB SSD. The 100 GB of capacity released may then be utilized to store additional host data. Thus, over-provisioning of the memory for XOR parity data may be decreased and the memory that would have been used to store XOR parity data may be utilized to store host data or similar data. FIG.3is a schematic illustration of a horizontal exclusive or (XOR) scheme with full die redundancy of a superblock300, according to disclosed embodiments. The superblock300includes a plurality of dies (e.g., dies0-7) and a plurality of wordlines (WL) (e.g., WLs0-95). The listed number of dies and wordlines are not intended to be limiting, but are shown to exemplify a possible embodiment. For example, the superblock may include about 32 dies and more than or less than about 96 WLs. Each die of the plurality of dies includes a first plane, indicated by PL0, and a second plane, indicated by PL1. Furthermore, each wordline of the plurality of wordlines includes four strings (STR). The number of strings per wordline is based on the type of memory cell of the superblock. For example, QLC memory includes four strings per wordline, TLC memory includes three strings per wordline, and SLC memory includes one string per wordline. The superblock300may be an example of a zone namespace architecture that includes seven dies for data and an eighth die for XOR parity data. Die7of the superblock300is associated with XOR parity data302.
Because die7includes only XOR parity data302, the XOR parity data302may recover a failed die304, such as die1, where recovering the failed die304includes recovering all of the data of the failed die (i.e., full die redundancy). Furthermore, because each string, such as string2of WL0, spans across each of the eight dies, each string includes 16 planes. Because die7includes XOR parity data302, the parity group ratio is about 1:7, where the XOR parity data overhead is about 12.5% (i.e., ⅛). The listed values are not intended to be limiting, but to provide an example of a possible embodiment. FIG.4is an illustration of the possible options of exclusive or (XOR) in a multi-level cell, according to disclosed embodiments. Sources of uncorrectable error correction code (UECC) data errors include program status failures (PSF), silent program failures, and wear and data retention related random failures. The PSF and the silent program failures may be associated with the bit failure rate (BFR) and/or the program erase (PE) cycle. The wear and DR related random failures may be associated with the sector failure rate (SFR). The “XOR parity” column illustrates the type of XOR parity included in each embodiment. For example, a full die redundancy (FDR) may recover an entire failed die. However, the XOR parity data associated with the FDR may require a large amount of storage space in the superblock, thus reducing the amount of data that may be stored in the superblock. XOR parity data or any other parity schemes for multiple error correction code (ECC) codewords, such as low-density parity-check (LDPC), may be used to recover failed bits of data. For example, when the failed bit count (FBC) is larger than a threshold value, the controller, such as the controller108ofFIG.1, may utilize the XOR parity data to recover the failed bits, such that the data associated with the failed bits no longer includes the failed bits. Another example of a data error is a program failure, where the program failure size varies from a single WL-string-plane failure to an erase block failure spanning two planes. Unlike the FBC errors, where both XOR parity data and LDPC may be used to correct the errors, program failures may be corrected using XOR parity data and similar parity schemes. Program failures, such as PSF, may be fixed by writing data in a different location of the superblock. For example, when a cell has an unacceptable bit error rate (UBER), the controller may avoid programming data to the cell that has an UBER. However, silent program failures may be undetectable by the controller and are passed to the NVM, such as the NVM110ofFIG.1, unnoticed. Silent program failures may result in double, triple, or higher errors that reduce the reliability of the data storage device, such as the data storage device106ofFIG.1. In order to protect user data from failure due to the program failures, the XOR parity data scheme needs to be large enough (i.e., a low XOR parity ratio) to protect against any combinations of the program failures previously described and any program failures not described, but contemplated, and needs to have the correct geometry as well. However, the XOR parity data scheme size has limitations. For example, by increasing the size of the XOR parity data scheme, less user data or any other data may be stored in the NVM since the XOR parity data takes up more memory in the NVM, memory that could otherwise be utilized to store more user data.
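The full die redundancy discussed above rests on a simple property: if the last die stores the XOR of the other seven, any single failed die equals the XOR of the survivors. The following sketch demonstrates this property with byte strings standing in for dies; the die count of eight and the 4-byte die size are illustrative assumptions.

```python
def xor_reduce(buffers):
    """Byte-wise XOR of equal-length buffers."""
    out = bytearray(len(buffers[0]))
    for buf in buffers:
        for i, b in enumerate(buf):
            out[i] ^= b
    return bytes(out)

# Dies 0-6 hold data; die 7 holds their XOR (full die redundancy at a
# parity group ratio of about 1:7, an overhead of about 12.5%).
data_dies = [bytes([d] * 4) for d in range(7)]
dies = data_dies + [xor_reduce(data_dies)]

# Any single failed die is recoverable as the XOR of the seven survivors.
failed = 1
survivors = [die for idx, die in enumerate(dies) if idx != failed]
assert xor_reduce(survivors) == data_dies[failed]
```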
The “extra measures” column refers to the temporary storage of data in a cache or a buffer or the level of read verify/enhanced post write read (EPWR) used to check for and fix errors. For example, the buffer or the cache may store up to the last two wordlines (2WL deep) of data written to the superblock. When another wordline is written to the superblock, the oldest wordline of the last two wordlines stored in the buffer or the cache is released, where releasing the wordline refers to erasing the data. Furthermore, the read verify/EPWR level refers to the frequency of the read verify/EPWR operations. For example, a read verify/EPWR level of a wordline signifies that after a wordline is programmed, the read verify/EPWR operation occurs. As illustrated inFIG.4, each time there is an additional firmware (FW) read verify/EPWR check, the XOR parity ratio decreases. For example, at the first FW read verify/EPWR check embodiment404, where the data is copied from a first TLC memory partition, such as the first TLC memory partition222A ofFIG.2B, to a second TLC memory partition, such as the second TLC memory partition222B ofFIG.2B, the XOR parity ratio is about 1:127. As a comparison, the no extra checks embodiment402has an XOR parity ratio of about 1:63. The first FW read verify/EPWR check embodiment404includes a read verify/EPWR level of a wordline. Furthermore, the cache or the buffer stores the last two wordlines (e.g., 2WL deep) of data written to the superblock. By further performing read verify/EPWR checks, such as the fourth FW read verify/EPWR check embodiment406, the XOR parity ratio may be substantially decreased to about 0, where the buffer or the cache stores the last erase block written to the superblock and the read verify/EPWR operation occurs after each time a wordline, an erase block, or both a wordline and an erase block are programmed to. The XOR parity data ratio as well as overhead may be reduced if the PSF does not need to be corrected. For example, if the data, such as up to the last two wordlines written or the last erase block programmed, is still available in the source blocks, such as the SLC memory, the buffer, and/or the cache, the data in the source blocks may be programmed over the PSF failures. In another example, a temporary XOR parity data may be stored in the buffer, the cache, and/or the volatile memory, such as the volatile memory112ofFIG.1, where, in the case of a PSF, one XOR parity element (i.e., XOR parity data) per cached XOR stripe (i.e., one die, last two wordlines) may be fixed. Furthermore, the XOR parity data ratio as well as overhead may be reduced if the silent program failures do not need to be fixed. For example, if the data, such as up to the last two wordlines written or the last erase block programmed, is still available in the source blocks, such as the first TLC memory partition222A, the buffer, and/or the cache, the data in the source blocks may be programmed over the silent program failures. Furthermore, if additional FW read verify/EPWR operations detect the silent program failures by looking for the error signatures for the silent program failure types, the XOR parity ratio and overhead may be reduced. Referring toFIG.2B, the XOR parity ratio of the first TLC memory partition222A and the first TLC XOR partition226A may be about 1:7, where for every 1 die of XOR parity data, 7 dies are appropriated to data. Likewise, the XOR parity ratio of the second TLC memory partition222B and the second TLC XOR partition226B may be about 1:7.
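The 2WL-deep source cache combined with a per-wordline read verify described above can be sketched as follows. The function names (nand_program, read_verify) are assumed placeholders, and the failure handling is simplified to a rewrite from the cached source rather than a relocation to a different superblock location.

```python
from collections import deque

def program_with_epwr(wordlines, nand_program, read_verify, depth=2):
    """Program wordlines while keeping a rolling source cache `depth` deep.

    After each wordline is programmed, a read verify (EPWR) runs on it; a
    failed verify is handled here by simply reprogramming from the cached
    source (a real controller would relocate the data instead). Only once
    the cache overflows is the oldest wordline released (erased).
    """
    cache = deque()
    for wl in wordlines:
        nand_program(wl)
        cache.append(wl)
        if not read_verify(wl):
            nand_program(wl)        # source still cached, so recovery is local
        if len(cache) > depth:
            cache.popleft()         # release: the oldest cached source is erased

programmed = []
program_with_epwr([b"wl0", b"wl1", b"wl2"],
                  nand_program=programmed.append,
                  read_verify=lambda wl: True)
assert programmed == [b"wl0", b"wl1", b"wl2"]
```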
However, by performing additional FW read verify/EPWR operations on the data programmed to the second TLC memory partition222B, the XOR parity ratio of the second TLC memory partition222B and the second TLC XOR partition226B may be about 1:15, about 1:31, about 1:63, about 1:127, about 1:255, about 1:383, or the like. It is contemplated that the XOR parity ratio of the second TLC memory partition222B and the second TLC XOR partition226B may be about 0 where no XOR parity data is stored in the second TLC XOR partition226B. For example, each wordline may be checked for silent program failures when written to a superblock of the second TLC memory partition222B. In some embodiments, each plane and/or each string of the wordline is also checked for silent program failures. Though the overhead of the operation may be large, silent program failures do not get programmed to the second TLC memory partition222B unnoticed. Furthermore, because each wordline is checked, only a minimal number of wordlines, such as up to about two wordlines, may be stored in the buffer, the cache, and/or the volatile memory. Thus, the latency of copying the wordlines from the stored location to the second TLC memory partition222B or the latency of releasing the wordlines from the stored location may be negligible or small. In another example, the FW read verify/EPWR operation may check for whole erase block failures. When checking for whole erase block failures, only a few wordlines need to be checked at the end of programming the erase block, thus making the overhead of the operation smaller than the overhead of the operation of checking each wordline. In one embodiment, the number of wordlines checked may be about two wordlines, where the wordlines checked are the last wordlines of the programmed erase block. However, because the erase block is checked at the completion of the erase block program to the second TLC memory partition222B, the source blocks associated with the data of the erase block may need to be stored in the first TLC memory partition222A, the buffer, and/or the cache. Because storing the source blocks associated with the erase block requires more storage than storing the last two wordlines programmed to the second TLC memory partition222B, the latency of releasing the source block from the relevant location may be larger than the latency of releasing up to about two wordlines. In yet another example, the FW read verify/EPWR operation may check for both erase block and wordline failures. After the erase block has been programmed to the second TLC memory partition222B, each wordline of the erase block is checked for wordline failures. In some embodiments, each plane and/or each string of the wordline is checked for program failures. Because each wordline of the erase block is checked at the completion of the erase block program to the second TLC memory partition222B, the source blocks associated with the data of the erase block may need to be stored in the first TLC memory partition222A, the buffer, and/or the cache. Because storing the source blocks associated with the erase block requires more storage than storing the last two wordlines programmed to the second TLC memory partition222B, the latency of releasing the source block from the relevant location may be larger than the latency of releasing up to about two wordlines.
Though the overhead of the operation may be larger than the previous two examples, the silent program failures may not be passed to the second TLC memory partition222B unnoticed, thus enabling the highest reduction of XOR parity data, where the XOR parity data stored may be minimal or substantially about zero. In some embodiments, the only XOR parity data stored may be for wear and DR related random failures. FIGS.5A and5Bare illustrations of various program failure types, according to disclosed embodiments. It is to be understood thatFIGS.5A and5Bare, collectively, representative of a single figure split into two pages labeledFIG.5AandFIG.5B. The silent errors (i.e., silent program failures) do not include any PSF failures. Thus, the silent errors may be passed to the MLC memory, such as the MLC memory224ofFIG.2. However, as illustrated in the other examples ofFIGS.5A and5B, by including at least one cell that has a PSF, the error may be noticed by the controller and the controller may correct the error utilizing XOR parity data, LDPC, and/or the like. The various program failure types may include UECC/short failures and PSFs. In some embodiments, the program failures are along the borders of the superblock, such as the last wordline programmed (i.e., WL(n)) and/or the last string programmed (i.e., S3). By implementing a read verify/EPWR operation, such as the read verify/EPWR operations described inFIG.4, the program failures illustrated inFIGS.5A and5Bmay not be programmed to the MLC memory. It is contemplated that other failure types not shown are applicable to the embodiments described herein. FIGS.6A and6Bare schematic illustrations of a reduced horizontal exclusive or (XOR) scheme of a superblock600,650, according to disclosed embodiments. Aspects of the superblock300ofFIG.3may be similar to the superblock600ofFIG.6Aand the superblock650ofFIG.6B. Although a superblock scheme is exemplified, it is contemplated that the disclosed embodiments may be applicable to non-superblock schemes. The superblock scheme may refer to data striping across multiple dies, so as to achieve a higher level of parallelism. The non-superblock scheme may refer to data striping across a single die. Furthermore, it is to be understood that while a horizontal XOR scheme is exemplified, the embodiments may be applicable to a reduced vertical XOR scheme. According to the horizontal XOR scheme, the XOR parity group/stripe spans horizontally across a data stripe, such that the last block or set of blocks of the horizontal data stripe is programmed with XOR parity data. The XOR parity data may protect against data failure for the respective horizontal data stripe. According to the vertical XOR parity scheme, the XOR parity group spans vertically across a single plane, for example, such that the last block of a plane or set of blocks of the plane is programmed with XOR parity data. The XOR parity data may protect against data failure for the respective vertical data stripe. Unlike the superblock300ofFIG.3, the superblock600, specifically die7, has about 50% reduced XOR parity data when compared to the superblock300. The XOR parity data602may be located on a single plane of a single die, such as PL1of die7. Like the superblock600, the superblock650has about 50% reduced XOR parity data when compared to the superblock300.
However, rather than having XOR parity data on a single plane of a single die, the XOR parity data602may be stored on alternating strings, such as on STR1and STR3of WL0, where the XOR parity data602is stored on both PL0and PL1of a single die, such as the die7. Likewise, the 1:15 parity group652illustrates where the parity (P) may be located on alternating wordlines. The reduced XOR parity data may be due to the additional read verify/EPWR operations to check for PSF and silent program failures. The about 50% XOR parity data of the superblock600,650may only recover a block or a plane of a die604rather than a whole die failure. The parity group ratio may be about 1:15 rather than 1:7 as previously illustrated inFIG.3. However, because the additional read verify/EPWR operations are executed on the programmed wordlines and/or erase blocks of the superblock, the UBER may be substantially less than that of a superblock whose programmed wordlines and/or erase blocks do not have additional read verify/EPWR operations executed on them. FIG.7is a schematic illustration of a reduced horizontal exclusive or (XOR) scheme of a superblock700, according to disclosed embodiments. Aspects of the superblock650ofFIG.6Bmay be similar to the superblock700. Although a superblock scheme is exemplified, it is contemplated that the disclosed embodiments may be applicable to non-superblock schemes. Furthermore, it is to be understood that while a vertical XOR scheme is exemplified, the embodiments may be applicable to a reduced horizontal XOR scheme. For example, the superblock700has a 1:15 parity group ratio, where the XOR parity data702is stored on alternating strings. Regarding the superblock700, the fourth string, STR3, of the fourth wordline, WL3, is being programmed to. The data at risk due to program failure are the two previously programmed wordlines WL2and WL1. However, the source blocks for the data of WL2and WL1are stored in the TLC memory, such as the first TLC memory partition222A ofFIG.2B. WL0may be considered “safe”, where a successful read verify/EPWR operation, such as the read verify/EPWR operation described inFIG.4, has been completed on the data of WL0. The program failures704, both PSF and silent program failures, may still exist in WL1, WL2, and WL3because the read verify/EPWR operation has not yet been executed. FIG.8is a schematic illustration of a reduced vertical exclusive or (XOR) scheme of a superblock800, according to disclosed embodiments. Although a superblock scheme is exemplified, it is contemplated that the disclosed embodiments may be applicable to non-superblock schemes. Furthermore, it is to be understood that while a vertical XOR scheme is exemplified, the embodiments may be applicable to a reduced horizontal XOR scheme. The superblock800exemplifies the 1:383 parity groups scheme as illustrated inFIG.4, where the 1:383 refers to 1 XOR parity data for 383 other cells in a TLC memory that includes 96 wordlines and 8 dies. In some examples, the number of parity groups per superblock may be reduced to half or a quarter of the superblock, such that the superblock includes between about 2 and about 4 parity groups. For example, each of the parity groups may protect against data failures of the respective die and a neighboring adjacent die.
Rather than programming XOR parity data to the last die of each wordline, the XOR parity data802is programmed to the last string of the last wordline, such that the XOR parity data protects the previous wordlines and the previous strings for each plane and/or die. In some examples, the XOR parity data802may be stored in volatile memory, such as the volatile memory112ofFIG.1, until the completion of the program to the previous strings and wordlines so as to maintain a sequential program to the superblock. In one example, the XOR parity data may protect along the same die and/or plane, such that a first XOR parity data806in Die7, PL1, WL95, STR3may protect a fourth location808din Die7, PL0, WL0, STR1. In another example, the first XOR parity data806in Die7, PL1, WL95, STR3may protect a scattered group of cells such as the first location808a, second location808b, third location808c, and fourth location808d. Furthermore, volatile memory and/or the NVM, such as the first TLC memory partition222A ofFIG.2B, may store the last erase block of data such that a first erase block804may be recovered. FIG.9is a flowchart illustrating a method900of performing a foggy-fine program, according to disclosed embodiments. At block902, the controller, such as the controller108ofFIG.1, receives a write command. The controller performs a foggy/fine program at block904to the non-volatile memory, such as the second TLC memory partition222B ofFIG.2B. The data associated with the fine program may be written to one or more wordlines, such as a first wordline and a second wordline, of the NVM or to an erase block of the NVM. At block906, the data source (i.e., the data associated with the write command at block902) is held in a volatile memory, such as the volatile memory112ofFIG.1, and/or in an NVM, such as the first TLC memory partition222A ofFIG.2B, where the data source may store up to about the last two wordlines fine programmed to the NVM or the last erase block fine programmed to the NVM. At block908, a read verify operation occurs on the data that is fine programmed to the NVM. The read verify operation may be an enhanced post write read operation. Furthermore, the read verify operation may occur on either the last two wordlines previously programmed, such as a first wordline and a second wordline, the last erase block previously written, or each wordline of the last erase block previously written. The read verify operation checks for program failures such as PSF, silent program failures, and the like. At block910, the controller determines if the read verify operation was successful. If the read verify operation was not successful (i.e., a program failure is present) at block910, then at block912, the data source stored in the first TLC memory partition222A is copied to the second TLC memory partition222B, where copying the data to the second TLC memory partition222B is a fine program. The controller then performs a read verify operation on the copied data at block908. However, if the read verify operation is successful at block910, then a reduced amount of XOR parity data is programmed with the data at block914, such that the XOR parity data programmed may be about 50% of the XOR parity data programmed in previous approaches. The amount of XOR parity data programmed may depend on the level of programming redundancy performed, such as the different redundancy levels described previously inFIG.4.
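The flow of blocks902-914above, together with the source release of blocks916and918discussed below, may be summarized in a short sketch. All function and variable names here are hypothetical stand-ins; the reduced-parity step simply marks alternating strings of the parity die, one possible reading of the roughly 50% scheme of FIGS.6A and6B.

```python
def reduced_parity_locations(num_strings=4, parity_die=7):
    # One reading of the ~50% scheme: parity only on alternating strings
    # (e.g., STR1 and STR3) of the parity die, per FIG. 6B.
    return [(parity_die, s) for s in range(num_strings) if s % 2 == 1]

def fine_program_with_read_verify(data, dest, read_verify):
    """Sketch of FIG. 9: fine program (904), hold the source (906), read
    verify (908/910), re-copy on failure (912), then program reduced XOR
    parity (914) and release the source (916/918)."""
    retained = list(data)                    # block 906: data source held
    dest.extend(data)                        # block 904: foggy/fine program
    while not read_verify(dest[-len(data):]):
        dest.extend(retained)                # block 912: copy source again
    dest.append(("xor_parity", reduced_parity_locations()))  # block 914
    retained.clear()                         # block 916: release data source
    return dest                              # block 918: fine program complete

out = fine_program_with_read_verify([b"wl0", b"wl1"], [], lambda wls: True)
```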
For example, when programming the data to the second TLC memory partition222B, a plane of a die across each of the plurality of wordlines may include an XOR parity element (i.e., XOR parity data). The plurality of wordlines include a plurality of strings, where the XOR parity element is written to alternating strings of a die and at least one string of the plurality of strings does not include XOR parity data. Rather than programming a whole die with XOR parity data, about half a die is programmed with XOR parity data after a successful read verify operation. It is to be understood that the reduced XOR parity data scheme may be a reduced horizontal parity scheme, a vertical reduced parity scheme, or a combination of the previously mentioned schemes. Furthermore, in some examples the vertical reduced parity scheme may be a scattered vertical parity scheme, such that the XOR parity data protects different cells or bits per plane of each die. In some embodiments, the first TLC memory partition222A and the second TLC memory partition222B may not have the same XOR parity scheme. For example, the first TLC memory partition222A has a first ratio, where the first ratio is about a 1:7 parity group ratio, and the second TLC memory partition222B has a second ratio, where the second ratio is about a 1:15 parity group ratio. Furthermore, in some examples, the second TLC memory partition222B may not have any XOR parity data, such as the fourth FW read verify/EPWR check embodiment406described inFIG.4. At block916, the data source associated with the data of the successful read verify operation is released from the first TLC memory partition222A, the buffer, and/or the cache. At block918, the fine program is completed. By performing a read verify operation at different levels of the non-volatile memory, such as on a wordline to wordline basis, on an erase block basis, or on each wordline of an erase block basis, and storing the data source in the SLC memory, the buffer, and/or the cache, the size of the XOR parity data of the fine program to the NVM may be decreased and the reliability of the data may be increased. In one embodiment, a data storage device includes a controller and a memory device coupled to the controller. The memory device includes a first superblock that has a first parity portion of a first storage size and a second superblock that has a second parity portion of a second storage size. The second storage size is less than the first storage size. The first superblock has a third storage size and the second superblock has a fourth storage size. The third storage size is less than the fourth storage size. The first superblock is configured to handle host write data. The second superblock is configured to handle reclaimed copies of the host write data. The first superblock is TLC memory. The second superblock is TLC memory. The first parity portion and the second parity portion are XOR parity. The second storage size is about 50% less than the first storage size. The second superblock includes a plurality of wordlines. Each wordline includes a plurality of strings. At least one string of the plurality of strings does not include exclusive or (XOR) data. In another embodiment, a data storage device includes a memory device and a controller coupled to the memory device.
The controller is configured to receive host data from a host device, generate first exclusive or (XOR) parity data for the host data, encode the host data and first XOR parity data with an encoder, write the encoded host data and first XOR parity data to a first memory superblock, decode valid data and first XOR parity data written to the first memory superblock, wherein the valid data corresponds with the host data that has not become obsolete when located in the first memory superblock, generate second XOR parity data for the decoded valid data, re-encode the decoded valid data and second XOR parity data with the encoder, and write the re-encoded data and second XOR parity data to a second memory superblock. The re-encoded second XOR parity data has a size that is less than the size of the first XOR parity data. The memory device includes a plurality of dies. Each of the plurality of dies includes a first plane and a second plane. At least one plane of the first plane and the second plane includes exclusive or (XOR) data. The controller is further configured to write data to a first wordline of a plurality of wordlines of the memory device, write data to a second wordline of the plurality of wordlines, perform a read verify operation on the first wordline, and perform a read verify operation on the second wordline. At least one of the first wordline and the second wordline does not include an XOR parity element. The read verify operation is enhanced post write read (EPWR). The XOR parity element is at least one of a full die redundancy, a full plane redundancy, and an erase block redundancy. The decoder and the encoder are disposed in a front module of the data storage device. The first XOR parity data is generated in a front end module separate from the front module. In another embodiment, a data storage device includes memory means comprising a first memory superblock and a second memory superblock, means to store host data with parity data in the first memory superblock, and means to store a copy of the host data and parity data in the second memory superblock, wherein the copy of the parity data utilizes a smaller amount of parity data storage in the second superblock compared to an amount of parity data storage in the first superblock. The data storage device further includes means to perform a read verify operation to detect program failures. The read verify operation is an enhanced post write read. The means to perform a read verify operation includes either checking each wordline of a plurality of wordlines of the memory means for program failures, each erase block of a plurality of erase blocks of the memory means for program failures, or both each wordline of the plurality of wordlines and each erase block of the plurality of erase blocks for program failures. While the foregoing is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow. | 47,488 |
11861196 | DESCRIPTION OF EMBODIMENTS This application mainly aims to resolve a problem of how to save bandwidth resources between processors. The following describes technical solutions of this application with reference to accompanying drawings. FIG.1is a schematic diagram of a scenario to which technical solutions in embodiments of this application can be applied. As shown inFIG.1, a host11communicates with a storage system, and the storage system includes a plurality of storage nodes (or referred to as “nodes” for short)100. Each storage node100is one storage engine (referred to as an engine for short), and each node100includes a plurality of controllers103. Each controller103includes a plurality of processors, and each processor includes a plurality of processor cores. In addition, each node100has a front-end interface card101and a back-end interface card102. The front-end interface card101is used for communication between the node100and the host11, and the back-end interface card102is used for communication between the node100and a plurality of disk enclosures. Each disk enclosure includes a plurality of hard disks107. The hard disk is configured to store data, and may be a disk or another type of storage medium, for example, a solid-state drive or a shingled magnetic recording hard disk. The front-end interface card101is directly connected to the plurality of controllers103included in each node through an internal network channel, and the back-end interface card102is also directly connected to the plurality of controllers103included in each node through an internal network channel (FIG.1shows only some connections), to ensure that the controllers103in each node100can receive and send services by using the front-end interface card101or the back-end interface card102. In addition, each node100may be connected to the disk enclosure by using the back-end interface card102, to implement data sharing between nodes. In some application scenarios, one or more storage nodes100and a disk enclosure may be collectively referred to as a storage device. The controller103is a computing device, for example, a server or a desktop computer. In terms of hardware, as shown inFIG.2, the controller103includes at least a processor104, a memory105, and a bus106. The processor104is a central processing unit (CPU), and is configured to process an I/O request from the node100or a request generated in the controller103. One controller103may include a plurality of processors104, and each processor includes a plurality of processor cores (not shown in the figure). The memory105is configured to temporarily store data received from the host11or data read from the hard disk. When receiving a plurality of write requests sent by the host11, the controller103may temporarily store data in the plurality of write requests in the memory105. When a capacity of the memory105reaches a specific threshold, the data stored in the memory105is sent to the hard disk for storage. The memory105includes a volatile memory, a non-volatile memory, or a combination thereof. The volatile memory is, for example, a random-access memory (RAM). The non-volatile memory is, for example, any machine readable medium that can store program code, such as a flash memory chip, a floppy disk, a hard disk, a solid state disk (SSD), or an optical disc. The memory105has a power-off protection function. The power-off protection function means that the data stored in the memory105is not lost when the system is powered off and then powered on again. 
The bus106is configured to implement communication between the components in the controller103. Storage space provided by the controller103for the host11is from the plurality of hard disks107, but an actual address of the storage space provided by the hard disks is not directly exposed to the controller103or the host11. In actual application, physical storage space is virtualized into several logical units (LU), which are provided for the host11, and each logical unit has a unique logical unit number (LUN). Because the host11can directly sense the logical unit number, a person skilled in the art usually directly refers to the LUN as the logical unit. Each LUN has a LUN ID, which is used to identify the LUN. A specific location of data in an LUN may be determined based on a start address and a length of the data. A person skilled in the art usually refers to a start address as a logical block address (LBA). It can be understood that three factors, namely an LUN ID, an LBA, and a length, identify a specific address segment. The host11generates a data access request, and the data access request usually carries an LUN ID, an LBA, and a length. For ease of description, in this embodiment, the LUN ID, the LBA, and the length are referred to as a virtual address. It can be known from the foregoing descriptions that an LUN to be accessed by the request and a specific location of the LUN may be determined based on the virtual address. The controller103stores a correspondence between the virtual address and an address at which the data is stored in the hard disk. Therefore, after receiving the data access request, the controller103may determine a corresponding physical address based on the correspondence, and indicate the hard disk to read or write the data. To ensure that data is evenly stored in each storage node100, a distributed hash table (DHT) manner is usually used for routing when a storage node is selected. In the distributed hash table manner, a hash ring is evenly divided into several parts, each part is referred to as one partition, and one partition corresponds to one address segment described above. Each data access request sent by the host11to the storage system is located to one address segment. For example, data is read from the address segment, or data is written into the address segment. It should be understood that a CPU resource, a memory resource, and other resources in the storage system need to be used (in the industry, the CPU resource and the memory resource are usually referred to collectively as computing resources) to process these data access requests. The CPU resource and the memory resource are provided by the controller103. The storage node usually has a plurality of controllers103, and each controller103includes a plurality of processors. When the storage node executes a service request, a plurality of processors usually need to process in parallel a plurality of sub-requests obtained by splitting the service request. Because these sub-requests are associated with each other, forwarding and data interaction of the sub-requests between the plurality of processors are involved, and bandwidth resources between the processors are occupied. To resolve this problem, in this embodiment of this application, one CPU or one or more CPU cores in one CPU are allocated to one address segment set. The address segment set includes one or more address segments, and the address segments may be consecutive or nonconsecutive.
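A minimal sketch of the routing just described may help. The hash function (SHA-256), the modulo mappings, and the partition and address segment set counts below are illustrative assumptions (the partition count echoes an example given later in this description); the embodiments only require that a virtual address deterministically selects a partition, which in turn selects the processing resources.

```python
import hashlib

NUM_PARTITIONS = 1024     # illustrative; echoes an example given later
NUM_SEGMENT_SETS = 32     # illustrative count of address segment sets

def route(lun_id, lba, length):
    """Map a virtual address (LUN ID, LBA, length) onto the hash ring:
    hash -> partition -> address segment set, whose dedicated CPU (or CPU
    cores) then executes the request. SHA-256 and the modulo mappings are
    illustrative choices, not the algorithm of the embodiments."""
    key = f"{lun_id}:{lba}:{length}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    partition = digest % NUM_PARTITIONS
    segment_set = partition % NUM_SEGMENT_SETS   # a simple sequential rule
    return partition, segment_set

partition, segment_set = route(lun_id=3, lba=0x1000, length=4096)
```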
All data access requests for accessing these address segments are executed by the allocated CPU, or executed by the allocated one or more CPU cores. Different CPUs or different CPU cores in one CPU are allocated to different address segment sets. Further, in this embodiment of this application, one memory is allocated to each address segment set, and data (including both service data and metadata) related to a data access request for accessing an address segment included in the address segment set is temporarily stored in the allocated memory. Specifically, one memory is allocated to one address segment set, and different memories are allocated to different address segment sets. The memory herein includes but is not limited to the memory105inFIG.2. In an actual application scenario, a processor has a memory, which is also referred to as a local memory of the processor. The local memory is usually integrated into a component together with the processor, or is directly or indirectly coupled to the processor. In this case, during memory allocation, the local memory of the processor that is allocated to the address segment set may be preferentially allocated to the address segment set. It should be noted that the foregoing memory allocation manner is merely an implementation provided in this embodiment of this application, and another implementation may be used. For example, no memory is pre-allocated to an address segment set, and a memory is selected from a plurality of memories included in the storage system when any data access request needs to use a memory.FIG.13is a schematic flowchart of a capacity expansion method according to an embodiment of the present disclosure. In addition to a CPU resource and a memory resource, resources used to process a data access request may further include a network resource and a hard disk resource. Optionally, both the network resource and the hard disk resource may be pre-allocated to different address segment sets. When a new resource is added to the storage system, the new resource and an original resource may be integrated and then reallocated to the address segment sets. One implementation is to re-divide address segment sets, keep a quantity of address segments unchanged, increase a quantity of address segment sets, reduce a quantity of address segments included in each address segment set, and then reallocate resources of the storage system to the adjusted address segment sets. Another implementation is to maintain an allocation relationship between some address segments in each address segment set and original resources, and allocate newly added resources to the other address segments in the address segment set. This implementation can reduce a change of a mapping relationship between an address segment and an original resource. To better implement resource isolation between data access requests for different address segment sets, several virtual nodes are created in the storage system in this application. The virtual node is a minimum unit for resource allocation. Resources in the storage system may be classified into several equal parts, and each equal part corresponds to one virtual node. Specifically, each virtual node corresponds to some CPU resources, some memory resources, some network resources, and some hard disk resources. For example, if the storage system has four nodes100, each node has four controllers103, each controller has four CPUs, and each CPU has 48 CPU cores, one node100has 768 CPU cores in total.
If the storage system includes four nodes, there are 3072 cores in total. If each CPU corresponds to 256 GB memory, one controller has 1 TB memory, one node has 4 TB memory, and the storage system has 16 TB memory in total. If all hardware resources included in the storage system are classified into 256 equal parts, there are 256 virtual nodes, a CPU resource corresponding to each virtual node is 12 CPU cores, and a memory resource corresponding to each virtual node is 0.0625 TB. As described above, one partition corresponds to one address segment. After a virtual node is introduced, one partition set corresponds to one virtual node, and one partition set includes a plurality of partitions. Correspondingly, one address segment set corresponds to one virtual node, and one address segment set includes a plurality of address segments. In other words, an address segment is used as an input, and after calculation is performed by using a preset algorithm, a partition can be uniquely determined, and a virtual node can be further uniquely determined. Assuming that there are 1024 partitions in the storage system and 32 virtual nodes are created in the storage system, each virtual node corresponds to one partition set, and each partition set includes 32 partitions. Generally, a quantity of partitions included in the storage system remains unchanged. Even if virtual nodes are added to or removed from the storage system, only the 1024 partitions are reallocated among the virtual nodes after the addition or removal. It should be understood that creating a virtual node is not the only manner for implementing resource isolation. If there is no virtual node, resources may be directly allocated to each address segment set according to the foregoing description. For creation of a virtual node in the storage system, this embodiment provides at least two creation manners. In one manner, the virtual node is automatically created during initialization of the storage system. A specific process is as follows: The virtual node may be created based on any one of (1) a quantity of storage nodes, (2) a quantity of controllers, and (3) a quantity of CPUs that are included in the system, or a combination thereof. A quantity of created virtual nodes is less than or equal to the quantity of CPUs included in the system. Then, a resource is allocated to each virtual node, a mapping relationship between each virtual node and the allocated resource is created (for this part of the content, refer to the following descriptions ofFIG.3toFIG.6), and the created mapping relationship is stored in the host11and the front-end interface card101.FIG.14is a schematic diagram of a front-end interface card according to an embodiment. In another manner, during initialization of the storage system, management software of the storage system provides an interface for an administrator. The administrator selects a quantity of to-be-created virtual nodes in the interface. Then, the storage system creates virtual nodes according to an instruction, allocates a resource to each virtual node, creates a mapping relationship between each virtual node and the allocated resource (for this part of the content, refer to the following descriptions ofFIG.3toFIG.6), and stores the created mapping relationship in the host11and the front-end interface card101.
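The arithmetic of the 256-virtual-node example above can be checked with a few lines. The function below is only a sketch; it assumes, as the example does, that resources are divided into exactly equal parts.

```python
def per_vnode_resources(nodes, controllers_per_node, cpus_per_controller,
                        cores_per_cpu, mem_per_cpu_gb, num_vnodes):
    """Divide the system's hardware into equal parts, one per virtual node."""
    total_cores = nodes * controllers_per_node * cpus_per_controller * cores_per_cpu
    total_mem_tb = (nodes * controllers_per_node * cpus_per_controller
                    * mem_per_cpu_gb) / 1024
    return total_cores // num_vnodes, total_mem_tb / num_vnodes

cores, mem_tb = per_vnode_resources(nodes=4, controllers_per_node=4,
                                    cpus_per_controller=4, cores_per_cpu=48,
                                    mem_per_cpu_gb=256, num_vnodes=256)
assert (cores, mem_tb) == (12, 0.0625)   # matches the example above
```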
Similarly, the administrator may select a quantity of virtual nodes based on any one of (1) a quantity of storage nodes, (2) a quantity of controllers, and (3) a quantity of CPUs, a combination thereof, or based on another factor. In either of the foregoing creation manners, a quantity of virtual nodes may be adjusted during running of the storage system. For example, the quantity of virtual nodes may be increased when a controller is added to the storage system, or the quantity of virtual nodes may be reduced when a controller is removed from the storage system, or the quantity of virtual nodes may be increased when a disk enclosure is added to the storage system, or the quantity of virtual nodes may be reduced when a disk enclosure is removed from the storage system. Even if a quantity of resources does not change, the storage system can still adjust the quantity of virtual nodes as specified by the administrator. FIG.3is a schematic diagram of an allocation relationship between a virtual node, a CPU resource, and a memory resource according to an embodiment of this application. As shown inFIG.3, all CPUs and all memories in the storage system form a computing resource pool, the CPU resources and the memory resources included in the computing resource pool are classified into several computing resource groups, and each computing resource group is allocated to one virtual node. Different virtual nodes occupy different computing resource groups. Each computing resource group may use one CPU, or a plurality of computing resource groups may share one CPU. For example, a computing resource group 0 uses a CPU_0, a computing resource group 1 uses a CPU_1, a computing resource group m and a computing resource group m+1 share a CPU_m, and a computing resource group n uses a CPU_n, where both m and n are integers greater than 1, and n is greater than m. It may be understood that there are one or more further computing resource groups between the computing resource group m and the computing resource group n. When a plurality of computing resource groups share one CPU, because the CPU includes a plurality of CPU cores (for example, 48 CPU cores), the plurality of CPU cores included in the CPU may be classified into a plurality of core groups, and each core group (including one or more CPU cores) is allocated to one virtual node. In addition, each computing resource group further includes a memory resource, and the memory resource included in each computing resource group may be a local memory of a CPU included in the computing resource group. After such a configuration, a data access request corresponding to a virtual node is run on an allocated CPU, and the virtual node may use a local memory resource of the nearby CPU. A local memory of a CPU is a memory that is located in the same node as the CPU. Specifically, for example, a memory resource used by the computing resource group 0 is a Mem_0, and the Mem_0is a local memory of the CPU_0. A memory resource used by the computing resource group 1 is a Mem_1, and the Mem_1is a local memory of the CPU_1. The computing resource group m and the computing resource group m+1 share a Mem_m, and the Mem_m is a local memory of the CPU_m. A memory resource used by the computing resource group n is a Mem_n, and the Mem_n is a local memory of the CPU_n. In this embodiment, one computing resource group is allocated to each virtual node.
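One way to picture the core groups just described is the following sketch. The allocation policy (contiguous core ranges, round-robin CPU sharing) and all names are assumptions for illustration; FIG.3 only requires that each virtual node receive a dedicated core group together with the local memory of its CPU.

```python
from dataclasses import dataclass

@dataclass
class ComputingResourceGroup:
    cpu_id: int
    core_ids: list      # the core group carved out of the CPU
    local_mem: str      # local memory of the same CPU (e.g., "Mem_0")

def allocate_groups(num_vnodes, num_cpus, cores_per_cpu):
    """Assign each virtual node a core group; CPUs are shared only when
    there are more virtual nodes than CPUs, mirroring FIG. 3."""
    groups = []
    vnodes_per_cpu = max(1, num_vnodes // num_cpus)
    cores_per_group = cores_per_cpu // vnodes_per_cpu
    for vnode in range(num_vnodes):
        cpu = vnode // vnodes_per_cpu
        offset = (vnode % vnodes_per_cpu) * cores_per_group
        groups.append(ComputingResourceGroup(
            cpu_id=cpu,
            core_ids=list(range(offset, offset + cores_per_group)),
            local_mem=f"Mem_{cpu}"))
    return groups

# Virtual nodes 0 and 1 share CPU_0 with disjoint 24-core groups and Mem_0.
groups = allocate_groups(num_vnodes=8, num_cpus=4, cores_per_cpu=48)
```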
Therefore, different CPU resources and memory resources may be used for service requests corresponding to different virtual nodes, thereby avoiding resource contention. In addition, if resources are not allocated to individual virtual nodes, as in a conventional manner, when a data access request is executed, a plurality of CPUs usually need to process in parallel a plurality of sub-requests obtained by splitting the request. Because these sub-requests are associated with each other, the CPUs schedule or forward the sub-requests when processing the sub-requests. However, in the manner provided in this embodiment, each virtual node corresponds to one CPU, or a plurality of virtual nodes share one CPU. Therefore, a service request allocated to a virtual node is executed by a specified CPU, thereby reducing scheduling and forwarding between CPUs. In addition, if different data access requests share one memory, some mutually exclusive operations are inevitably performed to implement data consistency. However, in the manner in this embodiment, different memories are allocated to data access requests for accessing different address segment sets, thereby reducing mutually exclusive operations to some extent. It should be noted that the computing resource pool is only an implementation provided in this embodiment, and this embodiment may further provide another implementation. For example, some or all CPUs in the storage system form CPU resources, and some of the CPU resources are allocated to each virtual node. For another example, some or all memories in the storage system form memory resources, and some of the memory resources are allocated to each virtual node. The network resources in this embodiment of this application mainly include link resources between the controller103and the disk enclosure. A plurality of logical links may be created on each back-end interface card102, a plurality of connections may be established on each logical link, and these connections form a network resource pool.FIG.4is a schematic diagram of an allocation relationship between a virtual node and a network resource according to this application. As shown inFIG.4, the connections included in the network resource pool are classified into several link groups, and each link group uses one or more connections. For example, a link group 0 uses a connection_0, and the connection_0is established on a logical link_0. A link group 1 uses a connection_1, and the connection_1is established on a logical link_1. A link group P uses all connections between a connection_m and a connection_n, where both m and n are integers greater than 1, and n is greater than m. There are one or more connections between the connection_m and the connection_n, and these connections are established on a logical link_n. Each link group is allocated to one virtual node. Because different virtual nodes use different connections, contention for network resources caused by exchange between the nodes in the system is avoided. It should be noted that the foregoing network resource pool is only an implementation provided in this embodiment, and this embodiment may further provide another implementation. For example, some or all of connections between the controller103and the disk enclosure form network resources, and some of the network resources are allocated to each virtual node.
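The division of the network resource pool into link groups can likewise be sketched in a few lines. The round-robin policy below is an assumed example; the embodiments only require that different virtual nodes use disjoint connections.

```python
def build_link_groups(connections, num_vnodes):
    """Split the connection pool into per-virtual-node link groups so that
    back-end traffic of different virtual nodes never shares a connection."""
    groups = {v: [] for v in range(num_vnodes)}
    for idx, conn in enumerate(connections):
        groups[idx % num_vnodes].append(conn)  # round-robin is one simple policy
    return groups

# Connections here are just labels; in FIG. 4 each is established on a logical link.
pool = [f"connection_{i}" for i in range(8)]
link_groups = build_link_groups(pool, num_vnodes=4)
assert link_groups[0] == ["connection_0", "connection_4"]
```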
The hard disk resources in this embodiment are mainly the capacities of all hard disks included in the storage system. FIG. 5 is a schematic diagram of an allocation relationship between a virtual node and a hard disk resource according to an embodiment of this application. As shown in FIG. 5, the disks are divided into several chunks at a specified granularity, and these chunks form a storage pool. According to a redundant array of independent disks (RAID) rule, a specific quantity of chunks are selected from different disks to form a chunk group (CKG). For example, a chunk group 0 includes a chunk_0, a chunk_m, and a chunk_n. The chunk_0 is from a hard disk_0, the chunk_m is from a hard disk_1, and the chunk_n is from a hard disk_n. Each virtual node corresponds to one or more chunk groups. Because different virtual nodes use different CKGs, back-end hard disk resources are isolated. It should be noted that the foregoing storage pool is only one implementation provided in this embodiment, and this embodiment may further provide other implementations. For example, some or all hard disks included in the storage system form hard disk resources, and some of the hard disk resources are allocated to each virtual node.

In conclusion, each virtual node includes a CPU resource, a memory resource, a network resource, and a hard disk resource that are needed for processing a service. As shown in FIG. 6, computing resources allocated to a virtual node 0 are the computing resource group 0, hard disk resources allocated to the virtual node 0 are a chunk group 1 and a chunk group 2, and network resources allocated to the virtual node 0 are a link group m and a link group n. Resources allocated to virtual nodes are independent of each other. As the quantity of CPUs and the quantity of CPU cores increase linearly, as long as the performance of a single virtual node is kept consistent by correspondingly increasing the quantity of virtual nodes, performance can be expanded linearly as the quantity of physical resources increases. The technology described in this embodiment is referred to in the industry as CoreFarm.

The following describes a data storage process. FIG. 7 is a schematic flowchart of write request processing according to an embodiment. As shown in FIG. 7, the following steps are included.

S101. A client triggers generation of a write request by using a host 11, where the write request carries to-be-written data and a virtual address of the data, and the virtual address includes a LUN ID, an LBA, and a length.

S102. The host 11 determines a virtual node corresponding to the write request. Specifically, the host 11 performs hash calculation on the virtual address to obtain a hash value. The hash value corresponds to a specific partition, and an identifier of the partition is then mapped to a specific virtual node (referred to as a target virtual node) among the plurality of virtual nodes according to a specific rule. The rule includes but is not limited to a sequential algorithm, a random algorithm, and the like. For ease of description, an example in which the target virtual node is the virtual node 0 in FIG. 5 is used in this embodiment. According to the descriptions in FIG. 3 to FIG. 5, hardware resources allocated to the virtual node 0 include the computing resource group 0, the chunk group 1, the chunk group 2, the link group m, and the link group n. A sketch of this hash-based routing follows S103 below.

S103. The host 11 sends the write request to a storage node corresponding to the virtual node.
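As a concrete illustration of S102, the C sketch below hashes a virtual address to a partition and maps the partition to a virtual node. The hash function (FNV-1a) and the modulo-based mapping rule are assumptions chosen for the example, since the application only requires some deterministic rule; all names are hypothetical.

/* Hypothetical sketch of S102: virtual address -> partition -> virtual node. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define PARTITION_COUNT 4096u
#define VNODE_COUNT     8u

/* FNV-1a, one well-known hash; the application does not mandate a specific one. */
static uint32_t hash_virtual_address(uint32_t lun_id, uint64_t lba, uint32_t length) {
    uint8_t buf[16];
    /* Pack the virtual address fields (LUN ID, LBA, length). */
    for (int i = 0; i < 4; i++) buf[i]      = (uint8_t)(lun_id >> (8 * i));
    for (int i = 0; i < 8; i++) buf[4 + i]  = (uint8_t)(lba    >> (8 * i));
    for (int i = 0; i < 4; i++) buf[12 + i] = (uint8_t)(length >> (8 * i));
    uint32_t h = 2166136261u;
    for (int i = 0; i < 16; i++) { h ^= buf[i]; h *= 16777619u; }
    return h;
}

int main(void) {
    uint32_t partition = hash_virtual_address(7, 0x1000, 8) % PARTITION_COUNT;
    uint32_t vnode     = partition % VNODE_COUNT;  /* assumed sequential rule */
    printf("partition %" PRIu32 " -> virtual node %" PRIu32 "\n", partition, vnode);
    return 0;
}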
To determine that storage node, the host 11 stores a mapping table of the resource allocation status of each virtual node. The mapping table records a correspondence between each virtual node and each resource allocated to the virtual node (as shown in Table 1).

TABLE 1
Virtual node     Computing resource group     Chunk group                      Link group
Virtual node 0   Computing resource group 0   Chunk group 1 and chunk group 2  Link group m and link group n
Virtual node 1   Computing resource group 1   Chunk group 0                    Link group 1
. . .            . . .                        . . .                            . . .
Virtual node p   Computing resource group p   Chunk group p                    Link group p

The host 11 determines, based on the computing resource that corresponds to the virtual node 0 and that is recorded in Table 1, the storage node in which the computing resource is located. It can be learned from the description in FIG. 5 that the CPU corresponding to the virtual node 0 is located in the computing resource group 0. Further, it can be learned from FIG. 2 that the computing resource group 0 includes the CPU_0 and the Mem_0. Therefore, the host 11 sends the write request to the storage node (for example, the storage node 0) in which the CPU_0 is located. If there is no link between the host 11 and the storage node 0, the write request may be sent to another storage node through a link between the host 11 and that storage node, and that storage node then forwards the write request to the storage node 0. If there is one link between the host 11 and the storage node 0, the write request is directly sent to the storage node 0 through the link. If there are a plurality of links between the host 11 and the storage node 0, one link may be selected from the plurality of links in a polling manner or another manner, and the write request is sent to the storage node 0 through the selected link.

S104. After receiving the write request, the storage node sends the write request to the CPU corresponding to the virtual node for processing. Specifically, a front-end interface card 101 of the storage node stores a mapping table (as shown in Table 1) of the resource allocation status of each virtual node. The front-end interface card 101 may determine the corresponding target virtual node based on the virtual address carried in the write request, to further determine the CPU corresponding to the target virtual node. The example in which the target virtual node is the virtual node 0 is still used. The CPU corresponding to the virtual node 0 is the CPU_0. Therefore, the front-end interface card 101 sends the write request to the CPU_0. The CPU may perform corresponding processing on the data in the write request. Data before processing and data after processing need to be temporarily stored in a memory. It can be learned from FIG. 2 that the memory resource included in the computing resource group 0 is the Mem_0. Therefore, the data may be stored in the memory space indicated by the Mem_0.

S105. The storage node sends processed data to a corresponding hard disk for storage through a back-end physical channel that matches the virtual node. Specifically, when data stored in the memory Mem_0 reaches a specific watermark, the data stored in the memory Mem_0 needs to be written into the hard disk for persistent storage. The storage node may search the mapping table for the chunk group corresponding to the target virtual node, and write the to-be-written data into the chunk group corresponding to the target virtual node. For example, it can be learned from FIG. 5 and Table 1 that the chunk groups corresponding to the virtual node 0 are the chunk group 1 and the chunk group 2. A sketch of such a mapping-table lookup follows below.
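The C sketch below models Table 1 as an in-memory array and looks up the resources of a virtual node, as a host or front-end interface card might before routing a request; the structure and the helper names (vnode_resources, mapping_table) are assumptions for illustration only.

/* Illustrative in-memory form of Table 1 (all names hypothetical). */
#include <stdio.h>

struct vnode_resources {
    int vnode_id;
    int compute_group;    /* computing resource group id        */
    int chunk_groups[2];  /* allocated chunk groups (-1: none)  */
    int link_groups[2];   /* allocated link groups  (-1: none)  */
};

static const struct vnode_resources mapping_table[] = {
    /* virtual node 0: CKG 1 and 2; link groups m and n (ids assumed). */
    { 0, 0, { 1,  2 }, { 13, 14 } },
    /* virtual node 1: CKG 0; link group 1. */
    { 1, 1, { 0, -1 }, {  1, -1 } },
};

int main(void) {
    const struct vnode_resources *r = &mapping_table[0];
    printf("vnode %d -> computing resource group %d, chunk groups %d and %d\n",
           r->vnode_id, r->compute_group, r->chunk_groups[0], r->chunk_groups[1]);
    return 0;
}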
The chunk groups recorded for the virtual node 0 indicate that the virtual node 0 may use a hard disk resource of the chunk group 1 or a hard disk resource of the chunk group 2. It can be learned from FIG. 5 that the chunk group 0 includes the chunk_0, the chunk_m, and the chunk_n. The chunk_0 is located in the hard disk_0, the chunk_m is located in the hard disk_1, and the chunk_n is located in the hard disk_n. The storage node 0 may divide the to-be-written data into two data slices, obtain a check slice of the two data slices through calculation, and then send the two data slices and the check slice to the hard disk_0, the hard disk_1, and the hard disk_n respectively through the back-end interface card 102. In addition, the storage node 0 needs to use network resources in the network resource pool to send the data slices and the check slice. For example, the network resources corresponding to the virtual node 0 include the link group m and the link group n. Therefore, the back-end interface card 102 may send the data slices and the check slice in parallel to the corresponding hard disks by using a plurality of connections included in the link group m and/or the link group n.

According to the write request processing method provided in FIG. 7, the host 11 first determines the virtual node corresponding to the write request, and then processes the request by using the hardware resources pre-allocated to the virtual node. Because resources allocated to virtual nodes are independent of each other, when the host 11 processes a plurality of data processing requests in parallel, the plurality of data processing requests do not interfere with each other.

Processing the write request is used as an example for description in FIG. 7. In addition to processing a data read/write request, a storage system further processes other service requests such as data exchange, protocol parsing, and data flushing. In a multi-CPU and multi-CPU-core storage system, these service requests are usually executed in series by a plurality of CPUs or by a plurality of CPU cores in one CPU. A key factor affecting linearity is the overhead caused by cross-CPU and cross-CPU-core processing and serial execution in the storage system. In the embodiments, a CPU group scheduling method is provided to resolve this problem. FIG. 8 is a schematic diagram of group scheduling of the CPU cores in a virtual node. First, a virtual node corresponds to one CPU; correspondingly, this means that one virtual node uses one CPU, or a plurality of virtual nodes share one CPU. In this way, it is ensured that service requests for a same virtual node are processed by a same CPU, and service scheduling therefore remains independent between virtual nodes. Then, the plurality of CPU cores included in the CPU corresponding to the virtual node are classified into several service processing groups based on service logic, and each service processing group includes one or more CPU cores. As shown in FIG. 8, a first service processing group is specially used for I/O read and write, a second service processing group is specially used for data exchange, a third service processing group is used for protocol parsing, and a fourth service processing group is used for data flushing. CPU cores included in the third service processing group and the fourth service processing group may be shared. Specifically, for example, one CPU includes 48 CPU cores; a sketch of such a core grouping is given below.
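The following sketch shows one way such service processing groups could be represented for a 48-core CPU, with cores partitioned among I/O, data exchange, protocol parsing, and flushing, and the latter two sharing cores as in the numbers described next; the masks and names are assumptions for illustration, not a layout from this application.

/* Hypothetical service processing groups for one 48-core CPU. */
#include <stdint.h>
#include <stdio.h>

struct service_group {
    const char *service;
    uint64_t    core_mask;  /* bit i set => CPU core i belongs to the group */
};

int main(void) {
    /* 12 cores for I/O, 12 for data exchange, and 24 shared by
     * protocol parsing and data flushing. */
    struct service_group groups[] = {
        { "I/O read/write",   0x000000000FFFull },  /* cores 0-11          */
        { "data exchange",    0x000000FFF000ull },  /* cores 12-23         */
        { "protocol parsing", 0xFFFFFF000000ull },  /* cores 24-47         */
        { "data flushing",    0xFFFFFF000000ull },  /* shared with parsing */
    };
    for (int i = 0; i < 4; i++)
        printf("%-16s core mask 0x%012llx\n", groups[i].service,
               (unsigned long long)groups[i].core_mask);
    return 0;
}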
For such a 48-core CPU, it is assumed that 12 CPU cores are allocated to the first service processing group, 12 CPU cores are allocated to the second service processing group, and 24 CPU cores are allocated to both the third service processing group and the fourth service processing group. Different service requests are isolated in such a manner. Within a single service processing group, service requests are executed in series on the CPU cores allocated to that group, which prevents a service request from contending for resources with other service requests to some extent, thereby reducing mutually exclusive operations and enabling a lock-free design. When the quantity of CPU cores included in a CPU increases, the processing capability of the CPU can also be expanded linearly. In addition, after the service requests are grouped, there is less service code than before the grouping, and the service code accordingly occupies less memory space. When the total amount of memory space remains unchanged, more space can be spared in the memory to store service data, so as to increase the memory hit rate of the data.

Similar to the processing of the write request, when the client triggers a read request by using the host 11, the host 11 may determine, based on the virtual address of the to-be-read data carried in the read request, the virtual node corresponding to the request, and further determine the storage node corresponding to the virtual node (similar to S103). The host 11 sends the read request to the storage node corresponding to the virtual node. After receiving the read request, the storage node sends the read request to the CPU corresponding to the virtual node for processing (similar to S104). If the to-be-read data is not hit in the corresponding memory, the CPU corresponding to the virtual node may further determine the network resource and the hard disk resource that correspond to the virtual node, and then send the request to the corresponding hard disk by using the corresponding network resource, to read the to-be-read data.

In addition, in actual application, the cost of improving the capability of the storage system by improving single-core capability is increasing. Currently, a plurality of nodes are used in the industry, and each node has a plurality of CPU cores; in this way, the processing capability of the storage system is improved. For example, given a similar single-core capability, if the quantity of cores in the storage system increases from 48 to 768, the hardware capability of the storage system improves. However, how to enable the service processing capability of the storage system to be expanded linearly as the quantity of CPU cores and the quantity of resources such as memory resources increase is a problem that all storage device vendors need to resolve. According to the capacity expansion method provided in the embodiments, the service processing capability of the storage system can be expanded linearly as the quantity of hardware resources increases. The following describes a node capacity expansion process, with reference to FIG. 3 to FIG. 6.

FIG. 9 is a schematic diagram of a type of capacity expansion of a storage system according to an embodiment of this application, and FIG. 10 is a schematic flowchart of a capacity expansion method according to an embodiment of this application. This embodiment is described by using an example in which the quantity of controllers in a node is expanded. It is assumed that the node includes two controllers before expansion: a controller A and a controller B.
After two controllers (a controller C and a controller D) are added, the node includes four controllers. A front-end interface card 101 and a back-end interface card 102 are shared by the controllers before and after the expansion. Specifically, as shown in FIG. 10, the expansion method includes the following steps.

S201. After the controller C and the controller D are added to the system, the controller C and the controller D separately initialize virtual node instances. It may be understood that, when the quantity of controllers increases, the CPU resources and memory resources that can be provided by the entire node increase accordingly. Therefore, as long as the quantity of virtual nodes increases, the processing capability of the entire node can be improved by allocating the newly added CPU resources and memory resources to the newly added virtual nodes. The controller C is used as an example. The controller C creates a plurality of virtual nodes based on the quantity of CPUs included in the controller C. Because one CPU is allocated to one virtual node in this embodiment, the quantity of virtual nodes may be less than or equal to the quantity of CPUs included in the controller C. For example, if the controller C includes eight CPUs, the controller C may create a maximum of eight virtual nodes. After the quantity of virtual nodes is determined, a mapping relationship between each newly added virtual node and a CPU and a mapping relationship between each newly added virtual node and a memory are further determined. For example, in the controller C, a virtual node x corresponds to a CPU_x (x represents a positive integer), and the memory resource needed by the virtual node x may be a local memory (for example, a Mem_x) of the CPU_x. Therefore, the CPU_x and the Mem_x form a computing resource group, which is allocated to the virtual node x. A virtual node x+1 corresponds to a CPU_x+1, and the memory resource needed by the virtual node x+1 may be a local memory (for example, a Mem_x+1) of the CPU_x+1. Therefore, the CPU_x+1 and the Mem_x+1 form another computing resource group, which is allocated to the virtual node x+1. The manner in which the controller D creates virtual nodes is similar to that of the controller C. In addition, after the controller C and the controller D are added to the system, the controller C and the controller D establish physical links with the back-end interface card 102. A plurality of logical links are created on these physical links, and a plurality of connections may be established on each logical link. These connections are added to the network resource pool shown in FIG. 4, to expand the network resources in the network resource pool. These newly added network resources may be classified into several link groups, and each link group includes one or more connections. Then, each link group is allocated to one newly added virtual node. For example, the virtual node x corresponds to a link group x (x represents a positive integer), and the virtual node x+1 corresponds to a link group x+1.

S202. Migrate some partitions belonging to the virtual nodes of the controller A and the controller B to the virtual nodes of the controller C and the controller D. It can be learned from the foregoing description that a service request from a host 11 is routed to a virtual node based on the partition corresponding to a virtual address; the sketch below illustrates such a redistribution of partitions.
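The following C sketch redistributes partitions evenly when virtual nodes are added, mirroring the example discussed next (32 partitions per virtual node before expansion, 24 after, with 8 migrated from each original virtual node); the even-redistribution arithmetic is one possible choice, since this embodiment leaves the migration algorithm open.

/* Hypothetical even redistribution of partitions over old + new vnodes. */
#include <stdio.h>

int main(void) {
    int old_vnodes = 3, new_vnodes = 1;            /* before: 3 vnodes x 32 partitions */
    int partitions_per_old = 32;
    int total = old_vnodes * partitions_per_old;   /* total partition count unchanged  */
    int after = total / (old_vnodes + new_vnodes); /* 96 / 4 = 24 partitions per vnode */
    int migrate_per_old = partitions_per_old - after;
    printf("after expansion: %d partitions per vnode, "
           "migrate %d from each original vnode\n", after, migrate_per_old);
    return 0;
}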
When the total quantity of partitions included in the storage system remains unchanged, to enable the newly created virtual nodes to bear service requests, some partitions belonging to the original virtual nodes need to be migrated to the newly created virtual nodes. For example, before capacity expansion, one virtual node corresponds to one partition set, and one partition set includes 32 partitions; after capacity expansion, one virtual node corresponds to 24 partitions. One implementation is to re-establish the mapping relationship between all partitions in the storage system and all virtual nodes (including both the original virtual nodes and the newly added virtual nodes), and the other implementation is to migrate some partitions in each original partition set to the newly added virtual nodes while retaining the correspondence between the remaining partitions and the original virtual nodes. With reference to the foregoing example, eight partitions in each original partition set need to be migrated to the newly added virtual nodes. It should be noted that the quantity of to-be-migrated partitions depends on the proportion of the quantity of newly added virtual nodes to the quantity of virtual nodes included in the entire node. The migration algorithm is not limited in this embodiment, provided that the partitions are evenly distributed among all the virtual nodes.

S203. Update a mapping table, where the mapping table includes both the mapping table stored in the host 11 and the mapping table in the front-end interface card 101. According to the description in S201, CPU resources, memory resources, and network resources are allocated to the newly added virtual nodes. These newly added allocation relationships need to be recorded in the mapping table for processing service requests. Because there are no hard disks in the controller C and the controller D, the hard disk resources needed by the newly added virtual nodes still come from the storage pool shown in FIG. 5. Specifically, chunk groups belonging to the original virtual nodes may be migrated to the newly added virtual nodes. The migration algorithm is not limited in this embodiment, provided that the quantities of chunk groups allocated to the virtual nodes are approximately equal. An updated mapping table is shown in Table 2.

TABLE 2
Virtual node        Computing resource group        Chunk group                      Link group
Virtual node 0      Computing resource group 0      Chunk group 1 and chunk group 2  Link group m and link group n
Virtual node 1      Computing resource group 1      Chunk group 0                    Link group 1
. . .               . . .                           . . .                            . . .
Virtual node p      Computing resource group p      Chunk group p                    Link group p
. . .               . . .                           . . .                            . . .
Virtual node x      Computing resource group x      Chunk group p + 1                Link group x
Virtual node x + 1  Computing resource group x + 1  Chunk group p + 2                Link group x + 1
. . .               . . .                           . . .                            . . .

S204. The host 11 sends the service request based on the new partition routing relationship. For the manner of processing the service request, refer to the schematic flowchart of write request processing shown in FIG. 7. Details are not described herein again.

FIG. 11 is a schematic diagram of another type of capacity expansion of a storage system according to an embodiment of this application, and FIG. 12 is a schematic flowchart of another expansion method according to an embodiment of this application. In the examples shown in FIG. 11 and FIG. 12, not only the quantity of controllers in a node is expanded, but also the quantity of disks or the quantity of disk enclosures is expanded. An example in which the quantity of controllers in a node is expanded is still used.
It is assumed that the node includes two controllers, a controller A and a controller B, before capacity expansion. After two controllers (a controller C and a controller D) are added, the node includes four controllers. A front-end interface card 101 and a back-end interface card 102 are shared by the controllers before and after the expansion. As shown in FIG. 12, the method includes the following steps.

S301. After the controller C and the controller D are added to the system, the controller C and the controller D separately initialize virtual node instances. For this step, refer to S201 shown in FIG. 10.

S302. Migrate some partitions belonging to the virtual nodes of the controller A and the controller B to the virtual nodes of the controller C and the controller D. For this step, refer to S202 shown in FIG. 10.

S303. Select a primary controller, for example, the controller C, from the newly added controllers (the controller C and the controller D) based on a selection algorithm.

S304. The controller C divides the space of the newly added hard disks into several chunks, and adds these chunks to the storage pool. When the controller C or the controller D receives a write request, the write request corresponds to the newly added virtual nodes, and chunks from different hard disks form a chunk group to accommodate the data carried in the write request. It can be learned that a plurality of newly added chunk groups in the storage pool may be allocated to the newly added virtual nodes, and each virtual node uses one or more chunk groups.

S305. Update a mapping table, where the mapping table includes both the mapping table stored in a host and the mapping table in the front-end interface card 101. For this step, refer to S203 shown in FIG. 10. Different from S203, in the example in S203, the quantity of hard disk resources does not increase, and therefore the chunk groups corresponding to the newly added virtual nodes are obtained by migrating the chunk groups of the original virtual nodes in the system. In the example in S305, because the quantity of hard disk resources also increases, the chunk groups corresponding to the newly added virtual nodes come from the newly added hard disk resources.

According to the capacity expansion manners shown in FIG. 11 and FIG. 12, newly added hard disk resources are allocated to newly added virtual nodes, and the data carried in write requests corresponding to the newly added virtual nodes may be written into the newly added hard disks. However, a large amount of old data is still stored in the original hard disks. To evenly distribute the data stored in the system across all the hard disks, one manner is to migrate some old data to the new hard disks for storage, and another manner is not to actively migrate the old data, but to migrate the valid data (data that has not been modified) in the old data to the newly added hard disks when garbage collection is performed on the old data. As the system runs, an increasing amount of garbage data is generated, and after several garbage collection operations, the data can be evenly distributed. An advantage of this manner is that, because the data is not actively migrated, bandwidth overheads between nodes or between controllers can be reduced; a sketch of this garbage-collection-driven redistribution is given below.

According to the two capacity expansion methods shown in FIG. 9 and FIG. 12, when the quantity of controllers in the system increases, the quantity of virtual nodes also increases, and the newly added resources are allocated to the newly added virtual nodes. In this way, fewer resources allocated to the original virtual nodes are preempted to some extent.
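A minimal sketch of the garbage-collection-driven redistribution mentioned above, assuming a simplified model in which a collected chunk group's valid data is rewritten into a chunk group built from the newly added disks; the function and structure names are invented for illustration.

/* Simplified model: GC moves valid data from an old chunk group to a new one. */
#include <stdbool.h>
#include <stdio.h>

#define CHUNKS 8

struct chunk_group {
    const char *name;
    bool valid[CHUNKS];  /* validity of the data in each chunk */
};

/* Copy only still-valid data to the target, then release the source. */
static void gc_collect(struct chunk_group *src, struct chunk_group *dst) {
    int moved = 0;
    for (int i = 0; i < CHUNKS; i++) {
        if (src->valid[i]) {
            dst->valid[moved++] = true;  /* rewrite valid data on new disks */
            src->valid[i] = false;       /* source chunk can be reclaimed   */
        }
    }
    printf("moved %d valid chunks from %s to %s\n", moved, src->name, dst->name);
}

int main(void) {
    struct chunk_group old_ckg = { "old CKG", { true, false, true, false,
                                                false, true, false, false } };
    struct chunk_group new_ckg = { "new CKG", { false } };
    gc_collect(&old_ckg, &new_ckg);
    return 0;
}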
With either method, therefore, as the quantity of hardware resources increases, the processing capability of the entire system increases correspondingly.

All or some of the foregoing embodiments may be implemented by using software, hardware, firmware, or any combination thereof. When software is used to implement the embodiments, the embodiments may be implemented completely or partially in a form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded or executed on a computer, the procedures or functions according to the embodiments of the present disclosure are all or partially generated. The computer may be a general-purpose computer, a dedicated computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or may be transmitted from a computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, a coaxial cable, an optical fiber, or a digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible by the computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid state disk (SSD)), or the like.

It should be understood that, in the embodiments of this application, the term "first" and the like are merely intended to indicate objects, but do not indicate a sequence of the corresponding objects. A person of ordinary skill in the art may be aware that, in combination with the examples described in the embodiments disclosed in this specification, units and algorithm steps may be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether the functions are performed by hardware or software depends on the particular applications and design constraint conditions of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of this application. It may be clearly understood by a person skilled in the art that, for the purpose of convenient and brief description, for a detailed working process of the foregoing system, apparatus, and unit, refer to the corresponding process in the foregoing method embodiments; details are not described herein again.

In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiment is merely an example. For example, the unit division is merely logical function division and may be other division in actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented by using some interfaces.
The indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units may be selected based on actual requirements to achieve the objectives of the solutions of the embodiments. In addition, functional units in the embodiments of this application may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units are integrated into one unit. When the functions are implemented in the form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or some of the technical solutions may be implemented in a form of a software product. The software product is stored in a storage medium, and includes several instructions for instructing a computer device (which may be a personal computer, a storage node, or a network device) to perform all or some of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes: any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc. The foregoing descriptions are merely specific implementations of this application, but are not intended to limit the protection scope of this application. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims. | 50,759 |
11861197 | DETAILED DESCRIPTION Various embodiments will be described hereinafter with reference to the drawings. In general, according to one embodiment, a memory system is capable of being connected to a host. The memory system includes a non-volatile memory and a controller. The non-volatile memory includes a plurality of blocks. The controller controls write/read of data to/from the non-volatile memory in response to a command from the host. The controller manages the validity of the data written in the non-volatile memory using a data map. The data map includes a plurality of first fragment tables. Each of the first fragment tables stores first information and second information. The first information indicates the validity of each piece of data having a predetermined size written in the range of physical addresses in the non-volatile memory allocated to the first fragment table. The second information indicates the validity of a plurality of pieces of data having the predetermined size in each of a predetermined number of entries. The controller selects a write destination block based on the size of the write data requested to be written to the non-volatile memory by a write command from the host.

FIG. 1 is a block diagram illustrating an example of a configuration of an information processing system including a memory system according to an embodiment. In the present embodiment, the memory system is a semiconductor storage device configured to write data (user data) to a non-volatile memory and read the data from the non-volatile memory. This memory system may be realized as, for example, a solid state drive (SSD), or may be realized as another storage device such as a memory card. In the present embodiment, it is assumed that the memory system is realized as an SSD.

As illustrated in FIG. 1, an information processing system 1 includes a host 2 and a memory system 3. The host 2 is an information processing device that operates as a host device for the memory system 3, and can be realized as, for example, a personal computer, a server device, a mobile phone, an imaging device, a mobile terminal (a tablet computer, a smartphone, or the like), a game machine, or an in-vehicle terminal (a car navigation system or the like). The memory system 3 is configured to be connectable to the host 2, and includes a non-volatile memory 4 and a controller 5 (control circuit) that controls writing and reading of data to and from the non-volatile memory 4. The non-volatile memory 4 may be configured to be detachable from the controller 5; according to this, the memory capacity of the memory system 3 can be freely expanded.

In a case where the memory system 3 is realized as an SSD as described above, the non-volatile memory 4 is, for example, a NAND type flash memory. In this case, the non-volatile memory 4 (NAND type flash memory) includes a plurality of memory cells (a memory cell array) arranged in a matrix. The non-volatile memory 4 may be a NAND type flash memory having a two-dimensional structure or a NAND type flash memory having a three-dimensional structure. Further, the memory cell array of the non-volatile memory 4 includes a plurality of blocks, and each of the blocks is organized into a large number of pages. In the memory system 3 (SSD), each block functions as a data erasing unit, and each page is a unit of a data writing operation and a data reading operation. In addition, various data are written in the non-volatile memory 4, and the non-volatile memory 4 further stores an address translation table 41 (hereinafter simply referred to as the LUT 41), called a look up table (LUT).
The LUT 41 is also called L2P (Logical address to Physical address). The LUT 41 is data for managing the correspondence between a logical address used when the host 2 accesses the memory system 3 (writes data to the non-volatile memory 4 or reads data from the non-volatile memory 4) and a physical address indicating the physical position in the non-volatile memory 4 where the data is written. In other words, the LUT 41 stores the physical address corresponding to each of the logical addresses. In a case where the non-volatile memory 4 is a NAND type flash memory, the logical address managed by the LUT 41 is a logical block address (LBA), and the physical address is a physical block address (PBA). In the following description, the logical address will be described as the LBA and the physical address as the PBA.

In addition, the non-volatile memory 4 further stores a data map 42 (hereinafter simply referred to as the VDM 42), called a valid data map (VDM). The VDM 42 is data for managing the validity of the data written to the physical addresses in the non-volatile memory 4 (that is, whether the data is valid or invalid). At least one of the LUT 41 and the VDM 42 may be stored in a non-volatile memory other than the non-volatile memory 4, for example. Further, the non-volatile memory 4 may be configured with separate memory (regions) for storing the data, the LUT 41, and the VDM 42.

The controller 5 includes a communication interface control unit 51, a write buffer memory 52, a read buffer memory 53, a non-volatile memory controller 54, a memory 55, and a processor 56. The communication interface control unit 51, the write buffer memory 52, the read buffer memory 53, the non-volatile memory controller 54, the memory 55, and the processor 56 are electrically connected via an internal bus IB.

The communication interface control unit 51 controls communication between an external device (for example, the host 2) and the memory system 3. Specifically, the communication interface control unit 51 receives various commands from the host 2. The commands from the host 2 include, for example, a write command (write request) and a read command (read request). A write command received by the communication interface control unit 51 includes the data to be written to the non-volatile memory 4 based on the write command and the LBA used by the host 2 to access the data. Further, a read command received by the communication interface control unit 51 includes the LBA used when the host 2 accesses the data to be read based on the read command (that is, the LBA corresponding to the data).

When a write command is received by the communication interface control unit 51, data is written in the non-volatile memory 4 based on the write command. The write buffer memory 52 temporarily stores the data to be written in the non-volatile memory 4. The data stored in the write buffer memory 52 is written in the non-volatile memory 4 via the non-volatile memory controller 54. On the other hand, when a read command is received by the communication interface control unit 51, data is read from the non-volatile memory 4 based on the read command, and the read buffer memory 53 temporarily stores the data read from the non-volatile memory 4 by the non-volatile memory controller 54. The data stored in the read buffer memory 53 is transmitted to the host 2 via the communication interface control unit 51. The non-volatile memory controller 54 controls writing data to the non-volatile memory 4 and reading data from the non-volatile memory 4.
Although a detailed description will be omitted, the non-volatile memory controller 54 may be configured to include a direct memory access controller (DMAC), an error correction unit, a randomizer (or a scrambler), and the like.

The memory 55 is a main memory device used as a working memory of the processor 56. The memory 55 is, for example, a dynamic random access memory (DRAM), and may be another semiconductor memory such as a static random access memory (SRAM). The memory 55 can be written and read at a higher speed than the non-volatile memory 4, and includes (a region used as) a cache memory 551. The cache memory 551 stores cache data such as the LUT 41 and the VDM 42 stored in the non-volatile memory 4, for example.

The processor 56 controls the operation of the entire controller 5 via the internal bus IB. The processor 56 executes various processes (for example, processes for various commands received from the host 2) by executing a control program (firmware) stored in, for example, a read only memory (ROM) (not shown). In the present embodiment, the controller 5 functions as a flash translation layer (FTL) configured to perform data management and block management of the non-volatile memory 4 (NAND type flash memory) by means of such a processor 56. The processor 56 may be, for example, a central processing unit (CPU), a micro-processing unit (MPU), or a digital signal processor (DSP).

By executing the above-described control program, the processor 56 realizes functional units such as a write control unit 561, a read control unit 562, a garbage collection control unit 563, an address translation unit 564, a management unit 565, and a cache memory control unit 566. Each of these units 561 to 566 is realized by the control program (that is, software) as described above, but may instead be realized by hardware or by a combination of software and hardware.

In a case where a write command is received by the communication interface control unit 51, the write control unit 561 controls the communication interface control unit 51, the write buffer memory 52, and the non-volatile memory controller 54, and causes the non-volatile memory 4 to execute a writing process for the data included in the write command. In a case where a read command is received by the communication interface control unit 51, the read control unit 562 controls the communication interface control unit 51, the read buffer memory 53, and the non-volatile memory controller 54, and causes the non-volatile memory 4 to execute a reading process for the data corresponding to the LBA included in the read command.

The garbage collection control unit 563 executes garbage collection (GC) for the non-volatile memory 4 with reference to the above-described VDM 42 by cooperating with the write control unit 561, the read control unit 562, and the non-volatile memory controller 54, for example. The garbage collection is a process of releasing unnecessary memory regions of the non-volatile memory 4. Note that compaction, which eliminates fragmentation of the memory regions of the non-volatile memory 4, may be performed together with the garbage collection.

When the above-mentioned read command is received by the communication interface control unit 51, the address translation unit 564 executes a process of converting the LBA included in the read command into a PBA (physical address) by using the LUT 41 stored in the non-volatile memory 4.
In the memory system 3, it is possible to read the data corresponding to the LBA from the non-volatile memory 4 based on the PBA translated from the LBA by the address translation unit 564 in this way.

The management unit 565 executes a process of updating the LUT 41 and the VDM 42 when the above-described write command is received by the communication interface control unit 51 and data is written in the non-volatile memory 4 based on the write command. The cache memory control unit 566 executes a process of reading (a part of) the LUT 41 or (a part of) the VDM 42 from the non-volatile memory 4 via, for example, the read control unit 562, and storing the LUT 41 or the VDM 42 in the cache memory 551. Further, the cache memory control unit 566 executes a process of reading (a part of) the LUT 41 or (a part of) the VDM 42 stored in the cache memory 551 via the write control unit 561 and writing (writing back) the LUT 41 or the VDM 42 into the non-volatile memory 4.

Although the example in which the memory system 3 is provided outside the host 2 has been illustrated in FIG. 1, NVMe over Fabrics or the like may be used as the interface between the host 2 and the memory system 3. Further, the memory system 3 may be built in the host 2. Further, the memory system 3 may be connected to a plurality of hosts 2, or a plurality of memory systems 3 may be connected to one or more hosts 2.

Here, in the above-mentioned LUT 41, the correspondence between the LBA (logical address) and the PBA (physical address) is managed. For example, when a write command from the host 2 is received by the communication interface control unit 51 and data is written to the non-volatile memory 4 based on the write command, the management unit 565 needs to update, in the LUT 41, the correspondence between the LBA included in the write command and the PBA in the non-volatile memory 4 in which the data is written (that is, needs to register the correspondence in the LUT 41). However, when wide LBA ranges are designated in the above-mentioned write command, it takes time to update the correspondence between the LBAs and the PBAs in the LUT 41.

Therefore, the LUT 41 in the present embodiment has a hierarchical structure formed of a plurality of hierarchies and is configured to include a plurality of tables (hereinafter referred to as LUT fragment tables) corresponding to the plurality of hierarchies. The hierarchical structure of the LUT 41 is determined based on the setting information of the memory system 3 including, for example, the capacity of the non-volatile memory 4. In such a LUT 41, each of the plurality of LUT fragment tables has, for example, the same size. Further, as will be described in detail later, an LUT fragment table corresponding to an upper hierarchy among the plurality of LUT fragment tables corresponding to the plurality of hierarchies stores (the range of) LBAs, reference destination information (hereinafter referred to as a LUT pointer) for referring to the LUT fragment table corresponding to the hierarchy lower than that LUT fragment table, and the like. The LUT pointer includes, for example, the PBA in the non-volatile memory 4 in which the LUT fragment table of the reference destination is stored. Further, the LUT fragment tables corresponding to the lowest hierarchy in the hierarchical structure of the LUT 41 store the PBA corresponding to each of the LBAs allocated to the LUT fragment table.
That is, the LUT 41 in the present embodiment has a hierarchical structure in which the LUT fragment tables corresponding to lower hierarchies can be referred to sequentially from the LUT fragment table corresponding to an upper hierarchy, and the correspondence between the LBAs and the PBAs is managed in this hierarchical structure. Hereinafter, the LUT 41 having the hierarchical structure will be conceptually described with reference to FIG. 2.

In the example illustrated in FIG. 2, it is assumed that the LUT 41 has a hierarchical structure formed of four hierarchies. In this case, the LUT 41 includes a plurality of first LUT fragment tables T411 to fourth LUT fragment tables T414. As illustrated in FIG. 2, the first LUT fragment tables T411 are the LUT fragment tables corresponding to the lowest hierarchy (hereinafter referred to as the first hierarchy) in the hierarchical structure of the LUT 41. The second LUT fragment tables T412 are the LUT fragment tables corresponding to the hierarchy above the first LUT fragment tables T411 (hereinafter referred to as the second hierarchy) in the hierarchical structure of the LUT 41. The third LUT fragment tables T413 are the LUT fragment tables corresponding to the hierarchy above the second LUT fragment tables T412 (hereinafter referred to as the third hierarchy) in the hierarchical structure of the LUT 41. The fourth LUT fragment tables T414 are the LUT fragment tables corresponding to the hierarchy above the third LUT fragment tables T413 (hereinafter referred to as the fourth hierarchy) in the hierarchical structure of the LUT 41. In the example illustrated in FIG. 2, the fourth hierarchy is the highest hierarchy in the hierarchical structure of the LUT 41.

Hereinafter, each of the above-mentioned first LUT fragment tables T411 to fourth LUT fragment tables T414 will be described in detail.

First, consecutive LBA ranges are allocated to each of the plurality of first LUT fragment tables T411, and each first LUT fragment table T411 includes a plurality of entries C411. To each of the plurality of entries C411 included in a first LUT fragment table T411, one different LBA from the LBA ranges allocated to that first LUT fragment table T411 is allocated, and the entry stores the PBA corresponding to that LBA (that is, the PBA in which the data corresponding to the LBA is written). In the present embodiment, the entire LBA range used by the host 2 to access the memory system 3 is divided among the number of the first LUT fragment tables T411, and each divided LBA range is allocated to one first LUT fragment table T411. With this, the plurality of first LUT fragment tables T411 can manage the PBAs corresponding to the entire range of LBAs used by the host 2 when accessing the memory system 3.

Next, LBA ranges wider than those of the first LUT fragment tables T411 described above are allocated to each of the plurality of second LUT fragment tables T412, and each second LUT fragment table T412 includes a plurality of entries C412. To each of the plurality of entries C412 included in a second LUT fragment table T412, the LBA range allocated to one first LUT fragment table T411 corresponding to the hierarchy below that second LUT fragment table T412 is allocated, and the entry stores the LUT pointer indicating (the position of) that first LUT fragment table T411.
In this case, the LBA ranges allocated to each second LUT fragment table T412 correspond to the LBA ranges allocated to all the first LUT fragment tables T411 indicated by the LUT pointers stored in the plurality of entries C412 included in that second LUT fragment table T412.

Further, LBA ranges wider than those of the second LUT fragment tables T412 described above are allocated to each of the plurality of third LUT fragment tables T413, and each third LUT fragment table T413 includes a plurality of entries C413. To each of the plurality of entries C413 included in a third LUT fragment table T413, the LBA range allocated to one second LUT fragment table T412 corresponding to the hierarchy below that third LUT fragment table T413 is allocated, and the entry stores the LUT pointer indicating (the position of) that second LUT fragment table T412. In this case, the LBA ranges allocated to each third LUT fragment table T413 correspond to the LBA ranges allocated to all the second LUT fragment tables T412 indicated by the LUT pointers stored in the plurality of entries C413 included in that third LUT fragment table T413.

Further, LBA ranges wider than those of the third LUT fragment tables T413 described above are allocated to each of the plurality of fourth LUT fragment tables T414, and each fourth LUT fragment table T414 includes a plurality of entries C414. To each of the plurality of entries C414 included in a fourth LUT fragment table T414, the LBA range allocated to one third LUT fragment table T413 corresponding to the hierarchy below that fourth LUT fragment table T414 is allocated, and the entry stores the LUT pointer indicating (the position of) that third LUT fragment table T413. In this case, the LBA ranges allocated to each fourth LUT fragment table T414 correspond to the LBA ranges allocated to all the third LUT fragment tables T413 indicated by the LUT pointers stored in the plurality of entries C414 included in that fourth LUT fragment table T414.

Here, each of the plurality of fourth LUT fragment tables T414 corresponding to the fourth hierarchy (that is, the highest hierarchy in the hierarchical structure) corresponds to one of a plurality of namespaces. A namespace is a region obtained by logically dividing the memory region (the plurality of blocks) included in the non-volatile memory 4. By allocating a namespace to each memory region in a predetermined range, for example, even if LBAs overlap in two or more memory regions, it is possible to access the appropriate data by using the namespace ID (identification information for identifying the namespace) and the LBA. According to this, accesses to different namespaces can be treated in the same way as accesses to different devices.

In FIG. 2, the plurality of fourth LUT fragment tables T414 correspond to the namespaces NS1 to NSn (n is a natural number of 2 or more). In this case, the number of the fourth LUT fragment tables T414 is n. As illustrated in FIG. 2, the LUT 41 has a hierarchical structure for each of (the fourth LUT fragment tables T414 corresponding to) the namespaces NS1 to NSn, and the number of hierarchies for each of the namespaces NS1 to NSn is determined according to (the size of) the memory region allocated to that namespace. For example, in a case where the memory region allocated to a namespace is small, the number of hierarchies of the namespace is small.
On the other hand, in a case where the memory region allocated to a namespace is large, the number of hierarchies of the namespace is large. The example illustrated in FIG. 2 indicates a case where the number of hierarchies in each of the namespaces NS1 to NSn is the same.

In the LUT 41 having the hierarchical structure illustrated in FIG. 2 described above, the LUT pointer stored in each of the entries C414 included in a fourth LUT fragment table T414 corresponding to the fourth hierarchy (the highest hierarchy) indicates a third LUT fragment table T413 corresponding to the third hierarchy, the LUT pointer stored in each of the entries C413 included in a third LUT fragment table T413 indicates a second LUT fragment table T412 corresponding to the second hierarchy, the LUT pointer stored in each of the entries C412 included in a second LUT fragment table T412 indicates a first LUT fragment table T411 corresponding to the first hierarchy (the lowest hierarchy), and each entry C411 included in a first LUT fragment table T411 stores the PBA corresponding to an LBA. According to such a LUT 41, the PBA corresponding to an LBA can be specified by sequentially referring to the fourth LUT fragment table T414, the third LUT fragment table T413, the second LUT fragment table T412, and the first LUT fragment table T411 based on the LBA designated in various commands (the LBA included in those commands).

Here, in the example illustrated in FIG. 2, the first LUT fragment tables T411 are the LUT fragment tables corresponding to the lowest hierarchy in the hierarchical structure of the LUT 41, and the PBA corresponding to one LBA is stored in each of the entries C411 included in a first LUT fragment table T411. In this case, assuming that the size of the data written in one PBA is 4 KiB and each first LUT fragment table T411 includes 32 entries C411, a range of 32 LBAs (that is, LBAs for accessing 128 KiB of data) is allocated to each first LUT fragment table T411 corresponding to the first hierarchy.

Similarly, assuming that each second LUT fragment table T412 includes 32 entries C412, and each of the entries C412 stores a LUT pointer indicating a first LUT fragment table T411 to which 32 LBAs for accessing 128 KiB of data are allocated (that is, the range of 32 LBAs allocated to that first LUT fragment table T411 is allocated to the entry C412), a range of 32 × 32 = 1,024 LBAs (that is, LBAs for accessing 4 MiB of data) is allocated to each second LUT fragment table T412 corresponding to the second hierarchy.

Further, assuming that each third LUT fragment table T413 includes 32 entries C413, and each of the entries C413 stores a LUT pointer indicating a second LUT fragment table T412 to which 1,024 LBAs for accessing 4 MiB of data are allocated (that is, the range of 1,024 LBAs allocated to that second LUT fragment table T412 is allocated to the entry C413), a range of 1,024 × 32 = 32,768 LBAs (that is, LBAs for accessing 128 MiB of data) is allocated to each third LUT fragment table T413 corresponding to the third hierarchy.
In addition, assuming that each fourth LUT fragment table T414 includes 32 entries C414, and each of the entries C414 stores a LUT pointer indicating a third LUT fragment table T413 to which 32,768 LBAs for accessing 128 MiB of data are allocated (that is, the range of 32,768 LBAs allocated to that third LUT fragment table T413 is allocated to the entry C414), a range of 32,768 × 32 = 1,048,576 LBAs (that is, LBAs for accessing 4 GiB of data) is allocated to each fourth LUT fragment table T414 corresponding to the fourth hierarchy.

That is, in the example of the LUT 41 illustrated in FIG. 2, each of the first LUT fragment tables T411 manages an LBA range for accessing 128 KiB of data, each of the second LUT fragment tables T412 manages an LBA range for accessing 4 MiB of data, each of the third LUT fragment tables T413 manages an LBA range for accessing 128 MiB of data, and each of the fourth LUT fragment tables T414 manages an LBA range for accessing 4 GiB of data.

In FIG. 2, an example in which a LUT pointer is stored in each of the plurality of entries C414 included in the fourth LUT fragment table T414 is illustrated; however, in a case where the plurality of third LUT fragment tables T413 indicated by those LUT pointers are arranged contiguously in the non-volatile memory 4, the fourth LUT fragment table T414 may be configured to store only a LUT pointer indicating the first of the plurality of third LUT fragment tables T413 (that is, configured to omit the LUT pointers indicating the third LUT fragment tables T413 other than the first). According to this, the size of the LUT 41 can be reduced. The fourth LUT fragment table T414 has been described here, but the same applies to the other LUT fragment tables.

Further, for example, when the continuity of the PBAs in the non-volatile memory 4 in which the data corresponding to the LBA ranges allocated to one LUT fragment table is written is guaranteed, it is also possible to omit the LUT fragment tables corresponding to the hierarchies below that LUT fragment table (that is, the tables indicated by the LUT pointers stored in the entries included in that LUT fragment table). Specifically, for example, a second LUT fragment table T412 manages an LBA range for accessing 4 MiB of data, but in a case where the 4 MiB of data accessed by the LBAs managed by that second LUT fragment table T412 is written in consecutive PBAs, the entry C413 included in the third LUT fragment table T413 may store the first PBA in which the 4 MiB of data is written, instead of the LUT pointer indicating the second LUT fragment table T412. According to this, since it is not necessary to refer to the second LUT fragment table T412 and the first LUT fragment table T411 below the third LUT fragment table T413, the LUT 41 can be referred to efficiently and the access speed for the data written in the non-volatile memory 4 can be improved. With 32 entries per table, the lookup walk can be sketched as follows.
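Under the 32-entries-per-table assumption above, 5 bits of the LBA select the entry at each hierarchy, so a lookup within one namespace can be sketched as below; the index arithmetic follows from the sizes just derived, while the variable names are assumptions and the sketch omits the actual table reads.

/* Sketch of a 4-level LUT walk with 32 entries (5 index bits) per table. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Example LBA within one fourth table's range of 1,048,576 LBAs. */
    uint32_t lba = 777777;

    /* Entry index consumed at each hierarchy, from the fourth (highest)
     * LUT fragment table down to the first (lowest). */
    uint32_t i4 = (lba >> 15) & 31;  /* entry C414: selects a third table  */
    uint32_t i3 = (lba >> 10) & 31;  /* entry C413: selects a second table */
    uint32_t i2 = (lba >> 5)  & 31;  /* entry C412: selects a first table  */
    uint32_t i1 = lba & 31;          /* entry C411: holds the PBA itself   */

    printf("LBA %" PRIu32 " -> C414[%" PRIu32 "] -> C413[%" PRIu32
           "] -> C412[%" PRIu32 "] -> C411[%" PRIu32 "] = PBA\n",
           lba, i4, i3, i2, i1);
    return 0;
}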
The PBA storing unit 41a stores the PBA corresponding to the one LBA allocated to the PBA storing unit 41a (entry C411), that is, the PBA in which the data corresponding to the LBA is written. In a case where the data corresponding to the one LBA allocated to the PBA storing unit 41a is stored in the cache memory 551, the address information (PBA) in the cache memory 551 is stored in the PBA storing unit 41a instead. The size of the PBA stored in the PBA storing unit 41a is, for example, 32 bits. Further, for example, 8-bit management data MD1 is attached to the PBA stored in the PBA storing unit 41a, and the management data MD1 is stored in the PBA storing unit 41a together with the PBA. The management data MD1 attached to the PBA in this way includes, for example, data for managing whether the stored value is a PBA in the non-volatile memory 4 or address information in the cache memory 551. In this case, the size of each PBA storing unit 41a is 40 bits, which is the sum of the size of the PBA (32 bits) and the size of the management data MD1 (8 bits), and the total size of the 32 PBA storing units 41a is 160 bytes.

The LBA storing unit 41b stores the first LBA in the LBA ranges allocated to the first LUT fragment table T411. The management data storing unit 41c stores a namespace ID for identifying the namespace to which the first LUT fragment table T411 belongs and a Grain corresponding to the LBA ranges allocated to the first LUT fragment table T411 (the LBA ranges managed by the first LUT fragment table T411). In addition, other information may be stored in the management data storing unit 41c. Specifically, the management data storing unit 41c may store identification information (hierarchy ID) or the like for identifying the hierarchy (first hierarchy) corresponding to the first LUT fragment table T411.

Here, for example, when the LUT 41 is updated in the present embodiment, a part of the LUT 41 (the LUT fragment table to be updated) is stored in the cache memory 551. In this case, the part of the LUT 41 is stored in cache line units. Further, the part of the LUT 41 updated in the cache memory 551 is written back to the non-volatile memory 4 in cache line units. It is assumed that one first LUT fragment table T411 is stored in the cache memory 551 per cache line described above.

Assuming that the first LUT fragment table T411 stored in the cache memory 551 is referred to as LUT cache data, the LUT cache data further includes, in addition to the PBA storing units 41a, the LBA storing unit 41b, and the management data storing unit 41c described above, pointers indicating the LUT cache data to be associated with it in, for example, the cache memory 551. Specifically, the LUT cache data includes a prior pointer storing unit 41d that stores a pointer indicating the LUT cache data referenced prior to this LUT cache data, and a next pointer storing unit 41e that stores a pointer indicating another piece of LUT cache data referenced next after this LUT cache data. As the pointers stored in the prior pointer storing unit 41d and the next pointer storing unit 41e described above, for example, the PBAs in which the other LUT cache data are stored are used; an address in another format may be used instead. By using the pointers to the LUT cache data that should be referred to before and after this LUT cache data, access to the cache memory 551 can be sped up, and thereby continuous access can be realized. The LUT cache data may further include other management data.
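The size bookkeeping above is easy to verify, and the cache-data layout can be modeled in a few lines. The field names below are illustrative assumptions (the on-media encoding and the split of the remaining bytes are not prescribed here); only the 40-bit unit size and the 160-byte aggregate come from the description.

```python
from dataclasses import dataclass, field
from typing import List, Optional

PBA_BITS, MD1_BITS, UNITS = 32, 8, 32
assert UNITS * (PBA_BITS + MD1_BITS) // 8 == 160   # 32 units of 40 bits each

@dataclass
class LutCacheData:                  # one cached first LUT fragment table T411
    pba_units: List[int] = field(default_factory=lambda: [0] * UNITS)  # 41a
    first_lba: int = 0               # LBA storing unit 41b
    namespace_id: int = 0            # management data storing unit 41c
    grain: int = 0                   #   (Grain of the managed LBA ranges)
    prior: Optional[int] = None      # prior pointer storing unit 41d
    next: Optional[int] = None       # next pointer storing unit 41e
```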
Although the data structure of one first LUT fragment table T411 has been illustrated in FIG. 3, the plurality of first LUT fragment tables T411 included in the LUT 41 all have the same data structure. Further, the data structures of the LUT fragment tables other than the first LUT fragment table T411 (the second LUT fragment table T412 to the fourth LUT fragment table T414) are the same as that of the first LUT fragment table T411. However, each of the PBA storing units 41a included in the second LUT fragment table T412 to the fourth LUT fragment table T414 stores the PBA (32 bits) in the non-volatile memory 4 in which an LUT fragment table is stored, as an LUT pointer indicating the LUT fragment table corresponding to the lower hierarchy. In a case where the LUT fragment table corresponding to the lower hierarchy is stored in the cache memory 551, the address information in the cache memory 551 is stored in the PBA storing unit 41a. Further, a PBA storing unit 41a included in the second LUT fragment table T412 to the fourth LUT fragment table T414 may instead store the first PBA in which the data corresponding to the LBA ranges allocated to that PBA storing unit 41a (entry C412, C413, or C414) is written.

In the example illustrated in FIG. 3, the size of each of the first LUT fragment table T411 to the fourth LUT fragment table T414 is, for example, a fixed length of 168 bytes, and the size of each piece of the LUT cache data stored in the cache memory 551 is, for example, a fixed length of 188 bytes. In the present embodiment, it is assumed that the first LUT fragment table T411 to the fourth LUT fragment table T414 (that is, the plurality of LUT fragment tables included in the LUT 41) are configured to have the same data structure.

Here, it has been described that the LUT 41 has a hierarchical structure formed of a plurality of hierarchies; in the present embodiment, the VDM 42 also has a hierarchical structure similar to that of the LUT 41. Hereinafter, the VDM 42 in the present embodiment will be described.

First, a VDM in a comparative example of the present embodiment will be described with reference to FIG. 4. It is assumed that the VDM in the comparative example of the present embodiment is configured to manage the validity of the data written to the physical addresses in the non-volatile memory 4 in a single hierarchy. As illustrated in FIG. 4, a VDM 42′ in the comparative example of the present embodiment includes a plurality of VDM fragment tables T421′ corresponding to a single hierarchy. Different PBA ranges (physical addresses) are allocated to each of the plurality of VDM fragment tables T421′, and each VDM fragment table T421′ manages the validity (that is, whether the data is valid or invalid) of the data stored in the PBA ranges allocated to it. In this case, for example, the entire range of PBAs in the non-volatile memory 4 to which data can be written based on a write command from the host 2 is divided by the number of VDM fragment tables T421′, and each divided PBA range is allocated to one of the VDM fragment tables T421′. With this, the plurality of VDM fragment tables T421′ can manage the validity of the data written in the entire range of PBAs in the non-volatile memory 4 to which data can be written based on a write command from the host 2. In each of the plurality of VDM fragment tables T421′, the validity of the data written in the PBA ranges allocated to the VDM fragment table T421′ is managed by using a bitmap (BMP) described later.
Here, for example, when data is written to a PBA in the non-volatile memory 4 based on a write command from the host 2, in order to update the validity of the data written to the PBA, it is necessary to refer to the VDM fragment table T421′ to which the PBA is allocated; however, in order to refer to that VDM fragment table T421′, it is necessary to hold (expand) pointers indicating (the positions of) each of the plurality of VDM fragment tables T421′ described above on the memory 55. The pointers held on the memory 55 in this way include, for example, the PBAs in the non-volatile memory 4 in which each of the plurality of VDM fragment tables T421′ is stored.

For example, assuming that the size of the memory region of the non-volatile memory 4 whose data validity is managed is 2 PiB and the size of the data written to one PBA (that is, the unit of data whose validity is managed) is 4 KiB, 2 PiB/4 KiB=549,755,813,888, and it is necessary to manage the validity of approximately 512 G pieces of 4 KiB data in the VDM 42′. Further, assuming that one VDM fragment table T421′ manages 1,280 pieces of 4 KiB data, 512 G/1,280=429,496,729.6, and the number of VDM fragment tables T421′ required for the VDM 42′ is 429,496,730. Further, assuming that the size of the pointer indicating each of the plurality of VDM fragment tables T421′ is 32 bits (4 bytes), the total size of the pointers indicating all of the above 429,496,730 VDM fragment tables T421′ is 429,496,730×4 bytes=1,717,986,920 bytes, which is approximately 1.6 GiB.

That is, when the validity of the data written in the PBAs in the non-volatile memory 4 is managed by using the VDM 42′ according to the comparative example of the present embodiment, the above-mentioned approximately 1.6 GiB of pointers indicating all the VDM fragment tables T421′ must always be held on the memory 55 (that is, the information required to manage the VDM 42′ continues to occupy the memory 55), and usability may be impaired. Specifically, it is useful to hold the LUT 41 in the memory 55 (cache memory 551) in order to improve the response speed (I/O response speed) to commands from the host 2; however, it may not be possible to secure a sufficient memory region to hold the LUT 41 because of the pointers to the VDM fragment tables T421′ mentioned above.

In addition, the non-volatile memory 4 is formed of a plurality of chips, and, for example, as the number of chips or the capacity of the chips themselves increases, the number of PBAs in the non-volatile memory 4 (that is, the memory region managed by the memory system 3) increases. Accordingly, since the number of the above-mentioned VDM fragment tables T421′ also increases, the number of pointers indicating the VDM fragment tables T421′ also increases, and a larger memory region needs to be secured in the memory 55 for the pointers. Similarly, as the number of PBAs in the non-volatile memory 4 increases, the size of the VDM 42′ itself also increases, and thus, if necessary, the memory region for caching the VDM 42′ has to be expanded. It is conceivable to secure such a memory region by adding memory (DRAM), for example, but an increase in cost must be avoided. That is, with the VDM 42′ in the comparative example of the present embodiment, it is difficult to cope with technological innovation in the non-volatile memory 4 (that is, increases in storage capacity). Further, when starting the memory system 3, it is necessary to expand the pointers indicating all the VDM fragment tables T421′ on the memory 55 as described above.
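The comparative-example arithmetic above can be reproduced directly; this is a sanity check of the quoted figures, not part of the embodiment.

```python
managed_bytes = 2 * 2**50            # 2 PiB of managed PBA space
adu_bytes = 4 * 2**10                # 4 KiB written per PBA (one managed unit)
adus = managed_bytes // adu_bytes
assert adus == 549_755_813_888       # about 512 G units of 4 KiB data

tables = -(-adus // 1280)            # ceiling: one T421' covers 1,280 units
assert tables == 429_496_730

pointer_bytes = tables * 4           # one 32-bit (4-byte) pointer per table
assert pointer_bytes == 1_717_986_920
print(pointer_bytes / 2**30)         # -> about 1.6 (GiB resident in DRAM)
```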
Further, when the memory system 3 is terminated (stopped), it is necessary to make all the pointers held on the memory 55 non-volatile. Specifically, for example, when one VDM fragment table T421′ is cached in the memory 55 (cache memory 551), the pointer indicating the VDM fragment table T421′ held on the memory 55 is changed to the address information in the cache memory 551. When terminating the memory system 3, such a VDM fragment table T421′ is written back to the non-volatile memory 4 (that is, made non-volatile). In this case, it is necessary to change the pointer indicating the VDM fragment table T421′ (the address information in the cache memory 551) to the PBA in the non-volatile memory 4 in which the VDM fragment table T421′ has been written, and to write the changed pointer (that is, the PBA) to the non-volatile memory 4. In a case where the memory system 3 is terminated, such processing is executed for all the VDM fragment tables T421′ cached in the cache memory 551. That is, with the VDM 42′ in the comparative example of the present embodiment, the internal processes executed when starting and terminating the memory system 3 (the starting process and the terminating process) take time.

Therefore, in the present embodiment, by employing the VDM 42 having a hierarchical structure like that of the above-mentioned LUT 41, the validity of the data written in the non-volatile memory 4 can be managed efficiently. Specifically, the VDM 42 in the present embodiment has a hierarchical structure formed of a plurality of hierarchies and is configured to include a plurality of VDM fragment tables for the plurality of hierarchies. In such a VDM 42, it is assumed that each of the plurality of VDM fragment tables has the same size, for example. Further, as will be described in detail later, a VDM fragment table corresponding to an upper hierarchy among the plurality of VDM fragment tables corresponding to the plurality of hierarchies stores (ranges of) PBAs, reference destination information for referencing the VDM fragment tables corresponding to the hierarchy lower than that VDM fragment table (hereinafter referred to as VDM pointers), and the like. A VDM pointer includes, for example, the PBA in the non-volatile memory 4 in which the VDM fragment table to be a reference destination is stored. Further, the VDM fragment tables corresponding to the lowest hierarchy in the hierarchical structure of the VDM 42 manage the validity of each piece of data having a predetermined size (for example, 4 KiB data) stored in the PBA ranges allocated to them.

Hereinafter, the VDM 42 having a hierarchical structure in the present embodiment will be conceptually described with reference to FIG. 5. In the example illustrated in FIG. 5, for convenience, it is assumed that the VDM 42 has a hierarchical structure formed of four hierarchies. In this case, the VDM 42 includes a plurality of first VDM fragment tables T421 to fourth VDM fragment tables T424. As illustrated in FIG. 5, the first VDM fragment table T421 is the VDM fragment table corresponding to the lowest hierarchy (hereinafter referred to as the first hierarchy) in the hierarchical structure of the VDM 42. The second VDM fragment table T422 is the VDM fragment table corresponding to the hierarchy above the first VDM fragment table T421 (hereinafter referred to as the second hierarchy) in the hierarchical structure of the VDM 42.
The third VDM fragment table T423 is the VDM fragment table corresponding to the hierarchy above the second VDM fragment table T422 (hereinafter referred to as the third hierarchy) in the hierarchical structure of the VDM 42. The fourth VDM fragment table T424 is the VDM fragment table corresponding to the hierarchy above the third VDM fragment table T423 (hereinafter referred to as the fourth hierarchy) in the hierarchical structure of the VDM 42. In the example illustrated in FIG. 5, the fourth hierarchy is the highest hierarchy in the hierarchical structure of the VDM 42, and in the VDM 42, the number of VDM fragment tables corresponding to the highest hierarchy (that is, fourth VDM fragment tables T424) is, for example, 1. In the present embodiment, the number of fourth VDM fragment tables T424 (VDM fragment tables corresponding to the highest hierarchy) is assumed to be 1; however, the number of fourth VDM fragment tables T424 may be plural.

Hereinafter, each of the above-mentioned first VDM fragment table T421 to fourth VDM fragment table T424 will be described in detail. First, consecutive PBA ranges are allocated to each of the plurality of first VDM fragment tables T421, and the first VDM fragment table T421 includes a plurality of entries C421. Each of the plurality of entries C421 included in the first VDM fragment table T421 stores a bitmap (BMP) formed of pieces of 1-bit flag information that manage the validity of the data stored in each of the plurality of PBAs within the PBA ranges allocated to the first VDM fragment table T421. In such a bitmap, for each PBA, for example, bit information of 1 can indicate that the data stored in the PBA is valid, and bit information of 0 can indicate that the data stored in the PBA is invalid. The plurality of first VDM fragment tables T421 correspond to the plurality of VDM fragment tables T421′ illustrated in FIG. 4 described above; the entire range of PBAs in the non-volatile memory 4 to which data can be written based on a write command from the host 2 is divided by the number of first VDM fragment tables T421, and each divided PBA range is allocated to one of the first VDM fragment tables T421. With this, the plurality of first VDM fragment tables T421 can manage the validity of the data written in the entire range of PBAs in the non-volatile memory 4 to which data can be written based on a write command from the host 2.

Next, PBA ranges wider than those of the first VDM fragment tables T421 described above are allocated to each of the plurality of second VDM fragment tables T422, and the second VDM fragment table T422 includes a plurality of entries C422. To each of the plurality of entries C422 included in a second VDM fragment table T422, the PBA ranges allocated to a first VDM fragment table T421 corresponding to the hierarchy below that second VDM fragment table T422 are allocated, and the entry stores a VDM pointer indicating (the position of) that first VDM fragment table T421. In this case, the PBA ranges allocated to each second VDM fragment table T422 correspond to the PBA ranges allocated to all the first VDM fragment tables T421 indicated by the VDM pointers stored in the plurality of entries C422 included in that second VDM fragment table T422.
Further, PBA ranges wider than those of the second VDM fragment tables T422 described above are allocated to each of the plurality of third VDM fragment tables T423, and the third VDM fragment table T423 includes a plurality of entries C423. To each of the plurality of entries C423 included in a third VDM fragment table T423, the PBA ranges allocated to a second VDM fragment table T422 corresponding to the hierarchy below that third VDM fragment table T423 are allocated, and the entry stores a VDM pointer indicating (the position of) that second VDM fragment table T422. In this case, the PBA ranges allocated to each third VDM fragment table T423 correspond to the PBA ranges allocated to all the second VDM fragment tables T422 indicated by the VDM pointers stored in the plurality of entries C423 included in that third VDM fragment table T423.

Next, PBA ranges wider than those of the third VDM fragment tables T423 described above are allocated to the fourth VDM fragment table T424, and the fourth VDM fragment table T424 includes a plurality of entries C424. To each of the plurality of entries C424 included in the fourth VDM fragment table T424, the PBA ranges allocated to a third VDM fragment table T423 corresponding to the hierarchy below the fourth VDM fragment table T424 are allocated, and the entry stores a VDM pointer indicating (the position of) that third VDM fragment table T423. In this case, the PBA ranges allocated to the fourth VDM fragment table T424 correspond to the PBA ranges allocated to all the third VDM fragment tables T423 indicated by the VDM pointers stored in the plurality of entries C424 included in the fourth VDM fragment table T424. As described above, when the number of fourth VDM fragment tables T424 corresponding to the highest hierarchy in the hierarchical structure of the VDM 42 is 1, the PBA ranges allocated to the fourth VDM fragment table T424 cover the entire range of PBAs in the non-volatile memory 4 in which the validity of data is managed.

In the VDM 42 having the hierarchical structure illustrated in FIG. 5 described above, the VDM pointer stored in each of the entries C424 included in the fourth VDM fragment table T424 corresponding to the fourth hierarchy (the highest hierarchy) indicates a third VDM fragment table T423 corresponding to the third hierarchy, the VDM pointer stored in each of the entries C423 included in the third VDM fragment table T423 indicates a second VDM fragment table T422 corresponding to the second hierarchy, the VDM pointer stored in each of the entries C422 included in the second VDM fragment table T422 indicates a first VDM fragment table T421 corresponding to the first hierarchy (the lowest hierarchy), and each of the entries C421 included in the first VDM fragment table T421 is configured to store flag information (a bitmap) indicating the validity of each piece of data having a predetermined size stored in a plurality of PBAs. According to such a VDM 42, for example, the validity of data can be grasped by sequentially referring to the fourth VDM fragment table T424, the third VDM fragment table T423, the second VDM fragment table T422, and the first VDM fragment table T421 based on the PBA in which the data whose validity is to be confirmed is stored.
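Under the same assumptions as FIG. 5 (32 entries per table, 32-bit bitmaps in the first hierarchy), the check just described can be sketched as follows; the in-memory tree encoding is illustrative, not the on-media format.

```python
BITS, ENTRIES = 32, 32

def data_is_valid(t424, pba):
    """Walk T424 -> T423 -> T422 -> T421, then test one bit of the bitmap."""
    table = t424
    for level in (3, 2, 1):                   # follow a VDM pointer per level
        span = BITS * ENTRIES ** level        # PBAs covered by one entry
        table = table[(pba // span) % ENTRIES]
    bitmap = table[(pba // BITS) % ENTRIES]   # entry C421 of the reached T421
    return (bitmap >> (pba % BITS)) & 1 == 1

# Toy tree in which every VDM pointer leads to the same child table:
t421 = [0] * ENTRIES
t421[3] = 1 << 17                             # mark PBA offset 3*32+17 valid
t422 = [t421] * ENTRIES
t423 = [t422] * ENTRIES
t424 = [t423] * ENTRIES
assert data_is_valid(t424, 3 * 32 + 17)
assert not data_is_valid(t424, 0)
```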
That is, in the VDM 42 illustrated in FIG. 5, since the validity of the data stored in all the PBAs (that is, the validity of the data managed in the first VDM fragment tables) can be grasped starting from the fourth VDM fragment table T424 corresponding to the fourth hierarchy, unlike the VDM 42′ in the comparative example of the present embodiment described with reference to FIG. 4, the memory 55 only needs to hold the VDM pointer indicating the fourth VDM fragment table T424 (that is, a single pointer).

Here, in the example illustrated in FIG. 5, the first VDM fragment table T421 is the VDM fragment table corresponding to the lowest hierarchy in the hierarchical structure of the VDM 42, and flag information (a bitmap) indicating the validity of each piece of data having a predetermined size stored in a continuous PBA range is stored in each of the entries C421 included in the first VDM fragment table T421. In this case, one first VDM fragment table T421 includes 32 entries C421, and assuming that one entry C421 includes a 32-bit bitmap indicating the validity of 32 pieces of data, a range of 32×32=1,024 PBAs is allocated to one first VDM fragment table T421 corresponding to the first hierarchy. Assuming that the size of the data written to one PBA is 4 KiB as described above, one first VDM fragment table T421 can manage the validity of 4 KiB×1,024=4 MiB of data.

Similarly, assuming that the second VDM fragment table T422 includes 32 entries C422 and each of the entries C422 stores a VDM pointer indicating a first VDM fragment table T421 with 1,024 PBAs that store 4 MiB of data (that is, the range of 1,024 PBAs allocated to one first VDM fragment table T421 is allocated to each of the entries C422), a range of 1,024×32=32,768 PBAs is allocated to the second VDM fragment table T422 corresponding to the second hierarchy. In this case, one second VDM fragment table T422 can manage the validity of 4 KiB×32,768=128 MiB of data.

Further, assuming that the third VDM fragment table T423 includes 32 entries C423 and each of the entries C423 stores a pointer indicating a second VDM fragment table T422 with 32,768 PBAs that store 128 MiB of data (that is, the range of 32,768 PBAs allocated to one second VDM fragment table T422 is allocated to each of the entries C423), a range of 32,768×32=1,048,576 PBAs is allocated to one third VDM fragment table T423 corresponding to the third hierarchy. In this case, one third VDM fragment table T423 can manage the validity of 4 KiB×1,048,576=4 GiB of data.

Further, assuming that the fourth VDM fragment table T424 includes 32 entries C424 and each of the entries C424 stores a pointer indicating a third VDM fragment table T423 with 1,048,576 PBAs that store 4 GiB of data (that is, the range of 1,048,576 PBAs allocated to one third VDM fragment table T423 is allocated to each of the entries C424), a range of 1,048,576×32=33,554,432 PBAs is allocated to one fourth VDM fragment table T424 corresponding to the fourth hierarchy. In this case, one fourth VDM fragment table T424 can manage the validity of 4 KiB×33,554,432=128 GiB of data.

That is, in the example of the VDM 42 illustrated in FIG. 5, each of the first VDM fragment tables T421 manages PBA ranges in which 4 MiB of data is stored, each of the second VDM fragment tables T422 manages PBA ranges in which 128 MiB of data is stored, each of the third VDM fragment tables T423 manages PBA ranges in which 4 GiB of data is stored, and the fourth VDM fragment table T424 manages PBA ranges in which 128 GiB of data is stored.
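The per-hierarchy coverage figures above follow from 32 entries per table and 4 KiB per PBA, and can be verified in a few lines:

```python
ADU = 4 * 2**10                           # 4 KiB of data per PBA
MiB, GiB = 2**20, 2**30
assert 32 * 32 * ADU == 4 * MiB           # one T421: 1,024 PBAs
assert 32 * 32**2 * ADU == 128 * MiB      # one T422: 32,768 PBAs
assert 32 * 32**3 * ADU == 4 * GiB        # one T423: 1,048,576 PBAs
assert 32 * 32**4 * ADU == 128 * GiB      # one T424: 33,554,432 PBAs
```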
In FIG. 5, an example in which a VDM pointer is stored in each of the plurality of entries C424 included in the fourth VDM fragment table T424 is illustrated; however, in a case where the plurality of third VDM fragment tables T423 indicated by those VDM pointers are arranged contiguously in the non-volatile memory 4, the fourth VDM fragment table T424 may be configured to store only the VDM pointer indicating the leading one of the plurality of third VDM fragment tables T423 (that is, configured to omit the VDM pointers indicating the third VDM fragment tables T423 other than the leading one). According to this, it is possible to reduce the size of the VDM 42. Here, the fourth VDM fragment table T424 has been described, but the same applies to the other VDM fragment tables.

Further, for example, when the validity (valid or invalid) of every piece of the 4 KiB data written in the PBA ranges allocated to one VDM fragment table is common, it is also possible to collectively manage the validity of the data written in those PBA ranges in that VDM fragment table and omit the VDM fragment tables corresponding to the hierarchies lower than that VDM fragment table (that is, the tables reached through the VDM pointers stored in the entries included in that VDM fragment table). Specifically, for example, it is assumed that a second VDM fragment table T422 manages PBA ranges in which 128 MiB of data is stored, and (all the 4 KiB pieces of data constituting) the 128 MiB of data are all valid or all invalid. In this case, by holding management data indicating that all the 128 MiB of data stored in the PBA ranges allocated to the second VDM fragment table T422 is valid or invalid in the third VDM fragment table T423 corresponding to the hierarchy above the second VDM fragment table T422 (that is, the table including the entry that stores the VDM pointer indicating the second VDM fragment table T422), the second VDM fragment table T422 and the first VDM fragment tables T421 corresponding to the hierarchies below the second VDM fragment table T422 may be discarded. According to this, since it is not necessary to refer to the second VDM fragment table T422 and the first VDM fragment tables T421 lower than the third VDM fragment table T423, the VDM 42 can be referred to efficiently and the speed of confirming the validity of the data written in the non-volatile memory 4 can be improved.

FIG. 6 illustrates an example of the data structure of the first VDM fragment table T421 included in the VDM 42 in the present embodiment. The first VDM fragment table T421 includes, for example, a plurality of map storing units 42a, a PBA storing unit 42b, and a management data storing unit 42c. The map storing unit 42a corresponds to the entry C421 included in the first VDM fragment table T421 illustrated in FIG. 5. That is, the number of map storing units 42a is, for example, 32.

The map storing unit 42a stores a bitmap formed of pieces of 1-bit flag information that manage the validity (valid or invalid) of each piece of the 4 KiB data written in the PBA ranges allocated to the map storing unit 42a (entry C421). When a range of 32 PBAs is allocated to the map storing unit 42a, the size of the bitmap stored in the map storing unit 42a is 1 bit×32=32 bits. Further, for example, 8-bit management data MD2 is attached to the bitmap stored in the map storing unit 42a, and the management data MD2 is stored in the map storing unit 42a together with the bitmap. As the management data MD2 attached to the bitmap in this way, for example, a magic number called a VDM mode is set. The magic numbers set as the management data MD2 include “0xff” and “0x00”.
As described above, the bitmap stored in the map storing unit 42a is formed of pieces of 1-bit flag information indicating the validity of the 4 KiB data stored in each of the 32 PBAs allocated to the map storing unit 42a; in the following description, the 4 KiB data stored in each of the 32 PBAs will be referred to as the data managed in the bitmap for convenience.

The magic number “0xff” indicates that all the data managed in the bitmap to which the magic number (management data MD2) is attached is valid (that is, all the flag information constituting the bitmap is 1). That is, according to the magic number “0xff”, it is possible to collectively manage the validity of the data written in a certain PBA range, and it is also possible to grasp that all the data managed in the bitmap is valid without referring to the bitmap to which the magic number is attached. The magic number “0x00” indicates that all the data managed in the bitmap to which the magic number (management data MD2) is attached is invalid (that is, all the flag information constituting the bitmap is 0). That is, according to the magic number “0x00”, similarly to the magic number “0xff” mentioned above, it is possible to collectively manage the validity of the data written in a certain PBA range, and it is also possible to grasp that all the data managed in the bitmap is invalid without referring to the bitmap to which the magic number is attached. In a case where neither the magic number “0xff” nor “0x00” is set as the management data MD2, the bitmap to which the management data MD2 is attached is formed of both flag information indicating validity and flag information indicating invalidity (that is, flag information indicating validity and flag information indicating invalidity are mixed in the bitmap).

When the bitmap and the management data MD2 are stored in the map storing unit 42a as described above, the size of each map storing unit 42a is 40 bits, which is the sum of the size of the bitmap (32 bits) and the size of the management data MD2 (8 bits), and the total size of the 32 map storing units 42a is 160 bytes.

The PBA storing unit 42b stores the first PBA in the PBA ranges allocated to the first VDM fragment table T421. The management data storing unit 42c stores a Valid ADU Count indicating the number of valid pieces of data among the plurality of 4 KiB pieces of data stored in the PBA ranges allocated to the first VDM fragment table T421, and a Grain corresponding to the PBA ranges allocated to the first VDM fragment table T421 (the PBA ranges managed by the first VDM fragment table T421). For the first VDM fragment table T421, the maximum value of the Valid ADU Count is 1,024. In addition, other information may be stored in the management data storing unit 42c. Specifically, the management data storing unit 42c may store identification information (hierarchy ID) or the like for identifying the hierarchy (first hierarchy) corresponding to the first VDM fragment table T421.

Here, for example, when the VDM 42 is updated in the present embodiment, a part of the VDM 42 (the VDM fragment table to be updated) is stored in the cache memory 551. In this case, the part of the VDM 42 is stored in cache line units. Further, the part of the VDM 42 updated in the cache memory 551 is written back to the non-volatile memory 4 in cache line units. It is assumed that one first VDM fragment table T421 is stored in the cache memory 551 per cache line described above.
Assuming that the first VDM fragment table T421 stored in the cache memory 551 is referred to as VDM cache data, the VDM cache data further includes, in addition to the map storing units 42a, the PBA storing unit 42b, and the management data storing unit 42c described above, pointers indicating the VDM cache data to be associated with it in, for example, the cache memory 551. Specifically, the VDM cache data includes a prior pointer storing unit 42d that stores a pointer indicating the VDM cache data referenced prior to this VDM cache data, and a next pointer storing unit 42e that stores a pointer indicating another piece of VDM cache data referenced next after this VDM cache data. The prior pointer storing unit 42d and the next pointer storing unit 42e may store pointers indicating the above-mentioned LUT cache data. As the pointers stored in the prior pointer storing unit 42d and the next pointer storing unit 42e described above, for example, the PBAs in which the other VDM cache data are stored are used; an address in another format may be used instead. By using the pointers to the VDM cache data that should be referred to before and after this VDM cache data, access to the cache memory 551 can be sped up, and thereby continuous access can be realized. The VDM cache data may further include other management data.

Although the data structure of one first VDM fragment table T421 has been illustrated in FIG. 6, the plurality of first VDM fragment tables T421 included in the VDM 42 all have the same data structure.

Next, FIG. 7 illustrates an example of the data structure of the second VDM fragment table T422 included in the VDM 42 in the present embodiment. Here, the differences from the first VDM fragment table T421 illustrated in FIG. 6 described above will be mainly described. In FIG. 6, the first VDM fragment table T421 has been described as including the map storing units 42a; however, the second VDM fragment table T422 includes PBA storing units 42f instead of the map storing units 42a. The PBA storing unit 42f corresponds to the entry C422 included in the second VDM fragment table T422 illustrated in FIG. 5. That is, the number of PBA storing units 42f is, for example, 32.

The PBA storing unit 42f stores the PBA in the non-volatile memory 4 in which a first VDM fragment table T421 is stored, as a pointer indicating the first VDM fragment table T421 corresponding to the hierarchy below the second VDM fragment table T422. In a case where the first VDM fragment table T421 corresponding to the lower hierarchy is stored in the cache memory 551, the address information in the cache memory 551 is stored in the PBA storing unit 42f. The size of the PBA stored in the PBA storing unit 42f is, for example, 32 bits. Further, for example, 8-bit management data MD3 is attached to the PBA stored in the PBA storing unit 42f, and the management data MD3 is stored in the PBA storing unit 42f together with the PBA. As the management data MD3 attached to the PBA in this way, a magic number called a VDM mode is set, as with the management data MD2 illustrated in FIG. 6 described above. Although it has been described that “0xff” and “0x00” are set as the magic numbers for the management data MD2, the magic numbers set as the management data MD3 further include “0xfc” and “0xfd” in addition to “0xff” and “0x00”. The magic number “0xfc” indicates that the PBA to which the magic number (management data MD3) is attached is a PBA in the non-volatile memory 4.
According to the magic number “0xfc”, it is possible to refer to (acquire) the first VDM fragment table T421 stored in the non-volatile memory 4 based on the PBA to which the magic number is attached. The magic number “0xfd” indicates that the PBA to which the magic number (management data MD3) is attached is address information in the cache memory 551. According to the magic number “0xfd”, it is possible to refer to (acquire) the first VDM fragment table T421 stored in the cache memory 551 based on the PBA to which the magic number is attached. As described above, the first VDM fragment table T421 referenced based on the PBA stored in the PBA storing unit 42f is the VDM fragment table corresponding to the first hierarchy to which the PBA ranges allocated to that PBA storing unit 42f (entry C422) are allocated.

Further, the above-mentioned magic number “0xff” or “0x00” may also be set as the management data MD3. When the magic number “0xff” is set as the management data MD3, all the 4 KiB data stored in the PBA ranges (for example, 1,024 PBAs) allocated to the PBA storing unit 42f that stores the PBA with the magic number is valid. On the other hand, when the magic number “0x00” is set as the management data MD3, all the 4 KiB data stored in the PBA ranges (for example, 1,024 PBAs) allocated to the PBA storing unit 42f that stores the PBA with the magic number is invalid. That is, when one of the magic numbers “0xff” and “0x00” is set as the management data MD3, it is possible to grasp that all the data stored in the PBA ranges allocated to the PBA storing unit 42f (entry C422) that stores the PBA with the magic number is valid or invalid. In this case, it is not necessary to refer to the first VDM fragment table T421 corresponding to the lower hierarchy based on the PBA with the magic number “0xff” or “0x00”. On the other hand, when neither the magic number “0xff” nor “0x00” is set as the management data MD3 (that is, the magic number “0xfc” or “0xfd” is set), valid data and invalid data are mixed in the PBA ranges allocated to the PBA storing unit 42f (entry C422) that stores the PBA to which the magic number is attached. In this case, it is necessary to refer to the first VDM fragment table T421 corresponding to the lower hierarchy based on the PBA with the magic number “0xfc” or “0xfd”.

When the PBA and the management data MD3 are stored in the PBA storing unit 42f as described above, the size of each PBA storing unit 42f is 40 bits, which is the sum of the size of the PBA (32 bits) and the size of the management data MD3 (8 bits), and the total size of the 32 PBA storing units 42f is 160 bytes.

The second VDM fragment table T422 further includes the PBA storing unit 42b and the management data storing unit 42c in addition to the PBA storing units 42f; the PBA storing unit 42b and the management data storing unit 42c are as illustrated in FIG. 6, and a detailed description thereof will be omitted here. In addition, the second VDM fragment table T422 (VDM cache data) stored in the cache memory 551 includes the prior pointer storing unit 42d and the next pointer storing unit 42e, which are also as illustrated in FIG. 6, and a detailed description thereof will likewise be omitted here. Although the data structure of one second VDM fragment table T422 has been illustrated in FIG. 7, the plurality of second VDM fragment tables T422 included in the VDM 42 all have the same data structure.
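The magic-number conventions for MD2 and MD3 described above amount to a small decision procedure. A minimal sketch follows; the constant names and the fetch callbacks are illustrative assumptions, not the specification's interface.

```python
ALL_VALID, ALL_INVALID = 0xFF, 0x00       # magic numbers shared by MD2 and MD3
PBA_IN_NVM, PBA_IN_CACHE = 0xFC, 0xFD     # MD3 only: where the child table lives

def valid_count_in_entry(md2, bitmap):
    """Valid 4 KiB units under one map storing unit 42a (entry C421)."""
    if md2 == ALL_VALID:
        return 32                          # bitmap need not be read
    if md2 == ALL_INVALID:
        return 0                           # bitmap need not be read
    return bin(bitmap & 0xFFFF_FFFF).count("1")   # mixed: read the bitmap

def resolve_child(md3, pba, fetch_nvm, fetch_cache):
    """Interpret one PBA storing unit 42f (entry C422) of a T422."""
    if md3 in (ALL_VALID, ALL_INVALID):
        return None                        # no lower T421 to consult
    if md3 == PBA_IN_NVM:
        return fetch_nvm(pba)              # pba addresses non-volatile memory 4
    if md3 == PBA_IN_CACHE:
        return fetch_cache(pba)            # pba is cache memory 551 address info
    raise ValueError("unknown VDM mode")
```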
Further, the data structures of the VDM fragment tables other than the second VDM fragment table T422 (the third VDM fragment table T423 and the fourth VDM fragment table T424) are the same as that of the second VDM fragment table T422. That is, for example, even in the third VDM fragment table T423, if one of the magic numbers “0xff” and “0x00” is set as the management data MD3, it is not necessary to refer to the second VDM fragment table T422 corresponding to the lower hierarchy based on the PBA with the magic number. The same applies to the fourth VDM fragment table T424.

In the examples illustrated in FIGS. 6 and 7 described above, the size of each of the first VDM fragment table T421 to the fourth VDM fragment table T424 is, for example, a fixed length of 168 bytes, and the size of each piece of the VDM cache data stored in the cache memory 551 is, for example, a fixed length of 188 bytes. In the present embodiment, it is assumed that the first VDM fragment table T421 to the fourth VDM fragment table T424 (that is, the plurality of VDM fragment tables included in the VDM 42) are configured to have the same data structure. Further, as illustrated in FIGS. 3, 6, and 7 described above, the LUT 41 (each LUT fragment table included therein) and the VDM 42 (each VDM fragment table included therein) in the present embodiment have the same data structure.

Hereinafter, the relationship between the above-mentioned LUT 41 and VDM 42 will be described. First, given that the LUT 41 is the data for managing the PBAs corresponding to LBAs as described above, and assuming that one first LUT fragment table T411 corresponding to the lowest hierarchy (first hierarchy) includes 32 entries C411 (PBA storing units 41a), the first LUT fragment table T411 can manage 32 LBAs (the PBAs corresponding thereto). Also, assuming that one second LUT fragment table T412 corresponding to the second lowest hierarchy (second hierarchy) likewise includes 32 entries C412, the second LUT fragment table T412 can manage 32×32=1,024 LBAs (the PBAs corresponding thereto). The second hierarchy has been described here, but the same applies to the hierarchies above the second hierarchy.

On the other hand, given that the VDM 42 is the data for managing the validity of the data stored in each PBA as described above, and assuming that a 32-bit bitmap is stored in each of the 32 entries C421 (map storing units 42a) of one first VDM fragment table T421 corresponding to the lowest hierarchy (first hierarchy), the first VDM fragment table T421 can manage (the data stored in) 32 bits×32=1,024 PBAs. Also, assuming that one second VDM fragment table T422 corresponding to the second lowest hierarchy (second hierarchy) likewise includes 32 entries C422, the second VDM fragment table T422 can manage (the data stored in) 1,024×32=32,768 PBAs. The second hierarchy has been described here, but the same applies to the hierarchies above the second hierarchy.

That is, in the present embodiment, each of the LUT 41 and the VDM 42 manages one fragment table corresponding to the lower hierarchy with one entry, and in both the LUT 41 and the VDM 42, 32 times as many PBAs can be managed each time the hierarchy goes up one level. Here, it is assumed that 4 MiB of data corresponding to a range of 1,024 consecutive LBAs is written (sequentially written) to 1,024 consecutive PBAs in the non-volatile memory 4, with 4 KiB of data written to each of the 1,024 PBAs.
In this case, the correspondence between the LBAs corresponding to the 4 MiB of data written to the non-volatile memory 4 and the PBAs in which the data is written needs to be managed in the LUT 41, and as described above, one second LUT fragment table T412 can manage (the PBAs corresponding to) 1,024 LBAs. Therefore, in a case where the 1,024 LBAs managed by one second LUT fragment table T412 (that is, allocated to that second LUT fragment table T412) match the 1,024 LBAs corresponding to the 4 MiB of data described above, among the LUT pointers stored in the plurality of entries C413 included in the third LUT fragment table T413 corresponding to the hierarchy above the second LUT fragment table T412, the LUT pointer indicating that second LUT fragment table T412 can be updated to the first PBA of the 1,024 PBAs in which the 4 MiB of data is written. According to this, one entry C413 included in the third LUT fragment table T413 can manage the correspondence between the LBAs corresponding to the above-mentioned 4 MiB of data and the PBAs in which the data is written.

On the other hand, in a case where 4 MiB of data is written to 1,024 consecutive PBAs in the non-volatile memory 4 as described above, the 4 MiB of data needs to be managed as valid data in the VDM 42, and one first VDM fragment table T421 can manage 1,024 PBAs. Therefore, in a case where the 1,024 PBAs managed by one first VDM fragment table T421 (that is, allocated to that first VDM fragment table T421) match the 1,024 PBAs corresponding to the 4 MiB of data described above, among the VDM pointers stored in the plurality of entries C422 included in the second VDM fragment table T422 corresponding to the hierarchy above the first VDM fragment table T421, the management data MD3 (magic number) attached to the VDM pointer indicating that first VDM fragment table T421 can be updated to “0xff”. According to this, one entry C422 included in the second VDM fragment table T422 can manage that the 4 MiB of data stored in the above 1,024 PBAs is valid.

That is, when 4 MiB of data corresponding to a range of 1,024 consecutive LBAs is written to 1,024 consecutive PBAs as described above, the correspondence between the LBAs and the PBAs can be managed by changing one entry (PBA) included in the LUT fragment table corresponding to the third lowest hierarchy of the LUT 41. In addition, the validity of the 4 MiB of data written to the non-volatile memory 4 in this way can be managed by changing one entry (magic number) included in the VDM fragment table corresponding to the second lowest hierarchy of the VDM 42. Thus, in the present embodiment, by giving each fragment table included in the LUT 41 and the VDM 42 the same data structure and aligning the management units of the LUT 41 and the VDM 42, updates of the LUT 41 and the VDM 42 can be completed only by changing entries contained in fragment tables of higher hierarchies, without updating the fragment tables corresponding to the lowest hierarchy.

Here, in order to complete updates of the LUT 41 and the VDM 42 by changing entries included in fragment tables corresponding to higher hierarchies as described above, the VDM fragment tables included in the VDM 42, which have the same data structure as the LUT fragment tables included in the LUT 41, are required to satisfy M=y×N^x (hereinafter referred to as the conditional expression).
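(The symbols N, M, x, and y of the conditional expression are defined immediately below.) The aligned 4 MiB sequential-write case just described reduces to two single-entry updates; a minimal sketch, with illustrative list objects standing in for the cached fragment tables:

```python
def apply_aligned_4mib_write(lut_t413, lut_idx, first_pba, vdm_t422_md3, vdm_idx):
    # LUT 41: the entry C413 that covered this range of 1,024 LBAs now stores
    # the leading PBA of the contiguous run instead of an LUT pointer, so the
    # T412 and T411 below it need not be touched.
    lut_t413[lut_idx] = ("direct_pba", first_pba)
    # VDM 42: the matching entry C422 gets MD3 = 0xff ("all 1,024 PBAs valid"),
    # so the first VDM fragment table T421 need not be touched either.
    vdm_t422_md3[vdm_idx] = 0xFF

t413, t422_md3 = [None] * 32, [0xFC] * 32
apply_aligned_4mib_write(t413, 7, 0x10000, t422_md3, 7)
assert t413[7] == ("direct_pba", 0x10000) and t422_md3[7] == 0xFF
```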
In the above conditional expression, N is the number of entries included in the first VDM fragment table T421 corresponding to the lowest hierarchy, and M is the number of pieces of 4 KiB data (that is, the number of PBAs in which such data is stored) whose validity is managed in one entry included in the first VDM fragment table T421 corresponding to the lowest hierarchy. In the conditional expression, x is an integer of 0 or more, and y is an integer of 1 or more and less than N, or the reciprocal of such an integer.

Hereinafter, the relationship between N and M described above will be specifically described. Here, the number of LBAs (and corresponding PBAs) allocated to each LUT fragment table is referred to as the number of PBAs managed by the LUT fragment table, and the number of PBAs allocated to each VDM fragment table is referred to as the number of PBAs managed by the VDM fragment table. The number of LBAs (and corresponding PBAs) allocated to one entry included in the LUT fragment table (first LUT fragment table T411) corresponding to the first hierarchy (the lowest hierarchy) is 1, and the same applies in the following description.

FIG. 8 illustrates the relationship between the number of PBAs managed by the LUT fragment table and the number of PBAs managed by the VDM fragment table corresponding to each hierarchy when N=32 and M=32. Here, when N=32 and M=32, the number of PBAs managed by the LUT fragment table corresponding to the first hierarchy is 32, and the number of PBAs managed by the VDM fragment table (first VDM fragment table T421) corresponding to the first hierarchy is 1,024. In addition, the number of PBAs managed by the LUT fragment table (second LUT fragment table T412) corresponding to the second hierarchy is 1,024, and the number of PBAs managed by the VDM fragment table (second VDM fragment table T422) corresponding to the second hierarchy is 32,768. Although a detailed description of the numbers of PBAs managed by the LUT fragment tables and the VDM fragment tables corresponding to the hierarchies above the second hierarchy will be omitted, when N=32, both the number of PBAs managed by the LUT fragment table and the number of PBAs managed by the VDM fragment table become 32 times larger each time the hierarchy goes up one level.

Comparing the LUT fragment table and the VDM fragment table corresponding to the same hierarchy as described above, the number of PBAs managed by the VDM fragment table is larger than the number managed by the LUT fragment table. Accordingly, when the number of PBAs managed by the LUT 41 as a whole and the number of PBAs managed by the VDM 42 as a whole are the same, the number of hierarchies constituting the hierarchical structure of the VDM 42 in the present embodiment is smaller than the number of hierarchies constituting the hierarchical structure of the LUT 41.

When N=32 and M=32, the above conditional expression is satisfied with x=1 and y=1. When the conditional expression is satisfied in this way, x corresponds to the hierarchical difference between the LUT fragment table and the VDM fragment table, and y corresponds to the ratio of the number of PBAs managed by the VDM fragment table to the number of PBAs managed by the LUT fragment table (that is, “the number of PBAs managed by the VDM fragment table/the number of PBAs managed by the LUT fragment table”).
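The conditional expression and the (x, y) pairs discussed for FIGS. 8 to 13 below can be checked mechanically. A small sketch (exact fractions are used so that reciprocal values of y are not subject to rounding; for N=128 and M=32 the value y=1/4 is used, which is what makes 1/4×128=32 hold and matches the 4:1 ratio stated below):

```python
from fractions import Fraction

def satisfies(n, m, x, y):
    """True if the conditional expression M = y * N**x holds."""
    return Fraction(m) == Fraction(y) * n**x

assert satisfies(32, 32, x=1, y=1)
assert satisfies(8, 32, x=1, y=4) and satisfies(8, 32, x=2, y=Fraction(1, 2))
assert satisfies(16, 32, x=1, y=2) and satisfies(16, 32, x=2, y=Fraction(1, 8))
assert satisfies(64, 32, x=0, y=32) and satisfies(64, 32, x=1, y=Fraction(1, 2))
assert satisfies(128, 32, x=0, y=32) and satisfies(128, 32, x=1, y=Fraction(1, 4))
assert satisfies(64, 64, x=1, y=1)
```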
Specifically, focusing on the LUT fragment table corresponding to the second hierarchy and the VDM fragment table corresponding to the first hierarchy, where the difference between the hierarchies is 1 (that is, x=1), the number of PBAs managed by the LUT fragment table is 1,024, the number of PBAs managed by the VDM fragment table is 1,024, and “the number of PBAs managed by the VDM fragment table/the number of PBAs managed by the LUT fragment table” is 1 (that is, y=1).

If N and M satisfy the above conditional expression in this way, for example, when the data corresponding to the range of 1,024 LBAs allocated to one LUT fragment table corresponding to the second hierarchy is written to the non-volatile memory 4, the update of the LUT 41 can be completed by changing one entry (the PBA stored in a PBA storing unit 41a) included in the LUT fragment table corresponding to the third hierarchy. Similarly, when the data corresponding to the range of 1,024 LBAs is written to the 1,024 PBAs allocated to one VDM fragment table corresponding to the first hierarchy, the update of the VDM 42 can be completed by changing one entry (the magic number stored in a map storing unit 42a) included in the VDM fragment table corresponding to the second hierarchy.

That is, when N=32 and M=32, the ratio of the number of PBAs managed by the LUT fragment table corresponding to hierarchy i (i is an integer of 1 or more) to the number of PBAs managed by the VDM fragment table corresponding to hierarchy i−1 is 1:1, and, for example, if the continuity of the LBAs and PBAs is secured, a change of one entry of the LUT fragment table can cover the update of the LUT 41, and a change of one entry of the VDM fragment table can cover the update of the VDM 42. Note that N=32 and M=32 illustrated in FIG. 8 is one of the examples in which the LUT 41 (the correspondence between LBAs and PBAs) and the VDM 42 (the validity of data) can be managed most efficiently. For example, even when N is changed, efficient management of the LUT 41 and the VDM 42 can be realized as long as the above conditional expression is satisfied. Hereinafter, cases where N is changed will be described, and a detailed description of the parts that are the same as those illustrated in FIG. 8 described above will be omitted.

FIG. 9 illustrates the relationship between the number of PBAs managed by the LUT fragment table and the number of PBAs managed by the VDM fragment table corresponding to each hierarchy when N=8 and M=32. Here, when N=8 and M=32, the number of PBAs managed by the LUT fragment table corresponding to the first hierarchy is 8, and the number of PBAs managed by the VDM fragment table corresponding to the first hierarchy is 256. Further, the number of PBAs managed by the LUT fragment table corresponding to the second hierarchy is 64, and the number of PBAs managed by the VDM fragment table corresponding to the second hierarchy is 2,048. Although a detailed description of the numbers of PBAs managed by the LUT fragment tables and the VDM fragment tables corresponding to the hierarchies above the second hierarchy will be omitted, when N=8, both the number of PBAs managed by the LUT fragment table and the number of PBAs managed by the VDM fragment table become 8 times larger each time the hierarchy goes up one level. In addition, when N=8 and M=32, the above conditional expression is satisfied with x=1 and y=4.
Specifically, focusing on the LUT fragment table corresponding to the second hierarchy and the VDM fragment table corresponding to the first hierarchy, where the difference between the hierarchies is 1 (that is, x=1), the number of PBAs managed by the LUT fragment table is 64, the number of PBAs managed by the VDM fragment table is 256, and “the number of PBAs managed by the VDM fragment table/the number of PBAs managed by the LUT fragment table” is 4 (that is, y=4).

If N and M satisfy the above conditional expression in this way, for example, when the data corresponding to the ranges of 256 LBAs allocated to four LUT fragment tables corresponding to the second hierarchy is written to the non-volatile memory 4, the update of the LUT 41 can be completed by changing four entries included in the LUT fragment table corresponding to the third hierarchy. Similarly, when the data corresponding to the ranges of 256 LBAs is written to the 256 PBAs allocated to one VDM fragment table corresponding to the first hierarchy, the update of the VDM 42 can be completed by changing one entry included in the VDM fragment table corresponding to the second hierarchy. That is, when N=8 and M=32, the ratio of the number of PBAs managed by the LUT fragment table corresponding to hierarchy i to the number of PBAs managed by the VDM fragment table corresponding to hierarchy i−1 is 1:4, and, as described above, if the continuity of the LBAs and PBAs is secured, a change of four entries of the LUT fragment table can cover the update of the LUT 41, and a change of one entry of the VDM fragment table can cover the update of the VDM 42.

When N=8 and M=32, the above conditional expression is also satisfied with x=2 and y=½. Although a detailed description will be omitted, in this case, the ratio of the number of PBAs managed by the LUT fragment table corresponding to hierarchy i to the number of PBAs managed by the VDM fragment table corresponding to hierarchy i−2 is 2:1, and, as described above, if the continuity of the LBAs and PBAs is secured, a change of one entry of the LUT fragment table can cover the update of the LUT 41, and a change of two entries of the VDM fragment table can cover the update of the VDM 42.

FIG. 10 illustrates the relationship between the number of PBAs managed by the LUT fragment table and the number of PBAs managed by the VDM fragment table corresponding to each hierarchy when N=16 and M=32. Here, when N=16 and M=32, the number of PBAs managed by the LUT fragment table corresponding to the first hierarchy is 16, and the number of PBAs managed by the VDM fragment table corresponding to the first hierarchy is 512. Further, the number of PBAs managed by the LUT fragment table corresponding to the second hierarchy is 256, and the number of PBAs managed by the VDM fragment table corresponding to the second hierarchy is 8,192. Although a detailed description of the numbers of PBAs managed by the LUT fragment tables and the VDM fragment tables corresponding to the hierarchies above the second hierarchy will be omitted, when N=16, both the number of PBAs managed by the LUT fragment table and the number of PBAs managed by the VDM fragment table become 16 times larger each time the hierarchy goes up one level. In addition, when N=16 and M=32, the above conditional expression is satisfied with x=1 and y=2.
Specifically, focusing on the LUT fragment table corresponding to the second hierarchy and the VDM fragment table corresponding to the first hierarchy, where the difference between the hierarchies is 1 (that is, x=1), the number of PBAs managed by the LUT fragment table is 256, the number of PBAs managed by the VDM fragment table is 512, and “the number of PBAs managed by the VDM fragment table/the number of PBAs managed by the LUT fragment table” is 2 (that is, y=2).

If N and M satisfy the above conditional expression in this way, for example, when the data corresponding to the ranges of 512 LBAs allocated to two LUT fragment tables corresponding to the second hierarchy is written to the non-volatile memory 4, the update of the LUT 41 can be completed by changing two entries included in the LUT fragment table corresponding to the third hierarchy. Similarly, when the data corresponding to the ranges of 512 LBAs is written to the 512 PBAs allocated to one VDM fragment table corresponding to the first hierarchy, the update of the VDM 42 can be completed by changing one entry included in the VDM fragment table corresponding to the second hierarchy. That is, when N=16 and M=32, the ratio of the number of PBAs managed by the LUT fragment table corresponding to hierarchy i to the number of PBAs managed by the VDM fragment table corresponding to hierarchy i−1 is 1:2, and, as described above, if the continuity of the LBAs and PBAs is secured, a change of two entries of the LUT fragment table can cover the update of the LUT 41, and a change of one entry of the VDM fragment table can cover the update of the VDM 42.

When N=16 and M=32, the above conditional expression is also satisfied with x=2 and y=⅛. Although a detailed description will be omitted, in this case, the ratio of the number of PBAs managed by the LUT fragment table corresponding to hierarchy i to the number of PBAs managed by the VDM fragment table corresponding to hierarchy i−2 is 8:1, and, as described above, if the continuity of the LBAs and PBAs is secured, a change of one entry of the LUT fragment table can cover the update of the LUT 41, and a change of eight entries of the VDM fragment table can cover the update of the VDM 42.

FIG. 11 illustrates the relationship between the number of PBAs managed by the LUT fragment table and the number of PBAs managed by the VDM fragment table corresponding to each hierarchy when N=64 and M=32. Here, when N=64 and M=32, the number of PBAs managed by the LUT fragment table corresponding to the first hierarchy is 64, and the number of PBAs managed by the VDM fragment table corresponding to the first hierarchy is 2,048. Further, the number of PBAs managed by the LUT fragment table corresponding to the second hierarchy is 4,096, and the number of PBAs managed by the VDM fragment table corresponding to the second hierarchy is 131,072. Although a detailed description of the numbers of PBAs managed by the LUT fragment tables and the VDM fragment tables corresponding to the hierarchies above the second hierarchy will be omitted, when N=64, both the number of PBAs managed by the LUT fragment table and the number of PBAs managed by the VDM fragment table become 64 times larger each time the hierarchy goes up one level. In addition, when N=64 and M=32, the above conditional expression is satisfied with x=0 and y=32.
Specifically, when focusing on the LUT fragment table corresponding to the first hierarchy and the VDM fragment table corresponding to the first hierarchy, where the difference between the hierarchies is 0 (that is, x=0), the number of PBAs managed by the LUT fragment table is 64, the number of PBAs managed by the VDM fragment table is 2,048, and "number of PBAs managed by VDM fragment table/number of PBAs managed by LUT fragment table" is 32 (that is, y=32). If N and M satisfy the above conditional expression in this way, for example, when the data corresponding to the range of 2,048 LBAs allocated to 32 LUT fragment tables corresponding to the first hierarchy is written to the non-volatile memory 4, the update of the LUT 41 can be completed by changing 32 entries included in the LUT fragment table corresponding to the second hierarchy. Similarly, when the data corresponding to the range of 2,048 LBAs is written to the 2,048 PBAs allocated to the VDM fragment table corresponding to the first hierarchy, the update of the VDM 42 can be completed by changing one entry included in the VDM fragment table corresponding to the second hierarchy.

That is, when N=64 and M=32, the ratio of the number of PBAs managed by the LUT fragment table corresponding to the i-th hierarchy to the number of PBAs managed by the VDM fragment table corresponding to the i-th hierarchy is 1:32, and as described above, if the continuity of the LBA and PBA is secured, a change of 32 entries in the LUT fragment table can correspond to the update of the LUT 41, and a change of one entry of the VDM fragment table can correspond to the update of the VDM 42.

When N=64 and M=32, the above conditional expression is also satisfied when x=1 and y=½. Although detailed description will be omitted, in this case, the ratio of the number of PBAs managed by the LUT fragment table corresponding to the i-th hierarchy to the number of PBAs managed by the VDM fragment table corresponding to the (i−1)-th hierarchy is 2:1, and as described above, if the continuity of the LBA and PBA is secured, a change of one entry in the LUT fragment table can correspond to the update of the LUT 41, and a change of two entries of the VDM fragment table can correspond to the update of the VDM 42.

FIG. 12 illustrates a relationship between the number of PBAs managed by the LUT fragment table and the number of PBAs managed by the VDM fragment table corresponding to each hierarchy when N=128 and M=32. Here, when N=128 and M=32, the number of PBAs managed by the LUT fragment table corresponding to the first hierarchy is 128, and the number of PBAs managed by the VDM fragment table corresponding to the first hierarchy is 4,096. Further, the number of PBAs managed by the LUT fragment table corresponding to the second hierarchy is 16,384, and the number of PBAs managed by the VDM fragment table corresponding to the second hierarchy is 524,288. Although detailed description of the numbers of PBAs managed by the LUT fragment table and the VDM fragment table corresponding to hierarchies higher than the second hierarchy will be omitted, when N=128, both numbers become 128 times larger each time the hierarchy goes up one level. In addition, when N=128 and M=32, the above conditional expression is satisfied when x=0 and y=32.
Specifically, when focusing on the LUT fragment table corresponding to the first hierarchy and the VDM fragment table corresponding to the first hierarchy, where the difference between the hierarchies is 0 (that is, x=0), the number of PBAs managed by the LUT fragment table is 128, the number of PBAs managed by the VDM fragment table is 4,096, and "number of PBAs managed by VDM fragment table/number of PBAs managed by LUT fragment table" is 32 (that is, y=32). If N and M satisfy the above conditional expression in this way, for example, when the data corresponding to the range of 4,096 LBAs allocated to 32 LUT fragment tables corresponding to the first hierarchy is written to the non-volatile memory 4, the update of the LUT 41 can be completed by changing 32 entries included in the LUT fragment table corresponding to the second hierarchy. Similarly, when the data corresponding to the range of 4,096 LBAs is written to the 4,096 PBAs allocated to the VDM fragment table corresponding to the first hierarchy, the update of the VDM 42 can be completed by changing one entry included in the VDM fragment table corresponding to the second hierarchy.

That is, when N=128 and M=32, the ratio of the number of PBAs managed by the LUT fragment table corresponding to the i-th hierarchy to the number of PBAs managed by the VDM fragment table corresponding to the i-th hierarchy is 1:32, and as described above, if the continuity of the LBA and PBA is secured, a change of 32 entries in the LUT fragment table can correspond to the update of the LUT 41, and a change of one entry of the VDM fragment table can correspond to the update of the VDM 42.

When N=128 and M=32, the above conditional expression is also satisfied when x=1 and y=¼. Although detailed description will be omitted, in this case, the ratio of the number of PBAs managed by the LUT fragment table corresponding to the i-th hierarchy to the number of PBAs managed by the VDM fragment table corresponding to the (i−1)-th hierarchy is 4:1, and as described above, if the continuity of the LBA and PBA is secured, a change of one entry in the LUT fragment table can correspond to the update of the LUT 41, and a change of four entries of the VDM fragment table can correspond to the update of the VDM 42.

A case of M=32 has been illustrated in FIGS. 8 to 12 described above, and a case of M=64 will be described below with reference to FIGS. 13 to 17. Since these cases are the same as those of FIGS. 8 to 12 described above except that M is changed, FIGS. 13 to 17 will be described in a simplified manner as appropriate.

FIG. 13 illustrates a relationship between the number of PBAs managed by the LUT fragment table and the number of PBAs managed by the VDM fragment table corresponding to each hierarchy when N=64 and M=64. When N=64 and M=64, the above conditional expression is satisfied when x=1 and y=1. That is, when N=64 and M=64, the ratio of the number of PBAs managed by the LUT fragment table corresponding to the i-th hierarchy to the number of PBAs managed by the VDM fragment table corresponding to the (i−1)-th hierarchy is 1:1, and as described above, if the continuity of the LBA and PBA is secured, a change of one entry in the LUT fragment table can correspond to the update of the LUT 41, and a change of one entry of the VDM fragment table can correspond to the update of the VDM 42. As illustrated in FIG. 8 above, even when N=32 and M=32, it is possible to correspond to the update of the LUT 41 by changing one entry of the LUT fragment table, and to correspond to the update of the VDM 42 by changing one entry of the VDM fragment table. (The arithmetic behind these relationships can be checked with the short script below.)
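The per-hierarchy PBA counts and the (x, y) pairs quoted for FIGS. 8 to 17 follow from simple arithmetic: a LUT fragment table at the i-th hierarchy manages N^i PBAs, a VDM fragment table at the i-th hierarchy manages N^i × M PBAs, and the conditional expression M = y × N^x relates the two. The following short script (illustrative only; not part of the embodiment) reproduces the numbers quoted above:

```python
# Illustrative check of the numbers quoted for FIGS. 8 to 17 (not part of the
# embodiment): a LUT fragment table at hierarchy i manages N**i PBAs, and a
# VDM fragment table at hierarchy i manages (N**i) * M PBAs, so the ratio of
# VDM PBAs at hierarchy (i - x) to LUT PBAs at hierarchy i is y = M / N**x.

def pbas_managed(n: int, m: int, hierarchy: int) -> tuple[int, int]:
    lut = n ** hierarchy          # one PBA per LUT entry, per hierarchy level
    vdm = (n ** hierarchy) * m    # plus an M-bit bitmap at the lowest level
    return lut, vdm

for n, m in [(8, 32), (16, 32), (64, 32), (128, 32), (64, 64)]:
    lut1, vdm1 = pbas_managed(n, m, 1)
    lut2, vdm2 = pbas_managed(n, m, 2)
    print(f"N={n:3} M={m}: LUT h1={lut1:5} VDM h1={vdm1:7} "
          f"LUT h2={lut2:7} VDM h2={vdm2:9}")
    for x in (0, 1, 2):
        print(f"    x={x}: y = M / N^x = {m / n ** x}")
```

Running it confirms, for example, that N=16 and M=32 give 16 and 512 PBAs at the first hierarchy (y=2 at x=1), and that N=128 and M=32 give y=¼ at x=1.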
That is, in the present embodiment, it can be said that more efficient management of the LUT 41 and the VDM 42 can be realized when N=M.

FIG. 14 illustrates a relationship between the number of PBAs managed by the LUT fragment table and the number of PBAs managed by the VDM fragment table corresponding to each hierarchy when N=8 and M=64. When N=8 and M=64, the above conditional expression is satisfied when x=2 and y=1. That is, when N=8 and M=64, the ratio of the number of PBAs managed by the LUT fragment table corresponding to the i-th hierarchy to the number of PBAs managed by the VDM fragment table corresponding to the (i−2)-th hierarchy is 1:1, and as described above, if the continuity of the LBA and PBA is secured, a change of one entry in the LUT fragment table can correspond to the update of the LUT 41, and a change of one entry of the VDM fragment table can correspond to the update of the VDM 42. In this way, even if N=M is not satisfied, it may be possible to handle both the update of the LUT 41 and the update of the VDM 42 with one entry each. According to this, it can be said that more efficient management of the LUT 41 and the VDM 42 can be realized even when M=N^x is satisfied, for example.

FIG. 15 illustrates a relationship between the number of PBAs managed by the LUT fragment table and the number of PBAs managed by the VDM fragment table corresponding to each hierarchy when N=16 and M=64. When N=16 and M=64, the above conditional expression is satisfied when x=1 and y=4. That is, when N=16 and M=64, the ratio of the number of PBAs managed by the LUT fragment table corresponding to the i-th hierarchy to the number of PBAs managed by the VDM fragment table corresponding to the (i−1)-th hierarchy is 1:4, and as described above, if the continuity of the LBA and PBA is secured, a change of four entries in the LUT fragment table can correspond to the update of the LUT 41, and a change of one entry of the VDM fragment table can correspond to the update of the VDM 42.

When N=16 and M=64, the above conditional expression is also satisfied when x=2 and y=¼. In this case, the ratio of the number of PBAs managed by the LUT fragment table corresponding to the i-th hierarchy to the number of PBAs managed by the VDM fragment table corresponding to the (i−2)-th hierarchy is 4:1, and as described above, if the continuity of the LBA and PBA is secured, a change of one entry in the LUT fragment table can correspond to the update of the LUT 41, and a change of four entries of the VDM fragment table can correspond to the update of the VDM 42.

FIG. 16 illustrates a relationship between the number of PBAs managed by the LUT fragment table and the number of PBAs managed by the VDM fragment table corresponding to each hierarchy when N=32 and M=64. When N=32 and M=64, the above conditional expression is satisfied when x=1 and y=2. That is, when N=32 and M=64, the ratio of the number of PBAs managed by the LUT fragment table corresponding to the i-th hierarchy to the number of PBAs managed by the VDM fragment table corresponding to the (i−1)-th hierarchy is 1:2, and as described above, if the continuity of the LBA and PBA is secured, a change of two entries in the LUT fragment table can correspond to the update of the LUT 41, and a change of one entry of the VDM fragment table can correspond to the update of the VDM 42.

When N=32 and M=64, the above conditional expression is also satisfied when x=2 and y=1/16.
In this case, the ratio of the number of PBAs managed by the LUT fragment table corresponding to the i-th hierarchy to the number of PBAs managed by the VDM fragment table corresponding to the (i−2)-th hierarchy is 16:1, and as described above, if the continuity of the LBA and PBA is secured, a change of one entry in the LUT fragment table can correspond to the update of the LUT 41, and a change of 16 entries of the VDM fragment table can correspond to the update of the VDM 42.

FIG. 17 illustrates a relationship between the number of PBAs managed by the LUT fragment table and the number of PBAs managed by the VDM fragment table corresponding to each hierarchy when N=128 and M=64. When N=128 and M=64, the above conditional expression is satisfied when x=0 and y=64. That is, when N=128 and M=64, the ratio of the number of PBAs managed by the LUT fragment table corresponding to the i-th hierarchy to the number of PBAs managed by the VDM fragment table corresponding to the i-th hierarchy is 1:64, and as described above, if the continuity of the LBA and PBA is secured, a change of 64 entries in the LUT fragment table can correspond to the update of the LUT 41, and a change of one entry of the VDM fragment table can correspond to the update of the VDM 42.

When N=128 and M=64, the above conditional expression is also satisfied when x=1 and y=½. In this case, the ratio of the number of PBAs managed by the LUT fragment table corresponding to the i-th hierarchy to the number of PBAs managed by the VDM fragment table corresponding to the (i−1)-th hierarchy is 2:1, and as described above, if the continuity of the LBA and PBA is secured, a change of one entry in the LUT fragment table can correspond to the update of the LUT 41, and a change of two entries of the VDM fragment table can correspond to the update of the VDM 42.

In the present embodiment, the case of M=32 and the case of M=64 have been described; M may be determined to correspond to a calculation bit width (for example, 32 bits or 64 bits) in the memory system 3.

Hereinafter, the operation of the memory system 3 according to the present embodiment will be described. First, an example of the processing procedure of the memory system 3 when a write command is transmitted from the host 2 will be described with reference to the flowchart of FIG. 18.

In a case where the write command is transmitted from the host 2 as described above, the communication interface control unit 51 receives the write command (step S1). Here, the write command received in step S1 includes data to be written to the non-volatile memory 4 (hereinafter referred to as target data) based on the write command and an LBA used to access the data (hereinafter referred to as the target LBA). The target data is temporarily stored in the write buffer memory 52.

Next, the write control unit 561 writes the target data stored in the write buffer memory 52 to the non-volatile memory 4 via the non-volatile memory controller 54 (step S2). In the following description, the PBA in the non-volatile memory 4 to which the target data is written in step S2 is referred to as a target PBA for convenience.

When the process of step S2 is executed, the management unit 565 updates the VDM 42 based on the target PBA, for example, by cooperating with the non-volatile memory controller 54 and the cache memory control unit 566 (step S3). In this step S3, the VDM 42 is updated so as to manage that the target data is valid (that is, that the data written to the target PBA is valid).
Here, in the present embodiment, the VDM 42 has a hierarchical structure and includes a plurality of VDM fragment tables corresponding to each hierarchy. In this case, in step S3, one or more VDM fragment tables to which the target PBA is allocated are specified by referring to the VDM 42, and the specified VDM fragment table is read from the non-volatile memory 4 as needed. The VDM fragment table read from the non-volatile memory 4 in this way is stored in the cache memory 551 and updated on the cache memory 551. In the case where the specified VDM fragment table described above is already stored in the cache memory 551, it is not necessary to read the VDM fragment table from the non-volatile memory 4.

Next, among the entries included in the VDM fragment table specified in this way, the entry to which the target PBA is allocated is changed. The VDM fragment table whose entry is updated in this way is read from the cache memory 551 and written back to the non-volatile memory 4.

The VDM fragment table whose entry is changed may be the VDM fragment table corresponding to the lowest hierarchy of the hierarchical structure as described above, or may be a VDM fragment table corresponding to a hierarchy higher than that hierarchy. Specifically, if the target PBA is a PBA in a relatively narrow range, and it is not possible to manage that the target data written to the target PBA is valid unless the entries included in the VDM fragment table corresponding to the lowest hierarchy are changed, the entries included in the VDM fragment table corresponding to the lowest hierarchy are changed. In this case, among the entries included in the VDM fragment table corresponding to the lowest hierarchy, the flag information (the flag information corresponding to the target PBA) constituting the bitmap stored in the entry (map storing unit 42a) to which the target PBA is allocated is changed. Also, if the entire consecutive PBA range allocated to at least one entry in the VDM fragment table corresponding to the lowest hierarchy consists of target PBAs, the magic number (management data MD2) stored in that entry is changed to "0xff".

On the other hand, if the target PBAs are consecutive PBAs in a relatively wide range, and it is possible to manage that the target data written to the target PBAs is valid by changing the entries included in a VDM fragment table corresponding to a hierarchy other than the lowest hierarchy, the entries included in the VDM fragment table corresponding to the hierarchy other than the lowest hierarchy may be changed. In this case, among the entries included in the VDM fragment table corresponding to the hierarchy other than the lowest hierarchy, the magic number (management data MD3) stored in the entry (PBA storing unit 42f) to which the target PBA is allocated is changed to "0xff". If the entries included in the VDM fragment table corresponding to the hierarchy other than the lowest hierarchy are changed in this way, the validity of the target data can be managed by that VDM fragment table alone, so the VDM fragment tables corresponding to the hierarchies lower than that VDM fragment table (that is, the VDM fragment tables indicated by the pointers stored in the entry) can be discarded. Conversely, if it is necessary to change the entries included in the VDM fragment table corresponding to the lowest hierarchy but that VDM fragment table does not exist (has been discarded), a new VDM fragment table including the entry to which the target PBA is allocated is created.
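The collapse-and-discard behavior just described can be summarized in a short sketch. The following is a minimal, hypothetical illustration (all names are invented here; it models only two hierarchies and omits the entry-level magic number MD2 and the re-expansion of an already-collapsed range), not the actual implementation of the embodiment:

```python
# Minimal sketch (hypothetical names; two hierarchies only) of the VDM update
# in step S3: when the written PBA range covers the entire range of an upper
# entry, only that entry's magic number is changed and the lower fragment
# table is discarded; otherwise the per-PBA flag information (bitmap bits) of
# the lowest-hierarchy fragment table is changed.

ENTRIES_PER_TABLE = 32            # N = 32 entries per fragment table
PBAS_PER_LEAF_ENTRY = 32          # M = 32: one bitmap entry covers 32 PBAs
PBAS_PER_LEAF_TABLE = ENTRIES_PER_TABLE * PBAS_PER_LEAF_ENTRY

MAGIC_ALL_VALID = 0xFF
MAGIC_ALL_INVALID = 0x00

def mark_range(upper_table, leaf_tables, first_pba, count, valid):
    """Mark the PBAs in [first_pba, first_pba + count) valid or invalid."""
    magic = MAGIC_ALL_VALID if valid else MAGIC_ALL_INVALID
    pba, end = first_pba, first_pba + count
    while pba < end:
        idx = pba // PBAS_PER_LEAF_TABLE            # entry in the upper table
        if pba % PBAS_PER_LEAF_TABLE == 0 and end - pba >= PBAS_PER_LEAF_TABLE:
            upper_table[idx] = magic                # whole range is common:
            leaf_tables.pop(idx, None)              # discard the lower table
            pba += PBAS_PER_LEAF_TABLE
        else:                                       # mixed validity: bitmaps
            leaf = leaf_tables.setdefault(idx, [0] * ENTRIES_PER_TABLE)
            entry = (pba % PBAS_PER_LEAF_TABLE) // PBAS_PER_LEAF_ENTRY
            bit = pba % PBAS_PER_LEAF_ENTRY
            if valid:
                leaf[entry] |= 1 << bit             # set one flag information
            else:
                leaf[entry] &= ~(1 << bit)
            pba += 1
```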
In step S3, the VDM 42 stored in the non-volatile memory 4 can be updated by executing the process described above. The writing back of the VDM 42 to the non-volatile memory 4 may be executed at any timing after step S3.

When the process of step S3 is executed, the management unit 565 updates the LUT 41 based on the write command (target LBA) and the target PBA, for example, by cooperating with the non-volatile memory controller 54 and the cache memory control unit 566 (step S4). In this step S4, the LUT 41 is updated so as to manage the correspondence between the target LBA and the target PBA (that is, so that the target LBA can be converted into the target PBA).

Here, in the present embodiment, the LUT 41 has a hierarchical structure and includes a plurality of LUT fragment tables corresponding to each hierarchy. In this case, in step S4, one or more LUT fragment tables to which the target LBA is allocated are specified by referring to the LUT 41, and the specified LUT fragment table is read from the non-volatile memory 4 as needed. The LUT fragment table read from the non-volatile memory 4 in this way is stored in the cache memory 551 and updated on the cache memory 551. In the case where the specified LUT fragment table described above is already stored in the cache memory 551, it is not necessary to read the LUT fragment table from the non-volatile memory 4.

Next, among the entries included in the LUT fragment table specified in this way, the entry to which the target LBA is allocated is changed. In this case, the PBA stored in the entry (PBA storing unit 41a) to which the target LBA is allocated is changed to the target PBA. The LUT fragment table whose entry is updated in this way is read from the cache memory 551 and written back to the non-volatile memory 4.

The LUT fragment table whose entry is changed may be the LUT fragment table corresponding to the lowest hierarchy of the hierarchical structure as described above, or may be a LUT fragment table corresponding to a hierarchy higher than that hierarchy. Specifically, if the target LBA is an LBA in a relatively narrow range, and it is not possible to manage the correspondence between the target LBA and the target PBA unless the entries included in the LUT fragment table corresponding to the lowest hierarchy are changed, the entries included in the LUT fragment table corresponding to the lowest hierarchy are changed. On the other hand, if the target LBAs are consecutive LBAs in a relatively wide range, the target data is written to consecutive PBAs, and it is possible to manage the correspondence between the target LBAs and the target PBAs by changing the entries included in a LUT fragment table corresponding to a hierarchy other than the lowest hierarchy, the entries included in the LUT fragment table corresponding to the hierarchy other than the lowest hierarchy may be changed.

In step S4, the LUT 41 stored in the non-volatile memory 4 can be updated by executing such a process. The writing back of the LUT 41 to the non-volatile memory 4 may be executed at any timing after step S4.

When the process of step S4 is executed, the management unit 565 transmits a response (completion response) to the write command received in step S1 to the host 2 via the communication interface control unit 51 (step S5).

Here, the case where the LUT 41 and the VDM 42 are updated based on the write command from the host 2 has been described, but the LUT 41 and the VDM 42 also need to be updated when, for example, a Trim command is transmitted from the host 2.
Hereinafter, an example of the processing procedure of the memory system 3 when a Trim command is transmitted from the host 2 will be described with reference to the flowchart of FIG. 19. The Trim command is a command for invalidating the data corresponding to a predetermined file when the predetermined file is deleted in a file system used by the host 2, for example. The Trim command is also referred to as, for example, an Unmap command, in accordance with the interface standard for connecting the storage device. Note that the Trim command does not erase the data written in the non-volatile memory 4; the data is erased by garbage collection.

In a case where the Trim command is transmitted from the host 2 as described above, the communication interface control unit 51 receives the Trim command (step S11). The Trim command received in step S11 includes the (range of) LBAs for accessing the data to be invalidated. In the following description, the LBA included in the Trim command is referred to as a target LBA.

When the process of step S11 is executed, the address translation unit 564 refers to the LUT fragment tables included in the LUT 41 in order from the higher hierarchy, and converts the target LBA into a PBA (step S12). As a result, the address translation unit 564 acquires the PBA corresponding to the target LBA. In the following description, the PBA acquired by the address translation unit 564 is referred to as a target PBA.

Next, the management unit 565 updates the VDM 42 so as to manage that the data stored in the target PBA (that is, the data corresponding to the target LBA) is invalid (step S13). Since the update process of the VDM 42 based on the Trim command is the same as the process indicated in step S3 illustrated in FIG. 18 except that the VDM 42 is updated so as to manage that the data is invalid, the detailed description thereof will be omitted here. Note that if the target PBAs are consecutive PBAs in a relatively wide range, and it is possible to manage that the data written to the target PBAs is invalid by changing the entries included in a VDM fragment table corresponding to a hierarchy other than the lowest hierarchy, the magic number stored in the entry to which the target PBA is allocated, among the entries included in that VDM fragment table, is changed to "0x00". Also, if the entire consecutive PBA range allocated to at least one entry in the VDM fragment table corresponding to the lowest hierarchy consists of target PBAs, the magic number stored in that entry is changed to "0x00".

Further, the management unit 565 updates the LUT 41 so as to invalidate the correspondence between the target LBA and the target PBA (the PBA in which the data to be invalidated is stored) (step S14). When invalidating the correspondence between the LBA and the PBA in the LUT 41, for example, a magic number is set in the entry (PBA storing unit 41a) included in the LUT fragment table to which the LBA is allocated. Since the update process of the LUT 41 based on the Trim command is the same as step S4 illustrated in FIG. 18 described above except that the correspondence between the LBA and the PBA is invalidated, the detailed description thereof will be omitted here.

When the LUT 41 and the VDM 42 are updated as described above, the management unit 565 transmits a response (completion response) to the Trim command to the host 2 via the communication interface control unit 51 (step S15).
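Tying the steps of FIG. 19 together, a hypothetical Trim handler might look as follows (reusing mark_range() from the sketch above; the dictionary-based LUT and the string response are simplifications of the fragment-table update and the completion response described in the text):

```python
# Hypothetical Trim handling along the lines of FIG. 19. Invalidating the
# LUT entry is modeled here by deleting the mapping; in the embodiment a
# magic number is set in the PBA storing unit 41a instead.

def handle_trim(lut, upper_vdm, leaf_vdms, target_lba, count):
    target_pba = lut[target_lba]                       # step S12: LBA -> PBA
    mark_range(upper_vdm, leaf_vdms, target_pba, count,
               valid=False)                            # step S13: mark invalid
    for i in range(count):                             # step S14: invalidate
        lut.pop(target_lba + i, None)                  # the correspondence
    return "completion response"                       # step S15
```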
In the example illustrated in FIG. 19, the completion response is transmitted to the host 2 after the LUT 41 and the VDM 42 are updated; however, for example, the update of the VDM 42 may be configured to be executed (that is, delayed) until after the completion response is transmitted.

Here, the VDM 42 mentioned above is necessary for efficient garbage collection; when garbage collection is executed, it is necessary to refer to the VDM 42 and confirm whether the data written in each PBA in the non-volatile memory 4 is valid or invalid. Hereinafter, referring to the flowchart of FIG. 20, an example of the processing procedure of the memory system 3 when confirming whether the data written in a specific PBA (hereinafter referred to as a target PBA) in the non-volatile memory 4 is valid or invalid will be described.

First, in the present embodiment, the VDM 42 has a hierarchical structure formed of a plurality of hierarchies, and the memory 55 holds a VDM pointer (the PBA in which the VDM fragment table is stored) indicating the VDM fragment table corresponding to the highest hierarchy among the plurality of hierarchies. When the VDM fragment table corresponding to the highest hierarchy is stored in the non-volatile memory 4, the VDM pointer held in the memory 55 is a PBA in the non-volatile memory 4. When the VDM fragment table corresponding to the highest hierarchy is stored in the cache memory 551, the VDM pointer held in the memory 55 is address information in the cache memory 551.

In this case, the management unit 565 reads the VDM fragment table from the non-volatile memory 4 or the cache memory 551 based on the VDM pointer held in the memory 55 (step S21). Next, the management unit 565 refers to the magic number (hereafter referred to as the target magic number) stored in the entry to which the target PBA is allocated, among the plurality of entries included in the VDM fragment table (the VDM fragment table corresponding to the highest hierarchy) read in step S21 (step S22). If the VDM fragment table read in step S21 is not the VDM fragment table corresponding to the lowest hierarchy, one of the above-mentioned "0xff", "0x00", "0xfc", and "0xfd" is set as the magic number (management data MD3) stored in each entry included in the VDM fragment table.

The management unit 565 determines whether or not the target magic number referred to in this way is "0xff" or "0x00" (step S23). When it is determined that the target magic number is not "0xff" or "0x00" (NO in step S23), the management unit 565 determines whether or not the target magic number is "0xfc" or "0xfd" (step S24). When it is determined that the target magic number is "0xfc" or "0xfd" (YES in step S24), the management unit 565 acquires the VDM pointer to which the target magic number is attached (step S25). When the process of step S25 is executed, the process returns to step S21 and the process is repeated.

Here, the magic number "0xfc" indicates that the VDM pointer (PBA) to which the magic number is attached is a PBA in the non-volatile memory 4 as described above. Therefore, when the target magic number is "0xfc", in step S21 executed after step S25, the VDM fragment table corresponding to the subsequent (lower) hierarchy is read from the non-volatile memory 4 based on the VDM pointer acquired in step S25. On the other hand, the magic number "0xfd" indicates that the VDM pointer (PBA) to which the magic number is attached is address information in the cache memory 551 as described above.
Therefore, when the target magic number is "0xfd", in step S21 executed after step S25, the VDM fragment table corresponding to the subsequent (lower) hierarchy is read from the cache memory 551 based on the VDM pointer acquired in step S25. In the present embodiment, by repeating the processes of steps S21 to S25 in this way, it is possible to sequentially refer to the VDM fragment tables corresponding to each hierarchy.

On the other hand, it is assumed that the target magic number is determined to be "0xff" or "0x00" in step S23 (YES in step S23). Here, the magic number "0xff" indicates that all the data written in the entire PBA range allocated to the entry in which the magic number is stored is valid. That is, when the target magic number is "0xff", it can be grasped that the data stored in the target PBA is valid, so the process illustrated in FIG. 20 is terminated. Further, the magic number "0x00" indicates that all the data written in the entire PBA range allocated to the entry in which the magic number is stored is invalid. That is, when the target magic number is "0x00", it can be grasped that the data stored in the target PBA is invalid, so the process illustrated in FIG. 20 is terminated.

If it is determined in step S24 that the target magic number is not "0xfc" or "0xfd" (NO in step S24), none of the magic numbers "0xff", "0x00", "0xfc", and "0xfd" is set in the entry to which the target PBA is allocated. In this case, the VDM fragment table read in step S21 is the VDM fragment table corresponding to the lowest hierarchy, and it can be seen that the validity of the data stored in the range of PBAs including the target PBA in that VDM fragment table is not common (that is, valid data and invalid data are mixed). In this case, the management unit 565 acquires the bitmap stored in the entry, to which the target PBA is allocated, of the VDM fragment table (the VDM fragment table corresponding to the lowest hierarchy) read in step S21 (step S26). The management unit 565 can grasp whether the data is valid or invalid based on the flag information indicating the validity of the data stored in the target PBA (that is, the flag information corresponding to the target PBA) among the plurality of pieces of flag information constituting the bitmap acquired in step S26.

As described above, in the present embodiment, the VDM 42 (data map) stored in the non-volatile memory 4 has a hierarchical structure formed of a plurality of hierarchies including at least the first hierarchy (the lowest hierarchy) and the second hierarchy (a hierarchy higher than the lowest hierarchy), and includes a plurality of first VDM fragment tables corresponding to the first hierarchy and a second VDM fragment table corresponding to the second hierarchy. Further, in the present embodiment, each of the plurality of first VDM fragment tables manages the validity of each piece of data having a predetermined size (for example, 4 KiB) written in the PBA (physical address) range in the non-volatile memory 4 allocated to that first VDM fragment table. Further, in the present embodiment, the second VDM fragment table manages, for each first VDM fragment table, a VDM pointer (reference destination information for referencing the first VDM fragment table) indicating that first VDM fragment table.
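The walk of FIG. 20 can be condensed into a short sketch. The structures and helper callbacks below are hypothetical stand-ins (an actual implementation would derive the entry and bit indices from the target PBA and the current hierarchy); only the magic numbers 0xff, 0x00, 0xfc, and 0xfd are taken from the description above:

```python
# Sketch of the validity check of FIG. 20 (hypothetical structures). An entry
# holds either a collective magic number, a pointer to a lower table tagged
# 0xfc (in the non-volatile memory) or 0xfd (in the cache memory), or, in the
# lowest hierarchy, a bitmap of per-PBA flag information.

from dataclasses import dataclass
from typing import Optional

MAGIC_ALL_VALID = 0xFF
MAGIC_ALL_INVALID = 0x00
MAGIC_CHILD_IN_NVM = 0xFC
MAGIC_CHILD_IN_CACHE = 0xFD

@dataclass
class Entry:
    magic: int                        # 0xff/0x00/0xfc/0xfd or any other value
    pointer: Optional[object] = None  # VDM pointer to the lower table, if any
    bitmap: int = 0                   # flag information (lowest hierarchy)

def is_pba_valid(vdm_pointer, target_pba, read_table, entry_index, bit_index):
    table = read_table(vdm_pointer)                         # step S21
    while True:
        entry = table[entry_index(table, target_pba)]       # step S22
        if entry.magic in (MAGIC_ALL_VALID, MAGIC_ALL_INVALID):      # step S23
            return entry.magic == MAGIC_ALL_VALID
        if entry.magic in (MAGIC_CHILD_IN_NVM, MAGIC_CHILD_IN_CACHE):  # S24
            table = read_table(entry.pointer)               # steps S25 and S21
            continue
        # Lowest hierarchy with mixed validity: consult the bitmap (step S26).
        return bool((entry.bitmap >> bit_index(target_pba)) & 1)
```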
Here, as described in the comparative example of the present embodiment above, if the VDM 42′ is formed of only a plurality of VDM fragment tables T421′ corresponding to a single hierarchy, it is necessary to hold in the memory 55 all the pointers (the PBAs in which the VDM fragment tables T421′ are stored) indicating each of the plurality of VDM fragment tables T421′. On the other hand, in the present embodiment, according to the above-described configuration, it is sufficient to hold in the memory 55 only the VDM pointer indicating the VDM fragment table corresponding to the highest hierarchy, so it is possible to reduce the data that continues to occupy a certain memory region on the memory 55 (the management information of the VDM 42 is kept as close to zero as possible), and thus the validity of the data written in the non-volatile memory 4 can be efficiently managed.

Moreover, in the present embodiment, by reducing the data (the VDM pointers indicating the VDM fragment tables) stored in the memory 55 (for example, DRAM) as described above, the LUT 41 can be preferentially expanded on the memory 55 (cached in the cache memory 551), and thereby the response time (I/O response time) to commands from the host 2 can be shortened. Note that, since the VDM 42 does not need to be updated when a read command from the host 2 is processed, the I/O response time can be further shortened. Further, when the process for the Trim command described above is executed, the process of updating the VDM 42 may be delayed. In such a configuration, the memory region (that is, the memory ratio) allocated to the LUT 41 and the VDM 42 may be dynamically changed.

Further, in the comparative example of the present embodiment, as described above, it takes time for the internal processes (the starting process and the terminating process) when starting and terminating the memory system 3; in the present embodiment, however, at the time of the starting process, only the VDM pointer indicating the VDM fragment table corresponding to the highest hierarchy needs to be expanded in the memory 55, and at the time of the terminating process, only that VDM pointer needs to be made non-volatile, so the time required for the internal processes can be shortened.

Moreover, in the present embodiment, in a case where the validity of each piece of data having a predetermined size written in the PBA range allocated to a first VDM fragment table is not common (that is, valid data and invalid data are mixed in the data written in the PBA range), the second VDM fragment table manages the VDM pointer that indicates the first VDM fragment table corresponding to the lower hierarchy. Further, in a case where the validity of each piece of data having a predetermined size written in the PBA range allocated to the first VDM fragment table is common (that is, all of the data of the predetermined size written in the PBA range is valid, or all of it is invalid), the second VDM fragment table manages the validity of the data collectively.

In the present embodiment, with such a configuration, for example, when updating the validity of data written in a wide consecutive PBA range, the VDM 42 can be updated only by changing the entry (magic number) included in the second VDM fragment table, so the process for managing the validity of data can be simplified.
Specifically, in the case of a memory system 3 (non-volatile memory 4) capable of storing several PiB of data, for example, it is possible to collectively operate on (update) PBA ranges spanning several GiB by simply changing the magic number (8 bits) stored in one entry included in the VDM fragment table corresponding to the highest hierarchy. That is, in the present embodiment, it is possible to avoid bit operations such as updating the bitmaps included in the first VDM fragment tables individually, and to reduce the processing cost. Further, in the present embodiment, since the PBA ranges (granularity) allocated to the VDM fragment tables differ depending on the hierarchy, the VDM 42 can be updated flexibly.

Further, for example, when the second VDM fragment table collectively manages the validity of each piece of data having a predetermined size written in the PBA range allocated to a first VDM fragment table, the memory region in which the first VDM fragment table is stored can be released by discarding the first VDM fragment table. According to this, in the present embodiment, it is possible to reduce the memory region required for storing the VDM 42.

Further, in the present embodiment, the first VDM fragment table corresponding to the first hierarchy and the second VDM fragment table corresponding to the second hierarchy have the same data structure. Specifically, the first VDM fragment table manages the validity of data having a predetermined size (4 KiB) in each of a predetermined number of (for example, 32) entries. In addition, the second VDM fragment table manages the VDM pointers indicating each of the first VDM fragment tables in each of a predetermined number of (for example, 32) entries. In the present embodiment, such a configuration simplifies the hierarchical structure of the VDM 42 and can reduce the calculation cost when referring to the VDM 42 (each VDM fragment table). Also, in order to refer to the VDM fragment table to which the target PBA is allocated, it is necessary to go through a plurality of hierarchies, and since the process in such a case can be made uniform regardless of the hierarchy (that is, the same software code can be used), the VDM 42 can be referred to efficiently.

The VDM 42 in the present embodiment may have a hierarchical structure including at least the first and second hierarchies; the number of hierarchies constituting the hierarchical structure of the VDM 42 may be 3 or more. The number of hierarchies constituting the hierarchical structure of the VDM 42 may be appropriately changed based on, for example, the storage capacity (the number of PBAs) of the non-volatile memory 4.

Further, in the present embodiment, similar to the VDM 42, the LUT 41 (address translation table) also has a hierarchical structure, and each of the plurality of LUT fragment tables included in the LUT 41 has the same data structure as the VDM fragment tables included in the VDM 42. According to such a configuration, even when tracing a plurality of hierarchies to refer to the LUT 41, the same software code as for the VDM 42 can be used, so efficient processing can be realized. Further, for example, the LUT 41 (LUT fragment tables) and the VDM 42 (VDM fragment tables) updated on the cache memory 551 need to be written back to the non-volatile memory 4 (that is, made non-volatile), and since the LUT fragment tables and the VDM fragment tables are configured to have the same size, they can be collectively made non-volatile without distinction.
According to this, the efficiency of writing the LUT fragment tables and the VDM fragment tables to the non-volatile memory 4 can be improved, and the cost of making them non-volatile can be reduced.

In the present embodiment, since the number of PBAs managed by the LUT fragment table corresponding to the lowest hierarchy is smaller than the number of PBAs managed by the VDM fragment table corresponding to that hierarchy, the number (first number) of hierarchies constituting the hierarchical structure of the VDM 42 is smaller than the number (second number) of hierarchies constituting the hierarchical structure of the LUT 41.

Moreover, in the present embodiment, the number N of entries in the fragment tables and the number M of pieces of data having a predetermined size whose validity is managed in one entry of the VDM fragment table corresponding to the lowest hierarchy (that is, the PBAs managed in the entry) are determined so as to satisfy the above conditional expression (M = y × N^x), and the LUT fragment tables and the VDM fragment tables are configured to have the same data structure. In the present embodiment, with such a configuration, the LUT 41 can be updated only by changing an entry (PBA) included in a LUT fragment table corresponding to a higher hierarchy, and the VDM 42 can be updated simply by changing an entry (magic number) included in a VDM fragment table without changing the bitmap (performing bit manipulation). Therefore, in the present embodiment, it is possible to achieve both efficient management of the correspondence between the LBAs and the PBAs in the LUT 41 and efficient management of data validity in the VDM 42.

In addition, in order to realize more efficient management of the LUT 41 and the VDM 42, N and M that satisfy the conditional expression M = N^x (that is, M is a power of N) may be employed, such as N=8 and M=64 as illustrated in FIG. 14 above, and N and M that satisfy the conditional expression M = N (that is, M is equal to N) may be employed, such as N=32 and M=32 illustrated in FIG. 8 and N=64 and M=64 illustrated in FIG. 13. Further, (the values of) N and M may be configured to be appropriately set or changed by the administrator of the memory system 3 or the like.

Here, for example, the pointer size in the C language is the same as the calculation bit width. In this case, if M is smaller than the calculation bit width, the pointer (address information in the cache memory 551) cannot be stored as it is in an entry of the fragment table. It is conceivable to divide and store the pointer, but the processing cost is high. On the other hand, if M is larger than the calculation bit width, it is possible to store the pointer as it is in an entry of the fragment table, but this is not efficient because there are unused bits (the cache is wasted). Further, in this case, the size of the fragment table becomes large, so the cost of making it non-volatile increases. It is conceivable to execute a process of excluding the unnecessary parts before making the fragment table non-volatile, but the processing cost is high. Therefore, in the present embodiment, M may be determined so as to correspond to (for example, match) the calculation bit width (32 bits or 64 bits) in the memory system 3.
According to such a configuration, since a pointer (address information in the cache memory 551) whose size equals the calculation bit width can be stored in an entry of the fragment table without processing, the LUT 41 and the VDM 42 can be efficiently managed. Further, according to such a configuration, it is not necessary to unnecessarily increase the size of the fragment table.

In the present embodiment, it has been described that the controller 5 included in the memory system 3 functions as a flash translation layer (FTL) configured to perform data management and block management of the non-volatile memory 4 (NAND type flash memory), but the function as the FTL may be possessed by the host 2 side connected to the memory system 3. In the case of such a configuration, the LUT 41 and the VDM 42 described in the present embodiment are managed by the host 2, and the update processes and the like of the LUT 41 and the VDM 42 are executed on the host 2 side. In the case of such a configuration, the address translation from the LBA to the PBA may also be executed on the host 2 side, and a command from the host 2 (for example, a read command) in this case may include the PBA.

Next, the control performed by the memory system 3 of the present embodiment, which has the data map (VDM 42) structured as above, in order to decrease the processing costs of the data map will be explained. As described above, the memory cell array of the non-volatile memory 4 includes a plurality of blocks, and each of the blocks includes many pages. In the memory system 3 (SSD), each block functions as an erase unit of data. Furthermore, each page is a unit of data write operation and data read operation. The size of a block is, for example, an integral multiple of the size of data whose validity is collectively managed by the management data MD3 of the second VDM fragment table T422 (in this example, 4 KiB × 32 × 32 = 4 MiB).

Here, a conventional model in which each of the blocks is cyclically used will be explained as a comparative example with reference to FIG. 21. Note that, in this example, the comparative example will be explained using the structure of the memory system 3 of the present embodiment (the write control unit 561 and the garbage collection control unit 563). The blocks are roughly divided into blocks of a free block group a1 (free blocks a11) and blocks of an allocated block group a2 (being-written blocks a21 and written blocks a22).

The free block a11 is a block to which data is not written. Upon supply of a free block a11, the write control unit 561 writes thereto the write data requested to be written to the non-volatile memory 4 by a write command from the host 2. When the write data is written, the block transits from the free block a11 to a being-written block a21. That is, the being-written block a21 is a write destination block of data designated by the write control unit 561. While there is an empty page in the being-written block a21, the write control unit 561 executes writes of the write data with respect to the being-written block a21. When the write data has been written to all pages of the being-written block a21, the block transits from the being-written block a21 to a written block a22. That is, the written block a22 is a block to which data write by the write control unit 561 has been completed. Upon completion of data write to a being-written block a21, the write control unit 561 receives supply of a new free block a11, and executes the write of write data.
As the above steps proceed, the number of free blocks a11 decreases while the number of written blocks a22 increases. Furthermore, in an SSD, which cannot perform overwrite of data, the update of data is executed by invalidating the before-update data stored in a page and writing the updated data to a different page. Thus, there may be a condition where invalid data occupies the majority of a written block a22. The garbage collection control unit 563 moves the valid data in N written blocks a22 in which much invalid data exists to M blocks (M<N) to create N−M free blocks a11. That is, through garbage collection (GC) by the garbage collection control unit 563, written blocks a22 partly transit to free blocks a11. As above, each of the blocks is cyclically used: from free block a11 to being-written block a21, from being-written block a21 to written block a22, and from written block a22 to free block a11.

Now, with reference to FIG. 22, the sizes of data whose validity is managed by the flag information and the various kinds of management data (MD2 and MD3) of the data map (VDM 42) in the memory system 3 of the present embodiment will be reviewed.

In FIG. 22, symbol b1 indicates the size of data whose validity is managed by the flag information of the first VDM fragment table T421. The flag information represents, in one bit, the validity of the data written in one PBA (4 KiB, in this example). As described above, the first VDM fragment table T421 includes, for example, 32 entries. Each entry includes, for example, 32 pieces of flag information. The 32 pieces of flag information of each entry form a 32-bit bitmap in which each bit indicates the validity of 4 KiB of data, with respect to the data written in the 32 PBAs (4 KiB × 32 = 128 KiB of data).

Symbol b2 indicates the size of data whose validity is managed by the management data MD2 of the first VDM fragment table T421. One management data MD2 is provided for each entry of the first VDM fragment table T421. In the management data MD2, a magic number collectively representing the validity of the 128 KiB of data indicated by the 32-bit bitmap formed by the flag information may be set. That is, the management data MD2 can collectively represent the validity of 128 KiB of data.

Symbol b3 indicates the size of data whose validity is managed by the management data MD3 of the second VDM fragment table T422. One management data MD3 is provided for each first VDM fragment table T421. In the management data MD3, a magic number collectively representing the validity of the 128 KiB × 32 = 4 MiB indicated by the 32 management data MD2 of the first VDM fragment table T421 (the validity of the data written in the 1,024 PBAs indicated by the 32 × 32 = 1,024 pieces of flag information) may be set. That is, the management data MD3 can collectively represent the validity of 4 MiB of data.

In the memory system 3 of the present embodiment, which comprises the data map (VDM 42) including the flag information and the management data (MD2 and MD3), if all data written to one block is, for example, an integral multiple of the size of data whose validity can be collectively managed by the management data MD2 (128 KiB), the operation of the flag information (bit operation) becomes unnecessary. Furthermore, if it is, for example, an integral multiple of the size of data whose validity can be collectively managed by the management data MD3 (4 MiB), the operation of the management data MD2 further becomes unnecessary.
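The three sizes of FIG. 22 are tied together by the fixed fan-outs (32 flags per entry, 32 entries per table); recomputing them makes the 128 KiB and 4 MiB thresholds used below explicit (illustrative arithmetic only):

```python
# The sizes b1, b2, and b3 of FIG. 22, recomputed (illustrative only).
KIB = 1024
b1 = 4 * KIB        # flag information: one bit covers one 4 KiB PBA
b2 = 32 * b1        # management data MD2: 32 flags per entry -> 128 KiB
b3 = 32 * b2        # management data MD3: 32 entries per table -> 4 MiB
print(b1 // KIB, b2 // KIB, b3 // KIB)   # prints: 4 128 4096 (KiB)
```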
In other words, if the data written to one block is, for example, smaller than the size of data whose validity can be collectively managed by the management data MD2 (128 KiB), or is an integral multiple of that size plus a fraction below that size (that is, data including a fraction below that size), the operation of the flag information becomes necessary. Thus, the write data of the host 2 is preferably set to be an integral multiple of the size of data (128 KiB) whose validity can be collectively managed by the management data MD2, or, furthermore, an integral multiple of the size of data (4 MiB) whose validity can be collectively managed by the management data MD3.

However, the host 2 generates highly frequent data accesses of very small size (for example, below 4 KiB) with respect to the memory system 3 in conjunction with a file system used to manage files, directories, and the like by an operating system (OS). Furthermore, the host 2 may vary the unit of data accesses with respect to the memory system 3 based on, for example, the processes of application programs operated under control of the OS.

Consider a hypothetical situation where the write control unit 561 receives supply of a free block a11 and writes data whose size is smaller than the size of data whose validity can be collectively managed by the management data MD2 (128 KiB). In that case, even if data whose size is an integral multiple of that size (128 KiB) is sent next, the operation of the flag information becomes necessary thereafter with respect to part of the write data (the fractions before and after) written to the being-written block a21 (the block having transitioned from the free block a11) following the data directly before it.

In consideration of this point, one of the usage models of the blocks of the memory system 3 of the present embodiment will be explained with reference to FIG. 23. In the memory system 3 of the present embodiment, the write control unit 561 writes the write data to a free block a11. A difference from the above comparative example is that the write control unit 561 of the present embodiment secures various kinds of being-written blocks a21 as data write destinations based on the size of the write data (for example, a first block a21-1, a second block a21-2, and a third block a21-3). Specifically, the write control unit 561 controls the data writes such that data whose size is smaller than the size (128 KiB) whose validity can be collectively managed by the management data MD2 or is an integral multiple of that size plus a fraction below it (that is, data including a fraction below that size), and data whose size is an exact integral multiple of that size, do not mix in one block. In other words, the write control unit 561 collects data requiring operations on the flag information in the same kind of block.

Note that, as explained in the above comparative example, each of the blocks is used cyclically: from free block a11 to being-written block a21, from being-written block a21 to written block a22, and from written block a22 to free block a11. Thus, if a block used for writing as the first block a21-1 transits from the written block a22 to the free block a11, the block may be supplied to the write control unit 561 as any of the first to third blocks a21-1, a21-2, and a21-3 in the next cycle, instead of as the first block a21-1.
That is, each block is not preliminarily associated with any of the first to third blocks a21-1, a21-2, and a21-3. In other words, each time a block transits from the state of being associated with the free block group a1 to the state of being associated with the allocated block group a2, the role of the block may be determined; the roles of the blocks may be arbitrarily reassigned.

FIG. 24 is a diagram illustrating one example of the selection of a write destination block by the write control unit 561 of the present embodiment. Initially, the write control unit 561 determines whether or not the size of the write data of the host 2 is an integral multiple of the size of data whose validity can be collectively managed by the management data MD2 (128 KiB). If it is not an integral multiple of 128 KiB, the write control unit 561 selects the second block a21-2 as the write destination of the write data. That is, the second block a21-2 is a block collecting data that requires operations on the flag information.

If it is an integral multiple of 128 KiB, the write control unit 561 then determines whether or not the size of the write data of the host 2 is an integral multiple of the size of data whose validity can be collectively managed by the management data MD3 (4 MiB). Note that, in this example, the data map (VDM 42) has a hierarchical structure including the first VDM fragment table T421 (the first hierarchy, i.e., the lowest hierarchy) and the second VDM fragment table T422 (the second hierarchy, i.e., the upper layer of the first hierarchy); however, if the data map does not have a hierarchical structure and consists of only the first VDM fragment table T421, the write control unit 561 may select the first block a21-1 as the write destination of the write data when the size of the write data of the host 2 is determined to be an integral multiple of 128 KiB.
Furthermore, in the garbage collection (GC), with respect to a block which transits from the first block a21-1which is one of the being-written blocks a21to the written block a22, reference of the flag information (bit scan) becomes unnecessary when the valid data in the block is moved. The block which transits from the third block a21-3to the written block a22does not require reference of the management data MD2. Not only the transition original block but also the transition destination block, the operation of the flag information is not necessary, the operation of the management data MD2and the management data MD3, or the operation of the management data MD3alone suffice. Furthermore, in the garbage collection (GC), if the block which transits to the written block a22from the first block a21-1or the third block a21-3is selected as a target, an effect of overhead reduction will be expected. For example, if the page size is 16 KiB and only 4 KiB therein is valid, there will be a 12 KiB unnecessary read in an operation of 16 KiB read and 4 KiB write. In the block which transits to the written block a22from the first block a21-1or the third block a21-3, only the large data of 128 KiB unit or 4 MiB unit may exist, and such an unnecessity does not occur. Furthermore, read of valid data and write of the valid data to the transition destination can be executed in a bulk size of multiple pages. Furthermore, if the data map (VDM42) has a hierarchical structure, switching of the write destination blocks based on the size of write data maintains address continuity, and a compression rate of the table compression increases, and thus, memory capacity secured on the memory55for the data map can be reduced. The reduced amount of the memory capacity can be used for performance improvement of the memory system3, or the capacity of the memory55can be minimized. FIG.25is a flowchart illustrating an example of an order of selection of write destination blocks based on a data size, which is included in the data write operation of the memory system3of the present embodiment, as explained with reference toFIG.18. The order is executed by the write control unit561in step S2ofFIG.18. The write control unit561determines whether or not the data size is an integral multiple of 128 KiB (so the validity thereof can be collectively managed by the management data MD2) (step S31). If it is not an integral multiple of 128 KiB (NO in step S31), the write control unit561selects the second block a21-2as a write destination of data (step S32). If it is an integral multiple of 128 KiB (YES in step S31), the write control unit561then determines whether or not the data size is an integral multiple of 4 MiB (so the validity thereof can be collectively managed by the management data MD3) (step S33). If it is an integral multiple of 4 MiB (YES in step S33), the write control unit561selects the third block a21-3as a write destination of data (step S34). On the other hand, if it is not an integral multiple of 4 MiB (NO in step S33), the write control unit561selects the first block a21-1as a write destination of data (step S35). As above, the memory system3of the present embodiment, switching of write destination blocks based on the data size, the process costs of the data map (VDM42) can be suppressed. Note that, as explained above, the host2may include a function as an FTL, and LUT41and VDM42may be managed by the host2, and an update process and the like of the LUT41and the VDM42may be executed in the host2side. 
In that case, selection of write destination blocks to suppress the processing costs of the data map may be executed on the host 2 side. Furthermore, in that case, the number M of data, the validity of which is managed in one entry of the first VDM fragment table T421 corresponding to the lowest hierarchy (the PBA storing the data), may be determined to correspond to a calculation bit width of the host 2. While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions. | 144,499 |
11861198 | The drawings have not necessarily been drawn to scale. Similarly, some components and/or operations may be separated into different blocks or combined into a single block for the purposes of discussion of some embodiments of the present technology. Moreover, while the present technology is amenable to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and are described in detail below. The intention, however, is not to limit the present technology to the particular embodiments described. On the contrary, the present technology is intended to cover all modifications, equivalents, and alternatives falling within the scope of the present technology as defined by the appended claims. DETAILED DESCRIPTION The techniques described herein are directed to journal replay optimization for a distributed storage architecture. The distributed storage architecture includes nodes that manage and provide clients with access to distributed storage. The distributed storage may be composed of storage devices local to each node. Data within the distributed storage may be organized into storage containers. A storage container may comprise a logical unit number (LUN). A LUN serves as an identifier for a certain amount of storage of the distributed storage. The LUN is used to provide clients with access to data within the distributed storage through a file system (e.g., a network file system). The nodes implement storage operating system instances that create and host volumes within the LUN. The storage operating system instances expose these volumes to clients for network file system access to data within the volumes. In this way, the distributed storage is exposed to clients through multiple nodes as LUNs that provide clients with network file system access to data through volumes. A storage operating system instance of a node may utilize a portion of a LUN as a journal. In some embodiments, the journal may be maintained within relatively faster storage than disk storage, such as within memory. In some embodiments, the journal is implemented as a simulated non-volatile random-access memory (NVRAM) device that is block addressable, where log records of the journal are stored within 4 kb blocks or any other fixed-size blocks. The journal is used to log metadata and data of write operations as the log records. For example, a write operation is received by the storage operating system instance from a client. The write operation is writing data to a file. The file is identified by an inode, and the location of where that data is being written is identified by a file block number. In this way, a log record is created within the journal to comprise the data and the metadata that includes the inode and the file block number. Once the log record is created, a response that the write operation has been successfully implemented is provided back to the client. Logging write operations to the journal in memory is faster than individually executing each write operation upon storage devices (disk storage) of the distributed storage before responding back to the clients, thus improving client performance and reducing latency of processing the write operations. Over time, the journal is populated with the log records corresponding to changes by write operations that have been accumulated within the journal. Periodically, a consistency point is triggered to update a file system based upon the changes.
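As a rough illustration of the logging path just described, the sketch below shows one way a log record carrying the inode, file block number, and data could be appended to an in-memory journal before the client is acknowledged. The Go names and the flat slice representation are assumptions made for clarity, not the patent's actual structures.

package journal

// LogRecord captures the metadata and data of one logged write operation.
type LogRecord struct {
	Inode           uint64 // inode identifying the file being written
	FileBlockNumber uint64 // location within the file being written
	Data            []byte // payload carried by the write operation
}

// Journal accumulates log records in memory until a consistency point.
type Journal struct {
	records []LogRecord
}

// LogWrite appends a record for a write operation and returns once the
// record is in the journal, so the client can be acknowledged without
// waiting for the slower disk storage.
func (j *Journal) LogWrite(inode, fbn uint64, data []byte) {
	j.records = append(j.records, LogRecord{
		Inode:           inode,
		FileBlockNumber: fbn,
		Data:            append([]byte(nil), data...), // copy the payload
	})
	// acknowledge the client here; data reaches disk at the next consistency point
}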
During the consistency point, file system metadata (e.g., inodes, file block numbers, buftree identifiers of buftrees used to translate virtual volume block numbers into the block address space of a LUN, etc.) and disk locations for the data (e.g., indirect blocks pointing to user blocks storing the actual data) are updated within the file system based upon the log records. As part of implementing the consistency point, the data portion of the log records (e.g., data being written by the write operations logged into the journal) is stored from the journal to physical storage (disk storage) used by the file system to persist data. During the consistency point, read operations from clients are responded to with consistent data from the journal because this up-to-date consistent data is still within the journal in memory before being stored to the physical storage. If the storage operating system instance experiences a failure before the consistency point has completed, then the log records within the journal must be replayed in order to make the file system consistent. Replay must be performed to make the file system consistent before client I/O operations can be processed because the client I/O operations would otherwise either fail or return stale data. Thus, clients will be unable to access data within the distributed storage until the replay has successfully completed. Once the replay has successfully updated the file system and stored data within the log records to the physical storage, the client I/O operations can be processed. Replay can result in prolonged client downtime where the clients are unable to access the data within the distributed storage. One reason why replay can take a substantial amount of time is that indirect blocks, pointing to physical disk locations of the user blocks comprising actual user data, must be loaded into memory from disk storage. The indirect blocks are loaded into memory during replay because write operations being replayed from log records within the journal may modify the indirect blocks so that the indirect blocks point to new disk locations of where the write operations are writing data. These indirect blocks may be part of a hierarchical structure (e.g., a file system tree) that includes a root node of a file system at the top, then one or more levels of indirect blocks pointing to blocks within lower levels, and a lowest level of user blocks comprising actual user data. Loading the indirect blocks from disk storage to memory results in a lot of small disk I/O operations due to the small sizes of the indirect blocks (e.g., an indirect block may be comprised of a 4 kb block). Thus, a large number of small disk I/O operations must be performed to load the indirect blocks for the log records into memory (e.g., thousands of 4 kb indirect blocks), which increases the time to perform the replay and thus increases client downtime. Furthermore, the disk locations of the indirect blocks are not yet known until the log records are being processed, and thus the indirect blocks cannot be prefetched into memory. Various embodiments of the techniques provided herein reduce the time to perform the replay by directly caching indirect blocks within log records so that the indirect blocks do not need to be loaded from disk storage to memory during replay. Reducing the time to complete the replay reduces client downtime where client I/O is blocked until replay completes.
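To put rough, illustrative numbers on this cost (the figures here are assumptions, not values from the patent): a journal holding 10,000 log records whose indirect blocks all have to be fetched implies roughly 10,000 random 4 kb reads. At around 100 microseconds per random read on a solid state drive, that adds on the order of a second to replay; at several milliseconds per read on a hard disk drive, it adds tens of seconds, all of it time during which client I/O remains blocked. Caching the indirect blocks in the journal removes these reads entirely.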
In some embodiments of caching indirect blocks into log records of the journal, a write operation is received by a journal caching process from a client. The node evaluates the write operation to identify an indirect block of data targeted by the incoming write operation. The indirect block points to a disk location where the data will be written by the incoming write operation to the distributed storage. The journal caching process may use various criteria for determining whether and how to cache the indirect block. In some embodiments of using the criteria to determine whether to cache the indirect block, the journal caching process determines whether the indirect block is dirty or clean. The indirect block is clean if the indirect block has not already been cached within the journal, and thus there are no already logged write operations that will modify the indirect block. The indirect block is dirty if the indirect block has already been cached within a log record in the journal for a logged write operation that will modify the indirect block. In this scenario, the logged write operation and the incoming write operation target the same data pointed to by the indirect block. If the indirect block is dirty and already cached within the journal, then the indirect block is not re-cached with the incoming write operation into the journal. This is because the cached indirect block will be loaded into memory from the journal during a subsequent replay process, and the cached indirect block only needs to be loaded into memory once. If the indirect block is clean and not already cached within the journal, then the indirect block is cached within free space of a log record within which metadata (e.g., an inode and a file block number of a file targeted by the incoming write operation) and data of the incoming write operation are being logged. In some embodiments of using the criteria to determine how to cache the indirect block, a size of the free space within the log record is determined. In some embodiments, the log record is composed of a header block and one or more journal blocks. The metadata of the write operation is stored within the header block. The data being written by the write operation is stored within the one or more journal blocks. In some embodiments, the header block and the journal blocks are separated out into logical block addresses with fixed block sizes (e.g., each logical block address is 4096 bytes), which allows for block sharing of the log records with a consistency point process that stores the data within the journal blocks to physical storage during a consistency point. Some of those blocks may have free space that is not being consumed by the metadata and/or the data. In some embodiments, the metadata within the header block consumes 512 bytes, and thus there is 3.5 kb of free space remaining within the header block. If the size of the indirect block fits within free space of the header block or any of the journal blocks of the log record, then the indirect block is directly cached into the free space. If the size of the indirect block does not fit within the free space of the header block or any journal blocks, then the indirect block is modified to reduce the size of the indirect block so that the indirect block fits within the free space.
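Putting the whether/how criteria together (the compression and trimming details are elaborated immediately below), a simplified sketch of the caching decision might look like the following. It assumes dirty indirect blocks are tracked by disk location in a map and stands in a trailing-zero trim for the size-reduction step; all identifiers are hypothetical.

package journal

import "bytes"

const blockSize = 4096 // fixed block size shared with the consistency point process

// IndirectBlock points at the disk location of user data.
type IndirectBlock struct {
	DiskLocation uint64 // physical location the pointer refers to
	Payload      []byte // raw representation of the indirect block
}

// cachingJournal remembers which indirect blocks are already cached
// ("dirty"), so each block is cached at most once.
type cachingJournal struct {
	cached map[uint64][]byte // disk location -> cached payload
}

func newCachingJournal() *cachingJournal {
	return &cachingJournal{cached: map[uint64][]byte{}}
}

// trimTrailingZeros stands in for the size-reduction step: unused,
// zero-filled space at the end of the indirect block is dropped.
func trimTrailingZeros(p []byte) []byte {
	return bytes.TrimRight(p, "\x00")
}

// maybeCache returns the bytes to embed into the log record's free space,
// or nil when the indirect block is dirty or still cannot fit (in which
// case a caller could append a new journal block instead).
func (j *cachingJournal) maybeCache(ib IndirectBlock, freeSpace int) []byte {
	if _, dirty := j.cached[ib.DiskLocation]; dirty {
		return nil // replay only needs one in-memory copy
	}
	p := ib.Payload
	if len(p) > freeSpace {
		p = trimTrailingZeros(p) // a real system might also compress
	}
	if len(p) > freeSpace {
		return nil
	}
	j.cached[ib.DiskLocation] = p
	return p
}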
The size of the indirect block can be compressed to a compressed size that fits within the free space, and/or an unused portion of the indirect block may be removed from the indirect block to reduce the size of the indirect block to a size that fits within the free space. In this way, the indirect block is cached within the log record used to log the write operation. Because the indirect blocks are directly cached within log records of the journal stored in memory, the indirect blocks do not need to be retrieved from disk storage into memory during replay. This greatly reduces the time of performing the replay, and thus reduces the client downtime where client I/O operations are blocked until the replay fully completes. Replay is performed after a failure in order to recover from the failure and bring a file system back into a consistent state. During replay, log records are used to generate file system messages that are executed to bring the file system back into the consistent state reflected by the write operations logged within the journal. The write operations may modify indirect blocks during replay. This process is performant and is accomplished with lower latency because the indirect blocks are already available in memory and do not need to be read from disk storage into the memory where the journal is maintained. Various embodiments of the present technology provide for a wide range of technical effects, advantages, and/or improvements to computing systems and components. For example, various embodiments may include one or more of the following technical effects, advantages, and/or improvements: 1) caching indirect blocks associated with data modified by write operations into log records of a journal within which the write operations are logged; 2) selectively determining whether to cache indirect blocks based upon whether the indirect blocks are dirty (e.g., an indirect block already cached within the journal by another write operation targeting the indirect block) or clean (e.g., an indirect block not yet cached) so that indirect blocks are not redundantly cached within the journal; 3) modifying indirect blocks by removing unused portions of indirect blocks and/or by compressing the indirect blocks in order to reduce a size of the indirect blocks to fit within free space of log records; 4) caching a single instance of an indirect block within the journal in memory so that a first write operation modifying the indirect block and all subsequent write operations modifying the indirect block can benefit from the indirect block being cached merely once within the memory used to host the journal; 5) reducing the time to perform a replay after a failure in order to bring a file system to a consistent state by utilizing already cached indirect blocks within memory without having to read the indirect blocks from slower disk storage into the faster memory; and/or 6) reducing client downtime where client I/O operations are blocked during the replay by reducing the time to perform the replay. FIG. 1A is a block diagram illustrating an example of a distributed storage architecture 102 of nodes in accordance with an embodiment of the present technology. The distributed storage architecture 102 hosts a first node 104, a second node 105, a third node 107, and/or other nodes that manage distributed storage 110 accessible to the nodes. The distributed storage 110 is composed of storage devices that are accessible to the nodes.
The distributed storage may be composed of storage devices 112 managed by the first node 104, storage devices 114 managed by the second node 105, and storage devices 116 managed by the third node 107. The distributed storage architecture 102 may implement the nodes as servers, virtual machines, containers within a container orchestration platform (e.g., Kubernetes), serverless threads, etc. The nodes may provide various types of clients with access to the distributed storage. The nodes may provide a client device 120, a client virtual machine 122, a client container application (e.g., a file system service application hosted within a container of a container orchestration platform), and/or other types of clients with access to the distributed storage. In some embodiments, a node may create a LUN within the distributed storage 110. The LUN may be comprised of storage located across one or more of the storage devices of the distributed storage 110. A storage operating system instance of the node may create volumes within the LUN. The storage operating system instance may provide clients with access to data stored within the volumes of the LUN through a network file system. In this way, the clients are provided with network file system access to the distributed storage 110. As will be discussed in further detail, the storage operating system instance may utilize a portion of the LUN as a simulated non-volatile random-access memory (NVRAM) device. The NVRAM device is used as a journal for logging write operations from the clients. When the node receives a write operation, the node may log the write operation into the journal as a log record. As write operations are accumulated within the journal as log records, a consistency point may be reached (e.g., a certain amount of time occurring since a prior consistency point, the journal reaching a certain number of log records, the journal becoming full or close to full, etc.). During the consistency point, the data of the write operations logged within the journal is stored to the distributed storage 110 (e.g., stored to final destinations within the distributed storage 110). FIG. 1B is a block diagram illustrating an example of the first node 104 of the distributed storage architecture 102 in accordance with an embodiment of the present technology. The first node 104 may comprise a data management system (DMS) 142 and a storage management system (SMS) 148. The data management system 142 is a client-facing frontend, which allows clients (e.g., a client 152) to interact with the first node 104. The clients may interact with the data management system 142 through an API endpoint 140 configured to receive API commands from the clients, such as commands to access data stored within the distributed storage 110. The storage management system 148 is a distributed backend (e.g., instances of the storage management system 148 may be distributed amongst multiple nodes of the distributed storage architecture 102) used to store data on storage devices of the distributed storage 124. The data management system 142 may host one or more storage operating system instances 144, such as a storage operating system instance accessible to the client 152 for storing data. In some embodiments, the first storage operating system instance may run on an operating system (e.g., Linux) as a process and may support various protocols, such as NFS, CIFS, and/or other file protocols through which clients may access files through the storage operating system instance.
The storage operating system instance may provide an API layer through which applications may set configurations (e.g., a snapshot policy, an export policy, etc.), settings (e.g., specifying a size or name for a volume), and transmit I/O operations directed to volumes 146 (e.g., FlexVols) exported to the clients by the storage operating system instance. In this way, the applications communicate with the storage operating system instance through this API layer. The data management system 142 may be specific to the first node 104 (e.g., as opposed to the storage management system (SMS) 148 that may be a distributed component amongst nodes of the distributed storage architecture 102). The storage operating system instance may comprise an operating system stack that includes a protocol layer (e.g., a layer implementing NFS, CIFS, etc.), a file system layer, a storage layer (e.g., a RAID layer), etc. The storage operating system instance may provide various techniques for communicating with storage, such as through ZAPI commands, REST API operations, etc. The storage operating system instance may be configured to communicate with the storage management system 148 through iSCSI, remote procedure calls (RPCs), etc. For example, the storage operating system instance may communicate with virtual disks provided by the storage management system 148 to the data management system 142, such as through iSCSI and/or RPC. The storage management system 148 may be implemented by the first node 104 as a storage backend. The storage management system 148 may be implemented as a distributed component with instances that are hosted on each of the nodes of the distributed storage architecture 102. The storage management system 148 may host a control plane layer. The control plane layer may host a full operating system with a frontend and a backend storage system. The control plane layer may form a control plane that includes control plane services, such as the slice service 106 that manages slice files used as indirection layers for accessing data on storage devices of the distributed storage 110, the block service 108 that manages block storage of the data on the storage devices of the distributed storage 110, a transport service used to transport commands through a persistence abstraction layer to a storage manager 150, and/or other control plane services. The slice service 106 may be implemented as a metadata control plane and the block service 108 may be implemented as a data control plane. Because the storage management system 148 may be implemented as a distributed component, the slice service 106 and the block service 108 may communicate with one another on the first node 104 and/or may communicate (e.g., through remote procedure calls) with other instances of the slice service 106 and the block service 108 hosted at other nodes within the distributed storage architecture 102. In some embodiments, the first node 104 may be a current owner of an object (a volume) whose data is sliced/distributed across storage devices of multiple nodes, and the first node 104 can use the storage management system 148 to access the data stored within the storage devices of the other nodes by communicating with the other instances of the storage management system. In some embodiments of the slice service 106, the slice service 106 may utilize slices, such as slice files, as indirection layers. The first node 104 may provide the clients with access to a storage container such as a LUN or volume using the storage operating system instances 144 of the data management system 142.
The LUN may have N logical blocks that may be 1 kb each. If one of the logical blocks is in use and storing data, then the logical block has a block identifier of a block storing the actual data. A slice file for the LUN (or volume) has mappings that map logical block numbers of the LUN (or volume) to block identifiers of the blocks storing the actual data. Each LUN or volume will have a slice file, so there may be hundreds of slice files that may be distributed amongst the nodes of the distributed storage architecture 102. A slice file may be replicated so that there is a primary slice file and one or more secondary slice files that are maintained as copies of the primary slice file. When write operations and delete operations are executed, corresponding mappings that are affected by these operations are updated within the primary slice file. The updates to the primary slice file are replicated to the one or more secondary slice files. Afterwards, the write or delete operations are responded back to a client as successful. Also, read operations may be served from the primary slice file, since the primary slice file may be the authoritative source of logical block to block identifier mappings (see the sketch following this passage). In some embodiments, the control plane layer may not directly communicate with the distributed storage 124 but may instead communicate through the persistence abstraction layer to a storage manager 150 that manages the distributed storage 124. In some embodiments, the storage manager 150 may comprise storage operating system functionality running on an operating system (e.g., Linux). The storage operating system functionality of the storage manager 150 may run directly from internal APIs (e.g., as opposed to protocol access) received through the persistence abstraction layer. In some embodiments, the control plane layer may transmit I/O operations through the persistence abstraction layer to the storage manager 150 using the internal APIs. For example, the slice service 106 may transmit I/O operations through the persistence abstraction layer to a slice volume hosted by the storage manager 150 for the slice service 106. In this way, slice files and/or metadata may be stored within the slice volume exposed to the slice service 106 by the storage manager 150. The first node 104 may implement a journal caching process 154 configured to perform journaling of write operations using a journal 156. In some embodiments, the journal caching process 154 may be hosted by the data management system 142 or the storage management system 148. The journal 156 may be stored within memory of the first node 104, as opposed to within the distributed storage 110, so that the journal caching process 154 can quickly access the journal 156 at lower latencies than accessing the distributed storage 110. When write operations are received by the first node 104, the write operations are initially logged within the journal 156 as log records. These write operations may target data organized within a file system. Once a write operation is logged into the journal 156, a success response for the write operation can be quickly provided back to the client. The success response is returned much quicker than if the success response was returned to the client after executing the write operation to store data to the slower storage devices of the distributed storage 110. Thus, client performance is improved and write operation execution latency is reduced by logging the write operations into the journal 156.
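Before returning to the journal, the slice-file indirection described above can be sketched as follows, assuming a simple in-memory map per slice file and synchronous replication to the secondaries; the Go types are hypothetical stand-ins, not the actual slice-file format.

package slicefile

// BlockID identifies a stored block of actual data.
type BlockID uint64

// SliceFile maps logical block numbers of a LUN (or volume) to the block
// identifiers of the blocks storing the actual data.
type SliceFile struct {
	mappings map[uint64]BlockID
}

// ReplicatedSlice keeps a primary slice file and secondary copies of it.
type ReplicatedSlice struct {
	primary     *SliceFile
	secondaries []*SliceFile
}

// NewReplicatedSlice creates a primary slice file plus n secondary copies.
func NewReplicatedSlice(n int) *ReplicatedSlice {
	r := &ReplicatedSlice{primary: &SliceFile{mappings: map[uint64]BlockID{}}}
	for i := 0; i < n; i++ {
		r.secondaries = append(r.secondaries, &SliceFile{mappings: map[uint64]BlockID{}})
	}
	return r
}

// Write updates the primary mapping first and then replicates the update
// to every secondary before the operation is acknowledged as successful.
func (r *ReplicatedSlice) Write(lbn uint64, id BlockID) {
	r.primary.mappings[lbn] = id
	for _, s := range r.secondaries {
		s.mappings[lbn] = id
	}
	// only now would the client be told the write succeeded
}

// Read serves lookups from the primary slice file, the authoritative
// source of logical block to block identifier mappings.
func (r *ReplicatedSlice) Read(lbn uint64) (BlockID, bool) {
	id, ok := r.primary.mappings[lbn]
	return id, ok
}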
As part of logging the write operation, the journal caching process 154 evaluates the write operation to identify an indirect block of data targeted by the incoming write operation. In particular, the incoming write operation may target a file system that is organized according to a hierarchical tree structure. At the top of the hierarchical tree structure is a root node. A lowest level (level L0) of the hierarchical tree structure comprises user blocks (L0 blocks) within which user data is stored. The hierarchical tree structure may comprise one or more intermediary levels between the root node and the lowest level (level L0) of user blocks. The one or more intermediary levels are used as indirection layers that comprise indirect blocks pointing to blocks in lower levels of the hierarchical tree structure. In some embodiments, a level (level L1) directly above the lowest level (level L0) of user blocks comprises indirect blocks (L1 blocks) pointing to the user blocks. A level (level L2) directly above the level (level L1) of indirect blocks may also comprise indirect blocks (L2 blocks) that point to the indirect blocks (L1 blocks) of the level (level L1). In this way, the root node and the indirect blocks within the intermediary levels of the hierarchical tree structure can be used to traverse down through the hierarchical tree structure to identify and access user data within the user blocks. In some embodiments, an indirect block comprises a pointer to another block. The pointer may comprise a physical volume block number and a virtual volume block number used to access the block. If the indirect block has not already been cached within the journal 156 (e.g., the indirect block is clean), then the journal caching process 154 caches the indirect block within the log record within which the write operation is logged. Otherwise, if the indirect block has already been cached within the journal 156 (e.g., the indirect block is dirty), then the journal caching process 154 does not cache the indirect block within the log record. Once the write operation and/or the indirect block has been cached within the log record, a response is provided back to the client that the write operation was successfully performed. Responding back to the client after merely logging the write operation and caching the indirect block significantly reduces the timespan that the client would otherwise have to wait if the response was provided only after the write operation was executed to disk storage, which would increase the latency of the write operation due to the higher latency of disk storage. A subsequent journal replay of the log record will also be faster because the indirect block is already cached within the log record in memory and will not need to be read from the higher latency disk storage into memory, because the log record is already stored within the memory. Periodically or based upon various triggers, a consistency point process 160 is implemented to perform consistency points to store data of the logged write operations from the log records in the journal 156 to the distributed storage 110. The consistency point process 160 may trigger a consistency point based upon the journal 156 having a threshold number of log records, the journal 156 becoming full or a threshold amount full (e.g., 85% of memory assigned to the journal 156 has been consumed), a threshold amount of time occurring since a prior consistency point, etc.
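The consistency point triggers just listed amount to a small predicate over journal state. A sketch follows, with threshold values that are illustrative assumptions rather than values from the patent.

package journal

import "time"

type cpState struct {
	logRecords int       // records currently in the journal
	bytesUsed  int       // journal memory consumed
	bytesTotal int       // memory assigned to the journal
	lastCP     time.Time // when the prior consistency point ran
}

// shouldTriggerCP returns true when any of the triggers described above
// fires: too many log records, the journal a threshold amount full, or
// too much time elapsed since the prior consistency point.
func (s cpState) shouldTriggerCP(now time.Time) bool {
	const (
		maxRecords  = 10000            // threshold number of log records
		fullPercent = 85               // journal fullness threshold
		maxInterval = 10 * time.Second // time since the prior consistency point
	)
	switch {
	case s.logRecords >= maxRecords:
		return true
	case s.bytesUsed*100 >= s.bytesTotal*fullPercent:
		return true
	case now.Sub(s.lastCP) >= maxInterval:
		return true
	}
	return false
}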
The consistency point process 160 may update file system metadata of a file system and assign disk locations for the data being stored to the storage devices of the distributed storage 110. If there is a failure within the distributed storage architecture 102 (e.g., a failure of the first node 104, or of a different node such that the first node 104 is to take over for the failed node), then a replay process 158 is initiated as part of recovering from the failure. The replay process 158 may be triggered based upon a determination that the failure occurred during the consistency point process 160. Because the consistency point process 160 did not fully complete in this scenario, the replay process 158 is performed to bring the file system into a consistent state. During the replay process 158, log records are used to generate file system messages that are executed to bring the file system into the consistent state. The replay process 158 can be performed more quickly and efficiently because the indirect blocks are cached within the log records in the journal 156 that is stored within the relatively faster and lower latency memory, as compared to having to retrieve the indirect blocks from the slower and higher latency storage devices 112 of the distributed storage 110 into the memory. The indirect blocks are needed by the replay process 158 because logged write operations may modify the indirect blocks (e.g., a write operation may update an indirect block for data to point to a new disk location where the write operation is writing the data). A replay consistency point may be performed to store the data within the log records to the distributed storage 110. FIG. 2 is a flow chart illustrating an example of a set of operations for caching indirect blocks into log records of the journal 156 in accordance with various embodiments of the present technology. This example is discussed in conjunction with FIG. 3, which shows a block diagram illustrating an example of caching indirect blocks into log records of the journal 156 in accordance with an embodiment of the present technology. During operation 202 of method 200, the first node 104 may receive an incoming write operation 304 from a client device 302. In some embodiments, the incoming write operation 304 may be received by the data management system 142 for processing by a storage operating system instance based upon the incoming write operation 304 targeting one of the volumes 146. The incoming write operation 304 may be an operation to write a block of data to a particular file stored within the distributed storage 110 on behalf of the client device 302. The incoming write operation 304 may include the data being written to the file, an inode of the file, and an offset at which the data is being written. A log record 306 may be created within the journal 156 for logging the incoming write operation 304 into the journal 156. The log record 306 may be comprised of one or more blocks. The blocks may have a fixed size (e.g., 4 kb aligned blocks) that is also used by the consistency point process 160 so that the consistency point process 160 can share the blocks within the journal 156 while performing a consistency point. In some embodiments, the log record 306 used to log the incoming write operation 304 comprises a header block 308. The inode of the file and the offset at which the data is being written by the incoming write operation 304 are stored within the header block 308. In some embodiments, the inode and offset may consume less than the entire size of the header block 308, such as 200 bytes of the 4 kb header block.
This leaves free space within the header block 308. The log record 306 comprises one or more journal blocks used to store data of the incoming write operation 304. The incoming write operation 304 may be writing data that is stored into the entire 4096 bytes of a first journal block 310 and 1 byte of a second journal block 312, with the remaining portion of the second journal block 312 having unused free space. During operation 204 of method 200, the incoming write operation 304 may be evaluated by the journal caching process 154 to identify an indirect block 305 of the data targeted by (being written by) the incoming write operation 304. In some embodiments, the incoming write operation 304 is received at the API endpoint 140 and is routed by the data management system 142 to the journal caching process 154. The incoming write operation 304 comprises a payload of what data is being written and specifies where the data is to be written (e.g., writing data to a particular user block of a file that is pointed to by the indirect block 305). In this way, the indirect block 305 can be identified by the journal caching process 154 by evaluating the information within the incoming write operation 304 that specifies where the data is to be written. The indirect block 305 may comprise a pointer used to locate the data targeted by the incoming write operation 304. The indirect block 305 may specify a physical disk location of the data within a storage device of the distributed storage 110. The journal caching process 154 may determine whether and how to cache the indirect block 305 into the log record 306. In some embodiments of determining whether to cache the indirect block 305, the indirect block 305 is evaluated to determine whether the indirect block 305 is clean or dirty, during operation 206 of method 200. In some embodiments, the indirect block 305 is clean if the indirect block 305 is not already cached within the journal 156, thus indicating that there are no logged write operations targeting the data pointed to by the indirect block 305. In some embodiments, the indirect block 305 is clean if the indirect block points to a user block for which there are no logged write operations that are to write to that user block. In some embodiments, the indirect block 305 is clean if there are no logged write operations that will modify the indirect block 305, utilize the indirect block 305, and/or comprise information identifying the indirect block 305 and/or the user block pointed to by the indirect block 305. The indirect block 305 is dirty if the indirect block 305 is already cached within the journal 156, thus indicating that there is at least one logged write operation targeting the data pointed to by the indirect block 305. If the indirect block 305 is dirty (e.g., the indirect block 305 is already cached within the journal 156), then the indirect block 305 is not cached within the log record 306 because the indirect block 305 is already cached within the journal 156. Instead of re-caching a duplicate of the indirect block 305, the log record 306 is created without the indirect block 305 and is stored within the journal 156 in order to log the incoming write operation 304, during operation 208 of method 200. During operation 210 of method 200, a response is returned to the client device 302 to indicate that the incoming write operation 304 was successful. The response is returned based upon the incoming write operation 304 being logged into the journal 156 using the log record 306.
If the indirect block 305 is clean and not dirty, then a determination is made as to whether a size of the indirect block 305 is greater than the free space within each of the blocks (e.g., the 4 kb fixed-size header and journal blocks) of the log record 306 (e.g., free space within the header block 308 or free space within the second journal block 312), during operation 212 of method 200. Free space within the header block 308 may be known because the header block 308 has a fixed size (e.g., 4 kb) and the size of the inode and offset within the header block may be known (e.g., 200 bytes), thus leaving the remaining portion of the header block 308 as free space. In some embodiments, if the header block 308 has sufficient free space, then the header block is used. If the header block 308 has insufficient free space, then each journal block is evaluated until a journal block with sufficient free space is found and is used. If the header block 308 and all journal blocks do not have sufficient free space, then a new journal block is created within the log record 306 to store the indirect block 305. If the size of the indirect block 305 is not greater than the free space within a block of the log record 306 (e.g., the header block 308, the second journal block 312, etc.), then the indirect block 305 is cached within the free space, during operation 216 of method 200. In some embodiments, the indirect block 305 is cached as cached metadata within the header block 308. It may be appreciated that the indirect block 305 may be cached elsewhere within the log record 306 (e.g., within the second journal block 312, within a newly created third journal block created to store the indirect block 305, etc.). In some embodiments, an indicator (e.g., one or more bits, a flag, etc.) may be stored with the indirect block 305 (e.g., just before a starting location of the indirect block 305) within the log record 306 to indicate that the subsequent data following the indicator is the indirect block 305. In some embodiments, if there is data stored after the 200 bytes of the inode and offset stored within the header block 308, then that data will be assumed to be the indirect block 305. During operation 218 of method 200, the response with the success message for the incoming write operation 304 is provided back to the client device 302. Because the journal 156 may be stored within memory by the first node 104, the indirect block 305 may be quickly accessed from the journal 156 without having to read the indirect block 305 from the distributed storage 110 (disk storage) into the memory. If the size of the indirect block 305 is greater than the free space of each block of the log record 306, then the indirect block 305 may be compressed to reduce the size of the indirect block 305 to a size smaller than the free space of at least one block within the log record 306, during operation 214 of method 200. In some embodiments of compressing the indirect block 305, a particular compression algorithm capable of compressing the indirect block 305 to the size smaller than the free space may be selected and used to compress the indirect block 305 so that the indirect block 305 fits within the free space of a block within the log record 306 (e.g., the header block 308). In some embodiments of compressing the indirect block 305, the indirect block 305 may be evaluated to identify a portion of the indirect block 305 to remove.
The portion may correspond to an unused portion of the indirect block 305 or a portion of the indirect block 305 storing data other than the pointer to the data (the disk location of the data) targeted by the incoming write operation 304. The portion is removed from the indirect block 305 to reduce the size of the indirect block 305 so that the indirect block 305 can fit within the free space. In some embodiments, the indirect block 305 may have 1024 bytes of spare space (e.g., known zeros), which may be removed by a compression technique that removes/eliminates known zeros. In some embodiments, if compression will not reduce the size of the indirect block 305 to fit within the free space, then a new journal block may be created within the log record 306 for storing the indirect block 305 (e.g., a new 4 kb journal block to store the 4 kb indirect block 305). Once the indirect block 305 has been compressed, the indirect block 305 is cached within the free space of the log record 306, during operation 216 of method 200. In some embodiments, the indirect block 305 is cached as the cached metadata within the header block 308. It may be appreciated that the indirect block 305 may be cached elsewhere within the log record 306. In some embodiments, if the compressed size of the indirect block 305 does not fit into the free space (e.g., free space of the header block 308), then the indirect block 305 (e.g., uncompressed or compressed) is inserted elsewhere within the log record 306 (e.g., appended to an end of the log record 306). During operation 218 of method 200, the response with the success message for the incoming write operation 304 is provided back to the client device 302. Other information may be cached as the cached metadata within the log record 306. In some embodiments, a RAID checksum may be stored into the cached metadata within the log record 306. The RAID checksum can be subsequently used by a process (e.g., the replay process 158 and/or the consistency point process 160) to verify the indirect block 305. If the RAID checksum within the cached metadata does not match a RAID checksum calculated for the indirect block 305, then the indirect block 305 within the log record 306 is determined to be corrupt and the indirect block will be read from the distributed storage 110 into the memory for use by the process. If the RAID checksums match, then the indirect block 305 within the log record 306 is determined to be valid and can be used by the process. In some embodiments, context information may be stored as the cached metadata within the log record 306. The context information may comprise a buftree identifier (e.g., an identifier of a buftree comprising indirect blocks of the file targeted by the incoming write operation 304), a file block number of the file, and/or a consistency point count (e.g., a current count of consistency points performed by the consistency point process 160). The context information can be subsequently used by a process (e.g., the replay process 158 and/or the consistency point process 160) to determine whether the indirect block 305 within the log record 306 is corrupt or not and/or whether the indirect block is pointing the file system to the correct data in the distributed storage 110. Other log records may be stored within the journal 156. In some embodiments, the journal 156 comprises a second log record 314 for a write operation. A header block 316 of the second log record 314 comprises an inode and offset of a file being modified by the write operation. The header block 316 may comprise cached metadata for the write operation.
The cached metadata may comprise context information, a RAID checksum, and/or an indirect block of data being written by the write operation. The data being written by the write operation may be stored within a first journal block 318, a second journal block 320, and a third journal block 322. In this way, write operations are logged into the journal 156 as log records within which metadata may also be cached. When a consistency point is triggered, the consistency point process 160 stores the data from the log records into the distributed storage 110, which may involve modifying cached indirect blocks within the log records based upon the write operations logged into the journal 156. FIG. 4 is a flow diagram illustrating an example of a set of operations for performing the replay process 158 in accordance with various embodiments of the present technology. This example is discussed in conjunction with FIGS. 5A and 5B, which show block diagrams illustrating examples of performing the replay process 158 in accordance with an embodiment of the present technology. During operation 402 of method 400, the distributed storage architecture 102 is monitored for a failure. In some embodiments, heartbeat communication may be exchanged between nodes. If a node does not receive heartbeat communication from another node, then the node may determine that the other node experienced a failure. In some embodiments, the distributed storage architecture 102 may monitor operational states of nodes to determine whether the nodes are operational or have experienced failures. It may be appreciated that a variety of other failure detection mechanisms may be implemented. During operation 404 of method 400, a determination is made as to whether a failure has been detected. If no failures have been detected, then monitoring of the distributed storage architecture 102 for failures continues. If a failure is detected, then the replay process 158 is performed as part of recovering from the failure. In some embodiments, the first node 104 implements the replay process 158 to replay write operations logged within log records 502 of the journal 156 to bring a file system 508 into a consistent state. As part of implementing the replay process 158, the replay process 158 sequentially reads 504 batches of the log records 502 from the journal 156. The replay process 158 builds file system messages 512 based upon the log records 502, during operation 406 of method 400. The file system messages 512 are used to bring the file system 508 into the consistent state after the failure. The file system 508 could be in an inconsistent state if a consistency point was in progress by the consistency point process 160 during the failure. The replay process 158 identifies indirect blocks and/or other metadata that were cached within the log records 502. During operation 408 of method 400, the replay process stores 510 the indirect blocks from the log records 502 into an in-memory hash table 507 indexed by the disk locations identified by the indirect blocks. The in-memory hash table 507 may be maintained within memory 506 of the first node 104. In some embodiments, various verifications may be performed upon the indirect blocks cached within the log records 502 to determine whether the indirect blocks are valid or corrupt. In some embodiments, RAID checksums for the indirect blocks were cached within the log records 502. The RAID checksums may be compared to RAID checksums calculated for the indirect blocks (e.g., calculated during the replay process 158).
If the RAID checksums match for an indirect block, then the indirect block is valid and is stored within the in-memory hash table 507. If the RAID checksums do not match, then the indirect block is determined to be corrupt and is not stored into the in-memory hash table 507. Instead, the indirect block is read from the distributed storage 110 into the in-memory hash table 507. In some embodiments, context information (e.g., a buftree identifier, a file block number, a consistency point count, etc.) may be used to determine whether the indirect block is not corrupt and is pointing the file system 508 to the correct data within the distributed storage 110. If the indirect block points to data that does not match the context information, then the indirect block may be corrupt and is not stored into the in-memory hash table 507. Instead, the indirect block is read from the distributed storage 110 into the in-memory hash table 507. Otherwise, if the data pointed to by the indirect block matches the context information, then the indirect block is stored 510 into the in-memory hash table 507. During operation 410 of method 400, the replay process 158 executes the file system messages 512 to update the file system 508 to a consistent state. Some of the file system messages 512 may relate to write operations that utilize and/or modify the indirect blocks within the in-memory hash table 507, and thus the file system messages 512 utilize the in-memory hash table 507 during execution. Because only a single instance of an indirect block is cached within the log records 502, that single instance of the indirect block is stored 510 into the in-memory hash table 507, which may be accessed and/or modified by multiple file system messages derived from write operations targeting the data pointed to by the indirect block. This also may improve the efficiency of the replay process 158 because multiple file system messages (write operations) can benefit from a single instance of an indirect block being cached within the in-memory hash table 507. During operation 412 of method 400, a determination may be made as to whether a replay consistency point has been reached (e.g., a threshold amount of time since a last consistency point, a certain number of file system messages being executed, etc.), as illustrated by FIG. 5B. If the replay consistency point has not been reached, then the file system messages may continue to be executed. If the replay consistency point has been reached, then the consistency point process 160 is triggered to store 550 data (e.g., data being written by the write operations used to build the file system messages 512) to disk locations indicated by the indirect blocks within the in-memory hash table 507 in the memory 506 of the first node 104, during operation 414 of method 400. In this way, the replay process 158 and the consistency point process 160 are utilized to bring the file system 508 into a consistent state and to store the data from the log records to the distributed storage 110. FIG. 6 is an example of a computer readable medium 600 in which various embodiments of the present technology may be implemented. An example embodiment of a computer-readable medium or a computer-readable device that is devised in these ways is illustrated in FIG. 6, wherein the implementation comprises a computer-readable medium 608, such as a compact disc-recordable (CD-R), a digital versatile disc-recordable (DVD-R), a flash drive, a platter of a hard disk drive, etc., on which is encoded computer-readable data 606.
This computer-readable data 606, such as binary data comprising at least one of a zero or a one, in turn comprises processor-executable computer instructions 604 configured to operate according to one or more of the principles set forth herein. In some embodiments, the processor-executable computer instructions 604 are configured to perform at least some of the exemplary methods 602 disclosed herein, such as method 200 of FIG. 2 and/or method 400 of FIG. 4, for example. In some embodiments, the processor-executable computer instructions 604 are configured to implement a system, such as at least some of the exemplary systems disclosed herein, such as system 100 of FIGS. 1A and 1B, system 300 of FIG. 3, and/or system 500 of FIGS. 5A and 5B, for example. Many such computer-readable media are contemplated to operate in accordance with the techniques presented herein. In some embodiments, the described methods and/or their equivalents may be implemented with computer executable instructions. Thus, in some embodiments, a non-transitory computer readable/storage medium is configured with stored computer executable instructions of an algorithm/executable application that when executed by a machine(s) cause the machine(s) (and/or associated components) to perform the method. Example machines include but are not limited to a processor, a computer, a server operating in a cloud computing system, a server configured in a Software as a Service (SaaS) architecture, a smart phone, and so on. In some embodiments, a computing device is implemented with one or more executable algorithms that are configured to perform any of the disclosed methods. It will be appreciated that processes, architectures and/or procedures described herein can be implemented in hardware, firmware and/or software. It will also be appreciated that the provisions set forth herein may apply to any type of special-purpose computer (e.g., file host, storage server and/or storage serving appliance) and/or general-purpose computer, including a standalone computer or portion thereof, embodied as or including a storage system. Moreover, the teachings herein can be configured to a variety of storage system architectures including, but not limited to, a network-attached storage environment and/or a storage area network and disk assembly directly attached to a client or host computer. Storage system should therefore be taken broadly to include such arrangements in addition to any subsystems configured to perform a storage function and associated with other equipment or systems. In some embodiments, methods described and/or illustrated in this disclosure may be realized in whole or in part on computer-readable media. Computer readable media can include processor-executable instructions configured to implement one or more of the methods presented herein, and may include any mechanism for storing this data that can be thereafter read by a computer system. Examples of computer readable media include (hard) drives (e.g., accessible via network attached storage (NAS)), Storage Area Networks (SAN), volatile and non-volatile memory, such as read-only memory (ROM), random-access memory (RAM), electrically erasable programmable read-only memory (EEPROM) and/or flash memory, compact disk read only memory (CD-ROM)s, CD-Rs, compact disk re-writeable (CD-RW)s, DVDs, magnetic tape, optical or non-optical data storage devices and/or any other medium which can be used to store data.
Some examples of the claimed subject matter have been described with reference to the drawings, where like reference numerals are generally used to refer to like elements throughout. In the description, for purposes of explanation, numerous specific details are set forth in order to provide an understanding of the claimed subject matter. It may be evident, however, that the claimed subject matter may be practiced without these specific details. Nothing in this detailed description is admitted as prior art. Although the subject matter has been described in language specific to structural features or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing at least some of the claims. Various operations of embodiments are provided herein. The order in which some or all of the operations are described should not be construed to imply that these operations are necessarily order dependent. Alternative ordering will be appreciated given the benefit of this description. Further, it will be understood that not all operations are necessarily present in each embodiment provided herein. Also, it will be understood that not all operations are necessary in some embodiments. Furthermore, the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard application or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term "article of manufacture" as used herein is intended to encompass a computer application accessible from any computer-readable device, carrier, or media. Of course, many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter. As used in this application, the terms "component", "module," "system", "interface", and the like are generally intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component includes a process running on a processor, a processor, an object, an executable, a thread of execution, an application, or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components residing within a process or thread of execution and a component may be localized on one computer or distributed between two or more computers. Moreover, "exemplary" is used herein to mean serving as an example, instance, illustration, etc., and not necessarily as advantageous. As used in this application, "or" is intended to mean an inclusive "or" rather than an exclusive "or". In addition, "a" and "an" as used in this application are generally to be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form. Also, at least one of A and B and/or the like generally means A or B and/or both A and B. Furthermore, to the extent that "includes", "having", "has", "with", or variants thereof are used, such terms are intended to be inclusive in a manner similar to the term "comprising". Many modifications may be made to the instant disclosure without departing from the scope or spirit of the claimed subject matter.
Unless specified otherwise, “first,” “second,” or the like are not intended to imply a temporal aspect, a spatial aspect, an ordering, etc. Rather, such terms are merely used as identifiers, names, etc. for features, elements, items, etc. For example, a first set of information and a second set of information generally correspond to set of information A and set of information B or two different or two identical sets of information or the same set of information. Also, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art based upon a reading and understanding of this specification and the annexed drawings. The disclosure includes all such modifications and alterations and is limited only by the scope of the following claims. In particular regard to the various functions performed by the above described components (e.g., elements, resources, etc.), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., that is functionally equivalent), even though not structurally equivalent to the disclosed structure. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. | 57,149 |
11861199 | DETAILED DESCRIPTION Some examples of the claimed subject matter are now described with reference to the drawings, where like reference numerals are generally used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide an understanding of the claimed subject matter. It may be evident, however, that the claimed subject matter may be practiced without these specific details. Nothing in this detailed description is admitted as prior art. The techniques described herein are directed to framing blocks of data from a persistent memory tier of a node to a file system tier of the node in order to enable data management operations, such as file clone and snapshot operations, across both the file system tier and the persistent memory tier. In particular, blocks within the persistent memory tier that comprise more up-to-date data than corresponding blocks within the file system tier are identified and framed by sending messages from the persistent memory tier to the file system tier for notifying the file system tier that more up-to-date data for the corresponding blocks within the file system tier is stored within the blocks of the persistent memory tier. In this way, when a data management operation is executed upon the file system tier, the data management operation will be able to identify locations of the more up-to-date data within the persistent memory tier so that the data management operation does not operate upon stale or missing data within the file system tier. As an example, the file system tier may implement a storage file system that stores and organizes data within storage, such as cloud storage, hard disk drives, solid state drives, block-addressable storage, etc. The persistent memory tier may implement a persistent memory file system that stores and organizes data within persistent memory, such as byte-addressable storage. Because the persistent memory of the persistent memory tier may be relatively faster and provide relatively lower latency than the storage of the file system tier, certain data such as frequently accessed data or recently accessed data may be stored within the persistent memory tier, such as where copies of data from the file system tier are copied into the persistent memory tier. Unfortunately, when operations modify the data within the persistent memory tier through the persistent memory file system, the storage file system of the file system tier is unaware of such modifications, and thus the file system tier will comprise stale or missing data. When the storage file system of the file system tier implements a data management operation, such as a snapshot operation or a file clone operation, the data management operation would operate upon the stale or missing data as opposed to the up-to-date data within the persistent memory tier because the file system tier is unaware of the fact that the persistent memory tier comprises more up-to-date data. Accordingly, as provided herein, framing is performed to notify the file system tier that blocks within the persistent memory tier comprise more up-to-date data than corresponding blocks within the file system tier. 
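The framing flow described above can be sketched as follows; the class and method names (PersistentMemoryTier, FileSystemTier, frame, and so on) are hypothetical illustrations rather than an API defined by this disclosure, and both tiers are modeled as simple dictionaries.

```python
# Illustrative sketch of "framing": the persistent memory (pmem) tier tells the
# file system tier which blocks have more up-to-date data in pmem, so that
# cross-tier operations (snapshots, file clones) do not act on stale data.
# All names here are hypothetical; the patent does not define this API.

class FileSystemTier:
    def __init__(self):
        self.blocks = {}          # block number -> data in block storage
        self.framed = {}          # block number -> pmem location of newer data

    def handle_frame_message(self, block_no, pmem_location):
        # Record that the up-to-date copy of this block lives in pmem.
        self.framed[block_no] = pmem_location

    def snapshot(self, pmem_tier):
        # A cross-tier data management operation: prefer framed (pmem) data.
        return {
            block_no: pmem_tier.read(self.framed[block_no])
            if block_no in self.framed else data
            for block_no, data in self.blocks.items()
        }

class PersistentMemoryTier:
    def __init__(self):
        self.pages = {}           # pmem location -> data
        self.dirty = {}           # block number -> pmem location (newer than fs tier)

    def write(self, block_no, pmem_location, data):
        self.pages[pmem_location] = data
        self.dirty[block_no] = pmem_location

    def read(self, pmem_location):
        return self.pages[pmem_location]

    def frame(self, fs_tier):
        # Send one message per modified block; the fs tier records the mapping.
        for block_no, loc in self.dirty.items():
            fs_tier.handle_frame_message(block_no, loc)
        self.dirty.clear()

fs, pmem = FileSystemTier(), PersistentMemoryTier()
fs.blocks = {0: b"old-A", 1: b"B"}
pmem.write(0, 17, b"new-A")       # block 0 modified through the pmem file system
pmem.frame(fs)                    # notify the fs tier before any snapshot runs
assert fs.snapshot(pmem) == {0: b"new-A", 1: b"B"}
```

Note that, in this sketch, framing copies no data; the file system tier merely records where the authoritative copy lives, which is what lets a later snapshot or clone resolve each block to its most up-to-date location.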
Once the file system tier has been notified of what blocks within the persistent memory tier comprise more up-to-date data than corresponding blocks within the file system tier, data management operations may be implemented cross-tier across both data within the file system tier and data within the persistent memory tier. In this way, file clones, snapshots, and other data management operations will execute upon and reflect up-to-date data stored across both of the tiers, as opposed to merely stale or missing data within the file system tier. Thus, the node is capable of leveraging the benefits of persistent memory such as low latency without losing the ability to implement data management operations because the data management operations can be implemented across both the persistent memory tier and the file system tier in order to capture the most up-to-date data. In an embodiment, a node may be implemented as a computing device, a server, an on-premise device, a virtual machine, hardware, software, or combination thereof. The node may be configured to manage storage on behalf of client devices using a storage environment, such as hard drives, solid state drives, cloud storage, or other types of storage within which client data may be stored through volumes, aggregates, cloud storage objects, etc. The node may manage this storage utilizing a storage operating system that can provide data protection and storage efficiency for the client data. For example, the storage operating system may implement and/or interact with storage services that can provide snapshot functionality, data migration functionality, compression, deduplication, encryption, backup and restore, cloning, synchronous and/or asynchronous replication, data mirroring, and/or other functionality for efficiently storing, protecting, and managing client data stored by a file system tier. The node may implement a storage file system for the file system tier through the storage operating system for organizing and managing the client data. In this way, a client device can connect to the node in order to access the client data through the storage file system. The storage file system may be tailored to access and store data within block-addressable storage media, such as disk drives, solid state drives, etc. The storage file system may utilize data structures and/or functionality tailored for block-addressable semantics that are used to locate, store, and retrieve client data from blocks within the block-addressable storage media. As new types of storage media become available, it may be advantageous to leverage such storage media for use by the node for storing client data. However, the storage file system may not be tailored to leverage certain types of storage media because the storage file system may have been created and tailored to only be capable of managing the storage of client data within block-addressable storage media, such as within hard drives, solid state drives, disk drives, etc. Thus, the storage file system may be unable to natively utilize these newer and faster types of storage media, such as persistent memory (pmem), that have different storage semantics than block-addressable storage media. Persistent memory provides relatively lower latency and faster access speeds than block-addressable storage media that the storage file system is natively tailored to manage. 
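As a toy illustration of the semantic difference motivating the separate tiers (general storage-media behavior, not code from this disclosure): block-addressable media force a read-modify-write of an entire block, whereas byte-addressable persistent memory can be updated in place at byte granularity.

```python
# Toy contrast between block-addressable and byte-addressable semantics
# (illustrative only). Block storage must read, modify, and rewrite a whole
# block; byte-addressable persistent memory can be updated in place.

BLOCK_SIZE = 4096

block_device = bytearray(BLOCK_SIZE * 4)       # stand-in for a disk/SSD
persistent_memory = bytearray(BLOCK_SIZE * 4)  # stand-in for byte-addressable pmem

def block_write(offset, data):
    # Read-modify-write at block granularity, as a block file system would.
    block_no = offset // BLOCK_SIZE
    start = block_no * BLOCK_SIZE
    block = bytearray(block_device[start:start + BLOCK_SIZE])  # read whole block
    pos = offset - start
    block[pos:pos + len(data)] = data                          # modify in memory
    block_device[start:start + BLOCK_SIZE] = block             # write whole block

def pmem_write(offset, data):
    # Byte-granular store, as a persistent memory file system could issue.
    persistent_memory[offset:offset + len(data)] = data

block_write(5000, b"hello")
pmem_write(5000, b"hello")
assert bytes(block_device[5000:5005]) == bytes(persistent_memory[5000:5005])
```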
Because the persistent memory is byte-addressable instead of block-addressable, the storage file system, data structures of the storage file system used to locate data within the block-addressable storage media, and the commands used to store and retrieve data from the block-addressable storage media cannot be leveraged for the byte-addressable persistent memory. Accordingly, a persistent memory tier, separate from the file system tier, is implemented with data structures and functionality such as commands for accessing and managing byte-addressable persistent memory of the node. This persistent memory tier also enables the ability to capture snapshots of volumes and file clones of files whose data or portions thereof may be stored within the persistent memory (e.g., volume snapshots and file clones may be captured of volumes and files whose data is at least partially stored or completely stored within the persistent memory). The persistent memory tier provides a tiering solution for storage managed by a storage operating system of a node, such that data may be tiered between the storage such as block-addressable storage and the persistent memory. The persistent memory tier implements a persistent memory file system tailored for byte-addressable storage in order to access the persistent memory for storing and retrieving data. The persistent memory tier is hosted at a level within a storage operating system storage stack above a file system tier used to manage the storage file system that stores data within block-addressable storage, such as disk drives and solid state storage. The persistent memory tier implements the persistent memory file system that is separate from the storage file system implemented by the file system tier. The persistent memory file system is tailored for byte-addressable access and storage semantics of the persistent memory, which has an address space arranged into a contiguous set of pages, such as 4 KB pages or any other size of pages within the persistent memory. One of the pages within the file system, such as a page (1), comprises a file system superblock. The file system superblock is a root of a file system tree of the persistent memory file system for the persistent memory. The file system superblock comprises a location of a list of file system info objects. In an embodiment, the list of file system info objects is a linked list of pages within the persistent memory, where each page contains a set of file system info objects. If there are more file system info objects than what can be stored within a single page (e.g., a single 4 KB page), then the remaining file system info objects are stored within one or more additional pages within the persistent memory (e.g., within a second 4 KB page). Each page will contain a location of a next page comprising file system info objects. Each file system info object defines a file system instance for a volume, such as an active file system of the volume or snapshots of the volume. Each file system info object comprises a persistent memory location of a root of an inofile (a page tree) comprising inodes of files of the file system instance defined by the file system info object. Each file system instance will have its own inofile of inodes for that file system instance. An inode comprises metadata about a corresponding file of the file system instance. The inofile may comprise indirect pages (intermediate nodes in the page tree) and direct blocks (leaf nodes in the page tree). 
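The superblock, linked list of file system info pages, and per-instance inofile roots described above can be modeled compactly as follows; the field names and the dictionary standing in for persistent memory pages are assumptions for illustration only.

```python
# A compact model of the on-pmem layout described above: a superblock that
# points at a linked list of pages holding file system info objects, each of
# which points at the root of that instance's inofile. Field names are
# hypothetical; the patent describes the structures, not this exact encoding.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FileSystemInfo:
    volume: str               # which volume this instance belongs to
    kind: str                 # "active" or "snapshot"
    inofile_root: int         # pmem page number of the root of the inofile

@dataclass
class FsInfoPage:
    infos: List[FileSystemInfo] = field(default_factory=list)
    next_page: Optional[int] = None   # location of the next page of infos, if any

@dataclass
class Superblock:
    fs_info_list: int         # pmem page number of the first FsInfoPage

pages = {}                    # page number -> page object (stand-in for pmem)
pages[2] = FsInfoPage([FileSystemInfo("vol1", "active", 10),
                       FileSystemInfo("vol1", "snapshot", 11)])
pages[1] = Superblock(fs_info_list=2)   # page (1) holds the superblock

def iter_fs_infos(pages, superblock):
    # Walk the linked list of file system info pages.
    loc = superblock.fs_info_list
    while loc is not None:
        page = pages[loc]
        yield from page.infos
        loc = page.next_page

assert [i.kind for i in iter_fs_infos(pages, pages[1])] == ["active", "snapshot"]
```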
The direct blocks of the inofile are logically arranged as an array of the inodes indexed by file identifiers of each file represented by the inodes. Each inode stores a location of a root of a file tree for a given file. Direct blocks of the file tree of the file (leaf nodes) comprise the actual user data stored within the file. Each indirect page of the file tree of the file (intermediate nodes) comprises 512 indirect entries or any other number of indirect entries. The indirect entries are used to find a page's child page for a given offset in a user file or the inofile. That is, an indirect entry (a page) comprises a reference to a block/node (a child page) one level lower within a page tree or file tree. An inode of a file points to a single inode root indirect page. This inode root indirect page can point directly to direct blocks comprising file data if the 512 indirect entries are sufficient to index all pages of the file; otherwise, the inode root indirect page points to a next level down of indirect pages. A size of a file determines the number of levels of indirect pages. For example, the pages are arranged as the file tree with one or more levels, such that the lowest level comprises direct blocks of user data and levels above the lowest level are indirect levels of indirect pages with pointers to blocks in a level below. In an embodiment, the file tree may be a balanced tree where the direct blocks of user data are all the same distance from the root of the file tree. A given offset in a file for a page is at a fixed path down the file tree based upon that offset. Only files that have been selected for tiering will be present in the persistent memory, and only data present in the persistent memory will have direct blocks in the file tree of the file, and thus an indirect page may lack a reference to a direct block, or may comprise an indicator of its absence, if that block is not present in persistent memory. When a page is removed from the persistent memory, the page will be effectively removed from the file tree by a scavenging process. A per-page structure is used to track metadata about each page within the persistent memory. Each page will correspond to a single per-page structure that tracks/stores metadata about the page. In an embodiment, the per-page structures are stored in an array within the persistent memory, sized at one entry within the array per page. Per-page structures correspond to file system superblock pages, file system info pages, indirect pages of the inofile, user data pages, per-page structure array pages, etc. The persistent memory can be viewed as an array of pages (e.g., 4 KB pages or any other size of pages) indexed by page block numbers, which may be tracked by the per-page structures. It may be appreciated that in some instances, the term block and page within the persistent memory may be used to refer to the same storage structure within the persistent memory. In an embodiment of implementing per-page structure to page mappings (e.g., mappings of a per-page structure to a physical page within the persistent memory) using a one-to-one mapping, a per-page structure for a page can be fixed at a page block number offset within a per-page structure table. In an embodiment of implementing per-page structure to page mappings using a variable mapping, a per-page structure of a page stores the page block number of the page represented by the per-page structure. 
With the variable mapping, persistent memory objects (e.g., objects stored within the file system superblock to point to the list of file system info objects; objects within a file system info object to point to the root of the inofile; objects within an inode to point to a root of a file tree of a file; and objects within indirect pages to point to child blocks (child pages)) will store a per-page structure ID of its per-page structure as a location of the page being pointed to, and will redirect through the per-page structure using the per-page structure ID to identify the physical block number of the page being pointed to. Thus, an indirect entry of an indirect page will comprise a per-page structure ID that can be used to identify a per-page structure having a physical block number of the page pointed to by the indirect page. An indirect entry will comprise a generation count of a page being pointed to by the indirect entry. Each per-page structure will also store a generation count, which is incremented each time a corresponding page is scavenged, i.e., evicted from the persistent memory. When a page is linked into a parent indirect page (an indirect entry), the per-page structure ID is set and a current generation count is set. As the persistent memory becomes full, pages must be scavenged (evicted) for reuse in storing other data and/or metadata. Instead of a scavenging process having to locate a page's parent linking to the page, zero out the per-page structure ID, and update a checksum, the generation count within the per-page structure is simply increased. Any code and commands that walk the file system tree will first check for a generation count mismatch between a generation count within an indirect entry and a generation count within the per-page structure. If there is a mismatch, then the code and commands will know that the page being pointed to has been scavenged and evicted from the persistent memory. Thus, in a single step, all references to the scavenged page will be invalidated because the generation count in all of the indirect pages referencing the scavenged page will not match the increased generation count within the per-page structure. In an embodiment, a generation count of a child page pointed to by an indirect entry of an indirect page is stored within a generation count field within the indirect entry. A per-page structure ID of a per-page structure for the child page pointed to by the indirect entry of the indirect page is stored within a per-page structure field within the indirect entry. The generation count field and the per-page structure field may be stored within 8 bytes of the indirect entry so that the generation count field and the per-page structure field are 8 byte aligned. This allows the generation count field and the per-page structure field to be atomically set together, such that either both fields will successfully be set or both fields will fail to be set, such as in the event of a crash or failure, so that there is no partial modification of either field (e.g., both fields can be set by a single operation to the persistent memory). This prevents data loss that would otherwise occur if only one of the fields, or portions thereof, were updated before the crash or failure. 
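A minimal sketch of the variable mapping and generation-count invalidation described above follows, assuming hypothetical structure names; the essential point is that a single increment stands in for scavenging and lazily invalidates every stale reference.

```python
# Sketch of generation-count invalidation: an indirect entry stores both a
# per-page structure ID and the generation count it saw when the child page
# was linked. Scavenging a page just bumps the count in its per-page
# structure, which lazily invalidates every referencing entry. Hypothetical
# names; the patent does not prescribe this code.
from dataclasses import dataclass

@dataclass
class PerPageStructure:
    page_block_number: int    # physical pmem page (the variable mapping)
    generation: int = 0

@dataclass
class IndirectEntry:
    pps_id: int               # per-page structure ID of the child page
    generation: int           # generation count captured at link time

pps_table = {7: PerPageStructure(page_block_number=42)}

def link_child(pps_id):
    # Capture (pps_id, generation) together; here both fields are set in one
    # assignment to mimic the 8-byte-aligned atomic store described above.
    return IndirectEntry(pps_id=pps_id, generation=pps_table[pps_id].generation)

def resolve(entry):
    # Walkers check for a generation mismatch before trusting the reference.
    pps = pps_table[entry.pps_id]
    if pps.generation != entry.generation:
        return None           # child was scavenged; reference is stale
    return pps.page_block_number

entry = link_child(7)
assert resolve(entry) == 42
pps_table[7].generation += 1  # scavenge: one increment invalidates all refs
assert resolve(entry) is None
```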
In an example of updating the fields based upon a copy-on-write operation of a page, a parent indirect entry of the page is updated to reflect a new per-page structure ID and generation count of the page targeted by the copy-on-write operation. A per-page structure of a page may comprise additional metadata information. In an embodiment, the per-page structure comprises a checksum of content in the page. When the page is updated in place by a first transaction, the checksum may be updated by a second transaction. If the second transaction does not complete due to a crash, then the existing checksum may not match the data. However, this does not necessarily imply a corruption since that data was updated by the first transaction. Thus, the second transaction can be tried again after recovery from the crash. In an embodiment, the per-page structure comprises a reference count to the page. The reference count may correspond to how many references to the page there are by an active file system of a volume, volume snapshots of the volume, and file clones of a file whose data is stored within the page. In an example, the persistent memory file system for the persistent memory may utilize hierarchical reference counting to support volume snapshots and file clones. Thus, a hierarchical reference on the page may be stored within the per-page structure. FIG.1is a diagram illustrating an example operating environment100in which an embodiment of the techniques described herein may be implemented. In one example, the techniques described herein may be implemented within a client device128, such as a laptop, a tablet, a personal computer, a mobile device, a server, a virtual machine, a wearable device, etc. In another example, the techniques described herein may be implemented within one or more nodes, such as a first node130and/or a second node132within a first cluster134, a third node136within a second cluster138, etc. A node may comprise a storage controller, a server, an on-premise device, a virtual machine such as a storage virtual machine, hardware, software, or combination thereof. The one or more nodes may be configured to manage the storage and access to data on behalf of the client device128and/or other client devices. In another example, the techniques described herein may be implemented within a distributed computing platform102such as a cloud computing environment (e.g., a cloud storage environment, a multi-tenant platform, a hyperscale infrastructure comprising scalable server architectures and virtual networking, etc.) configured to manage the storage and access to data on behalf of client devices and/or nodes. In yet another example, at least some of the techniques described herein are implemented across one or more of the client device128, the one or more nodes130,132, and/or136, and/or the distributed computing platform102. For example, the client device128may transmit operations, such as data operations to read data and write data and metadata operations (e.g., a create file operation, a rename directory operation, a resize operation, a set attribute operation, etc.), over a network126to the first node130for implementation by the first node130upon storage. The first node130may store data associated with the operations within volumes or other data objects/structures hosted within locally attached storage, remote storage hosted by other computing devices accessible over the network126, storage provided by the distributed computing platform102, etc. 
The first node130may replicate the data and/or the operations to other computing devices, such as to the second node132, the third node136, a storage virtual machine executing within the distributed computing platform102, etc., so that one or more replicas of the data are maintained. For example, the third node136may host a destination storage volume that is maintained as a replica of a source storage volume of the first node130. Such replicas can be used for disaster recovery and failover. In an embodiment, the techniques described herein are implemented by a storage operating system or are implemented by a separate module that interacts with the storage operating system. The storage operating system may be hosted by the client device128, a node, the distributed computing platform102, or across a combination thereof. In an example, the storage operating system may execute within a storage virtual machine, a hyperscaler, or other computing environment. The storage operating system may implement a storage file system to logically organize data within storage devices as one or more storage objects and provide a logical/virtual representation of how the storage objects are organized on the storage devices. A storage object may comprise any logically definable storage element stored by the storage operating system (e.g., a volume stored by the first node130, a cloud object stored by the distributed computing platform102, etc.). Each storage object may be associated with a unique identifier that uniquely identifies the storage object. For example, a volume may be associated with a volume identifier uniquely identifying that volume from other volumes. The storage operating system also manages client access to the storage objects. The storage operating system may implement a file system for logically organizing data. For example, the storage operating system may implement a write anywhere file layout for a volume where modified data for a file may be written to any available location as opposed to a write-in-place architecture where modified data is written to the original location, thereby overwriting the previous data. In an example, the file system may be implemented through a file system layer that stores data of the storage objects in an on-disk format representation that is block-based (e.g., data is stored within 4 kilobyte blocks and inodes are used to identify files and file attributes such as creation time, access permissions, size and block location, etc.). In an example, deduplication may be implemented by a deduplication module associated with the storage operating system. Deduplication is performed to improve storage efficiency. One type of deduplication is inline deduplication that ensures blocks are deduplicated before being written to a storage device. Inline deduplication uses a data structure, such as an incore hash store, which maps fingerprints of data to data blocks of the storage device storing the data. Whenever data is to be written to the storage device, a fingerprint of that data is calculated and the data structure is looked up using the fingerprint to find duplicates (e.g., potentially duplicate data already stored within the storage device). If duplicate data is found, then the duplicate data is loaded from the storage device and a byte-by-byte comparison may be performed to ensure that the duplicate data is an actual duplicate of the data to be written to the storage device. 
If the data to be written is a duplicate of the loaded duplicate data, then the data to be written to disk is not redundantly stored to the storage device. Instead, a pointer or other reference is stored in the storage device in place of the data to be written to the storage device. The pointer points to the duplicate data already stored in the storage device. A reference count for the data may be incremented to indicate that the pointer now references the data. If at some point the pointer no longer references the data (e.g., the deduplicated data is deleted and thus no longer references the data in the storage device), then the reference count is decremented. In this way, inline deduplication is able to deduplicate data before the data is written to disk. This improves the storage efficiency of the storage device. Background deduplication is another type of deduplication that deduplicates data already written to a storage device. Various types of background deduplication may be implemented. In an example of background deduplication, data blocks that are duplicated between files are rearranged within storage units such that one copy of the data occupies physical storage. References to the single copy can be inserted into a file system structure such that all files or containers that contain the data refer to the same instance of the data. Deduplication can be performed on a data storage device block basis. In an example, data blocks on a storage device can be identified using a physical volume block number. The physical volume block number uniquely identifies a particular block on the storage device. Additionally, blocks within a file can be identified by a file block number. The file block number is a logical block number that indicates the logical position of a block within a file relative to other blocks in the file. For example, file block number 0 represents the first block of a file, file block number 1 represents the second block, etc. File block numbers can be mapped to a physical volume block number that is the actual data block on the storage device. During deduplication operations, blocks in a file that contain the same data are deduplicated by mapping the file block number for the block to the same physical volume block number, and maintaining a reference count of the number of file block numbers that map to the physical volume block number. For example, assume that file block number 0 and file block number 5 of a file contain the same data, while file block numbers 1-4 contain unique data. File block numbers 1-4 are mapped to different physical volume block numbers. File block number 0 and file block number 5 may be mapped to the same physical volume block number, thereby reducing storage requirements for the file. Similarly, blocks in different files that contain the same data can be mapped to the same physical volume block number. For example, if file block number 0 of file A contains the same data as file block number 3 of file B, file block number 0 of file A may be mapped to the same physical volume block number as file block number 3 of file B. In another example of background deduplication, a changelog is utilized to track blocks that are written to the storage device. Background deduplication also maintains a fingerprint database (e.g., a flat metafile) that tracks all unique block data such as by tracking a fingerprint and other filesystem metadata associated with block data. 
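Before turning to background deduplication in more detail, the inline path described above (fingerprint lookup, byte-by-byte verification, and reference counting in place of a redundant write) can be sketched as follows; the sha256 fingerprint and the dictionary hash store are illustrative stand-ins, and the same reference counting underlies the file block number sharing example given above.

```python
# Sketch of inline deduplication: fingerprint the incoming block, look it up
# in an in-core hash store, verify byte-by-byte, and add a reference instead
# of storing a second copy. Illustrative, with hypothetical names.
import hashlib

storage = {}        # physical block number -> data
hash_store = {}     # fingerprint -> physical block number
ref_counts = {}     # physical block number -> number of references
next_pbn = 0

def inline_write(data):
    global next_pbn
    fingerprint = hashlib.sha256(data).hexdigest()
    pbn = hash_store.get(fingerprint)
    # Byte-by-byte comparison guards against fingerprint collisions.
    if pbn is not None and storage[pbn] == data:
        ref_counts[pbn] += 1          # duplicate: add a reference, no new copy
        return pbn
    pbn, next_pbn = next_pbn, next_pbn + 1
    storage[pbn] = data
    hash_store[fingerprint] = pbn
    ref_counts[pbn] = 1
    return pbn

a = inline_write(b"same block")
b = inline_write(b"same block")   # deduplicated before hitting storage
c = inline_write(b"other block")
assert a == b and a != c and ref_counts[a] == 2 and len(storage) == 2
```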
Background deduplication can be periodically executed or triggered based upon an event such as when the changelog fills beyond a threshold. As part of background deduplication, data in both the changelog and the fingerprint database is sorted based upon fingerprints. This ensures that all duplicates are sorted next to each other. The duplicates are moved to a dup file. The unique changelog entries are moved to the fingerprint database, which will serve as duplicate data for a next deduplication operation. In order to optimize certain filesystem operations needed to deduplicate a block, duplicate records in the dup file are sorted in certain filesystem semantic order (e.g., inode number and block number). Next, the duplicate data is loaded from the storage device and a whole-block byte-by-byte comparison is performed to make sure the duplicate data is an actual duplicate of the data to be written to the storage device. Afterward, the block in the changelog is modified to point directly to the duplicate data as opposed to redundantly storing data of the block. In an example, deduplication operations performed by a data deduplication layer of a node can be leveraged for use on another node during data replication operations. For example, the first node130may perform deduplication operations to provide for storage efficiency with respect to data stored on a storage volume. The benefit of the deduplication operations performed on first node130can be provided to the second node132with respect to the data on first node130that is replicated to the second node132. In some aspects, a data transfer protocol, referred to as the LRSE (Logical Replication for Storage Efficiency) protocol, can be used as part of replicating consistency group differences from the first node130to the second node132. In the LRSE protocol, the second node132maintains a history buffer that keeps track of data blocks that it has previously received. The history buffer tracks the physical volume block numbers and file block numbers associated with the data blocks that have been transferred from first node130to the second node132. A request can be made of the first node130to not transfer blocks that have already been transferred. Thus, the second node132can receive deduplicated data from the first node130, and will not need to perform deduplication operations on the deduplicated data replicated from first node130. In an example, the first node130may preserve deduplication of data that is transmitted from first node130to the distributed computing platform102. For example, the first node130may create an object comprising deduplicated data. The object is transmitted from the first node130to the distributed computing platform102for storage. In this way, the object within the distributed computing platform102maintains the data in a deduplicated state. Furthermore, deduplication may be preserved when deduplicated data is transmitted/replicated/mirrored between the client device128, the first node130, the distributed computing platform102, and/or other nodes or devices. In an example, compression may be implemented by a compression module associated with the storage operating system. The compression module may utilize various types of compression techniques to replace longer sequences of data (e.g., frequently occurring and/or redundant sequences) with shorter sequences, such as by using Huffman coding, arithmetic coding, compression dictionaries, etc. 
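The “4g6n10q” example given below is a run-length encoding; a minimal sketch of that scheme follows, purely as an illustration, since a production compression module would use the Huffman coding, arithmetic coding, or dictionary techniques mentioned above.

```python
# Run-length encoding: replace each run of a repeated character with
# "<count><char>", matching the document's own compression example.
from itertools import groupby

def rle_compress(text):
    return "".join(f"{len(list(run))}{ch}" for ch, run in groupby(text))

assert rle_compress("ggggnnnnnnqqqqqqqqqq") == "4g6n10q"
```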
For example, an uncompressed portion of a file may comprise “ggggnnnnnnqqqqqqqqqq”, which is compressed to become “4g6n10q”. In this way, the size of the file can be reduced to improve storage efficiency. Compression may be implemented for compression groups. A compression group may correspond to a compressed group of blocks. The compression group may be represented by virtual volume block numbers. The compression group may comprise contiguous or non-contiguous blocks. Compression may be preserved when compressed data is transmitted/replicated/mirrored between the client device128, a node, the distributed computing platform102, and/or other nodes or devices. For example, an object may be created by the first node130to comprise compressed data. The object is transmitted from the first node130to the distributed computing platform102for storage. In this way, the object within the distributed computing platform102maintains the data in a compressed state. In an example, various types of synchronization may be implemented by a synchronization module associated with the storage operating system. In an example, synchronous replication may be implemented, such as between the first node130and the second node132. It may be appreciated that the synchronization module may implement synchronous replication between any devices within the operating environment100, such as between the first node130of the first cluster134and the third node136of the second cluster138and/or between a node of a cluster and an instance of a node or virtual machine in the distributed computing platform102. As an example, during synchronous replication, the first node130may receive a write operation from the client device128. The write operation may target a file stored within a volume managed by the first node130. The first node130replicates the write operation to create a replicated write operation. The first node130locally implements the write operation upon the file within the volume. The first node130also transmits the replicated write operation to a synchronous replication target, such as the second node132that maintains a replica volume as a replica of the volume maintained by the first node130. The second node132will execute the replicated write operation upon the replica volume so that the file within the volume and the replica volume comprise the same data. Afterward, the second node132will transmit a success message to the first node130. With synchronous replication, the first node130does not respond with a success message to the client device128for the write operation until both the write operation is executed upon the volume and the first node130receives the success message that the second node132executed the replicated write operation upon the replica volume. In another example, asynchronous replication may be implemented, such as between the first node130and the third node136. It may be appreciated that the synchronization module may implement asynchronous replication between any devices within the operating environment100, such as between the first node130of the first cluster134and the distributed computing platform102. In an example, the first node130may establish an asynchronous replication relationship with the third node136. The first node130may capture a baseline snapshot of a first volume as a point in time representation of the first volume. 
The first node130may utilize the baseline snapshot to perform a baseline transfer of the data within the first volume to the third node136in order to create a second volume within the third node136comprising data of the first volume as of the point in time at which the baseline snapshot was created. After the baseline transfer, the first node130may subsequently create snapshots of the first volume over time. As part of asynchronous replication, an incremental transfer is performed between the first volume and the second volume. In particular, a snapshot of the first volume is created. The snapshot is compared with a prior snapshot that was previously used to perform the last asynchronous transfer (e.g., the baseline transfer or a prior incremental transfer) of data to identify a difference in data of the first volume between the snapshot and the prior snapshot (e.g., changes to the first volume since the last asynchronous transfer). Accordingly, the difference in data is incrementally transferred from the first volume to the second volume. In this way, the second volume will comprise the same data as the first volume as of the point in time when the snapshot was created for performing the incremental transfer. It may be appreciated that other types of replication may be implemented, such as semi-sync replication. In an embodiment, the first node130may store data or a portion thereof within storage hosted by the distributed computing platform102by transmitting the data within objects to the distributed computing platform102. In one example, the first node130may locally store frequently accessed data within locally attached storage. Less frequently accessed data may be transmitted to the distributed computing platform102for storage within a data storage tier108. The data storage tier108may store data within a service data store120, and may store client specific data within client data stores assigned to such clients such as a client (1) data store122used to store data of a client (1) and a client (N) data store124used to store data of a client (N). The data stores may be physical storage devices or may be defined as logical storage, such as a virtual volume, LUNs, or other logical organizations of data that can be defined across one or more physical storage devices. In another example, the first node130transmits and stores all client data to the distributed computing platform102. In yet another example, the client device128transmits and stores the data directly to the distributed computing platform102without the use of the first node130. The management of storage and access to data can be performed by one or more storage virtual machines (SVMs) or other storage applications that provide software as a service (SaaS) such as storage software services. In one example, an SVM may be hosted within the client device128, within the first node130, or within the distributed computing platform102such as by the application server tier106. In another example, one or more SVMs may be hosted across one or more of the client device128, the first node130, and the distributed computing platform102. The one or more SVMs may host instances of the storage operating system. In an example, the storage operating system may be implemented for the distributed computing platform102. 
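Returning to the asynchronous replication described above, the incremental transfer amounts to a difference computation between two snapshots; a sketch follows, modeling snapshots as block-number-to-data mappings, which is an illustrative simplification rather than the disclosed implementation.

```python
# Sketch of asynchronous replication via snapshot differencing: compare the
# new snapshot against the prior transferred snapshot and ship only changed
# or deleted blocks. Names are illustrative, not the patent's implementation.
def incremental_transfer(prior_snapshot, new_snapshot, replica):
    for block, data in new_snapshot.items():
        if prior_snapshot.get(block) != data:
            replica[block] = data             # changed or newly written block
    for block in prior_snapshot.keys() - new_snapshot.keys():
        replica.pop(block, None)              # block freed since last transfer

baseline = {0: b"a", 1: b"b"}
replica = dict(baseline)                      # state after the baseline transfer
snapshot_2 = {0: b"a", 1: b"B2", 2: b"c"}     # block 1 changed, block 2 added
incremental_transfer(baseline, snapshot_2, replica)
assert replica == snapshot_2
```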
The storage operating system may allow client devices to access data stored within the distributed computing platform102using various types of protocols, such as a Network File System (NFS) protocol, a Server Message Block (SMB) protocol, a Common Internet File System (CIFS) protocol, an Internet Small Computer Systems Interface (iSCSI) protocol, and/or other protocols. The storage operating system may provide various storage services, such as disaster recovery (e.g., the ability to non-disruptively transition client devices from accessing a primary node that has failed to a secondary node that is taking over for the failed primary node), backup and archive functionality, replication such as asynchronous and/or synchronous replication, deduplication, compression, high availability storage, cloning functionality (e.g., the ability to clone a volume, such as a space efficient flex clone), snapshot functionality (e.g., the ability to create snapshots and restore data from snapshots), data tiering (e.g., migrating infrequently accessed data to slower/cheaper storage), encryption, managing storage across various platforms such as between on-premise storage systems and multiple cloud systems, etc. In one example of the distributed computing platform102, one or more SVMs may be hosted by the application server tier106. For example, a server (1)116is configured to host SVMs used to execute applications such as storage applications that manage the storage of data of the client (1) within the client (1) data store122. Thus, an SVM executing on the server (1)116may receive data and/or operations from the client device128and/or the first node130over the network126. The SVM executes a storage application and/or an instance of the storage operating system to process the operations and/or store the data within the client (1) data store122. The SVM may transmit a response back to the client device128and/or the first node130over the network126, such as a success message or an error message. In this way, the application server tier106may host SVMs, services, and/or other storage applications using the server (1)116, the server (N)118, etc. A user interface tier104of the distributed computing platform102may provide the client device128and/or the first node130with access to user interfaces associated with the storage and access of data and/or other services provided by the distributed computing platform102. In an example, a service user interface110may be accessible from the distributed computing platform102for accessing services subscribed to by clients and/or nodes, such as data replication services, application hosting services, data security services, human resource services, warehouse tracking services, accounting services, etc. For example, client user interfaces may be provided to corresponding clients, such as a client (1) user interface112, a client (N) user interface114, etc. The client (1) can access various services and resources subscribed to by the client (1) through the client (1) user interface112, such as access to a web service, a development environment, a human resource application, a warehouse tracking application, and/or other services and resources provided by the application server tier106, which may use data stored within the data storage tier108. The client device128and/or the first node130may subscribe to certain types and amounts of services and resources provided by the distributed computing platform102. 
For example, the client device128may establish a subscription to have access to three virtual machines, a certain amount of storage, a certain type/amount of data redundancy, a certain type/amount of data security, certain service level agreements (SLAs) and service level objectives (SLOs), latency guarantees, bandwidth guarantees, access to execute or host certain applications, etc. Similarly, the first node130can establish a subscription to have access to certain services and resources of the distributed computing platform102. As shown, a variety of clients, such as the client device128and the first node130, incorporating and/or incorporated into a variety of computing devices may communicate with the distributed computing platform102through one or more networks, such as the network126. For example, a client may incorporate and/or be incorporated into a client application (e.g., software) implemented at least in part by one or more of the computing devices. Examples of suitable computing devices include personal computers, server computers, desktop computers, nodes, storage servers, nodes, laptop computers, notebook computers, tablet computers or personal digital assistants (PDAs), smart phones, cell phones, and consumer electronic devices incorporating one or more computing device components, such as one or more electronic processors, microprocessors, central processing units (CPU), or controllers. Examples of suitable networks include networks utilizing wired and/or wireless communication technologies and networks operating in accordance with any suitable networking and/or communication protocol (e.g., the Internet). In use cases involving the delivery of customer support services, the computing devices noted represent the endpoint of the customer support delivery process, i.e., the consumer's device. The distributed computing platform102, such as a multi-tenant business data processing platform or cloud computing environment, may include multiple processing tiers, including the user interface tier104, the application server tier106, and a data storage tier108. The user interface tier104may maintain multiple user interfaces, including graphical user interfaces and/or web-based interfaces. The user interfaces may include the service user interface110for a service to provide access to applications and data for a client (e.g., a “tenant”) of the service, as well as one or more user interfaces that have been specialized/customized in accordance with user specific requirements (e.g., as discussed above), which may be accessed via one or more APIs. The service user interface110may include components enabling a tenant to administer the tenant's participation in the functions and capabilities provided by the distributed computing platform102, such as accessing data, causing execution of specific data processing operations, etc. Each processing tier may be implemented with a set of computers, virtualized computing environments such as a storage virtual machine or storage virtual server, and/or computer components including computer servers and processors, and may perform various functions, methods, processes, or operations as determined by the execution of a software application or set of instructions. The data storage tier108may include one or more data stores, which may include the service data store120and one or more client data stores122-124. 
Each client data store may contain tenant-specific data that is used as part of providing a range of tenant-specific business and storage services or functions, including but not limited to ERP, CRM, eCommerce, Human Resources management, payroll, storage services, etc. Data stores may be implemented with any suitable data storage technology, including structured query language (SQL) based relational database management systems (RDBMS), file systems hosted by operating systems, object storage, etc. In accordance with one embodiment of the invention, the distributed computing platform102may be a multi-tenant and service platform operated by an entity in order to provide multiple tenants with a set of business related applications, data storage, and functionality. These applications and functionality may include ones that a business uses to manage various aspects of its operations. For example, the applications and functionality may include providing web-based access to business information systems, thereby allowing a user with a browser and an Internet or intranet connection to view, enter, process, or modify certain types of business information or any other type of information. A clustered network environment200that may implement one or more aspects of the techniques described and illustrated herein is shown inFIG.2. The clustered network environment200includes data storage apparatuses202(1)-202(n) that are coupled over a cluster or cluster fabric204that includes one or more communication network(s) and facilitates communication between the data storage apparatuses202(1)-202(n) (and one or more modules, components, etc. therein, such as, node computing devices206(1)-206(n), for example), although any number of other elements or components can also be included in the clustered network environment200in other examples. This technology provides a number of advantages including methods, non-transitory computer readable media, and computing devices that implement the techniques described herein. In this example, node computing devices206(1)-206(n) can be primary or local storage controllers or secondary or remote storage controllers that provide client devices208(1)-208(n) with access to data stored within data storage devices210(1)-210(n) and cloud storage device(s)236(also referred to as cloud storage node(s)). The node computing devices206(1)-206(n) may be implemented as hardware, software (e.g., a storage virtual machine), or combination thereof. The data storage apparatuses202(1)-202(n) and/or node computing devices206(1)-206(n) of the examples described and illustrated herein are not limited to any particular geographic areas and can be clustered locally and/or remotely via a cloud network, or not clustered in other examples. Thus, in one example the data storage apparatuses202(1)-202(n) and/or node computing device206(1)-206(n) can be distributed over a plurality of storage systems located in a plurality of geographic locations (e.g., located on-premise, located within a cloud computing environment, etc.); while in another example a clustered network can include data storage apparatuses202(1)-202(n) and/or node computing device206(1)-206(n) residing in a same geographic location (e.g., in a single on-site rack). 
In the illustrated example, one or more of the client devices208(1)-208(n), which may be, for example, personal computers (PCs), computing devices used for storage (e.g., storage servers), or other computers or peripheral devices, are coupled to the respective data storage apparatuses202(1)-202(n) by network connections212(1)-212(n). Network connections212(1)-212(n) may include a local area network (LAN) or wide area network (WAN) (i.e., a cloud network), for example, that utilize TCP/IP and/or one or more Network Attached Storage (NAS) protocols, such as a Common Internet Filesystem (CIFS) protocol or a Network Filesystem (NFS) protocol to exchange data packets, a Storage Area Network (SAN) protocol, such as Small Computer System Interface (SCSI) or Fiber Channel Protocol (FCP), an object protocol, such as simple storage service (S3), and/or non-volatile memory express (NVMe), for example. Illustratively, the client devices208(1)-208(n) may be general-purpose computers running applications and may interact with the data storage apparatuses202(1)-202(n) using a client/server model for exchange of information. That is, the client devices208(1)-208(n) may request data from the data storage apparatuses202(1)-202(n) (e.g., data on one of the data storage devices210(1)-210(n) managed by a network storage controller configured to process I/O commands issued by the client devices208(1)-208(n)), and the data storage apparatuses202(1)-202(n) may return results of the request to the client devices208(1)-208(n) via the network connections212(1)-212(n). The node computing devices206(1)-206(n) of the data storage apparatuses202(1)-202(n) can include network or host nodes that are interconnected as a cluster to provide data storage and management services, such as to an enterprise having remote locations, cloud storage (e.g., a storage endpoint may be stored within cloud storage device(s)236), etc., for example. Such node computing devices206(1)-206(n) can be attached to the cluster fabric204at a connection point, redistribution point, or communication endpoint, for example. One or more of the node computing devices206(1)-206(n) may be capable of sending, receiving, and/or forwarding information over a network communications channel, and could comprise any type of device that meets any or all of these criteria. In an example, the node computing devices206(1) and206(n) may be configured according to a disaster recovery configuration whereby a surviving node provides switchover access to the storage devices210(1)-210(n) in the event a disaster occurs at a disaster storage site (e.g., the node computing device206(1) provides client device212(n) with switchover data access to data storage devices210(n) in the event a disaster occurs at the second storage site). In other examples, the node computing device206(n) can be configured according to an archival configuration and/or the node computing devices206(1)-206(n) can be configured based on another type of replication arrangement (e.g., to facilitate load sharing). Additionally, while two node computing devices are illustrated inFIG.2, any number of node computing devices or data storage apparatuses can be included in other examples in other types of configurations or arrangements. As illustrated in the clustered network environment200, node computing devices206(1)-206(n) can include various functional components that coordinate to provide a distributed storage architecture. 
For example, the node computing devices206(1)-206(n) can include network modules214(1)-214(n) and disk modules216(1)-216(n). Network modules214(1)-214(n) can be configured to allow the node computing devices206(1)-206(n) (e.g., network storage controllers) to connect with client devices208(1)-208(n) over the storage network connections212(1)-212(n), for example, allowing the client devices208(1)-208(n) to access data stored in the clustered network environment200. Further, the network modules214(1)-214(n) can provide connections with one or more other components through the cluster fabric204. For example, the network module214(1) of node computing device206(1) can access the data storage device210(n) by sending a request via the cluster fabric204through the disk module216(n) of node computing device206(n) when the node computing device206(n) is available. Alternatively, when the node computing device206(n) fails, the network module214(1) of node computing device206(1) can access the data storage device210(n) directly via the cluster fabric204. The cluster fabric204can include one or more local and/or wide area computing networks (i.e., cloud networks) embodied as Infiniband, Fibre Channel (FC), or Ethernet networks, for example, although other types of networks supporting other protocols can also be used. Disk modules216(1)-216(n) can be configured to connect data storage devices210(1)-210(n), such as disks or arrays of disks, SSDs, flash memory, or some other form of data storage, to the node computing devices206(1)-206(n). Often, disk modules216(1)-216(n) communicate with the data storage devices210(1)-210(n) according to the SAN protocol, such as SCSI or FCP, for example, although other protocols can also be used. Thus, as seen from an operating system on node computing devices206(1)-206(n), the data storage devices210(1)-210(n) can appear as locally attached. In this manner, different node computing devices206(1)-206(n), etc. may access data blocks, files, or objects through the operating system, rather than expressly requesting abstract files. While the clustered network environment200illustrates an equal number of network modules214(1)-214(n) and disk modules216(1)-216(n), other examples may include a differing number of these modules. For example, there may be a plurality of network and disk modules interconnected in a cluster that do not have a one-to-one correspondence between the network and disk modules. That is, different node computing devices can have a different number of network and disk modules, and the same node computing device can have a different number of network modules than disk modules. Further, one or more of the client devices208(1)-208(n) can be networked with the node computing devices206(1)-206(n) in the cluster, over the storage connections212(1)-212(n). As an example, respective client devices208(1)-208(n) that are networked to a cluster may request services (e.g., exchanging of information in the form of data packets) of node computing devices206(1)-206(n) in the cluster, and the node computing devices206(1)-206(n) can return results of the requested services to the client devices208(1)-208(n). In one example, the client devices208(1)-208(n) can exchange information with the network modules214(1)-214(n) residing in the node computing devices206(1)-206(n) (e.g., network hosts) in the data storage apparatuses202(1)-202(n). 
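The division of labor between network modules and disk modules described above can be sketched as follows; the classes and the dictionary standing in for the cluster fabric are hypothetical, and real modules would speak the NAS and SAN protocols described herein.

```python
# Toy routing model of the network/disk module split: a network module serves
# a client request by invoking the disk module that owns the target storage
# device, whether local or reached across the cluster fabric.
class DiskModule:
    def __init__(self, devices):
        self.devices = devices            # device name -> stored data

    def read(self, device):
        return self.devices[device]

class NetworkModule:
    def __init__(self, fabric):
        self.fabric = fabric              # node name -> DiskModule

    def handle_read(self, node, device):
        # Route the request through the fabric to the owning disk module.
        return self.fabric[node].read(device)

fabric = {
    "node1": DiskModule({"disk-210-1": b"local data"}),
    "node2": DiskModule({"disk-210-n": b"remote data"}),
}
net_module_1 = NetworkModule(fabric)
assert net_module_1.handle_read("node1", "disk-210-1") == b"local data"
assert net_module_1.handle_read("node2", "disk-210-n") == b"remote data"  # via fabric
```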
In one example, the storage apparatuses202(1)-202(n) host aggregates corresponding to physical local and remote data storage devices, such as local flash or disk storage in the data storage devices210(1)-210(n), for example. One or more of the data storage devices210(1)-210(n) can include mass storage devices, such as disks of a disk array. The disks may comprise any type of mass storage devices, including but not limited to magnetic disk drives, flash memory, and any other similar media adapted to store information, including, for example, data and/or parity information. The aggregates include volumes218(1)-218(n) in this example, although any number of volumes can be included in the aggregates. The volumes218(1)-218(n) are virtual data stores or storage objects that define an arrangement of storage and one or more filesystems within the clustered network environment200. Volumes218(1)-218(n) can span a portion of a disk or other storage device, a collection of disks, or portions of disks, for example, and typically define an overall logical arrangement of data storage. In one example, volumes218(1)-218(n) can include stored user data as one or more files, blocks, or objects that may reside in a hierarchical directory structure within the volumes218(1)-218(n). Volumes218(1)-218(n) are typically configured in formats that may be associated with particular storage systems, and respective volume formats typically comprise features that provide functionality to the volumes218(1)-218(n), such as providing the ability for volumes218(1)-218(n) to form clusters, among other functionality. Optionally, one or more of the volumes218(1)-218(n) can be in composite aggregates and can extend between one or more of the data storage devices210(1)-210(n) and one or more of the cloud storage device(s)236to provide tiered storage, for example, and other arrangements can also be used in other examples. In one example, to facilitate access to data stored on the disks or other structures of the data storage devices210(1)-210(n), a filesystem may be implemented that logically organizes the information as a hierarchical structure of directories and files. In this example, respective files may be implemented as a set of disk blocks of a particular size that are configured to store information, whereas directories may be implemented as specially formatted files in which information about other files and directories is stored. Data can be stored as files or objects within a physical volume and/or a virtual volume, which can be associated with respective volume identifiers. The physical volumes correspond to at least a portion of physical storage devices, such as the data storage devices210(1)-210(n) (e.g., a Redundant Array of Independent (or Inexpensive) Disks (RAID system)) whose address, addressable space, location, etc. does not change. Typically, the location of the physical volumes does not change in that the range of addresses used to access them generally remains constant. Virtual volumes, in contrast, can be stored over an aggregate of disparate portions of different physical storage devices. Virtual volumes may be a collection of different available portions of different physical storage device locations, such as some available space from disks, for example. It will be appreciated that since the virtual volumes are not “tied” to any one particular storage device, virtual volumes can be said to include a layer of abstraction or virtualization, which allows them to be resized and/or flexible in some regards. 
Further, virtual volumes can include one or more logical unit numbers (LUNs), directories, Qtrees, files, and/or other storage objects, for example. Among other things, these features, but more particularly the LUNs, allow the disparate memory locations within which data is stored to be identified, for example, and grouped as a data storage unit. As such, the LUNs may be characterized as constituting a virtual disk or drive upon which data within the virtual volumes is stored within an aggregate. For example, LUNs are often referred to as virtual drives, such that they emulate a hard drive, while they actually comprise data blocks stored in various parts of a volume.

In one example, the data storage devices 210(1)-210(n) can have one or more physical ports, wherein each physical port can be assigned a target address (e.g., a SCSI target address). To represent respective volumes, a target address on the data storage devices 210(1)-210(n) can be used to identify one or more of the LUNs. Thus, for example, when one of the node computing devices 206(1)-206(n) connects to a volume, a connection between the one of the node computing devices 206(1)-206(n) and one or more of the LUNs underlying the volume is created. Respective target addresses can identify multiple of the LUNs, such that a target address can represent multiple volumes. The I/O interface, which can be implemented as circuitry and/or software in a storage adapter or as executable code residing in memory and executed by a processor, for example, can connect to volumes by using one or more addresses that identify the one or more of the LUNs.

Referring to FIG. 3, node computing device 206(1) in this particular example includes processor(s) 300, a memory 302, a network adapter 304, a cluster access adapter 306, and a storage adapter 308 interconnected by a system bus 310. In other examples, the node computing device 206(1) comprises a virtual machine, such as a virtual storage machine. The node computing device 206(1) also includes a storage operating system 312 installed in the memory 302 that can, for example, implement a RAID data loss protection and recovery scheme to optimize reconstruction of data of a failed disk or drive in an array, along with other functionality such as deduplication, compression, snapshot creation, data mirroring, synchronous replication, asynchronous replication, encryption, etc. In some examples, the node computing device 206(n) is substantially the same in structure and/or operation as node computing device 206(1), although the node computing device 206(n) can also include a different structure and/or operation in one or more aspects than the node computing device 206(1).

The network adapter 304 in this example includes the mechanical, electrical and signaling circuitry needed to connect the node computing device 206(1) to one or more of the client devices 208(1)-208(n) over network connections 212(1)-212(n), which may comprise, among other things, a point-to-point connection or a shared medium, such as a local area network. In some examples, the network adapter 304 further communicates (e.g., using TCP/IP) via the cluster fabric 204 and/or another network (e.g., a WAN) (not shown) with cloud storage device(s) 236 to process storage operations associated with data stored thereon.
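As a rough illustration of the target address and LUN relationships described above, the following Python sketch shows how a single target address can identify several LUNs and how connecting to a volume creates connections to the LUNs underlying it. All names (TARGET_ADDRESS_MAP, VOLUME_TO_LUNS, connect_to_volume) are hypothetical; the description does not prescribe any particular data structures.

```python
# Hypothetical illustration of SCSI target addresses identifying LUNs.
# A single target address can represent multiple LUNs (and thus volumes).
TARGET_ADDRESS_MAP = {
    "scsi-target-0": ["lun-0", "lun-1"],   # one address, several LUNs
    "scsi-target-1": ["lun-2"],
}

# Volumes are backed by one or more LUNs grouped as a data storage unit.
VOLUME_TO_LUNS = {"vol1": ["lun-0"], "vol2": ["lun-1", "lun-2"]}

def connect_to_volume(volume: str) -> list[str]:
    """Create a connection to each LUN underlying the volume."""
    connections = []
    for target, luns in TARGET_ADDRESS_MAP.items():
        for lun in VOLUME_TO_LUNS[volume]:
            if lun in luns:
                connections.append(f"{target}/{lun}")
    return connections

print(connect_to_volume("vol2"))  # ['scsi-target-0/lun-1', 'scsi-target-1/lun-2']
```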
The storage adapter 308 cooperates with the storage operating system 312 executing on the node computing device 206(1) to access information requested by one of the client devices 208(1)-208(n) (e.g., to access data on a data storage device 210(1)-210(n) managed by a network storage controller). The information may be stored on any type of attached array of writeable media such as magnetic disk drives, flash memory, and/or any other similar media adapted to store information. In the exemplary data storage devices 210(1)-210(n), information can be stored in data blocks on disks. The storage adapter 308 can include I/O interface circuitry that couples to the disks over an I/O interconnect arrangement, such as a storage area network (SAN) protocol (e.g., Small Computer System Interface (SCSI), Internet SCSI (iSCSI), hyperSCSI, Fiber Channel Protocol (FCP)).

The information is retrieved by the storage adapter 308 and, if necessary, processed by the processor(s) 300 (or the storage adapter 308 itself) prior to being forwarded over the system bus 310 to the network adapter 304 (and/or the cluster access adapter 306 if sending to another node computing device in the cluster) where the information is formatted into a data packet and returned to a requesting one of the client devices 208(1)-208(n) and/or sent to another node computing device attached via the cluster fabric 204. In some examples, a storage driver 314 in the memory 302 interfaces with the storage adapter 308 to facilitate interactions with the data storage devices 210(1)-210(n).

The storage operating system 312 can also manage communications for the node computing device 206(1) among other devices that may be in a clustered network, such as attached to a cluster fabric 204. Thus, the node computing device 206(1) can respond to client device requests to manage data on one of the data storage devices 210(1)-210(n) or cloud storage device(s) 236 (e.g., or additional clustered devices) in accordance with the client device requests.

The file system module 318 of the storage operating system 312 can establish and manage one or more filesystems including software code and data structures that implement a persistent hierarchical namespace of files and directories, for example. As an example, when a new data storage device (not shown) is added to a clustered network system, the file system module 318 is informed where, in an existing directory tree, new files associated with the new data storage device are to be stored. This is often referred to as "mounting" a filesystem.

In the example node computing device 206(1), memory 302 can include storage locations that are addressable by the processor(s) 300 and adapters 304, 306, and 308 for storing related software application code and data structures. The processor(s) 300 and adapters 304, 306, and 308 may, for example, include processing elements and/or logic circuitry configured to execute the software code and manipulate the data structures. The storage operating system 312, portions of which are typically resident in the memory 302 and executed by the processor(s) 300, invokes storage operations in support of a file service implemented by the node computing device 206(1). Other processing and memory mechanisms, including various computer readable media, may be used for storing and/or executing application instructions pertaining to the techniques described and illustrated herein. For example, the storage operating system 312 can also utilize one or more control files (not shown) to aid in the provisioning of virtual machines.
In this particular example, the memory 302 also includes a module configured to implement the techniques described herein, as discussed above and further below.

The examples of the technology described and illustrated herein may be embodied as one or more non-transitory computer or machine readable media, such as the memory 302, having machine or processor-executable instructions stored thereon for one or more aspects of the present technology, which when executed by processor(s), such as processor(s) 300, cause the processor(s) to carry out the steps necessary to implement the methods of this technology, as described and illustrated with the examples herein. In some examples, the executable instructions are configured to perform one or more steps of a method described and illustrated later.

FIG. 4 illustrates a system 400 comprising node 402 that implements a file system tier 424 to manage storage 426 and a persistent memory tier 422 to manage persistent memory 416 of the node 402. The node 402 may comprise a server, an on-premise device, a virtual machine, computing resources of a cloud computing environment (e.g., a virtual machine hosted within the cloud), a computing device, hardware, software, or combination thereof. The node 402 may be configured to manage the storage and access of data on behalf of clients, such as a client device 428. The node 402 may host a storage operating system configured to store and manage data within and/or across various types of storage devices, such as locally attached storage, cloud storage, disk storage, flash storage, solid state drives, tape, hard disk drives, etc. For example, the storage operating system of the node 402 may store data within storage 426, which may be composed of one or more types of block-addressable storage (e.g., disk drive, a solid state drive, etc.) or other types of storage. The data may be stored within storage objects, such as volumes, containers, logical unit numbers (LUNs), aggregates, cloud storage objects, etc. In an example, an aggregate or other storage object may be comprised of physical storage of a single storage device or storage of multiple storage devices or storage providers.

The storage operating system of the node 402 may implement a storage file system 418 that manages the storage and client access of data within the storage objects stored within the storage 426 associated with the node 402. For example, the client device 428 may utilize the storage file system 418 in order to create, delete, organize, modify, and/or access files within directories of a volume managed by the storage file system 418.

The storage operating system may be associated with a storage operating system storage stack 420 that comprises a plurality of levels through which operations, such as read and write operations from client devices, are processed. An operation may first be processed by a highest level tier, and then down through lower level tiers of the storage operating system storage stack 420 until reaching a lowest level tier of the storage operating system storage stack 420. The storage file system 418 may be managed by a file system tier 424 within the storage operating system storage stack 420. When an operation reaches the file system tier 424, the operation may be processed by the storage file system 418 for storage within the storage 426.
The storage file system 418 may be configured with commands, APIs, data structures (e.g., data structures used to identify block address locations of data within the storage 426), and/or other functionality (e.g., functionality to access certain block ranges within the storage 426) that is tailored to the block-addressable storage 426. Because the storage file system 418 is tailored for the block-addressable semantics of the storage 426, the storage file system 418 may be unable to utilize other types of storage that use different addressing semantics, such as persistent memory 416 that is byte-addressable.

The persistent memory 416 provides relatively lower latency and faster access speeds than the block-addressable storage 426 that the storage file system 418 is natively tailored to manage. Because the persistent memory 416 is byte-addressable instead of block-addressable, the storage file system 418, the data structures of the storage file system 418 used to locate data according to the block-addressable semantics of the storage 426, and the commands to store and retrieve data from the block-addressable storage 426 cannot be leveraged for the byte-addressable persistent memory 416. Accordingly, a persistent memory file system 414 and the persistent memory tier 422 for managing the file system 414 are implemented for the persistent memory 416 so that the node 402 can use the persistent memory file system 414 to access and manage the persistent memory 416 or other types of byte-addressable storage for storing user data.

The persistent memory 416 may comprise memory that is persistent, such that data structures can be stored in a manner where the data structures can continue to be accessed using memory instructions and/or memory APIs even after the end of a process that created or last modified the data structures. The data structures and data will persist even in the event of a power loss, failure and reboot, etc. The persistent memory 416 is non-volatile memory that has nearly the same speed and latency as DRAM and has the non-volatility of NAND flash. The persistent memory 416 could dramatically increase system performance of the node 402 compared to the higher latency and slower speeds of the block-addressable storage 426 accessible to the node 402 through the storage file system 418 using the file system tier 424 (e.g., hard disk drives, solid state storage, cloud storage, etc.). The persistent memory 416 is byte-addressable, and may be accessed through a memory controller. This provides faster and more fine-grained access to persistent storage within the persistent memory 416 compared to block-based access to the block-addressable storage 426 through the storage file system 418.

The persistent memory file system 414 implemented for the byte-addressable persistent memory 416 is different than the storage file system 418 implemented for the block-addressable storage 426. For example, the persistent memory file system 414 may comprise data structures and/or functionality tailored to the byte-addressable semantics of the persistent memory 416 for accessing bytes of storage, which are different than the data structures and functionality of the storage file system 418 that are tailored to the block-addressable semantics of the storage 426 for accessing blocks of storage.
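The following minimal Python sketch contrasts the two addressing semantics discussed above. It is purely illustrative, assumes a 4 KB block size (the size the description uses elsewhere for persistent memory pages), and both class names are hypothetical.

```python
# Illustrative contrast of the two addressing semantics (assumed 4 KB blocks).
BLOCK_SIZE = 4096

class BlockAddressableStorage:
    """Block semantics: data moves a whole block at a time (disk/SSD)."""
    def __init__(self, nblocks: int):
        self.blocks = [bytes(BLOCK_SIZE) for _ in range(nblocks)]

    def read_block(self, block_number: int) -> bytes:
        return self.blocks[block_number]

class ByteAddressablePersistentMemory:
    """Byte semantics: individual bytes are addressable, like load/store."""
    def __init__(self, size: int):
        self.buf = bytearray(size)

    def read(self, offset: int, length: int) -> bytes:
        return bytes(self.buf[offset:offset + length])

# Reading 10 bytes at offset 5000 from block storage pulls the whole
# 4 KB block containing them; persistent memory returns just the bytes.
disk = BlockAddressableStorage(nblocks=4)
pmem = ByteAddressablePersistentMemory(size=4 * BLOCK_SIZE)
print(len(disk.read_block(5000 // BLOCK_SIZE)))  # 4096
print(len(pmem.read(5000, 10)))                  # 10
```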
Furthermore, the persistent memory file system 414 is tailored for the relatively faster access speeds and lower latency of the persistent memory 416, which improves the operation of the node 402 by allowing the node 402 to process I/O from client devices much faster using the persistent memory tier 422, the file system 414, and the persistent memory 416.

In order to integrate the persistent memory 416 into the node 402 in a manner that allows client data of client devices, such as the client device 428, to be stored into and read from the persistent memory 416, the persistent memory tier 422 is implemented within the storage operating system storage stack 420 for managing the persistent memory 416. The persistent memory tier 422 is maintained at a higher level within the storage operating system storage stack 420 than the file system tier 424 used to manage the storage file system 418. The persistent memory tier 422 is maintained higher in the storage operating system storage stack 420 than the file system tier 424 so that operations received from client devices by the node 402 are processed by the persistent memory tier 422 before the file system tier 424, even though the operations may target the storage file system 418 managed by the file system tier 424. This occurs because higher levels within the storage operating system storage stack 420 process operations before lower levels within the storage operating system storage stack 420.

The persistent memory tier 422 may implement various APIs, functionality, data structures, and commands for the persistent memory file system 414 to access and/or manage the persistent memory 416. For example, the persistent memory tier 422 may implement APIs to access the persistent memory file system 414 of the persistent memory 416 for storing data into and/or retrieving data from the persistent memory 416 according to the byte-addressable semantics of the persistent memory 416. The persistent memory tier 422 may implement functionality to determine when data should be tiered out from the persistent memory 416 to the storage 426 based upon the data becoming infrequently accessed, and thus cold.

The persistent memory file system 414 is configured with data structures for tracking and locating data within the persistent memory 416 according to the byte-addressable semantics. For example, the persistent memory file system 414 indexes the persistent memory 416 of the node 402 as an array of pages (e.g., 4 KB pages) indexed by page block numbers. One of the pages, such as a page (1), comprises a file system superblock that is a root of a file system tree of the persistent memory file system 414. A duplicate copy of the file system superblock may be maintained within another page of the persistent memory 416 (e.g., a last page, a second to last page, a page that is a threshold number of indexed pages away from page (1), etc.). The file system superblock comprises a location of a list of file system info objects 404. The list of file system info objects 404 comprises a linked list of pages, where each page contains a set of file system info objects. If there are more file system info objects than what can be stored within a page, then additional pages may be used to store the remaining file system info objects, and each page will have a location of the next page of file system info objects. In this way, a plurality of file system info objects can be stored within a page of the persistent memory 416.
Each file system info object defines a file system instance for a volume and snapshot (e.g., a first file system info object corresponds to an active file system of the volume, a second file system info object may correspond to a first snapshot of the volume, a third file system info object may correspond to a second snapshot of the volume, etc.). Each file system info object comprises a location within the persistent memory 416 of an inofile (e.g., a root of a page tree of the inofile) comprising inodes of a file system instance. An inofile 406 of the file system instance comprises an inode for each file within the file system instance. An inode of a file comprises metadata about the file. Each inode stores a location of a root of a file tree for a given file.

In particular, the persistent memory file system 414 maintains file trees 408, where each file is represented by a file tree of indirect pages (intermediate nodes of the file tree) and direct blocks (leaf nodes of the file tree). The direct blocks are located in a bottom level of the file tree, and one or more levels of indirect pages are located above the bottom level of the file tree. The indirect pages of a particular level comprise references to blocks in a next level down within the file tree (e.g., a reference comprising a file block number of a next level down node or a reference comprising a per-page structure ID of a per-page structure having the file block number of the next level down node). Direct blocks are located at a lowest level in the file tree and comprise user data. Thus, a file tree for a file may be traversed by the file system 414 using a byte range (e.g., a byte range specified by an I/O operation) mapped to a page index of a page (e.g., a 4 KB offset) comprising the data within the file to be accessed.

The persistent memory file system 414 may maintain other data structures used to track and locate data within the persistent memory 416. In an embodiment, the persistent memory file system 414 maintains per-page structures 410. A per-page structure is used to track metadata about each page within the persistent memory 416. Each page will correspond to a single per-page structure that comprises metadata about the page. In an embodiment, the per-page structures are stored in an array within the persistent memory 416. The per-page structures correspond to file system superblock pages, file system info pages, indirect pages of the inofile 406, user data pages within the file trees 408, per-page structure array pages, etc. In an embodiment of implementing per-page structure to page mappings using a one-to-one mapping, a per-page structure for a page can be fixed at a page block number offset within a per-page structure table. In an embodiment of implementing per-page structure to page mappings using a variable mapping, a per-page structure of a page stores a page block number of the page represented by the per-page structure.
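A simplified sketch of the byte-range-to-page mapping and file tree traversal described above, assuming 4 KB pages and a hypothetical per-indirect-page fanout; real indirect pages would hold file block numbers or per-page structure IDs rather than Python dicts.

```python
# Illustrative traversal of a file tree of indirect pages (intermediate
# nodes) down to a direct block (leaf node holding user data).
PAGE_SIZE = 4096          # 4 KB pages indexed by page block numbers
FANOUT = 512              # hypothetical entries per indirect page

def page_index_for_offset(byte_offset: int) -> int:
    """Map a byte range's starting offset to a page index."""
    return byte_offset // PAGE_SIZE

def traverse(root: dict, byte_offset: int) -> bytes:
    """Walk indirect pages level by level until the direct block."""
    index = page_index_for_offset(byte_offset)
    node = root
    while node["level"] > 0:              # indirect pages reference children
        slot = (index // FANOUT ** (node["level"] - 1)) % FANOUT
        node = node["children"][slot]
    return node["data"]                    # direct block comprises user data

# A two-level toy tree: one indirect page over a single direct block.
leaf = {"level": 0, "data": b"user data"}
root = {"level": 1, "children": {0: leaf}}
print(traverse(root, byte_offset=100))     # b'user data'
```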
With the variable mapping, persistent memory objects (e.g., objects stored within the file system superblock to point to the list of file system info objects; objects within a file system info object to point to the root of the inofile; objects within an inode to point to a root of a file tree of a file; and objects within indirect pages to point to child blocks (child pages)) will store a per-page structure ID of its per-page structure as a location of a child page being pointed to, and will redirect through the per-page structure using the per-page structure ID to identify the physical block number of the child page being pointed to. Thus, an indirect entry of an indirect page will comprise a per-page structure ID that can be used to identify a per-page structure having a physical block number of the child page pointed to by the indirect page.

The persistent memory tier 422 may implement functionality to utilize a policy to determine whether certain operations should be redirected to the persistent memory file system 414 and the persistent memory 416 or to the storage file system 418 and the storage 426 (e.g., if a write operation targets a file that the policy predicts will be accessed again, such as accessed within a threshold timespan or accessed above a certain frequency, then the write operation will be retargeted to the persistent memory 416). For example, the node 402 may receive an operation from the client device 428. The operation may be processed by the storage operating system using the storage operating system storage stack 420 from a highest level down through lower levels of the storage operating system storage stack 420. Because the persistent memory tier 422 is at a higher level within the storage operating system storage stack 420 than the file system tier 424, the operation is intercepted by the persistent memory tier 422 before reaching the file system tier 424. The operation is intercepted by the persistent memory tier 422 before reaching the file system tier 424 even though the operation may target the storage file system 418 managed by the file system tier 424. This is because the persistent memory tier 422 is higher in the storage operating system storage stack 420 than the file system tier 424, and operations are processed by higher levels before lower levels within the storage operating system storage stack 420. Accordingly, the operation is intercepted by the persistent memory tier 422 within the storage operating system storage stack 420.

The persistent memory tier 422 may determine whether the operation is to be retargeted to the persistent memory file system 414 and the persistent memory 416 or whether the operation is to be transmitted (e.g., released to lower tiers within the storage operating system storage stack 420) by the persistent memory tier 422 to the file system tier 424 for processing by the storage file system 418 utilizing the storage 426. In this way, the tiers within the storage operating system storage stack 420 are used to determine how to route and process operations utilizing the persistent memory 416 and/or the storage 426.

In an embodiment, an operation 401 is received by the node 402. The operation 401 may comprise a file identifier of a file to be accessed. The operation 401 may comprise file system instance information, such as a volume identifier of a volume to be accessed and/or a snapshot identifier of a snapshot of the volume to be accessed.
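A minimal sketch of the variable-mapping redirection described above, in which an indirect entry stores a per-page structure ID and the per-page structure supplies the physical page block number; the table layout and field names are assumptions for illustration only.

```python
# Illustrative redirection through per-page structures under the variable
# mapping: an indirect entry carries a per-page structure ID, and the
# per-page structure stores the page block number of the page it describes.
from dataclasses import dataclass

@dataclass
class PerPageStructure:
    pps_id: int
    page_block_number: int   # physical location of the described page
    dirty: bool = False      # illustrative per-page metadata field

# Hypothetical per-page structure table (stored as an array in pmem).
PPS_TABLE = {
    7: PerPageStructure(pps_id=7, page_block_number=4242),
}

def resolve_child(indirect_entry_pps_id: int) -> int:
    """Redirect through the per-page structure to the child page's block."""
    return PPS_TABLE[indirect_entry_pps_id].page_block_number

print(resolve_child(7))  # 4242
```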
If an active file system of the volume is to be accessed, then the snapshot identifier may be empty, null, missing, comprising a zero value, or otherwise comprising an indicator that no snapshot is to be accessed. The operation 401 may comprise a byte range of the file to be accessed.

The list of file system info objects 404 is evaluated using the file system instance information to identify a file system info object matching the file system instance information. That is, the file system info object may correspond to an instance of the volume (e.g., the active file system of the volume or a snapshot identified by the snapshot identifier of the volume identified by the volume identifier within the operation 401) being targeted by the operation 401, which is referred to as an instance of a file system or a file system instance. In an example of the list of file system info objects 404, the list of file system info objects 404 is maintained as a linked list of entries. Each entry corresponds to a file system info object, and comprises a volume identifier and a snapshot identifier of the file system info object.

In response to the list of file system info objects 404 not comprising any file system info objects that match the file system instance information, the operation 401 is routed to the file system tier 424 for execution by the storage file system 418 upon the block-addressable storage 426 because the file system instance is not tiered into the persistent memory 416. However, if the file system info object matching the file system instance information is found, then the file system info object is evaluated to identify an inofile, such as the inofile 406, comprising inodes representing files of the file system instance targeted by the operation 401.

The inofile 406 is traversed to identify an inode matching the file identifier specified by the operation 401. The inofile 406 may be represented as a page tree having levels of indirect pages (intermediate nodes of the page tree) pointing to blocks within lower levels (e.g., a root points to level 2 indirect pages, the level 2 indirect pages point to level 1 indirect pages, and the level 1 indirect pages point to level 0 direct blocks). The page tree has a bottom level (level 0) of direct blocks (leaf nodes of the page tree) corresponding to the inodes of the file. In this way, the indirect pages within the inofile 406 are traversed down until a direct block corresponding to an inode having the file identifier of the file targeted by the operation 401 is located.

The inode may be utilized by the persistent memory file system 414 to facilitate execution of the operation 401 by the persistent memory tier 422 upon the persistent memory 416 in response to the inode comprising an indicator (e.g., a flag, a bit, etc.) specifying that the file is tiered into the persistent memory 416 of the node 402. If the indicator specifies that the file is not tiered into the persistent memory 416 of the node 402, then the operation 401 is routed to the file system tier 424 for execution by the storage file system 418 upon the block-addressable storage 426.

In an example where the operation 401 is a read operation and the inode comprises an indicator that the file is tiered into the persistent memory 416, the inode is evaluated to identify a pointer to a file tree of the file. The file tree may comprise indirect pages (intermediate nodes of the file tree comprising references to lower nodes within the file tree) and direct blocks (leaf nodes of the file tree comprising user data of the file).
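The lookup-and-routing sequence above can be summarized with the following hedged sketch: find the file system info object for the (volume, snapshot) instance, find the inode in the inofile, and route based on the tiering indicator. The dict-based structures stand in for the linked list of file system info object pages and the inofile page tree, and all names are hypothetical.

```python
# Illustrative routing of an operation 401 by the persistent memory tier:
# file system info object -> inofile -> inode -> tiering indicator.
FS_INFO_OBJECTS = {
    # keyed by (volume_id, snapshot_id); None means the active file system
    ("vol1", None): {"inofile": {"file42": {"tiered": True},
                                 "file43": {"tiered": False}}},
}

def route(op: dict) -> str:
    key = (op["volume_id"], op.get("snapshot_id"))  # empty/None => active FS
    fs_info = FS_INFO_OBJECTS.get(key)
    if fs_info is None:
        return "file system tier"        # instance not tiered into pmem
    inode = fs_info["inofile"].get(op["file_id"])
    if inode is None or not inode["tiered"]:
        return "file system tier"        # file not tiered into pmem
    return "persistent memory tier"

print(route({"volume_id": "vol1", "file_id": "file42"}))  # persistent memory tier
print(route({"volume_id": "vol1", "file_id": "file43"}))  # file system tier
```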
The file tree may be traversed down through levels of the indirect pages to a bottom level of direct blocks in order to locate one or more direct blocks corresponding to pages within the persistent memory 416 comprising data to be read by the read operation (e.g., a direct block corresponding to the byte range specified by the operation 401). That is, the file tree may be traversed to identify data within one or more pages of the persistent memory 416 targeted by the read operation. The traversal utilizes the byte range specified by the read operation. The byte range is mapped to a page index of a page (e.g., a 4 KB offset) of the data within the file to be accessed by the read operation. In an example, the file tree is traversed to determine whether the byte range is present within the persistent memory 416. If the byte range is present, then the read operation is executed upon the byte range. If the byte range is not present, then the read operation is routed to the file system tier 424 for execution by the storage file system 418 upon the block-based storage 426 because the byte range to be read is not stored within the persistent memory 416.

In an example where the operation 401 is a write operation, access pattern history of the file (e.g., how frequently and recently the file has been accessed) is evaluated in order to determine whether to execute the write operation upon the persistent memory 416 or to route the write operation to the file system tier 424 for execution by the storage file system 418 upon the block-addressable storage 426. In this way, operations are selectively redirected by the persistent memory tier 422 to the persistent memory file system 414 for execution upon the byte-addressable persistent memory 416 or routed to the file system tier 424 for execution by the storage file system 418 upon the block-addressable storage 426 based upon the access pattern history (e.g., write operations targeting more frequently or recently accessed data/files may be executed against the persistent memory 416).

One embodiment of data management across a persistent memory tier 606 and a file system tier 604 of a node 602 is illustrated by an exemplary method 500 of FIG. 5 and further described in conjunction with system 600 of FIGS. 6A-6D. In an embodiment, the node 602 may correspond to the node 402 of FIG. 4, such as where the node 602 comprises the storage operating system storage stack 420 within which the persistent memory tier 606 (persistent memory tier 422) and the file system tier 604 (file system tier 424) are implemented, the storage file system 418 used to store and access data within the storage 426, the persistent memory file system 414 used to store and access data within the persistent memory 416, etc. The persistent memory file system 414 used to store and access data within the persistent memory 416 is separate and distinct from the storage file system 418 used to store and access data within the storage 426.

The node 602 may utilize the file system tier 604 to store data within blocks of storage, such as within a block (A) 608, a block (B) 610, a block (C) 612, a block (D) 614, and/or other blocks through a storage file system associated with the storage. Data stored within the blocks of the storage may correspond to a container, such as a flexible volume. The container may be used for abstracting physical resources of the storage (e.g., disk drives, solid state storage, cloud storage, etc.). The container may be used for separating the manipulation and use of logical resources from their underlying implementation.
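A sketch of the read and write handling described earlier in this passage, under assumed names and thresholds: reads execute against persistent memory only when the byte range is present there, and writes are steered by a hypothetical recency/frequency policy standing in for the access pattern history evaluation.

```python
import time

# Illustrative read/write handling consistent with the description above.
PAGE_SIZE = 4096
PMEM_PAGES = {("file42", 0): b"cached page"}     # (file, page index) -> data
ACCESS_HISTORY: dict[str, list[float]] = {}      # file -> access timestamps

def handle_read(file_id: str, byte_offset: int):
    index = byte_offset // PAGE_SIZE             # byte range -> page index
    data = PMEM_PAGES.get((file_id, index))
    if data is not None:
        return ("persistent memory tier", data)  # byte range is present
    return ("file system tier", None)            # route to the storage FS

def should_write_to_pmem(file_id: str, window: float = 60.0,
                         min_hits: int = 3) -> bool:
    """Hypothetical policy: frequently/recently accessed files stay in pmem."""
    now = time.time()
    hits = [t for t in ACCESS_HISTORY.get(file_id, []) if now - t < window]
    return len(hits) >= min_hits

print(handle_read("file42", 0))       # served from persistent memory
print(handle_read("file42", 8192))    # routed to the file system tier
print(should_write_to_pmem("file42"))  # False with an empty history
```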
The container may be used for efficient data management, such as for creating, managing, and utilizing snapshots and clones. The node 602 may utilize the persistent memory tier 606 to store data in blocks (within pages) of persistent memory, such as a block (A′) 616, a block (B′) 618, a block (C′) 620, a block (D′) 622, and/or other blocks through a persistent memory file system associated with the persistent memory.

As a simplified example, the block (A′) 616 (e.g., a page A′ within the persistent memory of the persistent memory tier 606) may initially correspond to the block (A) 608 within the storage of the file system tier 604, such as where data within the block (A) 608 is cached (tiered up) from the file system tier 604 into the persistent memory tier 606 due to the data being frequently or recently accessed. Similarly, block (B′) 618 of the persistent memory tier 606 may initially correspond to the block (B) 610 within the storage of the file system tier 604, block (C′) 620 of the persistent memory tier 606 may initially correspond to the block (C) 612 within the storage of the file system tier 604, and block (D′) 622 of the persistent memory tier 606 may initially correspond to the block (D) 614 within the storage of the file system tier 604.

When an operation from a client device is received by the node 602, the node 602 may determine whether the operation is to be processed using the persistent memory tier 606 or the file system tier 604. For example, the persistent memory tier 606 is higher up in a storage operating system stack of the node 602 compared to the file system tier 604, and thus the operation is first processed by the persistent memory tier 606. The persistent memory tier 606 may determine whether the operation should be executed by the persistent memory file system against the persistent memory (e.g., the operation targets data that is currently stored by the persistent memory tier 606) or should be passed along to the file system tier 604 for execution by the storage file system against the storage (e.g., the operation targets data that is not currently stored by the persistent memory tier 606).

As the node 602 processes operations from client devices, data within blocks maintained by the persistent memory tier 606 in the pages of the persistent memory will change and diverge from corresponding blocks within the storage maintained by the file system tier 604. For example, the block (B′) 618, the block (C′) 620, and the block (D′) 622 may have been modified since the data from the corresponding block (B) 610, block (C) 612, and block (D) 614 was initially cached (tiered up) from the file system tier 604 to the persistent memory tier 606, as illustrated by FIG. 6B.

The persistent memory tier 606 may maintain state machines 626 for the blocks (pages) within the persistent memory. For example, a first state machine may be maintained for the block (A′) 616, which may indicate that the block (A′) 616 has a non-dirty state indicating that data within the block (A′) 616 is the same as data within the corresponding block (A) 608 within the file system tier 604. A second state machine may be maintained for the block (B′) 618, which may indicate that the block (B′) 618 has a dirty state indicating that the block (B′) 618 comprises more up-to-date data that is different than data within the corresponding block (B) 610 within the file system tier 604 (e.g., an operation may have written to the block (B′) 618, thus changing the data within the block (B′) 618).
A third state machine may be maintained for the block (C′) 620, which may indicate that the block (C′) 620 has a dirty state indicating that the block (C′) 620 comprises more up-to-date data that is different than data within the corresponding block (C) 612 within the file system tier 604 (e.g., an operation may have written to the block (C′) 620, thus changing the data within the block (C′) 620). A fourth state machine may be maintained for the block (D′) 622, which may indicate that the block (D′) 622 has a dirty state indicating that the block (D′) 622 comprises more up-to-date data that is different than data within the corresponding block (D) 614 within the file system tier 604 (e.g., an operation may have written to the block (D′) 622, thus changing the data within the block (D′) 622).

At 502 of the method 500 of FIG. 5, a determination may be made that a block within the persistent memory tier 606 of the node 602 has up-to-date data (more up-to-date data) compared to a corresponding block within the file system tier 604 of the node 602. For example, the state machines associated with the block (B′) 618, the block (C′) 620, the block (D′) 622, and/or other blocks may indicate that the blocks have a dirty state and thus have more up-to-date data than the corresponding block (B) 610, block (C) 612, block (D) 614, and/or other blocks within the file system tier 604.

In an embodiment, a threshold number of blocks within the persistent memory tier 606 (e.g., a threshold number of pages within the persistent memory) that comprise more up-to-date data than corresponding blocks of the file system tier 604 may be identified. Identifying the threshold number of blocks (e.g., 1 block, 3 blocks, 10 blocks, or any other number of blocks) may trigger the persistent memory tier 606 to perform framing of those blocks in order to notify the file system tier 604 that those blocks comprise more up-to-date data than the corresponding blocks within the file system tier 604. In an example, the threshold number of blocks is greater than 1, which may improve the efficiency of framing.

As part of framing, the persistent memory tier 606 creates messages for the blocks of the persistent memory tier 606 that comprise the more up-to-date data compared to the corresponding blocks within the file system tier 604. For example, a batch of messages 627 may be created and transmitted from the persistent memory tier 606 to the file system tier 604 for notifying the file system tier 604 that the block (B′) 618, the block (C′) 620, and the block (D′) 622 comprise more up-to-date data than the corresponding block (B) 610, block (C) 612, and block (D) 614. The batch of messages 627 may comprise location information of locations of the block (B′) 618, the block (C′) 620, and the block (D′) 622 within the pages of the persistent memory of the persistent memory tier 606.

At 504 of the method 500 of FIG. 5, the corresponding block (B) 610, block (C) 612, and block (D) 614 are marked as dirty blocks within the file system tier 604 of the node 602, as illustrated by FIG. 6C.
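A condensed sketch of the framing flow described above, with assumed field names: per-block state machines are scanned for dirty blocks, and once a (hypothetical) threshold is reached, a batch of messages carrying persistent memory locations is produced for the file system tier.

```python
# Illustrative framing: dirty pages in the persistent memory tier are
# batched into messages that notify the file system tier.
FRAMING_THRESHOLD = 3   # hypothetical; the description only says > 1 helps

pmem_state = {           # stand-in for the per-block state machines (626)
    "A'": {"state": "clean", "location": 616},
    "B'": {"state": "dirty", "location": 618},
    "C'": {"state": "dirty", "location": 620},
    "D'": {"state": "dirty", "location": 622},
}

def frame_dirty_blocks():
    dirty = [(name, s["location"]) for name, s in pmem_state.items()
             if s["state"] == "dirty"]
    if len(dirty) < FRAMING_THRESHOLD:
        return None                      # wait until the threshold is hit
    # One batch of messages carrying the pmem locations of the dirty blocks.
    return [{"block": name, "pmem_location": loc} for name, loc in dirty]

messages = frame_dirty_blocks()
print(messages)   # the batch (e.g., 627) sent to the file system tier
```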
In an embodiment, the corresponding block (B) 610, block (C) 612, and block (D) 614 are marked as dirty blocks using flags to indicate that the more up-to-date data for the corresponding block (B) 610, block (C) 612, and block (D) 614 is stored by the persistent memory tier 606, and thus the corresponding block (B) 610, block (C) 612, and block (D) 614 comprise stale data (e.g., data tiered from the file system tier 604 to the persistent memory tier 606 was subsequently modified within the persistent memory tier 606) or missing data (e.g., missing data because data was initially written to the persistent memory tier 606 and was never written to the file system tier 604).

In an example, a flag of a dirty block is used as an indicator to a consistency point operation that is used to flush data from the file system tier 604 to the storage. The consistency point operation is executed to store dirty data to physical storage used by the file system tier 604. The flag may indicate to the consistency point operation that additional handling is to be performed for the dirty block. In an embodiment, the flag triggers the consistency point operation to allocate a virtual volume block number for the dirty block within the file system tier 604 based upon the flag indicating that up-to-date data of the dirty block is stored within the persistent memory tier 606. The flag triggers the consistency point operation to stamp a special physical block number (store a physical block number allocation) within a user indirect associated with the dirty block within the file system tier 604, which can be used to help facilitate virtual layer translation of the dirty block. For example, physical block numbers correspond to disk locations where data for the blocks are written out to physical storage. A special physical block number comprises a special value that does not correspond to an actual disk location. Rather, the special physical block number serves as an indicator that the location of the data is not stored by the file system tier 604 and is actually within the persistent memory tier 606, which can be obtained using the container within which the location information of the data within the persistent memory tier 606 is encoded.

In an embodiment, the flag triggers the consistency point operation to refrain from allocating a physical block number for the dirty block, and to instead encode a persistent memory tier block location of a corresponding block comprising the up-to-date data within the persistent memory tier 606 (e.g., a location of a page within the persistent memory comprising the up-to-date data). In particular, at 506 of the method 500 of FIG. 5, the location information of the locations of the block (B′) 618, the block (C′) 620, and the block (D′) 622 within the persistent memory tier 606 is encoded into the container associated with the block (B) 610, the block (C) 612, and the block (D) 614. In an example, the flag serves as an indicator to the consistency point operation that there is no actual data associated with the dirty blocks to flush to the physical storage because the actual data is stored within the persistent memory tier 606.

In an embodiment, as the file system tier 604 is processing the messages 627 of blocks being framed from the persistent memory tier 606 to the file system tier 604, the messages 627 are logged within a log 624, such as a non-volatile log (NV log).
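The consistency point handling described above might be sketched as follows; the SPECIAL_PBN sentinel, the container dict, and allocate_vvbn are illustrative stand-ins for the special physical block number, the container encoding, and virtual volume block number allocation.

```python
# Illustrative consistency-point handling of a block flagged as dirty
# because its up-to-date data lives in the persistent memory tier.
SPECIAL_PBN = -1   # sentinel: "no disk location; data is in pmem"

fs_blocks = {"C": {"dirty_in_pmem": True, "vvbn": None, "pbn": None}}
container = {}     # container encodes pmem locations for flagged blocks

_next_vvbn = 100
def allocate_vvbn() -> int:
    """Stand-in for virtual volume block number allocation."""
    global _next_vvbn
    _next_vvbn += 1
    return _next_vvbn

def consistency_point(block_name: str, pmem_location: int) -> str:
    blk = fs_blocks[block_name]
    if blk["dirty_in_pmem"]:
        blk["vvbn"] = allocate_vvbn()          # allocate a vvbn
        blk["pbn"] = SPECIAL_PBN               # stamp the special PBN
        container[block_name] = pmem_location  # encode the pmem location
        return "skipped data flush; data lives in persistent memory tier"
    return "flushed block data to physical storage"

print(consistency_point("C", pmem_location=620))
```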
The file system tier 604 may log a message that a block within the persistent memory tier 606 has more up-to-date data than a corresponding block in the file system tier 604 (e.g., a message indicating that the block (C′) 620 comprises more up-to-date data than block (C) 612) into the log 624 after the file system tier 604 has marked the block as being a dirty block and/or has encoded location information of the block into the container.

In an embodiment, a read operation 630 directed to a block within the file system tier 604, such as the block (C) 612, is received by the node 602. In an example, the read operation 630 may correspond to a data management operation being implemented by the storage operating system of the node 602 in association with the file system tier 604, such as a snapshot operation or a file clone operation. In an example, the read operation 630 is part of a cross-tier data management operation that targets data stored across the file system tier 604 and the persistent memory tier 606, such as a file clone operation that clones a file whose data is stored across both the file system tier 604 and the persistent memory tier 606 (e.g., some up-to-date data of the file may be stored within the block (C′) 620 of the persistent memory tier 606).

As part of implementing the read operation 630 targeting the block (C) 612 within the file system tier 604, the location information of the block (C′) 620 is obtained from the container based upon the block (C) 612 being marked as a dirty block using a flag, at 508 of the method 500 of FIG. 5. At 510 of the method 500 of FIG. 5, the location information is used to retrieve 628 the more up-to-date data from the block (C′) 620 within the persistent memory tier 606 for processing the read operation 630. In this way, the read operation 630 utilizes the more up-to-date data from the block (C′) 620 instead of utilizing stale or missing data from the block (C) 612 marked as a dirty block within the file system tier 604. In an example, the up-to-date data is retrieved from the persistent memory tier 606 and is stored into the block (C) 612 of the file system tier 604, and the block (C) 612 is no longer marked as a dirty block and the flag is removed.

The log 624 may be used by the node 602 in the event the node 602 experiences a failure and is attempting to recover 640 from the failure, as illustrated by FIG. 6D. In an embodiment of recovering 640 from the failure, messages within the log 624 are replayed 642 upon the file system tier 604. As part of replaying a message indicating that a block (page) within the persistent memory tier 606 comprises more up-to-date data than a corresponding block within the file system tier 604, the corresponding block within the file system tier 604 is marked as a dirty block and location information of the block (page) within the persistent memory tier 606 is encoded into the container associated with the corresponding block. If the log 624 is determined to be compromised due to the failure, then the replay 642 is not performed (skipped).

Once the replay 642 of the log 624 is performed to replay the messages logged within the log 624 or the replay 642 is skipped, an asynchronous operation 644 is executed. In an embodiment, the asynchronous operation 644 is executed by the persistent memory tier 606. The asynchronous operation 644 is implemented to walk the persistent memory file system of the persistent memory tier 606 to identify a set of blocks within the persistent memory tier 606 comprising more up-to-date data compared to corresponding blocks within the file system tier 604.
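A minimal sketch of the dirty-block read path at steps 508 and 510: the flag redirects the read through the container to the persistent memory location, and the sketch optionally writes the data back and clears the flag, as in the last example above. All structures are hypothetical.

```python
# Illustrative read of a block marked dirty in the file system tier: the
# pmem location is obtained from the container and the up-to-date data is
# retrieved from the persistent memory tier.
pmem_data = {620: b"up-to-date C data"}
container = {"C": 620}
fs_blocks = {"C": {"dirty_in_pmem": True, "data": b"stale C data"}}

def read_block(name: str) -> bytes:
    blk = fs_blocks[name]
    if blk["dirty_in_pmem"]:
        data = pmem_data[container[name]]   # retrieve from the pmem tier
        blk["data"] = data                  # optionally store it back...
        blk["dirty_in_pmem"] = False        # ...and remove the dirty flag
        return data
    return blk["data"]

print(read_block("C"))  # b'up-to-date C data'
```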
If the replay 642 of the log 624 was performed, then the set of blocks excludes any blocks for which messages were replayed from the log 624. Thus, the set of blocks corresponds to a set of messages that were provided to the file system tier 604 but were not successfully completed and logged within the log 624 before the failure of the node 602. If the replay 642 of the log 624 was skipped, then the set of blocks would additionally include those blocks associated with messages that were previously logged into the log 624. Thus, the set of blocks corresponds to a set of messages that were provided to the file system tier 604 but were not successfully completed and logged within the log 624 before the failure of the node 602, and also corresponds to messages that were successfully completed and logged within the log 624 before the failure of the node 602. In an example, the set of blocks excludes blocks corresponding to snapshots of the file system of the persistent memory tier 606.

In an embodiment of implementing the asynchronous operation 644, the asynchronous operation 644 evaluates the state machines 626 of the blocks within the persistent memory tier 606 to identify the set of blocks as blocks (pages) having a dirty state. In an embodiment of implementing the asynchronous operation 644, the asynchronous operation 644 may be executed in parallel with the node 602 processing incoming I/O operations from client devices. In this way, the asynchronous operation 644 does not withhold/queue/block client I/O, thus improving the efficiency and recovery of the node 602.

Once the set of blocks is identified by the asynchronous operation 644, a set of messages (a new set of messages) may be generated and sent to the file system tier 604 to reframe the set of blocks within the persistent memory tier 606. The set of messages may indicate that the set of blocks within the persistent memory tier 606 comprises more up-to-date data than corresponding blocks within the file system tier 604, and may also comprise location information of the set of blocks within the persistent memory tier 606. The set of messages may trigger the file system tier 604 to mark blocks within the file system tier 604 as dirty blocks based upon the blocks corresponding to the set of blocks within the persistent memory tier 606. Also, the set of messages may trigger the file system tier 604 to encode locations of the set of blocks within the persistent memory tier 606 (locations of pages within the persistent memory) into the container of the file system tier 604 corresponding to the dirty blocks within the file system tier 604. In this way, the set of blocks within the persistent memory tier 606 comprising more up-to-date data than corresponding blocks within the file system tier 604 is reframed by the asynchronous operation 644 for notifying the file system tier 604, using the set of messages, that the more up-to-date data is stored within the persistent memory tier 606.

Still another embodiment involves a computer-readable medium 700 comprising processor-executable instructions configured to implement one or more of the techniques presented herein. An example embodiment of a computer-readable medium or a computer-readable device that is devised in these ways is illustrated in FIG. 7, wherein the implementation comprises a computer-readable medium 708, such as a compact disc-recordable (CD-R), a digital versatile disc-recordable (DVD-R), flash drive, a platter of a hard disk drive, etc., on which is encoded computer-readable data 706.
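The recovery flow can be summarized with the following sketch, assuming simple dict/list structures: the NV log is replayed unless compromised, and the asynchronous operation then reframes any remaining dirty blocks (shown synchronously here, although the description runs it in parallel with client I/O).

```python
# Illustrative recovery: replay the NV log, then reframe dirty blocks
# whose framing messages were lost in the failure.
def mark_dirty_in_fs(block: str, pmem_location: int):
    print(f"file system tier: mark {block} dirty, encode location {pmem_location}")

def recover(log: list[dict], log_compromised: bool, pmem_state: dict):
    replayed = set()
    if not log_compromised:
        for msg in log:                      # replay logged framing messages
            mark_dirty_in_fs(msg["block"], msg["pmem_location"])
            replayed.add(msg["block"])

    # Asynchronous reframing: walk the pmem state machines for dirty
    # pages, excluding any blocks already handled by the replay.
    reframe = [{"block": b, "pmem_location": s["location"]}
               for b, s in pmem_state.items()
               if s["state"] == "dirty" and b not in replayed]
    for msg in reframe:                      # the new set of messages
        mark_dirty_in_fs(msg["block"], msg["pmem_location"])
    return reframe

pmem_state = {"B'": {"state": "dirty", "location": 618},
              "C'": {"state": "dirty", "location": 620}}
log = [{"block": "B'", "pmem_location": 618}]
recover(log, log_compromised=False, pmem_state=pmem_state)  # reframes only C'
```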
This computer-readable data 706, such as binary data comprising at least one of a zero or a one, in turn comprises processor-executable computer instructions 704 configured to operate according to one or more of the principles set forth herein. In some embodiments, the processor-executable computer instructions 704 are configured to perform a method 702, such as at least some of the exemplary method 500 of FIG. 5, for example. In some embodiments, the processor-executable computer instructions 704 are configured to implement a system, such as at least some of the exemplary system 400 of FIG. 4 and/or at least some of the exemplary system 600 of FIGS. 6A-6D, for example. Many such computer-readable media are contemplated to operate in accordance with the techniques presented herein.

In an embodiment, the described methods and/or their equivalents may be implemented with computer executable instructions. Thus, in an embodiment, a non-transitory computer readable/storage medium is configured with stored computer executable instructions of an algorithm/executable application that when executed by a machine(s) cause the machine(s) (and/or associated components) to perform the method. Example machines include but are not limited to a processor, a computer, a server operating in a cloud computing system, a server configured in a Software as a Service (SaaS) architecture, a smart phone, and so on. In an embodiment, a computing device is implemented with one or more executable algorithms that are configured to perform any of the disclosed methods.

It will be appreciated that processes, architectures and/or procedures described herein can be implemented in hardware, firmware and/or software. It will also be appreciated that the provisions set forth herein may apply to any type of special-purpose computer (e.g., file host, storage server and/or storage serving appliance) and/or general-purpose computer, including a standalone computer or portion thereof, embodied as or including a storage system. Moreover, the teachings herein can be configured to a variety of storage system architectures including, but not limited to, a network-attached storage environment and/or a storage area network and disk assembly directly attached to a client or host computer. Storage system should therefore be taken broadly to include such arrangements in addition to any subsystems configured to perform a storage function and associated with other equipment or systems.

In some embodiments, methods described and/or illustrated in this disclosure may be realized in whole or in part on computer-readable media. Computer readable media can include processor-executable instructions configured to implement one or more of the methods presented herein, and may include any mechanism for storing this data that can be thereafter read by a computer system. Examples of computer readable media include (hard) drives (e.g., accessible via network attached storage (NAS)), Storage Area Networks (SAN), volatile and non-volatile memory, such as read-only memory (ROM), random-access memory (RAM), electrically erasable programmable read-only memory (EEPROM) and/or flash memory, compact disk read only memory (CD-ROM)s, CD-Rs, compact disk re-writeable (CD-RW)s, DVDs, cassettes, magnetic tape, magnetic disk storage, optical or non-optical data storage devices and/or any other medium which can be used to store data.
Although the subject matter has been described in language specific to structural features or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing at least some of the claims.

Various operations of embodiments are provided herein. The order in which some or all of the operations are described should not be construed to imply that these operations are necessarily order dependent. Alternative ordering will be appreciated given the benefit of this description. Further, it will be understood that not all operations are necessarily present in each embodiment provided herein. Also, it will be understood that not all operations are necessary in some embodiments.

Furthermore, the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term "article of manufacture" as used herein is intended to encompass a computer application accessible from any computer-readable device, carrier, or media. Of course, many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter.

As used in this application, the terms "component", "module," "system", "interface", and the like are generally intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component includes a process running on a processor, a processor, an object, an executable, a thread of execution, an application, or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components may reside within a process or thread of execution, and a component may be localized on one computer or distributed between two or more computers.

Moreover, "exemplary" is used herein to mean serving as an example, instance, illustration, etc., and not necessarily as advantageous. As used in this application, "or" is intended to mean an inclusive "or" rather than an exclusive "or". In addition, "a" and "an" as used in this application are generally to be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form. Also, at least one of A and B and/or the like generally means A or B and/or both A and B. Furthermore, to the extent that "includes", "having", "has", "with", or variants thereof are used, such terms are intended to be inclusive in a manner similar to the term "comprising".

Many modifications may be made to the instant disclosure without departing from the scope or spirit of the claimed subject matter. Unless specified otherwise, "first," "second," or the like are not intended to imply a temporal aspect, a spatial aspect, an ordering, etc. Rather, such terms are merely used as identifiers, names, etc. for features, elements, items, etc. For example, a first set of information and a second set of information generally correspond to set of information A and set of information B or two different or two identical sets of information or the same set of information.
Also, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art based upon a reading and understanding of this specification and the annexed drawings. The disclosure includes all such modifications and alterations and is limited only by the scope of the following claims. In particular regard to the various functions performed by the above described components (e.g., elements, resources, etc.), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., that is functionally equivalent), even though not structurally equivalent to the disclosed structure. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application.
11861200 | DETAILED DESCRIPTION OF THE DISCLOSURE

Embodiments of the invention include i) automated processes for recording information at a granular level; ii) methods for checking/verifying that data is used and processed in a manner consistent with an entity's internal policies and/or external regulations; and iii) methods for producing reports to authorized users (e.g., individuals and organizations) with information related to items i) and ii).

Embodiments also include systems for capturing required data in an immutable fashion so that users outside of an entity (e.g., the public, third parties) can check and audit that internal policies and other regulatory policies and frameworks are followed. These policies and frameworks may include ensuring that: i) data is collected appropriately; ii) data is appropriately processed to be used as inputs to fraud or risk models; iii) inputs are processed by fraud and risk models and workflows to produce scores (e.g., risk scores) appropriately; and iv) scores are used by fraud and risk systems appropriately.

In an embodiment, a system is provided such that members of a consortium can also check and audit these four processes. A consortium may, for example, check that user location data is not used for ad targeting or that user data is not used to build risk models.

An example of a workflow for a risk model is when a user risk model is used to produce a score about the user's overall risk; a separate transaction risk model is used to produce a score about the risk of a particular transaction; both of the scores, plus additional inputs, are used as input to a third risk model that produces a third score that integrates the scores from the two models; and the third score is used as input to a fourth model, which rescales the score and applies certain business rules, such as ignoring small dollar transactions that may unnecessarily inconvenience the user compared to the potential reduction in risk to the organization. Examples of other outputs of models beyond scores include the confidence level associated with the score and certain explanatory codes or strings that can be used to help explain to the user why the score was particularly high or low.

There are several benefits and advantages of the embodiments provided herein, including but not limited to, the following examples. Embodiments herein utilize distributed data objects (e.g., data storage blocks) in order to capture all the relevant collected and processed data at the scale and level of granularity required. Distributed data objects (e.g., provenance blocks) are provided that contain provenance information about the data, how and when it is accessed, and how and when it is processed, and "cell level access methods" are provided that ensure users are given access to precisely the data that they are authorized to view at the granularity required. In addition, analytic processing of data is expressed in workflow languages, and immutable logs of the workflows are created by the embodiments using provenance blocks, which are used to capture the internal processing of data, model inputs, model outputs, and system alerts and notifications at the scale and granularity required. Further, distributed data and provenance blocks with blockchain or centralized ledgers according to embodiments herein provide access to public and consortium members to precisely the data they are authorized to see.

FIG. 1 is an overview of a system 100 according to an embodiment of the present invention.
FIG.1is an overview of a system100according to an embodiment of the present invention. The system100consists of a plurality of layers, with modules and components in layers communicating with the layers above or below using application programming interfaces (APIs). Layer 1 is a distributed data block-based secure storage layer that includes one or more data storage blocks101and one or more data provenance blocks102in a data lake (not shown). Layer 2 is an infrastructure or management module103coupled to Layer 1 for managing the data storage blocks101and data provenance blocks102. In an embodiment, the data block system management module103of Layer 2 includes a Data and Model Provenance Blocks (hereinafter "DMPB") module103a. In an embodiment, Layer 2 includes a centralized ledger103b. The management module103is adapted and configured to store and/or manage one or more of the following: immutable cryptographically signed logs, claims, and other assertions about user access, data access, data provenance, data processing and related events. (Note, the acronym DMPB is used throughout this disclosure to describe the data and model provenance blocks that contain information about how data is processed, and, in particular, how data is processed to produce fraud and risk models.) In an embodiment, the DMPB module103auses blockchain so that members of the public who have contributed their own data and interacted with the system can check how their data is used by the system100and verify that this use is consistent with the required policies and regulations. Once a user registers with the system100, the user is assigned a random string of letters and numbers (i.e., the blockchain user ID) that is associated with all user related data in data storage blocks101and all provenance related data in data provenance blocks102. Since the data storage blocks101and data provenance blocks102may be immutable, such that they cannot be changed once they are written, the system100may provide the user with the necessary information about what data of theirs was collected and how it was used. Layer 3 is a logging module104that includes an identity, authorization and access management (IAM) module403. The logging module104communicates with the DMPB module in Layer 2 via API calls. The IAM module403provides: a) identity access management; b) role-based and attribute-based access controls; c) fine-grained cell-based access controls; and d) data provenance and auditing. The logging module104writes immutable cryptographically signed logs about user access, data access, data provenance, data processing and related events to Layer 2. Layer 4 is a rule (e.g., regulatory and policy) analytics module105that is adapted and configured to provide real-time processing and auditing, including: a) continuous checking of data sharing rules; b) continuous checking of privacy rules; c) continuous checking of regulatory requirements; and d) real-time auditing of the continuous checking of steps a), b) and c). Layer 5 is a fraud and analytics module106that provides functionality for building and deploying risk and fraud models with data provided by Layer 1, with identity and access management provided by Layer 3, and with rules (e.g., data sharing, privacy rules, and regulatory requirements) checking provided by Layer 4. Embodiments of the identity, authorization and access management (IAM) module403of Layer 3 and the rule analytics module105of Layer 4 may be provided with either a centralized ledger103bor DMPB module103ain Layer 2.
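As one illustration of how immutable, cryptographically signed logs of this kind might be kept, the sketch below uses a simple hash chain standing in for the DMPB module103aor centralized ledger103b; the class and field names are assumptions for illustration, not the actual implementation.

import hashlib, json, secrets, time

def new_user_id():
    # Random string of letters and numbers assigned at registration.
    return secrets.token_hex(16)

class ProvenanceLedger:
    # Append-only log; each entry embeds the previous entry's hash, so
    # altering an earlier entry invalidates every entry after it.
    def __init__(self):
        self.entries = []

    def append(self, user_id, event):
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"user_id": user_id, "event": event,
                "ts": time.time(), "prev": prev}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(dict(body, hash=digest))

    def verify(self):
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("user_id", "event", "ts", "prev")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True

ledger = ProvenanceLedger()
uid = new_user_id()
ledger.append(uid, "data block 101 created")
ledger.append(uid, "data block 101 read by risk model")
assert ledger.verify()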
A public governance model for Layer 2 data blocks can be used, or a consortium or federated governance model for a centralized ledger can be used so that access to the data is limited, for example, to partners providing data for the fraud and risk models or to partners deploying the risk and fraud models developed by the system. FIG.2illustrates the operation of the management module103of Layer 2 inFIG.1. As the various steps required to build fraud and risk models are completed by the system100, the completion of each step (e.g., as assertions or claims) is written by Layer 3 to a DMPB103aor to a centralized ledger103bin Layer 2, depending upon a desired governance model. The management module103provides all the information necessary to check that fraud, risk and other models are collecting and processing data appropriately as required by the rules (e.g., internal policies and external policies and regulations). As data is accessed by users (e.g., User1, User2, . . . User n) within an organization205, as data is processed to produce models, and as scores produced by models are processed, data and model provenance assertions/claims are written by the logging module104of Layer 3, as also illustrated inFIG.2. Whenever data is collected or accessed, appropriate checks are made to ensure that all required conditions and regulations are satisfied, and the appropriate assertions/claims are recorded by Layer 4. Data storage blocks101are the smallest, most granular pieces of information stored within the system100. All data storage blocks101are cryptographically bound to visibility and sharing restrictions in accordance with policies defined by the user, as illustrated by encrypted data block301. Data storage blocks101are then encrypted for processing and persistence using encryption header302and encrypted payload303. In an embodiment, data storage blocks101are centralized. In another embodiment, data storage blocks101are geographically distributed within all applicable geographic regions to enable high-availability, failover, locality-based speed of response, and consistency of user experience. Visibility of data storage blocks101is cryptographically attached to each data storage block101. Access to these data storage blocks101requires the appropriate authorizations for secure data sharing based on user access visibility assessed through a Smart Contract associated with contract checker204. Data provenance blocks102are a record of all interactions with data storage blocks101. They also provide an immutable record of how fraud and risk models are built and how they are used to process and to score user data. When a data storage block101is created by a user within an organization, records of who or what created it, when it was accessed, who or what accessed it, why it was accessed, and where it was used are stored. Provenance blocks are used for patterns of life, attribution, pedigree, and lineage of the data blocks. This is a continuous process for appending immutable transaction details to the data block over its lifetime. Provenance records, unless otherwise prohibited by law or customer policy, are retained for analysis after data blocks are deleted. FIG.3illustrates the structure of an embodiment of an encrypted data block301such as data storage block101or data provenance block102, which may be utilized in Layer 1 ofFIG.1. Encrypted data block301includes an encryption header302.
The encryption header302contains the information necessary so that each encrypted data block301may be part of a distributed block-based storage system that may include a plurality of data storage blocks101and data provenance blocks102. The encrypted data block301also includes an encrypted payload303, whose encryption key is provided in the encryption header302. The encrypted payload303is comprised of two parts: 1) a crypto header303a, which contains provenance related information; and 2) the associated payload303b. The crypto header303acontains a cryptographic signature that is used to verify the integrity of the encrypted data block301, so that it is immutable. This is necessary so that the encrypted data block301itself can be audited by the Regulatory and Provenance Analytics of the rule analytics module105. Finally, the payload303bcontains the actual data being managed by the encrypted data block301. This may include the original data and/or provenance information about the data generated by the system100. The payload303bmay contain several different types of data, including, but not limited to: data collected for analysis by the system; cleaned, aggregated, and transformed data that are inputs to analytic models; the outputs of analytic models, which may be the inputs of other analytic models that are part of an analytic workflow; scores produced by analytic models or analytic workflows; analytic models themselves in a serialized or other format so that they can be stored in one or more data storage blocks101; and rules that are used for post-processing the outputs of analytic models and analytic workflows before they are passed to other external interfaces and components. These rules are also in a serialized or other format so that they can be stored in one or more data storage blocks101. Creation of provenance records: Provenance records in data provenance blocks102are created by the system for a number of different reasons and purposes, including, but not limited to, when a new data storage block101is created, updated, or deleted. In an embodiment, data is only deleted or changed when required by the rules such as regulations or policy. Data is immutable, and changes to data are made by appending the changes to the current state of the data, or using another mechanism for creating and maintaining immutable data, so that there is a complete audit chain of all changes to the data. Provenance records are also created under one or more of the following conditions: when data storage blocks101are accessed by any user or system process; or when a policy requirement of a regulation changes the access rules for data. A regulation change may be, for example, that provenance records can be hidden after a requirement to purge data following a request for the right to be forgotten.
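The following sketch illustrates, with invented field names and with the payload encryption itself omitted, how the crypto header's signature allows an auditor to confirm that a block has not been altered; it is a simplification of the block structure described above, not the actual format.

import hashlib, hmac

SIGNING_KEY = b"demo-key"  # stand-in for key material referenced by the header

def make_block(payload, provenance):
    crypto_header = {
        "provenance": provenance,  # who/what/when/why, per the text above
        "signature": hmac.new(SIGNING_KEY, payload,
                              hashlib.sha256).hexdigest(),
    }
    encryption_header = {
        "key_id": "kms://example/key-1",  # hypothetical key reference
        "algo": "AES-256-GCM",            # actual encryption omitted here
    }
    return {"encryption_header": encryption_header,
            "encrypted_payload": {"crypto_header": crypto_header,
                                  "payload": payload}}

def audit_block(block):
    # Regulatory/provenance analytics can re-derive the signature and
    # compare it to the stored one to confirm the block was not altered.
    inner = block["encrypted_payload"]
    expected = hmac.new(SIGNING_KEY, inner["payload"],
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, inner["crypto_header"]["signature"])

blk = make_block(b"user purchase record", {"created_by": "User1"})
assert audit_block(blk)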
Returning toFIG.2, the provenance & data block manager201continuously evaluates data storage blocks101for changes in customer policy and enforces regulatory modifications required for access to the data storage blocks101. If a data block policy is updated for any reason, it is tracked via data provenance block102for later analysis. The provenance & data block manager201is also adapted and configured to provide auditing and precision data deletions periodically or as desired. Centralized ledger202is an immutable storage mechanism for the provenance & data block manager201. Centralized ledger202supports continuous auditing and transparency for the life of provenance block102and data storage block101. The encryption manager203provides functionality to associate inbound data storage blocks101and provenance blocks102with the appropriate encryption tokens, and supports aligning user and/or process access, through the contract checker204(e.g., Smart Contract Controller Gateway), to the encryption tokens required for access to the data within the data storage block101. The contract checker204is the entry point from the organization205to the management module103. It facilitates authentication and authorization for each user or process that is to be granted access. It is the policy determination point which verifies the user and/or process identity, location, and access privileges. FIG.4illustrates how the logging module104logs all relevant data. The logging module104logs both user access406and system access405. Both are authenticated and authorized using the identity, authorization and access management (IAM) module403. Based upon determinations by the IAM module403, reads and writes are permitted on the data storage block101and data provenance block102. The rule analytics module105accesses the data provenance blocks102to check that the appropriate policies and regulations are enforced. Note, data storage block101and data provenance block102are not part of the logging module104, as can be seen inFIG.4. FIG.5illustrates how smart contracts allow users and organizations to verify directly that internal entity policies and third-party regulations are being followed, without an entity's participation or participation from other third parties. For example, as illustrated inFIG.5, organization205(e.g., user group, entity, system process, etc.) can use a contract checker204to access encrypted data blocks301. As long as the user or system request has access to the relevant data, as determined by the IAM module403, the provenance data within the encrypted data block301may be accessed and analyzed by the rule analytics modules105,801, with the results returned to the requester. This is because the necessary provenance data has been stored in immutable provenance blocks102by the model and analytics logging module402, as shown inFIG.7. FIG.6shows how fine-grained identity and access management (IAM) is handled by the IAM module403for both user access406and system access405to data storage block101and data provenance block102. User and system identity is first authenticated with the authentication services603. Once authenticated, access to a particular field of data storage block101or data provenance block102is provided by the authorization service604. Data may be from multiple data sources605a-n, but access is provided to precisely the required data source605a-nand only to the authorized fields within that data source. In other words, the authorization to access data is fine-grained or, as it is sometimes called, cell-based; authorization is not provided to entire datasets unless the user or system process is authorized to access the entire dataset. The data and provenance blocks are encrypted as shown inFIG.3and only decrypted when the user or system is authorized to access a particular field or fields. FIG.7is a flow chart of a method for model and analytic scoring, logging and auditing according to an embodiment of the present invention. Provenance records are kept of the workflows used for the following steps. Step701collects data from multiple sources605a-n. Step702cleans and normalizes the data. Step703computes features (e.g., by aggregating or transforming data).
Step704uses the features as inputs to models and workflows. Step705uses the scores produced by the models and workflows in step704as inputs to the post-processing rules to produce the final scores and other outputs. The processing steps701through705may be implemented or expressed in different ways. One of the common implementations is to express each workflow as a directed acyclic graph (DAG), in which each node of the graph is a software program or application that is called, and a directed edge between two nodes indicates how the outputs from one node are used as the inputs to another node. Each software program or application is labeled with a unique label and is available in an environment or framework that allows its execution. For example, in an embodiment, the software program or application may be in a Docker container or other container, which provides a virtualized environment that encapsulates software applications and all the required libraries and configuration files. Alternatively, in another implementation, the software program or application may be part of a serverless framework. In this context, a container is a packaging of software and the necessary software libraries and configuration files so that the container may be run using a cloud-computing platform as a service execution model that uses virtualization to: i) support the execution of programs within containers, and ii) enable containers to communicate with other containers, as specified in appropriate configuration files. In this context, a serverless framework is another cloud-computing execution model in which the cloud service provider runs the server or servers executing the software code, and dynamically manages the allocation of machine resources required to run the server or servers. In this way, each node in each workflow corresponding to a software program or application is assigned a unique label and this information is persisted in an immutable provenance block102. In addition, each workflow is assigned a unique label and is also persisted in immutable provenance blocks. In this way, provenance information persists in the provenance blocks102capturing the data source605nand the processing workflow steps701,702, . . . ,705. This enables the logging module402to associate an immutable provenance record with each score or other output produced by the fraud and risk analytics module106ofFIG.2. Given these provenance records, the rule analytics module105can review the compliance of the scoring either record by record as the scoring is done, or periodically by examining batches of scored data and their associated provenance records. As an alternative implementation, a single provenance record that contains the totality of information characterizing the data source605nand the processing steps701,702, . . . ,705can be associated with and used to provide the immutable provenance information for each single score or for a batch of scores produced by the fraud and risk analytics module106. In some implementations, the cleaned and normalized data from step702is used directly by the analytic models and workflows of step704, and the feature computation of step703is not necessarily needed. This is the case, for example, with deep learning models.
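A minimal sketch of such a DAG-based workflow follows; the class, the labels, and the toy node functions are invented for illustration, with each node standing in for a container or serverless function and each run appending a provenance record.

import uuid

class Workflow:
    # Each workflow and each node gets a unique label; running a node
    # appends a provenance record, mirroring the description above.
    def __init__(self, name):
        self.name = name
        self.workflow_id = str(uuid.uuid4())
        self.nodes = {}   # label -> callable (e.g., a container entrypoint)
        self.deps = {}    # label -> upstream labels whose outputs it consumes

    def add_node(self, label, fn, deps=()):
        self.nodes[label] = fn
        self.deps[label] = list(deps)

    def run(self, sources, provenance):
        results = dict(sources)
        pending = set(self.nodes)
        while pending:
            ready = [n for n in pending
                     if all(d in results for d in self.deps[n])]
            if not ready:
                raise ValueError("cycle or missing input")
            for label in ready:
                results[label] = self.nodes[label](
                    *[results[d] for d in self.deps[label]])
                provenance.append({"workflow": self.workflow_id,
                                   "node": label,
                                   "inputs": self.deps[label]})
                pending.remove(label)
        return results

wf = Workflow("fraud-scoring")
wf.add_node("clean", lambda raw: [x for x in raw if x is not None],
            deps=("collect",))
wf.add_node("score", lambda rows: sum(rows) / len(rows), deps=("clean",))
log = []
out = wf.run({"collect": [1, 2, None, 3]}, log)

After the run, log holds one record per executed node, which is the kind of information that would be persisted in an immutable provenance block102.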
FIG.8illustrates software modules for checking that an entity's internal policies and external policies required by users, data suppliers, third-party regulatory agencies, industry best practices, and others are being supported. For example, a particular user may not have given permission for his or her historical purchases to be used in fraud and risk models. In this case, when module802develops a fraud or risk model using data available in the datamart702, module801would verify that this user's historical purchases, or data derived from this user's historical purchases, would not be used by module802as part of the training data from datamart702to build any models. In addition, as part of an audit analysis of historical scores, provenance blocks102could be analyzed to confirm that the user's historical purchases were not used to build any models. Similarly, provenance records could be analyzed for any user of interest to check whether any model deployed by module803used data from the user being audited, and to ensure any such use was consistent with the user's preferences at that time, as provided by the user data401and as recorded in the relevant immutable data provenance block102. In this way, audit reports806can be produced over batches of data, as well as alerts804and real-time monitoring records805for individual records being scored, as provided by the model and analytics logging data402. The subject matter described herein can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structural means disclosed in this specification and structural equivalents thereof, or in combinations of them. The subject matter described herein can be implemented as one or more computer program products, such as one or more computer programs tangibly embodied in an information carrier (e.g., in a machine readable storage device), or embodied in a propagated signal, for execution by, or to control the operation of, data processing apparatus (e.g., a programmable processor, a computer, or multiple computers). A computer program (also known as a program, software, software application, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file. A program can be stored in a portion of a file that holds other programs or data, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network. The processes and logic flows described in this specification, including the method steps of the subject matter described herein, can be performed by one or more programmable processors executing one or more computer programs to perform functions of the subject matter described herein by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus of the subject matter described herein can be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic disks, magneto-optical disks, or optical disks. Information carriers suitable for embodying computer program instructions and data include all forms of nonvolatile memory, including by way of example semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices); magnetic disks (e.g., internal hard disks or removable disks); magneto-optical disks; and optical disks (e.g., CD and DVD disks). The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. The subject matter described herein can be implemented in a computing system that includes a back end component (e.g., a data server), a middleware component (e.g., an application server), or a front end component (e.g., a client computer, mobile device, or wearable device having a graphical user interface or a web browser through which a user can interact with an implementation of the subject matter described herein), or any combination of such back end, middleware, and front end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network ("LAN") and a wide area network ("WAN"), e.g., the Internet. It is to be understood that the disclosed subject matter is not limited in its application to the details of construction and to the arrangements of the components set forth in the following description or illustrated in the drawings. The disclosed subject matter is capable of other embodiments and of being practiced and carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting. As such, those skilled in the art will appreciate that the conception, upon which this disclosure is based, may readily be utilized as a basis for the designing of other structures, methods, and systems for carrying out the several purposes of the disclosed subject matter. It is important, therefore, that the claims be regarded as including such equivalent constructions insofar as they do not depart from the spirit and scope of the disclosed subject matter. Although the disclosed subject matter has been described and illustrated in the foregoing exemplary embodiments, it is understood that the present disclosure has been made only by way of example, and that numerous changes in the details of implementation of the disclosed subject matter may be made without departing from the spirit and scope of the disclosed subject matter, which is limited only by the claims which follow. | 27,525
11861201 | Like reference symbols in the various drawings indicate like elements. DETAILED DESCRIPTION System Overview: In some implementations, the present disclosure may be embodied as a method, system, or computer program product. Accordingly, in some implementations, the present disclosure may take the form of an entirely hardware implementation, an entirely software implementation (including firmware, resident software, micro-code, etc.) or an implementation combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module" or "system." Furthermore, in some implementations, the present disclosure may take the form of a computer program product on a computer-usable storage medium having computer-usable program code embodied in the medium. In some implementations, any suitable computer usable or computer readable medium (or media) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer-usable, or computer-readable, storage medium (including a storage device associated with a computing device or client electronic device) may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable medium may include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a digital versatile disk (DVD), a static random access memory (SRAM), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, media such as those supporting the internet or an intranet, or a magnetic storage device. Note that the computer-usable or computer-readable medium could even be a suitable medium upon which the program is stored, scanned, compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory. In the context of the present disclosure, a computer-usable or computer-readable, storage medium may be any tangible medium that can contain or store a program for use by or in connection with the instruction execution system, apparatus, or device. In some implementations, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. In some implementations, such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. In some implementations, the computer readable program code may be transmitted using any appropriate medium, including but not limited to the internet, wireline, optical fiber cable, RF, etc. In some implementations, a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
In some implementations, computer program code for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Java®, Smalltalk, C++ or the like. Java® and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates. However, the computer program code for carrying out operations of the present disclosure may also be written in conventional procedural programming languages, such as the “C” programming language, PASCAL, or similar programming languages, as well as in scripting languages such as Javascript, PERL, or Python. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the internet using an Internet Service Provider). In some implementations, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGAs) or other hardware accelerators, micro-controller units (MCUs), or programmable logic arrays (PLAs) may execute the computer readable program instructions/code by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure. In some implementations, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus (systems), methods and computer program products according to various implementations of the present disclosure. Each block in the flowchart and/or block diagrams, and combinations of blocks in the flowchart and/or block diagrams, may represent a module, segment, or portion of code, which comprises one or more executable computer program instructions for implementing the specified logical function(s)/act(s). These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the computer program instructions, which may execute via the processor of the computer or other programmable data processing apparatus, create the ability to implement one or more of the functions/acts specified in the flowchart and/or block diagram block or blocks or combinations thereof. It should be noted that, in some implementations, the functions noted in the block(s) may occur out of the order noted in the figures (or combined or omitted). For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. 
In some implementations, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks or combinations thereof. In some implementations, the computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed (not necessarily in a particular order) on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts (not necessarily in a particular order) specified in the flowchart and/or block diagram block or blocks or combinations thereof. Referring now to the example implementation ofFIG.1, there is shown recovery process10that may reside on and may be executed by a computer (e.g., computer12), which may be connected to a network (e.g., network14) (e.g., the internet or a local area network). Examples of computer12(and/or one or more of the client electronic devices noted below) may include, but are not limited to, a storage system (e.g., a Network Attached Storage (NAS) system, a Storage Area Network (SAN)), a personal computer(s), a laptop computer(s), mobile computing device(s), a server computer, a series of server computers, a mainframe computer(s), or a computing cloud(s). As is known in the art, a SAN may include one or more of the client electronic devices, including a RAID device and a NAS system. In some implementations, each of the aforementioned may be generally described as a computing device. In certain implementations, a computing device may be a physical or virtual device. In many implementations, a computing device may be any device capable of performing operations, such as a dedicated processor, a portion of a processor, a virtual processor, a portion of a virtual processor, a portion of a virtual device, or a virtual device. In some implementations, a processor may be a physical processor or a virtual processor. In some implementations, a virtual processor may correspond to one or more parts of one or more physical processors. In some implementations, the instructions/logic may be distributed and executed across one or more processors, virtual or physical, to execute the instructions/logic. Computer12may execute an operating system, for example, but not limited to, Microsoft® Windows®; Mac® OS X®; Red Hat® Linux®, Windows® Mobile, Chrome OS, Blackberry OS, Fire OS, or a custom operating system. (Microsoft and Windows are registered trademarks of Microsoft Corporation in the United States, other countries or both; Mac and OS X are registered trademarks of Apple Inc. in the United States, other countries or both; Red Hat is a registered trademark of Red Hat Corporation in the United States, other countries or both; and Linux is a registered trademark of Linus Torvalds in the United States, other countries or both).
In some implementations, as will be discussed below in greater detail, a recovery process, such as recovery process10ofFIG.1, may maintain a back pointer from a physical layer block (PLB) to a virtual layer block (VLB) in a multi-level hierarchical file system. A generation number may be maintained in the VLB, wherein the generation number may indicate when data is moved from the PLB to another PLB. An object may be reconstructed in the multi-level hierarchical file system based upon, at least in part, at least one of the back pointer and the generation number. In some implementations, the instruction sets and subroutines of recovery process10, which may be stored on a storage device, such as storage device16, coupled to computer12, may be executed by one or more processors and one or more memory architectures included within computer12. In some implementations, storage device16may include but is not limited to: a hard disk drive; all forms of flash memory storage devices; a tape drive; an optical drive; a RAID array (or other array); a random access memory (RAM); a read-only memory (ROM); or combination thereof. In some implementations, storage device16may be organized as an extent, an extent pool, a RAID extent (e.g., an example 4D+1P R5, where the RAID extent may include, e.g., five storage device extents that may be allocated from, e.g., five different storage devices), a mapped RAID (e.g., a collection of RAID extents), or combination thereof. In some implementations, network14may be connected to one or more secondary networks (e.g., network18), examples of which may include but are not limited to: a local area network; a wide area network or other telecommunications network facility; or an intranet, for example. The phrase "telecommunications network facility," as used herein, may refer to a facility configured to transmit, and/or receive transmissions to/from one or more mobile client electronic devices (e.g., cellphones, etc.) as well as many others. In some implementations, computer12may include a data store, such as a database (e.g., relational database, object-oriented database, triplestore database, etc.) and may be located within any suitable memory location, such as storage device16coupled to computer12. In some implementations, data, metadata, information, etc. described throughout the present disclosure may be stored in the data store. In some implementations, computer12may utilize any known database management system such as, but not limited to, DB2, in order to provide multi-user access to one or more databases, such as the above noted relational database. In some implementations, the data store may also be a custom database, such as, for example, a flat file database or an XML database. In some implementations, any other form(s) of a data storage structure and/or organization may also be used. In some implementations, recovery process10may be a component of the data store, a standalone application that interfaces with the above noted data store and/or an applet/application that is accessed via client applications22,24,26,28. In some implementations, the above noted data store may be, in whole or in part, distributed in a cloud computing topology. In this way, computer12and storage device16may refer to multiple devices, which may also be distributed throughout the network.
In some implementations, computer12may execute a storage management application (e.g., storage management application21), examples of which may include, but are not limited to, e.g., a storage system application, a cloud computing application, a data synchronization application, a data migration application, a garbage collection application, or other application that allows for the implementation and/or management of data in a clustered (or non-clustered) environment (or the like). In some implementations, recovery process10and/or storage management application21may be accessed via one or more of client applications22,24,26,28. In some implementations, recovery process10may be a standalone application, or may be an applet/application/script/extension that may interact with and/or be executed within storage management application21, a component of storage management application21, and/or one or more of client applications22,24,26,28. In some implementations, storage management application21may be a standalone application, or may be an applet/application/script/extension that may interact with and/or be executed within recovery process10, a component of recovery process10, and/or one or more of client applications22,24,26,28. In some implementations, one or more of client applications22,24,26,28may be a standalone application, or may be an applet/application/script/extension that may interact with and/or be executed within and/or be a component of recovery process10and/or storage management application21. Examples of client applications22,24,26,28may include, but are not limited to, e.g., a storage system application, a cloud computing application, a data synchronization application, a data migration application, a garbage collection application, or other application that allows for the implementation and/or management of data in a clustered (or non-clustered) environment (or the like), a standard and/or mobile web browser, an email application (e.g., an email client application), a textual and/or a graphical user interface, a customized web browser, a plugin, an Application Programming Interface (API), or a custom application. The instruction sets and subroutines of client applications22,24,26,28, which may be stored on storage devices30,32,34,36, coupled to client electronic devices38,40,42,44, may be executed by one or more processors and one or more memory architectures incorporated into client electronic devices38,40,42,44. In some implementations, one or more of storage devices30,32,34,36, may include but are not limited to: hard disk drives; flash drives, tape drives; optical drives; RAID arrays; random access memories (RAM); and read-only memories (ROM). Examples of client electronic devices38,40,42,44(and/or computer12) may include, but are not limited to, a personal computer (e.g., client electronic device38), a laptop computer (e.g., client electronic device40), a smart/data-enabled, cellular phone (e.g., client electronic device42), a notebook computer (e.g., client electronic device44), a tablet, a server, a television, a smart television, a smart speaker, an Internet of Things (IoT) device, a media (e.g., video, photo, etc.) capturing device, and a dedicated network device. Client electronic devices38,40,42,44may each execute an operating system, examples of which may include but are not limited to, Android™, Apple® iOS®, Mac® OS X®; Red Hat® Linux®, Windows® Mobile, Chrome OS, Blackberry OS, Fire OS, or a custom operating system. 
In some implementations, one or more of client applications22,24,26,28may be configured to effectuate some or all of the functionality of recovery process10(and vice versa). Accordingly, in some implementations, recovery process10may be a purely server-side application, a purely client-side application, or a hybrid server-side/client-side application that is cooperatively executed by one or more of client applications22,24,26,28and/or recovery process10. In some implementations, one or more of client applications22,24,26,28may be configured to effectuate some or all of the functionality of storage management application21(and vice versa). Accordingly, in some implementations, storage management application21may be a purely server-side application, a purely client-side application, or a hybrid server-side/client-side application that is cooperatively executed by one or more of client applications22,24,26,28and/or storage management application21. As one or more of client applications22,24,26,28, recovery process10, and storage management application21, taken singly or in any combination, may effectuate some or all of the same functionality, any description of effectuating such functionality via one or more of client applications22,24,26,28, recovery process10, storage management application21, or combination thereof, and any described interaction(s) between one or more of client applications22,24,26,28, recovery process10, storage management application21, or combination thereof to effectuate such functionality, should be taken as an example only and not to limit the scope of the disclosure. In some implementations, one or more of users46,48,50,52may access computer12and recovery process10(e.g., using one or more of client electronic devices38,40,42,44) directly through network14or through secondary network18. Further, computer12may be connected to network14through secondary network18, as illustrated with phantom link line54. Recovery process10may include one or more user interfaces, such as browsers and textual or graphical user interfaces, through which users46,48,50,52may access recovery process10. In some implementations, the various client electronic devices may be directly or indirectly coupled to network14(or network18). For example, client electronic device38is shown directly coupled to network14via a hardwired network connection. Further, client electronic device44is shown directly coupled to network18via a hardwired network connection. Client electronic device40is shown wirelessly coupled to network14via wireless communication channel56established between client electronic device40and wireless access point (i.e., WAP)58, which is shown directly coupled to network14. WAP58may be, for example, an IEEE 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, Wi-Fi®, RFID, and/or Bluetooth™ (including Bluetooth™ Low Energy) device that is capable of establishing wireless communication channel56between client electronic device40and WAP58. Client electronic device42is shown wirelessly coupled to network14via wireless communication channel60established between client electronic device42and cellular network/bridge62, which is shown by example directly coupled to network14. In some implementations, some or all of the IEEE 802.11x specifications may use Ethernet protocol and carrier sense multiple access with collision avoidance (i.e., CSMA/CA) for path sharing. 
The various 802.11x specifications may use phase-shift keying (i.e., PSK) modulation or complementary code keying (i.e., CCK) modulation, for example. Bluetooth™ (including Bluetooth™ Low Energy) is a telecommunications industry specification that allows, e.g., mobile phones, computers, smart phones, and other electronic devices to be interconnected using a short-range wireless connection. Other forms of interconnection (e.g., Near Field Communication (NFC)) may also be used. In some implementations, various I/O requests (e.g., I/O request15) may be sent from, e.g., client applications22,24,26,28to, e.g., computer12. Examples of I/O request15may include but are not limited to, data write requests (e.g., a request that content be written to computer12) and data read requests (e.g., a request that content be read from computer12). Data Storage System: Referring also to the example implementation ofFIGS.2-3(e.g., where computer12may be configured as a data storage system), computer12may include storage processor100and a plurality of storage targets (e.g., storage targets102,104,106,108,110). In some implementations, storage targets102,104,106,108,110may include any of the above-noted storage devices. In some implementations, storage targets102,104,106,108,110may be configured to provide various levels of performance and/or high availability. For example, storage targets102,104,106,108,110may be configured to form a non-fully-duplicative fault-tolerant data storage system (such as a non-fully-duplicative RAID data storage system), examples of which may include but are not limited to: RAID 3 arrays, RAID 4 arrays, RAID 5 arrays, and/or RAID 6 arrays. It will be appreciated that various other types of RAID arrays may be used without departing from the scope of the present disclosure. While in this particular example, computer12is shown to include five storage targets (e.g., storage targets102,104,106,108,110), this is for example purposes only and is not intended to limit the present disclosure. For instance, the actual number of storage targets may be increased or decreased depending upon, e.g., the level of redundancy/performance/capacity required. Further, the storage targets (e.g., storage targets102,104,106,108,110) included with computer12may be configured to form a plurality of discrete storage arrays. For instance, and assuming for example purposes only that computer12includes, e.g., ten discrete storage targets, a first five targets (of the ten storage targets) may be configured to form a first RAID array and a second five targets (of the ten storage targets) may be configured to form a second RAID array. In some implementations, one or more of storage targets102,104,106,108,110may be configured to store coded data (e.g., via storage management process21), wherein such coded data may allow for the regeneration of data lost/corrupted on one or more of storage targets102,104,106,108,110. Examples of such coded data may include but are not limited to parity data and Reed-Solomon data. Such coded data may be distributed across all of storage targets102,104,106,108,110or may be stored within a specific storage target. Examples of storage targets102,104,106,108,110may include one or more data arrays, wherein a combination of storage targets102,104,106,108,110(and any processing/control systems associated with storage management application21) may form data array112. The manner in which computer12is implemented may vary depending upon, e.g., the level of redundancy/performance/capacity required.
For example, computer12may be configured as a SAN (i.e., a Storage Area Network), in which storage processor100may be, e.g., a dedicated computing system and each of storage targets102,104,106,108,110may be a RAID device. An example of storage processor100may include but is not limited to a VPLEX™ system offered by Dell EMC™ of Hopkinton, MA. In the example where computer12is configured as a SAN, the various components of computer12(e.g., storage processor100, and storage targets102,104,106,108,110) may be coupled using network infrastructure114, examples of which may include but are not limited to an Ethernet (e.g., Layer 2 or Layer 3) network, a fiber channel network, an InfiniBand network, or any other circuit switched/packet switched network. As discussed above, various I/O requests (e.g., I/O request15) may be generated. For example, these I/O requests may be sent from, e.g., client applications22,24,26,28to, e.g., computer12. Additionally/alternatively (e.g., when storage processor100is configured as an application server or otherwise), these I/O requests may be internally generated within storage processor100(e.g., via storage management process21). Examples of I/O request15may include but are not limited to data write request116(e.g., a request that content118be written to computer12) and data read request120(e.g., a request that content118be read from computer12). In some implementations, during operation of storage processor100, content118to be written to computer12may be received and/or processed by storage processor100(e.g., via storage management process21). Additionally/alternatively (e.g., when storage processor100is configured as an application server or otherwise), content118to be written to computer12may be internally generated by storage processor100(e.g., via storage management process21). As discussed above, the instruction sets and subroutines of storage management application21, which may be stored on storage device16included within computer12, may be executed by one or more processors and one or more memory architectures included with computer12. Accordingly, in addition to being executed on storage processor100, some or all of the instruction sets and subroutines of storage management application21(and/or recovery process10) may be executed by one or more processors and one or more memory architectures included with data array112. In some implementations, storage processor100may include front end cache memory system122. Examples of front end cache memory system122may include but are not limited to a volatile, solid-state, cache memory system (e.g., a dynamic RAM cache memory system), a non-volatile, solid-state, cache memory system (e.g., a flash-based, cache memory system), and/or any of the above-noted storage devices. In some implementations, storage processor100may initially store content118within front end cache memory system122. Depending upon the manner in which front end cache memory system122is configured, storage processor100(e.g., via storage management process21) may immediately write content118to data array112(e.g., if front end cache memory system122is configured as a write-through cache) or may subsequently write content118to data array112(e.g., if front end cache memory system122is configured as a write-back cache). In some implementations, one or more of storage targets102,104,106,108,110may include a backend cache memory system. 
Examples of the backend cache memory system may include but are not limited to a volatile, solid-state, cache memory system (e.g., a dynamic RAM cache memory system), a non-volatile, solid-state, cache memory system (e.g., a flash-based, cache memory system), and/or any of the above-noted storage devices. Storage Targets: As discussed above, one or more of storage targets102,104,106,108,110may be a RAID device. For instance, and referring also toFIG.3, there is shown example target150, wherein target150may be one example implementation of a RAID implementation of, e.g., storage target102, storage target104, storage target106, storage target108, and/or storage target110. An example of target150may include but is not limited to a VNX™ system offered by Dell EMC™ of Hopkinton, MA. Examples of storage devices154,156,158,160,162may include one or more electro-mechanical hard disk drives, one or more solid-state/flash devices, and/or any of the above-noted storage devices. It will be appreciated that while the term "disk" or "drive" may be used throughout, these may refer to and be used interchangeably with any types of appropriate storage devices as the context and functionality of the storage device permits. In some implementations, target150may include storage processor152and a plurality of storage devices (e.g., storage devices154,156,158,160,162). Storage devices154,156,158,160,162may be configured to provide various levels of performance and/or high availability (e.g., via storage management process21). For example, one or more of storage devices154,156,158,160,162(or any of the above-noted storage devices) may be configured as a RAID 0 array, in which data is striped across storage devices. By striping data across a plurality of storage devices, improved performance may be realized. However, RAID 0 arrays may not provide a level of high availability. Accordingly, one or more of storage devices154,156,158,160,162(or any of the above-noted storage devices) may be configured as a RAID 1 array, in which data is mirrored between storage devices. By mirroring data between storage devices, a level of high availability may be achieved as multiple copies of the data may be stored within storage devices154,156,158,160,162. While storage devices154,156,158,160,162are discussed above as being configured in a RAID 0 or RAID 1 array, this is for example purposes only and not intended to limit the present disclosure, as other configurations are possible. For example, storage devices154,156,158,160,162may be configured as a RAID 3, RAID 4, RAID 5 or RAID 6 array. While in this particular example, target150is shown to include five storage devices (e.g., storage devices154,156,158,160,162), this is for example purposes only and not intended to limit the present disclosure. For instance, the actual number of storage devices may be increased or decreased depending upon, e.g., the level of redundancy/performance/capacity required. In some implementations, one or more of storage devices154,156,158,160,162may be configured to store (e.g., via storage management process21) coded data, wherein such coded data may allow for the regeneration of data lost/corrupted on one or more of storage devices154,156,158,160,162. Examples of such coded data may include but are not limited to parity data and Reed-Solomon data. Such coded data may be distributed across all of storage devices154,156,158,160,162or may be stored within a specific storage device.
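As a simple illustration of coded data that allows regeneration of lost or corrupted data, the sketch below uses XOR parity, the mechanism behind single-parity RAID levels; Reed-Solomon coding is more general but rests on the same redundancy principle. The data blocks here are invented.

def xor_parity(blocks):
    # Parity is the byte-wise XOR of all equal-sized data blocks.
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def regenerate_lost(surviving, parity):
    # XOR of the survivors and the parity reproduces the one lost block.
    return xor_parity(list(surviving) + [parity])

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
p = xor_parity([d0, d1, d2])
assert regenerate_lost([d0, d2], p) == d1  # d1 rebuilt after its device fails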
The manner in which target150is implemented may vary depending upon, e.g., the level of redundancy/performance/capacity required. For example, target150may be a RAID device in which storage processor152is a RAID controller card and storage devices154,156,158,160,162are individual "hot-swappable" hard disk drives. Another example of target150may be a RAID system, examples of which may include but are not limited to an NAS (i.e., Network Attached Storage) device or a SAN (i.e., Storage Area Network). In some implementations, storage target150may execute all or a portion of storage management application21. The instruction sets and subroutines of storage management application21, which may be stored on a storage device (e.g., storage device164) coupled to storage processor152, may be executed by one or more processors and one or more memory architectures included with storage processor152. Storage device164may include but is not limited to any of the above-noted storage devices. As discussed above, computer12may be configured as a SAN, wherein storage processor100may be a dedicated computing system and each of storage targets102,104,106,108,110may be a RAID device. Accordingly, when storage processor100processes data requests116,120, storage processor100(e.g., via storage management process21) may provide the appropriate requests/content (e.g., write request166, content168and read request170) to, e.g., storage target150(which is representative of storage targets102,104,106,108and/or110). In some implementations, during operation of storage processor152, content168to be written to target150may be processed by storage processor152(e.g., via storage management process21). Storage processor152may include cache memory system172. Examples of cache memory system172may include but are not limited to a volatile, solid-state, cache memory system (e.g., a dynamic RAM cache memory system) and/or a non-volatile, solid-state, cache memory system (e.g., a flash-based, cache memory system). During operation of storage processor152, content168to be written to target150may be received by storage processor152(e.g., via storage management process21) and initially stored (e.g., via storage management process21) within front end cache memory system172. As noted above, a storage system appliance may have a single fault domain, where any singular fault can bring down the entire appliance. For instance, an example single fault domain layout400is shown inFIG.4. A common practice is to reduce the fault domain by partitioning the resources across different fault domains. This partitioning may have an impact on data reduction efficiency. As will be discussed below, the present disclosure may maintain a singular fault domain, but reduce the impact of the fault by recovering it inline. In particular, the present disclosure may be applicable at least to the resource allocation domain. An example file system may be used that has a hierarchical resource allocator where faults in different levels of the hierarchy may be recovered inline without impacting any volumes hosted on that appliance, thereby reducing downtime and enabling the file system to remain operational while portions of it are under repair. This may be accomplished, at least in part, by keeping redundant information along with the data (e.g., in the physical layer block), quarantining the affected region, and then repairing it by scanning the region and rebuilding the corrupted block.
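The sketch below illustrates this quarantine-scan-rebuild pattern with invented structures: a corrupted summary page is rebuilt by scanning the intact descriptor pages beneath it, with a cursor tracking progress so that frees behind the cursor update both the descriptor page and the rebuilt summary.

def rebuild_summary(pages):
    # Rebuild a lost summary page by scanning the descriptor pages below it;
    # each yield reports how far the scan (and thus the repair) has gone.
    new_summary = {}
    for cursor, page in enumerate(pages, start=1):
        new_summary[page["page_id"]] = page["free_blocks"]
        yield cursor, new_summary

def handle_free(pages, index, cursor, new_summary):
    # A block freed during the repair: always update the descriptor page;
    # update the rebuilt summary too only if that page was already scanned.
    pages[index]["free_blocks"] += 1
    if index < cursor:
        new_summary[pages[index]["page_id"]] += 1

pages = [{"page_id": i, "free_blocks": 10} for i in range(4)]
repair = rebuild_summary(pages)
cursor, summary = next(repair)             # one step of the scan
handle_free(pages, 0, cursor, summary)     # page 0 already scanned: both updated
handle_free(pages, 3, cursor, summary)     # page 3 not yet scanned: page only
for cursor, summary in repair:             # finish the scan
    pass                                   # quarantine can now be lifted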
The Recovery Process: As discussed above and referring also at least to the example implementations of FIGS. 5-9, recovery process 10 may maintain 500 a back pointer from a physical layer block (PLB) to a virtual layer block (VLB) in a multi-level hierarchical file system. Recovery process 10 may maintain 502 a generation number in the VLB, wherein the generation number may indicate when data is moved from the PLB to another PLB. Recovery process 10 may reconstruct 504 an object in the multi-level hierarchical file system based upon, at least in part, at least one of the back pointer and the generation number. In some implementations, recovery process 10 may maintain 500 a back pointer from a physical layer block (PLB) to a virtual layer block (VLB) in a multi-level hierarchical file system, and in some implementations, recovery process 10 may maintain 502 a generation number in the VLB, wherein the generation number may indicate when data is moved from the PLB to another PLB. For instance, an example and non-limiting hierarchical physical layout 600 is shown in an example implementation of FIG. 6, and an example and non-limiting hierarchical resource allocator 700 is shown in the example implementation of FIG. 7. In the example and non-limiting file system, there may be a multi-level (e.g., four-level) hierarchical resource allocator where the top level may be called (for example purposes only) the Uber, the next level below may be called (for example purposes only) the SubUberSummary, the next level below may be called (for example purposes only) the UberPLBDesc, and the next level below may be called (for example purposes only) the PLBDesc page. The Uber may be divided into a set of SubUbers, where each SubUber may describe, e.g., 8 GB worth of data. The data structure to describe the SubUber may be called (for example purposes only) the UberPLBDesc. A summary of a SubUber may be described in a SubUberSummary entry. Below the SubUber may be a Physical Large Block (PLB), which is a unit of, e.g., 2 MB of contiguous space. The metadata to describe the PLB may be called (for example purposes only) the PLBDesc. A set of SubUberSummary entries may be described in the SubUberSummaryPage, and a set of PLBDesc entries may be described in the PLBDescPage. In some implementations, the generation number may be maintained in the PLB and the VLB, and in some implementations, the back pointer may be maintained at a header of compressed data. For example, and referring also at least to the example implementation of FIG. 8, a diagram of PLB back pointer organization 800 is shown. In the example, when either the SubUberSummaryPage, UberPLBDescPage or PLBDescPage is corrupted, recovery process 10 may recover the metadata inline by keeping two additional pieces of metadata: 1. maintaining a back pointer and generation number from the PLB to the VLB entry; and 2. maintaining a generation number in the VLB to indicate when the data is moved from one PLB to another, to provide contiguous writes in a log structured file system. In some implementations, recovery process 10 may reconstruct 504 an object in the multi-level hierarchical file system based upon, at least in part, at least one of the back pointer and the generation number. For example, when the PLBDescPage is corrupted, recovery process 10 may go to the corresponding PLB and use the back pointer stored at the header of each compressed data to get the VLB entry, as sketched below.
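For illustration purposes only, the following is a minimal, non-authoritative sketch of the metadata relationships described above: a back pointer from each compressed-data header in the PLB to the owning VLB entry, with matching generation numbers. The class and field names are assumptions made for the example and are not the actual on-disk layout.

from dataclasses import dataclass, field

# Illustrative sketch of the PLB/VLB metadata described above. Field and
# class names are assumptions for the example, not the actual layout.

@dataclass
class VLBEntry:
    vlb_id: int
    plb_id: int          # which PLB currently holds the data
    generation: int = 0  # bumped when data moves from one PLB to another

@dataclass
class CompressedBlockHeader:
    vlb_id: int          # back pointer: PLB data -> owning VLB entry
    generation: int      # must match the VLB's generation to be current

@dataclass
class PLB:
    plb_id: int
    headers: list[CompressedBlockHeader] = field(default_factory=list)

def rebuild_plbdesc(plb: PLB, vlb_table: dict[int, VLBEntry]) -> list[int]:
    """Reconstruct the set of VLB entries referencing this PLB by walking
    the back pointers stored at the header of each compressed block."""
    owners = []
    for hdr in plb.headers:
        entry = vlb_table.get(hdr.vlb_id)
        if entry is not None and entry.generation == hdr.generation:
            owners.append(entry.vlb_id)  # current data; stale copies skipped
    return owners

In this sketch, a stale PLB left over from an interrupted garbage-collection move is filtered out because its header generation no longer matches the VLB entry's generation, which mirrors the tie-breaking role of the generation number described below.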
In some implementations, reconstructing the object may include obtaining 506 an entry in the VLB with the back pointer in the PLB, wherein information in the VLB and the PLB may be used to reconstruct the object. That is, the VLB and PLB may now contain all the information needed to reconstruct the PLBDescPage. If the system crashes during garbage collection, there is a possibility that two PLBs have a pointer to the same VLB entry. Thus, to break the tie when the data is moved from one PLB to another, the generation number in the PLB and VLB may be updated. Then, the generation number may be used to determine which PLB has the right data and which may be discarded. In some implementations, reconstructing the object may include quarantining 508 allocation of at least a portion of the object. For instance, if the UberPLBDescPage is lost, recovery process 10 may quarantine any allocations from this SubUber. Since the corresponding PLBDescPages for this SubUber may be known, recovery process 10 may reconstruct the UberPLBDescPage by keeping a scanning cursor (e.g., cursor 902 shown in the quarantine example 900 of the example implementation of FIG. 9). If the PLBDescPage has already been scanned, then recovery process 10 may update the newly constructed UberPLBDescPage and the PLBDescPage when a resource is freed. Otherwise, recovery process 10 may only update the PLBDescPage. The metadata blocks labeled "X" are marked quarantined; as the inline recovery method makes progress, as depicted by the position of cursor 902, the metadata blocks not labeled with "X" become available for regular use and are no longer quarantined. In some implementations, reconstructing the object may further include scanning 510 the object, and in some implementations, reconstructing the object may further include updating 512 the object upon completion of scanning the object to end quarantining allocation of at least the portion of the object. For instance, the inline recovery of the SubUberSummaryPage may be similar to the recovery of the UberPLBDescPage. That is, a scan may be started for each UberPLBDescPage and, if the scan is already done, then the SubUberSummaryPage entry may be updated. Otherwise, only the UberPLBDescPage and the PLBDescPage may be updated. Additionally/alternatively, in some implementations, recovery process 10 may quarantine 508 a subset of the hierarchical resource allocator objects while they are being repaired. For instance, recovery process 10 may allocate a new page for the object that is lost (or corrupted), scan all the next-level objects below and reconstruct the lost object by storing the information in a new page, and maintain a cursor to track how much of the scan is done so as to update the counters in the new page and the old page. In some implementations, recovery process 10 may only allow allocation of objects (pages) and those below the object once the entire page is scanned. A sketch of this quarantine-and-cursor scheme is given below. The terminology used herein is for the purpose of describing particular implementations only and is not intended to be limiting of the disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. As used herein, the language "at least one of A, B, and C" (and the like) should be interpreted as covering only A, only B, only C, or any combination of the three, unless the context clearly indicates otherwise.
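Returning to the quarantine-and-cursor recovery described above, the following is a minimal sketch, for illustration purposes only, of rebuilding a lost UberPLBDescPage while allocations from the affected SubUber are quarantined. The structures, counters, and method names are assumptions for the example.

# Non-authoritative sketch of the inline recovery described above: quarantine
# allocations from an affected SubUber, rebuild its UberPLBDesc page by
# scanning the PLBDesc pages below it, and track progress with a cursor.

class SubUberRecovery:
    def __init__(self, plbdesc_pages):
        self.plbdesc_pages = plbdesc_pages  # list of (page_id, free_count)
        self.rebuilt_page = {}              # newly allocated UberPLBDesc page
        self.cursor = 0                     # how far the scan has progressed
        self.quarantined = True             # no allocations while rebuilding

    def scan_step(self):
        """Scan one PLBDesc page and fold its counters into the new page."""
        page_id, free_count = self.plbdesc_pages[self.cursor]
        self.rebuilt_page[page_id] = free_count
        self.cursor += 1
        if self.cursor == len(self.plbdesc_pages):
            self.quarantined = False        # fully rebuilt: end quarantine

    def on_resource_freed(self, page_id):
        """A free updates the new page only if that page was already scanned;
        otherwise only the PLBDesc page itself is updated."""
        scanned = any(page_id == pid for pid, _ in
                      self.plbdesc_pages[:self.cursor])
        if scanned:
            self.rebuilt_page[page_id] += 1
        return scanned

rec = SubUberRecovery([(0, 10), (1, 8)])
rec.scan_step()
print(rec.on_resource_freed(0), rec.quarantined)  # True True: page 0 scanned
rec.scan_step()
print(rec.quarantined)                            # False: quarantine lifted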
It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps (not necessarily in a particular order), operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps (not necessarily in a particular order), operations, elements, components, and/or groups thereof. The corresponding structures, materials, acts, and equivalents (e.g., of all means or step plus function elements) that may be in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications, variations, substitutions, and any combinations thereof will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The implementation(s) were chosen and described in order to explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure for various implementation(s) with various modifications and/or any combinations of implementation(s) as are suited to the particular use contemplated. Having thus described the disclosure of the present application in detail and by reference to implementation(s) thereof, it will be apparent that modifications, variations, and any combinations of implementation(s) (including any modifications, variations, substitutions, and combinations thereof) are possible without departing from the scope of the disclosure defined in the appended claims.
DETAILED DESCRIPTION Various embodiments will be described hereinafter with reference to the accompanying drawings. In general, according to one embodiment, a memory system includes a nonvolatile memory and a controller. The controller receives a first write request associated with first data from a host. The first data has a size less than a first data unit that is a write unit to the nonvolatile memory. In response to a lapse of first time since the reception of the first write request, the controller starts a write process of second data to the nonvolatile memory. The second data includes at least the first data. The controller transmits a first response to the first write request to the host in response to completion of the write process. The first time is obtained by subtracting second time from third time. The third time is designated by the host as a time limit of the transmission of the first response since the reception of the first write request. First, a configuration of an information processing system 1 that includes a memory system according to an embodiment will be described with reference to FIG. 1. The information processing system 1 includes a host device 2 (hereinafter, referred to as host 2) and a memory system 3. The host 2 is an information processing apparatus. The host 2 may be a storage server that stores a large amount of various data in the memory system 3, or may be a personal computer. The memory system 3 is a semiconductor storage device configured to write data to a nonvolatile memory, such as a NAND flash memory, and read data from the nonvolatile memory. The memory system is also referred to as a storage device. The memory system is realized as, for example, a solid state drive (SSD). The memory system 3 may be used as a storage of the host 2. The memory system 3 may be provided inside the host 2 or may be connected to the host 2 via a cable or a network. An interface for connecting the host 2 and the memory system 3 conforms to standards such as small computer system interface (SCSI), serial attached SCSI (SAS), AT attachment (ATA), serial ATA (SATA), PCI Express (PCIe) (registered trademark), Ethernet (registered trademark), Fibre Channel, or NVM Express (NVMe) (registered trademark). The memory system 3 includes, for example, a controller 4, a NAND flash memory 5, and a dynamic random access memory (DRAM) 6. The controller 4 may be realized with a circuit such as a system-on-a-chip (SoC). The controller 4 may include a static random access memory (SRAM). The DRAM 6 may be provided inside the controller 4. A RAM, such as the DRAM 6, includes, for example, a storage area of a firmware (FW) 21, a cache area of a logical-to-physical address conversion table 22, and storage areas of a namespace response management table (NS response management table) 23, a command response management table 24, a zone descriptor 25, and a write management table 26. The FW 21 is a program for controlling an operation of the controller 4. The FW 21 is loaded from the NAND flash memory 5 to the DRAM 6, for example. The logical-to-physical address conversion table 22 manages mapping between each logical address and each physical address of the NAND flash memory 5. The NS response management table 23 manages information on the time within which a response to a write command (write request) needs to be transmitted to the host 2, for example, for each namespace. The command response management table 24 manages the time for forcibly starting processing for a received write command.
The zone descriptor 25 includes information indicative of a configuration and a state of each zone. The write management table 26 manages information on a received write command. The NAND flash memory 5 includes multiple blocks B0 to Bm−1. Each of the blocks B0 to Bm−1 includes multiple pages (here, pages P0 to Pn−1). The blocks B0 to Bm−1 each function as a minimum data erase unit. The block may also be referred to as an erase block or a physical block. Each of the pages P0 to Pn−1 includes memory cells connected to a single word line. The pages P0 to Pn−1 each function as a unit of a data write operation and a data read operation. Note that a word line may function as a unit of a data write operation and a data read operation. The tolerable maximum number of program/erase cycles (maximum number of P/E cycles) for each of the blocks is limited. One P/E cycle of a block includes a data erase operation to erase data stored in all memory cells in the block and a data write operation to write data in each page of the block. The controller 4 functions as a memory controller configured to control the NAND flash memory 5. The controller 4 may function as a flash translation layer (FTL) configured to execute data management and block management of the NAND flash memory 5. The data management executed by the FTL includes (1) management of mapping data indicative of the relationship between each logical address and each physical address of the NAND flash memory 5, and (2) a process to hide the difference between read/write operations executed in units of page and erase operations executed in units of block. The block management includes management of defective blocks, wear leveling, and garbage collection. The logical address is an address used by the host 2 for addressing the memory system 3. The logical address is, for example, a logical block address (LBA). Hereinafter, a case where the LBA is used as the logical address will be mainly explained. Management of mapping between each LBA and each physical address is executed using the logical-to-physical address conversion table 22. The controller 4 uses the logical-to-physical address conversion table 22 to manage the mapping between each LBA and each physical address with a certain management size. A physical address corresponding to an LBA indicates a physical memory location in the NAND flash memory 5 to which data of the LBA is written. The controller 4 manages multiple storage areas that are obtained by logically dividing the storage area of the NAND flash memory 5, using the logical-to-physical address conversion table 22. The multiple storage areas correspond to multiple LBAs, respectively. That is, each of the storage areas is specified by one LBA. The logical-to-physical address conversion table 22 may be loaded from the NAND flash memory 5 to the DRAM 6 when the memory system 3 is powered on. Data write into one page is executable only once in a single P/E cycle. Thus, the controller 4 writes updated data corresponding to an LBA not to an original physical memory location in which previous data corresponding to the LBA is stored but to a different physical memory location. Then, the controller 4 updates the logical-to-physical address conversion table 22 to associate the LBA with the different physical memory location than the original physical memory location and to invalidate the previous data. Data to which the logical-to-physical address conversion table 22 refers (that is, data associated with an LBA) will be referred to as valid data.
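As a rough illustration of the out-of-place update behavior just described, the following is a minimal sketch, assuming a simple dictionary-based table and integer physical addresses; these assumptions are made for the example only and do not reflect the actual table format.

# Minimal sketch of out-of-place updates with a logical-to-physical table.
# The dict-based table and integer addresses are assumptions for the example.

class L2PTable:
    def __init__(self):
        self.mapping = {}        # LBA -> physical address
        self.invalid = set()     # physical addresses holding invalid data
        self.next_free = 0       # naive log-structured allocator

    def write(self, lba: int) -> int:
        """Write updated data for an LBA to a *new* physical location and
        invalidate the previous one (a page is programmable only once per
        P/E cycle, so in-place overwrite is not possible)."""
        old = self.mapping.get(lba)
        if old is not None:
            self.invalid.add(old)        # previous data becomes invalid
        new = self.next_free
        self.next_free += 1
        self.mapping[lba] = new          # LBA now refers to the new location
        return new

table = L2PTable()
table.write(0)      # LBA 0 -> physical 0
table.write(0)      # update: LBA 0 -> physical 1, physical 0 invalidated
print(table.mapping, table.invalid)     # {0: 1} {0}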
Furthermore, data not associated with any LBA will be referred to as invalid data. The valid data is data to possibly be read by the host 2 later. The invalid data is data not to be read by the host 2 anymore. The controller 4 may include a host interface (host I/F) 11, a CPU 12, a NAND interface (NAND I/F) 13, a DRAM interface (DRAM I/F) 14, and a timer 16. The host I/F 11, the CPU 12, the NAND I/F 13, the DRAM I/F 14, and the timer 16 may be connected via a bus 10. The host I/F 11 functions as a circuit that receives various commands, for example, input/output (I/O) commands and various control commands from the host 2. The I/O commands may include a write command, a read command, and a verify command. The control commands may include an unmap command (also referred to as a trim command or a de-allocate command), a format command, a setting command, and a confirmation command. The format command is a command for unmapping all the LBAs in the memory system 3 entirely. The setting command is a command for setting various parameters in the memory system 3. The confirmation command is a command for confirming various parameters set in the memory system 3. The NAND I/F 13 electrically connects the controller 4 and the NAND flash memory 5. The NAND I/F 13 conforms to an interface standard such as toggle double data rate (DDR) or open NAND flash interface (ONFI). The NAND I/F 13 functions as a NAND control circuit configured to control the NAND flash memory 5. The NAND I/F 13 may be connected to memory chips in the NAND flash memory 5 via multiple channels (Ch). By operating the memory chips in parallel, it is possible to broaden the access bandwidth between the controller 4 and the NAND flash memory 5. The DRAM I/F 14 functions as a DRAM control circuit configured to control access to the DRAM 6. The storage area of the DRAM 6 is allocated to areas for storing the FW 21, the logical-to-physical address conversion table 22, the NS response management table 23, the command response management table 24, the zone descriptor 25, and the write management table 26, and a buffer area used as a read/write buffer. The timer 16 measures time. The timer 16 may provide the measured time to each unit in the controller 4. The CPU 12 is a processor configured to control the host I/F 11, the NAND I/F 13, the DRAM I/F 14, and the timer 16. The CPU 12 performs various processes by executing the FW 21 loaded from the NAND flash memory 5 onto the DRAM 6. The FW 21 is a control program including instructions for causing the CPU 12 to execute the various processes. The CPU 12 may perform command processes to execute various commands from the host 2. The operation of the CPU 12 is controlled by the FW 21. The function of each unit in the controller 4 may be realized by dedicated hardware in the controller 4 or may be realized by the CPU 12 executing the FW 21. The CPU 12 functions as, for example, a command reception module 121, a forced response management module 122, a write control module 123, and a zone management module 124. The CPU 12 functions as these modules, for example, by executing the FW 21. Specific operations of these modules will be described later with reference to FIGS. 11 to 13. Here, a namespace and a zone will be explained. The whole of a logical address space (an LBA space) used by the host 2 to access the memory system 3 may be divided into multiple subspaces. Each subspace may be referred to as a namespace. The controller 4 manages data stored in the NAND flash memory 5, for example, for each zone.
As a method for managing data stored in the NAND flash memory 5 in units of zones, the controller 4 may use, for example, Zoned Namespace (ZNS) defined in the NVMe standard. FIG. 2 illustrates an example of a configuration of the ZNS. In the ZNS, the whole of an LBA space corresponding to one namespace may be divided into a set of zones. Each of the zones may include multiple LBAs. Each of the zones, which is obtained by dividing the namespace, corresponds to an LBA range that includes contiguous and non-overlapping LBAs. Each zone is used as a unit for accessing the NAND flash memory 5. In the example illustrated in FIG. 2, the namespace corresponds to z LBAs (LBA 0 to LBA (z−1)) and includes x zones (zone 0 to zone (x−1)). LBA 0 is the lowest LBA of zone 0. Further, LBA (z−1) is the highest LBA in zone (x−1). Writing in a zone is sequentially executed. That is, the writing in a zone is executed such that LBAs are contiguous. A zone may correspond to any physical unit in the NAND flash memory 5. For example, a zone corresponds to a block in the NAND flash memory 5. In this case, data in the block is accessed using contiguous LBAs included in an LBA range allocated to the zone. States that each zone may be in include an opened state, a full state, and a closed state. A zone in the opened state is a zone in which data is writable. A zone in the full state is a zone in which data has been written fully. A zone in the closed state is a zone in which data writing is interrupted. Note that the zone descriptor 25 corresponding to a zone includes, for example, information such as a start LBA of the zone (that is, the lowest LBA), the writable capacity of data to the zone, and a state of the zone. A sketch of this zone-to-LBA bookkeeping is given below. In a case where there is a write command whose elapsed time since reception exceeds a threshold while the controller 4 performs writing according to the delayed write completion, the controller 4 forcibly performs processing in accordance with the write command and responds to the host 2. Hereinafter, forcibly starting processing in accordance with a write command whose elapsed time since reception exceeds the threshold is also referred to as triggering a forced response. A write command is a command that requests writing of user data associated with the write command into the NAND flash memory 5. The processing in accordance with the write command is a process of writing user data associated with the write command to the NAND flash memory 5. The delayed write completion is a mechanism in which writing in accordance with a received write command is not immediately executed; instead, writing in accordance with multiple write commands is executed after the total amount of user data requested to be written in a zone by the write commands reaches a write unit, and responding to the host 2 is executed thereafter. The write unit corresponds to the amount of data that is writable to the NAND flash memory 5 in a single data write operation. The write unit corresponds to, for example, the amount of data of one page. In a case where multiple bits of data are stored in one memory cell, the write unit may correspond to the amount of data of multiple pages. Data may be written into the NAND flash memory 5 under a quad-level cell (QLC) method. In the QLC method, 4-bit data is stored per memory cell. In the QLC method, a write operation having multiple steps may be performed. The write operation having the multiple steps is, for example, a foggy-fine write operation.
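Picking up the zone layout described above, the following minimal sketch shows, for illustration purposes only, how a zone index and its start LBA might be derived from an LBA when all zones span the same number of LBAs; the fixed zone size, state handling, and function names are assumptions for the example, not the NVMe ZNS specification.

# Illustrative sketch of zone-to-LBA bookkeeping for a zoned namespace in
# which every zone spans the same number of LBAs.

ZONE_SIZE_LBAS = 4096          # LBAs per zone (assumed)

def zone_of(lba: int) -> int:
    """Zone index containing this LBA."""
    return lba // ZONE_SIZE_LBAS

def zone_start_lba(zone: int) -> int:
    """Lowest LBA of the zone (the zone descriptor's start LBA)."""
    return zone * ZONE_SIZE_LBAS

class ZoneDescriptor:
    def __init__(self, zone: int):
        self.start_lba = zone_start_lba(zone)
        self.write_pointer = self.start_lba   # writes must be sequential
        self.state = "opened"                 # opened / full / closed

    def append(self, length: int) -> int:
        """Sequential write: data always lands at the write pointer."""
        lba = self.write_pointer
        self.write_pointer += length
        if self.write_pointer >= self.start_lba + ZONE_SIZE_LBAS:
            self.state = "full"
        return lba

z = ZoneDescriptor(zone_of(8192))   # LBA 8192 falls in zone 2
print(z.start_lba, z.append(8), z.state)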
The foggy-fine write operation is a write operation in which reading of data that has been written in one page is enabled after writing of data to one or more pages that are included in the same block as the one page and are subsequent to the one page. The foggy-fine write operation includes multiple write operations for a set of memory cells connected to a single word line. The first write operation is a write operation for coarsely setting a threshold voltage of each memory cell, and is referred to as a foggy write operation. The second write operation is a write operation for adjusting the threshold voltage of each memory cell, and is referred to as a fine write operation. The foggy-fine write operation can reduce the influence of program disturb. In a case where data is written into the NAND flash memory 5 by the foggy-fine write operation in the QLC method, the write unit corresponds to the data amount of four pages. Triggering of a forced response in the memory system 3 will be specifically described with reference to FIGS. 3 and 4. FIG. 3 illustrates an example of a sequence in a case where processing in accordance with a write command is executed by triggering the forced response in the memory system 3. Hereinafter, a case where triggering of the forced response is managed for each namespace will be described. Note that triggering of the forced response may be managed in units of various storage areas, without being limited to units of namespaces. The controller 4 receives a write command issued by the host 2. When time A has elapsed since the reception of the write command, the controller 4 forcibly starts processing in accordance with the write command (that is, triggers a forced response). As a result, the controller 4 can complete the response to the write command by time B since the reception of the write command. FIG. 4 illustrates an example of parameters used to control triggering of the forced response in the memory system 3. The parameters used to control triggering of the forced response include, for example, forced_completion_time and process_time. The forced_completion_time is a parameter for determining whether to trigger the forced response and for determining an upper limit of time between reception of a write command by the controller 4 and completion of a response to the write command (corresponding to the time B in FIG. 3) in a case where the forced response is to be triggered. The upper limit of time between reception of the write command and completion of the response to the write command is also referred to as forced response completion time. In the memory system 3, whether to set a value to the forced_completion_time is freely determined. A value of the forced_completion_time m indicates the forced response completion time set by a user. The user may set the value of the forced_completion_time m by issuing a forced response setting command through the host 2. The forced response setting command is a command for requesting setting of time for triggering the forced response. In the forced response setting command, identification information for identifying a namespace (namespace ID) and the value of the forced_completion_time m are designated. The forced response setting command is realized, for example, as a Set Features command defined in the NVMe standard. Hereinafter, the forced response setting command is also referred to as a setting command.
For example, in a case where the value of the forced_completion_time m is zero, the controller 4 does not trigger the forced response to a write command. Note that the controller 4 also does not trigger the forced response to a write command when the setting according to the setting command has not been performed (that is, in a default state). In a case where the value of the forced_completion_time m is one or more, the controller 4 determines the forced response completion time by using the value of the forced_completion_time m. The forced response completion time indicates an upper limit of the response time to a write command expected by the host 2. The forced response completion time is, for example, m×100 milliseconds. That is, the controller 4 completes a response to a write command within m×100 milliseconds since reception of the write command. The controller 4 commonly manages a value of the forced_completion_time m for each of the zones included in a corresponding namespace. That is, a value of the forced_completion_time m is commonly set for zones included in a namespace. The process_time is a parameter indicating the time in which the processing in accordance with a write command is executed, and is used for determining the time between reception of a write command by the controller 4 and forcible start of processing in accordance with the write command (corresponding to the time A in FIG. 3) in a case where the forced response is to be triggered. The time between reception of the write command and forcible start of processing in accordance with the write command is also referred to as forced response trigger time. In the memory system 3, whether a value is set to the process_time is freely determined. Further, a value of the process_time n may be set, for example, before shipment of the memory system 3. In a case where a value of the process_time n is set, the controller 4 determines the forced response trigger time by using the value of the forced_completion_time m and the value of the process_time n in accordance with reception of the setting command. The forced response trigger time is, for example, (m−n)×100 milliseconds. That is, when (m−n)×100 milliseconds have elapsed since reception of a write command, the controller 4 forcibly starts processing in accordance with the write command. Note that n×100 milliseconds corresponds to, for example, the time within which corresponding user data becomes readable from the NAND flash memory 5 (hereinafter, also referred to as process time) since forcible start of processing (write process) in accordance with the write command. Accordingly, the forced response trigger time is obtained by subtracting the process time from the forced response completion time. The write process includes a process of transferring user data from the controller 4 to the NAND flash memory 5 and a process of programming the user data to memory cells in the NAND flash memory 5. The write process is completed when the user data becomes readable from the NAND flash memory 5. In this way, in the memory system 3, triggering of a forced response to a received write command can be controlled on the basis of the value of the forced_completion_time m set by the user. More specifically, the controller 4 can determine the forced response completion time and the forced response trigger time by using the value of the forced_completion_time m and the value of the process_time n.
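The timing arithmetic above is simple enough to state in a few lines of code. The following is a minimal sketch, for illustration purposes only, assuming m and n are expressed in units of 100 milliseconds as in the example; the function name is illustrative.

# Sketch of the forced-response timing rule described above: with m and n in
# units of 100 ms, the completion deadline is m*100 ms, and processing is
# forcibly started (m-n)*100 ms after command reception.

UNIT_MS = 100

def forced_response_times(m: int, n: int):
    """Return (completion_time_ms, trigger_time_ms), or None if the forced
    response is disabled (m == 0)."""
    if m == 0:
        return None                     # forced response not triggered
    completion_ms = m * UNIT_MS         # upper limit expected by the host
    trigger_ms = (m - n) * UNIT_MS      # completion limit minus process time
    return completion_ms, trigger_ms

# Example: the host sets m = 200 (20 s limit) and process time n = 50 (5 s).
print(forced_response_times(200, 50))   # (20000, 15000), as in FIG. 6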
When the forced response trigger time has elapsed since reception of a write command from the host 2, the controller 4 forcibly starts writing of user data corresponding to the write command into the NAND flash memory 5. As a result, the controller 4 can respond to the write command within the forced response completion time expected by the host 2. Next, several tables used in the memory system 3 will be described with reference to FIGS. 5 to 8. FIG. 5 illustrates an example of a configuration of the logical-to-physical address conversion table 22. The logical-to-physical address conversion table 22 manages mapping between each LBA and each physical address of the NAND flash memory 5. The controller 4 may convert an LBA into a physical address using the logical-to-physical address conversion table 22. Further, the controller 4 may convert a physical address into an LBA using the logical-to-physical address conversion table 22. In the example illustrated in FIG. 5, a physical address "X", a physical address "Y", and a physical address "Z" are mapped to an LBA "0", an LBA "1", and an LBA "2", respectively. FIG. 6 illustrates an example of a configuration of the NS response management table 23. The NS response management table 23 may include one or more entries that correspond to one or more namespaces, respectively. Each of the entries includes, for example, a namespace ID field, a forced response completion time field, and a forced response trigger time field. In an entry corresponding to a certain namespace, the namespace ID field indicates identification information assigned to the certain namespace (namespace ID). The controller 4 is capable of specifying a corresponding namespace using the namespace ID. The forced response completion time field indicates an upper limit of time within which the controller 4 responds to a write command (that is, the forced response completion time) in a case where the corresponding namespace includes a write destination of the write command. Note that a namespace includes a write destination of a write command when an LBA designated in the write command is included in an LBA space of the namespace. The controller 4 is configured to transmit a response to the write command to the host 2 within the time indicated in the forced response completion time field since reception of the write command. In the forced response completion time field, the time is set in units of milliseconds, for example. The forced response trigger time field indicates the time at which the controller 4 starts a process in accordance with a write command (that is, the forced response trigger time) in a case where the corresponding namespace includes the write destination of the write command. That is, when the time indicated in the forced response trigger time field has elapsed since the reception of the write command, the controller 4 starts processing in accordance with the write command. In the forced response trigger time field, the time is set in units of milliseconds, for example. In the example illustrated in FIG. 6, the forced response completion time "20000" and the forced response trigger time "15000" are associated with a namespace ID "1". In the following description regarding the NS response management table 23, a value indicated in the namespace ID field is also simply referred to as a namespace ID. The same applies to values indicated in the other fields of the NS response management table 23 and values indicated in the fields of the other tables.
FIG. 7 illustrates an example of a configuration of the command response management table 24. The command response management table 24 may include one or more entries that correspond to one or more write commands, respectively. Each entry includes, for example, a command ID field and a time until triggering field. In an entry corresponding to a certain write command, the command ID field indicates identification information assigned to the certain write command. Identification information assigned to a write command is also referred to as a command ID. The controller 4 is capable of specifying a corresponding write command using the command ID. The time until triggering field indicates the remaining time until a process in accordance with the corresponding write command is forcibly started. Specifically, for example, the forced response trigger time set for a namespace that includes a write destination of the write command is set as an initial value of the time until triggering field. The controller 4 decreases the time set in the time until triggering field according to a lapse of time measured by the timer 16, for example. When the time set in the time until triggering field becomes zero, the process in accordance with the write command is forcibly started. In the time until triggering field, the time is set in units of milliseconds, for example. In the example illustrated in FIG. 7, the time until triggering "1200" is associated with a command ID "11", and the time until triggering "10000" is associated with a command ID "12". FIG. 8 illustrates an example of a configuration of the write management table 26. The write management table 26 may include one or more entries that correspond to one or more write commands, respectively. Each entry includes, for example, a command ID field, an LBA field, a data length field, a data buffer information field, and a zone field. The command ID field indicates a command ID of a corresponding write command. The LBA field indicates an LBA designated in the corresponding write command. The LBA indicates a start LBA of an LBA range in which user data is to be written in accordance with the write command. The data length field indicates a data length designated in the corresponding write command. The data length indicates the length of the user data that is to be written in accordance with the write command. Accordingly, the LBA range in which the user data is to be written in accordance with the write command is specified using the LBA and the data length designated in the write command. The data buffer information field indicates data buffer information designated in the corresponding write command. The data buffer information indicates a location in the host 2 where the user data that is to be written in accordance with the write command is stored. That is, the controller 4 transfers the user data from a storage location in the host 2, which is indicated by the data buffer information, to the memory system 3. The zone field indicates a zone that includes an LBA designated in the corresponding write command. The zone is represented by, for example, a start LBA (that is, the lowest LBA) of an LBA range allocated to the zone. The write management table 26 can manage information on write commands for each zone by using the zone represented in the zone field. Note that the controller 4 may use multiple write management tables 26 for the respective zones, instead of providing the zone field in each entry.
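To make the two tables concrete, the following is a minimal sketch, for illustration purposes only, of the countdown bookkeeping and the per-zone grouping they enable; the dataclass fields mirror the fields described above, while the tick mechanics and names are assumptions.

from dataclasses import dataclass

# Sketch of the command response management table (countdown until a forced
# response is triggered) and the write management table (per-zone grouping).

@dataclass
class WriteEntry:
    command_id: int
    lba: int
    data_length: int
    data_buffer_info: int   # host-side location of the user data
    zone: int               # start LBA of the zone containing `lba`

class CommandResponseTable:
    def __init__(self):
        self.time_until_trigger = {}   # command ID -> remaining ms

    def add(self, command_id: int, trigger_time_ms: int):
        self.time_until_trigger[command_id] = trigger_time_ms

    def tick(self, elapsed_ms: int) -> list[int]:
        """Advance the timer; return command IDs whose countdown hit zero."""
        expired = []
        for cid in list(self.time_until_trigger):
            self.time_until_trigger[cid] -= elapsed_ms
            if self.time_until_trigger[cid] <= 0:
                expired.append(cid)
                del self.time_until_trigger[cid]
        return expired

crt = CommandResponseTable()
crt.add(11, 1200)
crt.add(12, 10000)
print(crt.tick(1500))   # [11]: command 11 must now be forcibly processed

entries = [WriteEntry(11, 0, 16384, 0, 0), WriteEntry(12, 32, 16384, 0, 0)]
by_zone = {}
for e in entries:
    by_zone.setdefault(e.zone, []).append(e.command_id)
print(by_zone)          # {0: [11, 12]}: commands grouped by the zone field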
(Operation of Memory System According to Comparative Example) Here, an example of an operation in accordance with a write command will be described using memory systems according to two comparative examples. FIG. 9 is a block diagram illustrating an example of an operation in accordance with a write command in a memory system 3A according to a first comparative example. The memory system 3A of the first comparative example has, for example, a system configuration similar to that of the memory system 3 of the embodiment. The memory system 3A is configured to perform a process in accordance with a write command whenever receiving a write command issued by a host 2A, and transmit a response to the host 2A. A CPU of the memory system 3A functions as a command reception module 121A and a write control module 123A. The host 2A includes, for example, a submission queue (SQ) 401A, a completion queue (CQ) 402A, and a data buffer 403A. The submission queue 401A includes multiple slots to which the host 2A writes commands, respectively, which are to be issued to the memory system 3A. A location in the submission queue 401A (that is, a slot) to which the host 2A should write a command is indicated by an SQ Tail pointer. A location in the submission queue 401A from which the memory system 3A should fetch a command is indicated by an SQ Head pointer. The completion queue 402A includes multiple slots to which the memory system 3A writes responses to commands, respectively. A location in the completion queue 402A to which the memory system 3A should write a response is indicated by a CQ Tail pointer. A location in the completion queue 402A from which the host 2A should fetch a response is indicated by a CQ Head pointer. The data buffer 403A is a storage area that temporarily stores user data to be written into a NAND flash memory 5A of the memory system 3A. An example of a specific operation of the memory system 3A and the host 2A will be described below. Here, a case where commands written to the submission queue 401A are only write commands will be explained for ease of understanding. First, the host 2A stores user data, which is to be written into the NAND flash memory 5A of the memory system 3A, in the data buffer 403A. Then, the host 2A writes a write command to a location in the submission queue 401A indicated by the SQ Tail pointer (that is, issues the write command). This write command is a write command for requesting writing of the user data stored in the data buffer 403A. Next, the host 2A adds one to the SQ Tail pointer. When the value obtained by adding one to the SQ Tail pointer reaches the number of slots in the submission queue 401A (that is, the queue size), the host 2A sets the SQ Tail pointer to zero. Then, the host 2A writes the updated value of the SQ Tail pointer into an SQ Tail doorbell register of the memory system 3A. The command reception module 121A of the memory system 3A fetches the write command from a location in the submission queue 401A indicated by the SQ Head pointer ((1) in FIG. 9). When there is a difference between the SQ Head pointer and the SQ Tail pointer, the command reception module 121A fetches a write command from the submission queue 401A. The command reception module 121A adds one to the SQ Head pointer. When the value obtained by adding one to the SQ Head pointer reaches the number of slots in the submission queue 401A, the command reception module 121A sets the SQ Head pointer to zero. The command reception module 121A sends the fetched write command to the write control module 123A ((2) in FIG. 9).
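The head/tail pointer updates described above are classic ring-buffer arithmetic. The following is a minimal sketch of that wrap-around logic, for illustration purposes only, assuming a fixed queue size; it is illustrative and not an actual NVMe driver implementation.

# Sketch of the submission/completion queue pointer arithmetic described
# above: a pointer is incremented and wraps to zero when it reaches the
# number of slots. Queue size and names are assumptions for the example.

QUEUE_SIZE = 8   # number of slots (assumed)

def advance(pointer: int) -> int:
    """Add one to a head/tail pointer, wrapping at the queue size."""
    pointer += 1
    if pointer == QUEUE_SIZE:
        pointer = 0
    return pointer

def has_new_commands(head: int, tail: int) -> bool:
    """Commands are available while the head and tail pointers differ."""
    return head != tail

sq_tail = 7
sq_tail = advance(sq_tail)            # wraps: 7 -> 0
print(sq_tail, has_new_commands(head=6, tail=sq_tail))   # 0 True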
The write control module 123A transfers user data, which is to be written into the NAND flash memory 5A, from the data buffer 403A to a DRAM 6A in accordance with the write command sent by the command reception module 121A. The write control module 123A writes (that is, programs) the transferred user data into the NAND flash memory 5A ((3) in FIG. 9). Then, when the written user data becomes readable, the write control module 123A notifies the command reception module 121A of completion of the processing in accordance with the corresponding write command ((4) in FIG. 9). In accordance with the notification from the write control module 123A, the command reception module 121A writes a completion notification of the corresponding write command into a location in the completion queue 402A indicated by the CQ Tail pointer, and issues an interrupt ((5) in FIG. 9). The command reception module 121A issues the interrupt to notify the host 2A that a new completion notification to be processed has been stored in the completion queue 402A. Further, the command reception module 121A adds one to the CQ Tail pointer. When the value obtained by adding one to the CQ Tail pointer reaches the number of slots in the completion queue 402A, the command reception module 121A sets the CQ Tail pointer to zero. In accordance with the interrupt issued by the command reception module 121A, the host 2A fetches the completion notification from a location in the completion queue 402A indicated by the CQ Head pointer. The host 2A adds one to the CQ Head pointer. When the value obtained by adding one to the CQ Head pointer reaches the number of slots in the completion queue 402A, the host 2A sets the CQ Head pointer to zero. The host 2A writes the updated value of the CQ Head pointer into a CQ Head doorbell register of the memory system 3A. The host 2A clears the interrupt received from the memory system 3A. Then, the host 2A releases an area in the data buffer 403A in which the user data that has been written is stored, on the basis of the fetched completion notification. Through the above operation, whenever receiving a write command from the host 2A, the memory system 3A of the first comparative example performs the processing in accordance with the write command and transmits the response (completion notification) to the host 2A. As a result, the memory system 3A of the first comparative example can respond to a write command within the time expected by the host 2A. FIG. 10 is a block diagram illustrating an example of an operation in accordance with a write command in a memory system 3B according to a second comparative example. The memory system 3B of the second comparative example has, for example, a system configuration similar to that of the memory system 3 of the embodiment. The memory system 3B is configured to write user data of the write unit into a NAND flash memory 5B after the total amount of the user data requested to be written for a zone by write commands reaches the write unit, and to transmit responses corresponding to the respective write commands to the host 2B. A CPU of the memory system 3B functions as a command reception module 121B and a write control module 123B. A configuration and an operation of the host 2B are similar to those of the host 2A in the first comparative example described above with reference to FIG. 9. Note that the host 2B may issue a write command, and then issue a subsequent write command before obtaining a response to the preceding write command.
This is because, if the host 2B were configured to issue the subsequent write command only after waiting for the response to the preceding write command, there is a possibility that deadlock would occur between the host 2B and the memory system 3B, which responds only after the total amount of user data requested to be written by write commands reaches the write unit. Hereinafter, an operation of the memory system 3B will be mainly described regarding differences from the memory system 3A of the first comparative example. The command reception module 121B of the memory system 3B fetches a write command from a location in a submission queue 401B indicated by an SQ Head pointer ((1) in FIG. 10). The command reception module 121B sends the fetched write command to the write control module 123B ((2) in FIG. 10). The write control module 123B acquires a command ID, an LBA, a data length, and data buffer information, which are designated in the write command, from the write command sent by the command reception module 121B. The write control module 123B specifies a zone including the acquired LBA. The write control module 123B updates a write management table 26B using the acquired command ID, LBA, data length, and data buffer information, and the specified zone ((3) in FIG. 10). The write control module 123B uses the write management table 26B to determine whether user data of the write unit for a zone is stored in a data buffer 403B of the host 2B. When user data of the write unit for a zone is not stored in the data buffer 403B of the host 2B, the write control module 123B does not write the user data into the NAND flash memory 5B. While user data of the write unit for a zone is not stored in the data buffer 403B of the host 2B, the command reception module 121B may repeatedly fetch a write command from the submission queue 401B. When user data of the write unit for a zone is stored in the data buffer 403B, the write control module 123B transfers the user data of the write unit from the data buffer 403B to the NAND flash memory 5B, and writes (programs) the transferred user data of the write unit into the NAND flash memory 5B ((4) in FIG. 10). Note that the user data of the write unit to be transferred may be buffered once in a DRAM 6B. Then, when there is a write command for which the corresponding user data is readable from the NAND flash memory 5B, the write control module 123B notifies the command reception module 121B of completion of processing in accordance with the write command ((5) in FIG. 10). The write control module 123B notifies the command reception module 121B of completion of processing in accordance with each of the write commands that correspond to the user data of the write unit. In accordance with the notification from the write control module 123B, the command reception module 121B writes a completion notification of the corresponding write command into a location in the completion queue 402B indicated by the CQ Tail pointer, and issues an interrupt ((6) in FIG. 10). Operations performed by the command reception module 121B and the host 2B after issuing the interrupt are similar to the operations performed by the command reception module 121A and the host 2A of the first comparative example.
Through the above operation, when the total amount of user data requested to be written for a zone by write commands from the host 2B reaches the write unit, the memory system 3B of the second comparative example writes the user data of the write unit into the NAND flash memory 5B (more specifically, a storage area of the NAND flash memory 5B corresponding to the zone). Then, the memory system 3B transmits responses corresponding to the respective write commands to the host 2B. The size of user data to be written in accordance with one write command is, for example, smaller than the write unit for the NAND flash memory 5B. When user data of the write unit for a zone is stored in the data buffer 403B, the memory system 3B writes the user data of the write unit into the zone. As a result, the memory system 3B can efficiently use the storage area of the NAND flash memory 5B. The operation of the memory system 3B corresponds to the delayed write completion. In the delayed write completion, writing and responding for each write command may be delayed until write commands corresponding to user data of the write unit are received. For example, in a case where the host 2B issues a first write command requesting writing of user data less than the write unit and then does not issue a subsequent write command, user data of the write unit is never accumulated in the data buffer 403B. As a result, there is a possibility that data writing in accordance with the first write command is not started and the response to the host 2B is delayed. Because the memory system 3B does not immediately perform writing and responding for a write command in response to reception of the write command from the host 2B, there is a possibility that the host 2B is not able to obtain a response to the write command within the expected time. In this case, there is a possibility that the write command stalls, and there may occur a problem that writing of user data from the host 2B to the memory system 3B does not function. However, immediately performing writing and responding for a write command in response to reception of the write command conflicts with the delayed write completion. Therefore, the memory system 3 of the embodiment is configured to respond to a write command within the time expected by the host 2 while performing writing according to the delayed write completion. Specifically, when the elapsed time since reception of a write command has reached the forced response trigger time, the controller 4 of the memory system 3 writes user data corresponding to the write command not into a storage area of the NAND flash memory 5 that corresponds to a zone including an LBA designated in the write command, but into a shared write buffer provided in the NAND flash memory 5. Then, the controller 4 transitions the zone, which includes the LBA designated in the write command, to the closed state. The forced response trigger time is set such that processing in accordance with the write command can be completed within the time expected by the host 2 (i.e., the forced response completion time). Therefore, the controller 4 can respond to the write command within the time expected by the host 2. Further, when the total amount of user data that is to be written into a zone and is stored in the data buffer 403 reaches the write unit before the elapsed time since reception of a corresponding write command reaches the forced response trigger time, the controller 4 writes the user data of the write unit into the zone. That is, the controller 4 performs writing according to the delayed write completion, as sketched below.
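The following is a minimal, non-authoritative sketch of the decision just described: accumulate per-zone writes until the write unit is reached, but flush to a shared write buffer and close the zone once a command's forced response trigger time expires. The buffer sizes, page size, and names are assumptions made for the example.

# Sketch of delayed write completion with a forced-response escape hatch:
# per-zone data is accumulated until it reaches the write unit; if a command
# waits past its trigger time, its data goes to the shared write buffer and
# the zone is closed. Sizes and names are illustrative assumptions.

WRITE_UNIT = 4 * 16384          # e.g., four pages (assumed page size)

class DelayedWriteController:
    def __init__(self):
        self.pending = {}        # zone -> list of [command_id, size, age_ms]
        self.zone_state = {}     # zone -> "opened" / "closed"

    def submit(self, zone, command_id, size):
        self.pending.setdefault(zone, []).append([command_id, size, 0])
        self.zone_state.setdefault(zone, "opened")
        if sum(s for _, s, _ in self.pending[zone]) >= WRITE_UNIT:
            return ("write_zone", zone)          # normal delayed completion
        return ("wait", zone)

    def tick(self, zone, elapsed_ms, trigger_ms):
        """Advance command ages; force a flush if any command expired."""
        for entry in self.pending.get(zone, []):
            entry[2] += elapsed_ms
        if any(age >= trigger_ms for _, _, age in self.pending.get(zone, [])):
            self.zone_state[zone] = "closed"     # zone transitions to closed
            return ("write_shared_buffer", zone) # forced response path
        return ("wait", zone)

ctl = DelayedWriteController()
print(ctl.submit(zone=0, command_id=11, size=16384))   # ('wait', 0)
print(ctl.tick(zone=0, elapsed_ms=15000, trigger_ms=15000))
# ('write_shared_buffer', 0): forced flush before the host's deadline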
Therefore, the controller 4 can efficiently use the storage area of the NAND flash memory 5 while responding to a write command within the time expected by the host 2. As a result, it is possible to provide the memory system 3 at low cost by a combination of the delayed write completion and a flash memory that requires a write operation having multiple steps, such as a QLC flash memory. In addition, compatibility with an existing software stack can also be maintained. Specific operation examples of the memory system 3 of the embodiment will be described with reference to FIGS. 11 to 13. FIG. 11 is a block diagram illustrating an example of an operation in accordance with the forced response setting command (setting command) in the memory system 3. The host 2 transmits (issues) the setting command to the memory system 3 ((1) in FIG. 11). For example, a namespace ID and a value of the forced_completion_time m are designated in the setting command. The command reception module 121 of the memory system 3 receives the setting command transmitted by the host 2. Then, the command reception module 121 sends the setting command to the forced response management module 122 ((2) in FIG. 11). The forced response management module 122 updates the NS response management table 23 in accordance with the setting command ((3) in FIG. 11). The forced response management module 122 determines the forced response completion time and the forced response trigger time by using the value of the forced_completion_time m designated in the setting command. Then, the forced response management module 122 generates an entry indicative of the forced response completion time and the forced response trigger time that are associated with the namespace ID designated in the setting command, and adds the entry to the NS response management table 23. More specifically, the forced response management module 122 first acquires the namespace ID and the value of the forced_completion_time m that are designated in the setting command. Here, a namespace specified by the acquired namespace ID is referred to as a target namespace. When the value of the forced_completion_time m is equal to or greater than one, the forced response management module 122 updates the NS response management table 23 so as to respond to a write command whose write destination is the target namespace within the forced response completion time based on the value of the forced_completion_time m. More specifically, the forced response management module 122 calculates, for example, m×100 milliseconds as the forced response completion time. The forced response management module 122 acquires a value of the process_time n. The value of the process_time n is determined before shipment of the memory system 3, for example. The value of the process_time n is stored in, for example, the NAND flash memory 5. Then, the forced response management module 122 calculates, for example, (m−n)×100 milliseconds as the forced response trigger time. Note that although an example in which m and n are designated as time in units of 100 milliseconds has been described here, m and n may be represented by values in any unit as long as the forced response completion time and the forced response trigger time can be calculated. Next, the forced response management module 122 determines whether the calculated forced response completion time and forced response trigger time are valid values.
For example, when the calculated forced response completion time is shorter than the program time tProg for the NAND flash memory 5, the forced response management module 122 determines that the forced response completion time is an invalid value. For example, when the calculated forced response trigger time is a value of zero or less (that is, when m is n or less), the forced response management module 122 determines that the forced response trigger time is an invalid value. Further, for example, when the calculated forced response completion time exceeds the Finish Recommended Limit (FRL) defined in the NVMe standard, the forced response management module 122 determines that the forced response completion time is an invalid value. The FRL is an internal parameter of the memory system 3. The FRL indicates a time limit until a zone becomes the full state since the zone transitions to the opened state. When at least one of the calculated forced response completion time and forced response trigger time is an invalid value, the forced response management module 122 may notify the host 2 of an error of the setting command. When the calculated forced response completion time and forced response trigger time are valid values, the forced response management module 122 adds an entry indicating the acquired namespace ID and the calculated forced response completion time and forced response trigger time to the NS response management table 23. Note that, in a case where the entry corresponding to the acquired namespace ID already exists in the NS response management table 23, the entry is updated using the calculated forced response completion time and forced response trigger time. When the value of the forced_completion_time m designated in the setting command is zero, the forced response management module 122 updates the NS response management table 23 so as not to trigger the forced response. More specifically, when the value of the forced_completion_time m is zero and the entry corresponding to the namespace ID designated in the setting command already exists in the NS response management table 23, the forced response management module 122 deletes the entry from the NS response management table 23. Further, when the value of the forced_completion_time m is zero and no entry corresponding to the namespace ID designated in the setting command exists in the NS response management table 23, the forced response management module 122 does not add an entry corresponding to the namespace ID to the NS response management table 23. With the above configuration, the memory system 3 can control triggering of the forced response to a write command for each namespace in accordance with the setting command received from the host 2; a sketch of this setting-command validation is given below. FIG. 12 is a block diagram illustrating an example of an operation in accordance with a forced response confirmation command in the memory system 3. The forced response confirmation command is a command requesting to provide information on the time that is to elapse until the forced response to a write command is triggered. The forced response confirmation command is realized as, for example, a Get Features command defined in the NVMe standard. Hereinafter, the forced response confirmation command is also referred to as a confirmation command. The host 2 transmits the confirmation command to the memory system 3 ((1) in FIG. 12). For example, a namespace ID is designated in the confirmation command. The command reception module 121 of the memory system 3 receives the confirmation command transmitted by the host 2.
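Before continuing with the confirmation-command flow, the following minimal sketch summarizes the setting-command validation described above, for illustration purposes only; the tProg value, the FRL value, and the table layout are assumptions made for the example.

# Sketch of the setting-command handling described above: compute the two
# times from m and n, validate them against tProg and the FRL, and update
# the per-namespace table. The constants here are illustrative assumptions.

T_PROG_MS = 2       # program time tProg (assumed)
FRL_MS = 60000      # Finish Recommended Limit (assumed)
UNIT_MS = 100

ns_response_table = {}   # namespace ID -> (completion_ms, trigger_ms)

def handle_setting_command(ns_id: int, m: int, n: int) -> str:
    if m == 0:
        # m == 0 disables the forced response for this namespace.
        ns_response_table.pop(ns_id, None)
        return "disabled"
    completion_ms = m * UNIT_MS
    trigger_ms = (m - n) * UNIT_MS
    if completion_ms < T_PROG_MS:      # cannot finish faster than a program
        return "error"
    if trigger_ms <= 0:                # m must exceed n
        return "error"
    if completion_ms > FRL_MS:         # must not exceed the zone's FRL
        return "error"
    ns_response_table[ns_id] = (completion_ms, trigger_ms)
    return "ok"

print(handle_setting_command(ns_id=1, m=200, n=50))   # ok
print(ns_response_table)                              # {1: (20000, 15000)}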
Then, the command reception module 121 sends the confirmation command to the forced response management module 122 ((2) in FIG. 12). In accordance with the confirmation command, the forced response management module 122 acquires information on time for triggering the forced response from the NS response management table 23 ((3) in FIG. 12). Specifically, first, the forced response management module 122 acquires the namespace ID designated in the confirmation command. The forced response management module 122 specifies an entry in the NS response management table 23 that includes the acquired namespace ID. Then, the forced response management module 122 acquires at least one of the forced response completion time and the forced response trigger time from the specified entry. The forced response management module 122 generates information on time for triggering the forced response by using at least one of the forced response completion time and the forced response trigger time, and sends the information to the command reception module 121 ((4) in FIG. 12). This information indicates, for example, at least one of the forced response completion time and the forced response trigger time. Alternatively, this information may indicate information related to at least one of the forced response completion time and the forced response trigger time (for example, a value of the forced_completion_time m, a value of the process_time n). Then, the command reception module 121 transmits, to the host 2, a response including the information on the time for triggering a forced response ((5) in FIG. 12). With the above configuration, the memory system 3 can provide the host 2 with information on time for triggering the forced response corresponding to a designated namespace in accordance with a confirmation command received from the host 2. FIG. 13 is a block diagram illustrating an example of an operation in accordance with a write command in the memory system 3. The controller (see controller 4 of FIG. 1) of the memory system 3 is configured to forcibly start writing user data into the NAND flash memory 5 when elapsed time since reception of a corresponding write command reaches the forced response trigger time while performing writing according to the delayed write completion. When the written user data becomes readable, the controller transmits a response, which indicates that processing in accordance with the corresponding write command has been completed, to the host 2. The controller manages at least one storage area in the NAND flash memory 5 corresponding to at least one zone (referred to as a NAND zone 51) and at least one shared write buffer 52. The NAND zone 51 is a storage area obtained by logically dividing the storage area of the NAND flash memory 5. The shared write buffer 52 is a storage area in which user data to be written into the at least one NAND zone 51 is stored in a nonvolatile way. The at least one NAND zone 51 is, for example, associated with any of the at least one shared write buffer 52. Further, one shared write buffer 52 may be associated with one or more NAND zones 51. That is, a shared write buffer 52 may be shared by one or more NAND zones 51. Information indicating a correspondence between the NAND zone 51 and the shared write buffer 52 is stored in, for example, the DRAM 6. A configuration and an operation of the host 2 are similar to those of the host 2A in the first comparative example and the host 2B in the second comparative example.
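A confirmation-command handler can be pictured as a lookup into the same per-namespace table; the sketch below reuses the illustrative ns_response_table from the previous sketch and returns the stored times converted back to units of 100 milliseconds, one of the response formats the text allows:

```python
# Sketch of the confirmation-command lookup, reusing the illustrative
# ns_response_table (nsid -> (completion_ms, trigger_ms)) defined earlier.
def handle_confirmation(nsid: int) -> tuple[int, int] | None:
    entry = ns_response_table.get(nsid)
    if entry is None:
        return None                      # no forced response configured for this namespace
    completion_ms, trigger_ms = entry
    # One permitted response format: values converted back to 100 ms
    # units, i.e. m and (m - n).
    return completion_ms // 100, trigger_ms // 100
```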
Hereinafter, an operation of the memory system 3 will be mainly described regarding differences from the memory system 3A of the first comparative example and the memory system 3B of the second comparative example. The command reception module 121 of the memory system 3 fetches a write command from a location in the submission queue 401 indicated by an SQ Head pointer ((1) in FIG. 13). The write command is associated with user data to be written to any of at least one zone. More specifically, for example, a command ID, an LBA, a data length, data buffer information, and a namespace ID are designated in the write command. The command reception module 121 sends the fetched write command to the write control module 123 ((2) in FIG. 13). Hereinafter, the fetched write command is also referred to as a first target write command. The write control module 123 acquires the command ID, the LBA, the data length, and the data buffer information designated in the first target write command. The write control module 123 specifies a zone that includes the acquired LBA. The write control module 123 updates the write management table 26 using the acquired command ID, LBA, data length, and data buffer information and the specified zone ((3) in FIG. 13). Then, the write control module 123 determines whether user data of the write unit for a zone is stored in a data buffer 403 of the host 2 by using the write management table 26. When user data of the write unit for a zone is stored in the data buffer 403, the write control module 123 transfers the user data of the write unit from the data buffer 403 to the NAND flash memory 5, and writes (programs) the transferred user data of the write unit into the NAND zone 51 (here, a NAND zone 511) in the NAND flash memory 5 ((4) in FIG. 13). Note that the user data of the write unit to be transferred may be buffered temporarily in the DRAM 6. The user data of the write unit is, for example, data obtained by combining multiple pieces of user data to be written in accordance with multiple write commands. The write commands include the first target write command. Then, when there is a write command for which corresponding user data becomes readable from the NAND flash memory 5, the write control module 123 notifies the command reception module 121 of completion of processing in accordance with the write command ((5) in FIG. 13). The write control module 123 notifies the command reception module 121 of completion of processing in accordance with each of the write commands that correspond to the user data of the write unit. Further, the write control module 123 deletes entries corresponding to the respective write commands from the write management table 26 ((6) in FIG. 13). Whenever it receives a notification from the write control module 123, the command reception module 121 transmits, to the host 2, a response indicating that processing in accordance with a corresponding write command is completed. More specifically, in accordance with a notification from the write control module 123, the command reception module 121 writes a completion notification of the corresponding write command into a location in the completion queue 402 indicated by a CQ Tail pointer, and issues an interrupt ((7) in FIG. 13). The notification by the write control module 123 includes, for example, a command ID of a write command for which the processing has been completed.
Operations performed by the command reception module 121 and the host 2 after issuing the interrupt are similar to the operations performed by the command reception module 121A and the host 2A of the first comparative example. Note that, in a case where an error occurs in writing (programming) of user data, the write control module 123 and the command reception module 121 transmit, to the host 2, a response that indicates that the error has occurred in processing in accordance with a corresponding write command. That is, the command reception module 121 writes an error notification for the write command at a location in the completion queue 402 indicated by the CQ Tail pointer. Further, when user data of the write unit for a zone is not stored in the data buffer 403 of the host 2, the write control module 123 does not write the user data to the NAND flash memory 5. Further, the write control module 123 instructs the forced response management module 122 to manage the first target write command ((8) in FIG. 13). This instruction includes, for example, the command ID and the namespace ID designated in the first target write command. In accordance with the instruction from the write control module 123, the forced response management module 122 specifies an entry in the NS response management table 23 that corresponds to the namespace ID in the instruction. The forced response management module 122 acquires the forced response trigger time from the specified entry ((9) in FIG. 13). That is, the forced response management module 122 acquires the forced response trigger time for forcibly starting processing in accordance with the first target write command from the NS response management table 23. Next, the forced response management module 122 adds an entry including the command ID in the instruction from the write control module 123 and the acquired forced response trigger time, to the command response management table 24 ((10) in FIG. 13). The forced response trigger time is used as an initial value of the remaining time until the processing in accordance with the first target write command is forcibly started (that is, the time until triggering). The forced response management module 122 decreases the time until triggering included in each entry in the command response management table 24, for example, according to a lapse of time measured by the timer 16. Then, when the time until triggering becomes zero (that is, when the forced response trigger time has elapsed since reception of the corresponding write command), the forced response management module 122 instructs the write control module 123 to forcibly start the processing in accordance with the corresponding write command ((11) in FIG. 13). This instruction includes, for example, a command ID in an entry in which the time until triggering has become zero. Hereinafter, a write command indicated by the entry in which the time until triggering has become zero (that is, a write command for which the corresponding processing needs to be forcibly started) is referred to as a second target write command. In accordance with the instruction by the forced response management module 122, the write control module 123 acquires an entry including the command ID in the instruction from the write management table 26 ((12) in FIG. 13). Hereinafter, the entry including the command ID in the instruction is referred to as a first entry. Further, it is assumed that a NAND zone 51 corresponding to a zone indicated by the first entry is a NAND zone 512.
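The per-command countdown can be sketched as a small table whose entries are decremented on each timer tick; the class and method names here are illustrative assumptions, not the controller's actual interface:

```python
# Sketch of the command response management table countdown. Each entry
# starts at the forced response trigger time and is decremented as the
# timer advances; expired entries identify "second target" write commands.
class CommandResponseTable:
    def __init__(self) -> None:
        self.remaining_ms: dict[int, int] = {}  # command ID -> time until triggering

    def add(self, cmd_id: int, trigger_ms: int) -> None:
        self.remaining_ms[cmd_id] = trigger_ms

    def on_timer_tick(self, elapsed_ms: int) -> list[int]:
        """Decrement all entries; return command IDs whose countdown hit zero."""
        expired = []
        for cmd_id in list(self.remaining_ms):
            self.remaining_ms[cmd_id] -= elapsed_ms
            if self.remaining_ms[cmd_id] <= 0:
                expired.append(cmd_id)  # processing for these is forcibly started
        return expired
```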
The write control module 123 transfers user data, which is to be written in accordance with the second target write command, from the data buffer 403 to the NAND flash memory 5 using the first entry ((13) in FIG. 13). Note that the user data to be transferred may be buffered temporarily in the DRAM 6. The write control module 123 writes the transferred user data with padding into the shared write buffer 52 (here, a shared write buffer 521) of the NAND flash memory 5 ((13) in FIG. 13). The write control module 123 specifies the user data to be written in accordance with the second target write command by using the LBA, the data length, and the data buffer information included in the first entry. The shared write buffer 521 into which the user data is written is associated with the NAND zone 51 (here, the NAND zone 512) in which the user data is originally to be written. Further, writing user data with padding means writing data of the write unit that includes the user data and data for padding. The QLC method may be used for writing data to the shared write buffer 52. Alternatively, a single-level cell (SLC) method in which 1-bit data is stored per memory cell may be used for writing data to the shared write buffer 52. Note that, in a case where the write management table 26 includes one or more entries including the same zone as the zone indicated by the first entry, the write control module 123 may transfer multiple pieces of user data, which are to be written in accordance with multiple write commands corresponding to the one or more entries, from the data buffer 403 to the NAND flash memory 5 using the first entry and the one or more entries, and write the transferred pieces of user data with padding into the shared write buffer 521 ((13) in FIG. 13). Here, an operation performed by the write control module 123 in a case where the NAND zone 511 different from the NAND zone 512 is also associated with the shared write buffer 521 will be described. In this case, the write control module 123 further uses one or more entries in the write management table 26 each including a zone corresponding to the other NAND zone 511. The write control module 123 may specify one or more pieces of user data that are to be written, respectively, in accordance with one or more write commands corresponding to the one or more entries. Then, the write control module 123 may include the specified one or more pieces of user data in the above-described user data to be transferred from the data buffer 403 and written into the shared write buffer 521. At that time, the time until triggering that corresponds to each of the one or more write commands managed by the command response management table 24 may be considered. For example, only user data corresponding to a write command for which the time until triggering is less than a threshold may be included in the above-described user data to be transferred from the data buffer 403 and written to the shared write buffer 521. For example, a description will be given regarding a case where the command reception module 121 fetches a write command for a zone corresponding to the NAND zone 511, which is different from the NAND zone 512, from the submission queue 401 before the forced response trigger time elapses since reception of the second target write command. Both of the two NAND zones 511 and 512 are associated with the shared write buffer 521. Hereinafter, the fetched write command for the zone corresponding to the NAND zone 511 is referred to as a third target write command.
The third target write command is managed using the write management table 26 and the command response management table 24. When the forced response trigger time has elapsed since the reception of the second target write command, the write control module 123 transfers user data to be written in accordance with the second target write command and user data to be written in accordance with the third target write command from the data buffer 403 to the NAND flash memory 5. Then, the write control module 123 writes data including the transferred user data into the shared write buffer 521. More specifically, in a case where the size of the transferred user data corresponds to the write unit, the write control module 123 writes the transferred user data into the shared write buffer 521. Further, in a case where the size of the transferred user data is less than the write unit, the write control module 123 writes the transferred user data with padding into the shared write buffer 521. In this manner, when the NAND zone 51 (here, the NAND zone 512) corresponding to a zone targeted by the second target write command for which the forced response trigger time has elapsed and the other NAND zone 51 (here, the NAND zone 511) corresponding to a zone targeted by the third target write command are commonly associated with the shared write buffer 521, the user data corresponding to the third target write command can also be included in the above-described user data to be transferred from the data buffer 403 and written to the shared write buffer 521. Next, when there is a write command for which corresponding user data becomes readable from the NAND flash memory 5, the write control module 123 notifies the command reception module 121 and the forced response management module 122 of completion of processing in accordance with the write command ((14) in FIG. 13), and the write control module 123 deletes an entry corresponding to each of one or more write commands from the write management table 26 ((15) in FIG. 13). The one or more write commands include the second target write command corresponding to the first entry. Whenever it receives a notification from the write control module 123, the command reception module 121 transmits, to the host 2, a response indicating that processing in accordance with a corresponding write command is completed. More specifically, in accordance with the notification from the write control module 123, the command reception module 121 writes a completion notification of the corresponding write command into a location in the completion queue 402 indicated by a CQ Tail pointer, and issues an interrupt ((16) in FIG. 13). Subsequent operations performed by the command reception module 121 and the host 2 are similar to the operations performed by the command reception module 121A and the host 2A of the first comparative example. Note that, in a case where an error occurs in writing (programming) of user data, the write control module 123 and the command reception module 121 transmit, to the host 2, a response which indicates that the error has occurred in processing in accordance with a corresponding write command. That is, the command reception module 121 writes an error notification for the write command at a location in the completion queue 402 indicated by the CQ Tail pointer. Further, the forced response management module 122 deletes one or more entries that correspond to the one or more write commands, respectively, from the command response management table 24 in accordance with the notification from the write control module 123 ((17) in FIG. 13).
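The size check before writing to the shared write buffer amounts to padding the collected data up to the write unit; a minimal sketch, assuming a byte-oriented write unit and zero-fill padding (the specification does not fix the padding pattern):

```python
# Sketch of padding collected user data up to the write unit before it is
# written to the shared write buffer. WRITE_UNIT and the zero-fill pattern
# are illustrative assumptions.
WRITE_UNIT = 64 * 1024  # bytes per program operation (example value)

def pad_to_write_unit(chunks: list[bytes]) -> bytes:
    data = b"".join(chunks)  # user data from one or more write commands
    assert len(data) <= WRITE_UNIT
    if len(data) == WRITE_UNIT:
        return data          # size corresponds to the write unit: write as-is
    return data + b"\x00" * (WRITE_UNIT - len(data))  # write with padding
```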
The notification from the write control module 123 may include, for example, information indicating a command ID of each of the one or more write commands for which the processing has been completed and information indicating the NAND zone 51 (here, for example, the NAND zone 512). The NAND zone 512 included in the notification is a NAND zone 51 into which user data corresponding to the one or more write commands was to be originally written. The forced response management module 122 instructs the zone management module 124 to transition a state of a zone corresponding to the NAND zone 512 included in the notification from the opened state to the closed state ((18) in FIG. 13). The zone management module 124 transitions the zone corresponding to the NAND zone 512 from the opened state to the closed state in accordance with the instruction from the forced response management module 122 ((19) in FIG. 13). For example, the zone management module 124 updates the zone descriptor 25 so as to indicate that the zone corresponding to the NAND zone 512 is in the closed state. Then, the zone management module 124 notifies the host 2 that the state of the zone corresponding to the NAND zone 512 has been changed ((20) in FIG. 13). Note that, in a case where user data for a zone corresponding to another NAND zone 51 (here, the NAND zone 511) is also written into the shared write buffer 521 that is commonly associated with the NAND zone 512, the write control module 123, the forced response management module 122, and the zone management module 124 operate to transition the zone corresponding to the NAND zone 511 from the opened state to the closed state in the same manner. Note that, in a case where user data of the write unit for a zone is not stored in the data buffer 403 of the host 2 and there is no write command whose elapsed time since reception has reached the forced response trigger time (that is, the time until triggering has become zero), the command reception module 121, the write control module 123, and the forced response management module 122 may repeat an operation of fetching a write command from the submission queue 401 and updating the write management table 26 and the command response management table 24 on the basis of the fetched write command. Through the above operation, in the memory system 3 of the embodiment, when the total amount of user data that is requested to be written into a zone by multiple write commands from the host 2 reaches the write unit, the controller 4 writes the user data of the write unit into the NAND zone 51 that corresponds to the zone. Then, the controller 4 transmits responses corresponding to the respective write commands to the host 2. As a result, the memory system 3 can efficiently use the storage area of the NAND flash memory 5. Further, when there is a write command whose elapsed time since reception has reached the forced response trigger time, the controller 4 writes corresponding user data into the shared write buffer 52, instead of the NAND zone 51. Then, the controller 4 transmits a completion notification for the write command to the host 2. In this manner, the controller 4 switches a write destination of user data between (1) a case where user data of the write unit to be written in a zone of the write destination of a write command has been stored in the data buffer 403 before elapsed time since reception of the write command reaches the forced response trigger time and (2) a case where there is a write command whose elapsed time since reception has reached the forced response trigger time.
In the case (1), writing of the corresponding user data to the NAND zone 51 is started before the elapsed time since reception of the write command reaches the forced response trigger time, and thus, the controller 4 can respond to the write command within the time expected by the host 2. Further, in the case (2), the controller 4 can respond to the write command within the time expected by the host 2 by writing the corresponding user data to the shared write buffer 52. Therefore, in the memory system 3, the storage area of the NAND flash memory 5 can be efficiently used while responding to the write command within the time expected by the host 2. Note that the above-described operation in the memory system 3 can be applied not only to a case where data stored in the NAND flash memory 5 is managed on a zone basis, but also to a case where data stored in the NAND flash memory 5 is managed on an LBA basis. In this case, the operations by the zone management module 124 (for example, transition of a zone to the closed state and notification of change of a zone state to the host 2) are not performed. In this case, the controller 4 manages at least one storage area in the NAND flash memory 5 and at least one shared write buffer 52. The at least one storage area is a storage area obtained by logically dividing the storage area of the NAND flash memory 5. The at least one storage area corresponds to at least one LBA. Next, procedures of processes executed in the memory system 3 and the host 2 will be described with reference to flowcharts of FIGS. 14 to 20. (Process for Setting Triggering of Forced Response in Memory System 3) FIG. 14 is a flowchart illustrating an example of the procedure of a setting process executed by the CPU 12. The CPU 12 starts the setting process in accordance with reception of a setting command from the host 2. The setting process is a process for setting time between reception of a write command by the CPU 12 and forcible start of processing in accordance with the write command by the CPU 12. First, the CPU 12 acquires a namespace ID designated in the setting command (step S101). Further, the CPU 12 acquires a value of the forced_completion_time m designated in the setting command (step S102). For example, m is an integer of zero or more. Next, the CPU 12 determines whether the acquired value of the forced_completion_time m is zero (step S103). When the value of forced_completion_time m is zero (YES in step S103), the CPU 12 determines whether the NS response management table 23 includes an entry corresponding to the acquired namespace ID (step S104). When the NS response management table 23 includes an entry corresponding to the acquired namespace ID (YES in step S104), the CPU 12 deletes the entry corresponding to the acquired namespace ID from the NS response management table 23 (step S105), and ends the setting process. As a result, the CPU 12 sets processing and responding of a write command for the namespace that is associated with the acquired namespace ID not to be forcibly triggered. When the NS response management table 23 does not include an entry corresponding to the acquired namespace ID (NO in step S104), the CPU 12 ends the setting process. That is, the CPU 12 ends the setting process since processing and responding of a write command for the namespace associated with the acquired namespace ID have already been set so as not to be forcibly triggered.
When the value of the forced_completion_time m is not zero (NO in step S103), that is, is one or more, the CPU 12 calculates m×100 milliseconds as the forced response completion time (step S106). Then, the CPU 12 acquires a value of the process_time n (step S107). The CPU 12 calculates (m−n)×100 milliseconds as the forced response trigger time (step S108). The CPU 12 determines whether the calculated forced response completion time and forced response trigger time are valid values (step S109). When at least one of the calculated forced response completion time and forced response trigger time is an invalid value (NO in step S109), the CPU 12 notifies the host 2 of an error (step S110), and ends the setting process. When the calculated forced response completion time and forced response trigger time are valid values (YES in step S109), the CPU 12 determines whether the NS response management table 23 includes an entry corresponding to the acquired namespace ID (step S111). When the NS response management table 23 includes an entry corresponding to the acquired namespace ID (YES in step S111), the CPU 12 updates the entry using the calculated forced response completion time and forced response trigger time (step S112), and ends the setting process. As a result, the CPU 12 can change the forced response completion time and the forced response trigger time that are associated with the designated namespace in accordance with the setting command. On the other hand, when the NS response management table 23 does not include an entry corresponding to the acquired namespace ID (NO in step S111), the CPU 12 adds an entry including the acquired namespace ID and the calculated forced response completion time and forced response trigger time to the NS response management table 23 (step S113), and ends the setting process. As a result, the CPU 12 can set the forced response completion time and the forced response trigger time that are associated with the designated namespace in accordance with the setting command. Through the above setting process, according to the setting command, the CPU 12 can set (or change) whether to trigger the forced response when a write command for the designated namespace is received. Further, according to the setting command, the CPU 12 can set the time between reception of a write command and forcible start of processing in accordance with the write command. (Process for Confirming Triggering of Forced Response in Memory System 3) FIG. 15 is a flowchart illustrating an example of the procedure of a confirmation process executed by the CPU 12. The CPU 12 starts the confirmation process in accordance with reception of a confirmation command from the host 2. The confirmation process is a process for providing the host 2 with information on time for triggering the forced response set for a namespace. First, the CPU 12 acquires a namespace ID designated in the confirmation command (step S201). The CPU 12 specifies an entry in the NS response management table 23 that corresponds to the acquired namespace ID (step S202). The CPU 12 acquires the forced response completion time from the specified entry (step S203). Then, the CPU 12 transmits the acquired forced response completion time to the host 2 (step S204). The CPU 12 may transmit information on the forced response completion time to the host 2.
The information on the forced response completion time is, for example, a value obtained by converting the forced response completion time into units of 100 milliseconds, that is, a value of the forced_completion_time m (i.e., the forced response completion time/100). Further, the CPU 12 may acquire the forced response trigger time from the specified entry and transmit the forced response trigger time to the host 2. Alternatively, the CPU 12 may transmit information on the forced response trigger time to the host 2. The information on the forced response trigger time is, for example, a value obtained by converting the forced response trigger time into units of 100 milliseconds, that is, a difference between the value of the forced_completion_time m and a value of the process_time n (i.e., m−n). Through the above confirmation process, in accordance with the confirmation command, the CPU 12 can provide the host 2 with the information on at least one of the forced response completion time and the forced response trigger time set for the namespace. (Process for Writing User Data to Memory System 3 in Host 2) FIG. 16 is a flowchart illustrating an example of the procedure of a write request process executed in the host 2. The write request process is a process in which the host 2 requests the memory system 3 to write user data. The write request process is executed by, for example, a processor provided in the host 2 executing a program. The host 2 stores, in the data buffer 403, user data to be written into the NAND flash memory 5 of the memory system 3 (step S301). Then, the host 2 writes a write command into a location in the submission queue 401 indicated by an SQ Tail pointer (step S302). This write command is a write command requesting writing of the stored user data. Next, the host 2 adds one to the SQ Tail pointer (step S303). When a value obtained by adding one to the SQ Tail pointer reaches the number of slots in the submission queue 401, the host 2 sets the SQ Tail pointer to zero. Then, the host 2 writes the updated SQ Tail pointer into the SQ Tail doorbell register of the memory system 3 (step S304). Through the above write request process, the host 2 can request the memory system 3 to write the user data stored in the data buffer 403. The host 2 issues the write command to the memory system 3 via the submission queue 401, thereby requesting the memory system 3 to write the user data. FIG. 17 is a flowchart illustrating an example of the procedure of a response reception process executed in the host 2. The response reception process is a process in which the host 2 receives a response to a write command from the memory system 3. The response reception process is executed by, for example, a processor provided in the host 2 executing a program. The host 2 starts execution of the response reception process in accordance with reception of an interrupt that is issued by the memory system 3. First, the host 2 fetches a completion notification from a location in the completion queue 402 indicated by a CQ Head pointer (step S401). This completion notification is a response indicating that writing of corresponding user data to the NAND flash memory 5 has been completed in accordance with a write command issued by the host 2. Next, the host 2 adds one to the CQ Head pointer (step S402). When a value obtained by adding one to the CQ Head pointer reaches the number of slots in the completion queue 402, the host 2 sets the CQ Head pointer to zero. The host 2 writes the updated CQ Head pointer into the CQ Head doorbell register of the memory system 3 (step S403).
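The pointer arithmetic in steps S303 and S402 is ordinary ring-buffer wraparound; a minimal sketch, with the queue depth as an illustrative assumption:

```python
# Sketch of the queue pointer advance with wraparound used in steps S303
# and S402. NUM_SLOTS is an illustrative queue depth.
NUM_SLOTS = 64

def advance(pointer: int) -> int:
    """Add one to a queue pointer, wrapping to zero at the slot count."""
    pointer += 1
    if pointer == NUM_SLOTS:
        pointer = 0
    return pointer

sq_tail = 63
sq_tail = advance(sq_tail)  # wraps around: 63 -> 0
# The host then writes the updated pointer to the SQ Tail doorbell register.
```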
Then, the host 2 clears the interrupt received from the memory system 3 (step S404). Next, the host 2 releases an area in the data buffer 403 in which the user data which has been written is stored, on the basis of the fetched completion notification (step S405). Through the above response reception process, the host 2 can release the area in the data buffer 403 in which the user data is stored when the writing of the user data corresponding to the issued write command is completed. Note that the host 2 can further receive, in the response reception process, a notification from the memory system 3 indicating that a zone has transitioned to the closed state. More specifically, when writing of user data corresponding to an issued write command is performed in accordance with triggering of the forced response, the host 2 may receive a notification indicating that a zone in which the user data was originally to be written has transitioned to the closed state. In accordance with this notification, the host 2 stops, for example, issuing a write command requesting writing of user data to the zone. (Process for Writing User Data in Memory System 3) FIG. 18 is a flowchart illustrating an example of the procedure of a write control process executed by the CPU 12. The write control process is a process in which the CPU 12 receives a write command issued by the host 2 and controls writing of user data corresponding to write commands that have been received. Here, a case where the CPU 12 receives only write commands from the submission queue 401 of the host 2 will be described for easy understanding. First, the CPU 12 determines whether an SQ Head pointer is equal to an SQ Tail pointer (step S501). That is, the CPU 12 determines whether there is a write command to be fetched in the submission queue 401 using the SQ Head pointer and the SQ Tail pointer. When the SQ Head pointer is different from the SQ Tail pointer (NO in step S501), a write command to be fetched exists in the submission queue 401, and thus, the CPU 12 executes a command reception process (step S502). The command reception process is a process in which the CPU 12 receives a write command and acquires information for managing the received write command. More specifically, in the command reception process, the write management table 26 is updated in accordance with the received write command. The write management table 26 manages information on the received write command, for example, for each zone in which corresponding user data is to be written. A specific procedure of the command reception process will be described later with reference to a flowchart of FIG. 19. Next, the CPU 12 determines whether user data of the write unit for a zone is stored in the data buffer 403 of the host 2 (step S503). When user data of the write unit for a zone is stored in the data buffer 403 (YES in step S503), the CPU 12 transfers the user data of the write unit from the data buffer 403 to the DRAM 6 of the memory system 3 (step S504). Hereinafter, the zone in which the user data of the write unit is to be written is referred to as a first target zone. Further, a NAND zone 51 corresponding to the first target zone is also referred to as a first target NAND zone 51. The CPU 12 transfers the transferred user data to the NAND flash memory 5 and writes the user data in the first target NAND zone 51 (step S505). Note that the CPU 12 may transfer the user data of the write unit from the data buffer 403 to the NAND flash memory 5 without buffering the user data in the DRAM 6. That is, the CPU 12 may skip step S504.
Next, the CPU 12 determines whether there is a write command for which corresponding user data becomes readable (step S506). When there is no write command for which corresponding user data becomes readable (NO in step S506), the processing by the CPU 12 returns to step S506. That is, since the CPU 12 has not notified the host 2 of completion of writing of user data in accordance with a write command yet, the processing by the CPU 12 returns to step S506. When there is a write command for which corresponding user data becomes readable (YES in step S506), the CPU 12 executes a response process (step S507). The response process is a process for notifying the host 2 that writing of user data in accordance with a write command has been completed and updating information for managing the write command. A specific procedure of the response process will be described later with reference to a flowchart of FIG. 20. Then, in step S508, the CPU 12 determines whether responses have been transmitted for all write commands corresponding to the user data of the write unit written in step S505. When no response has been transmitted for at least one of the write commands corresponding to the user data of the write unit (NO in step S508), the processing by the CPU 12 returns to step S506. On the other hand, when the responses have been transmitted for all the write commands corresponding to the user data of the write unit (YES in step S508), the processing by the CPU 12 proceeds to step S501. That is, the CPU 12 continues a process for receiving a new write command from the host 2 and controlling writing of user data corresponding to write commands that have been received. Further, when user data of the write unit for a zone is not stored in the data buffer 403 of the host 2 (NO in step S503), in step S509, the CPU 12 updates the command response management table 24 using the write command received in step S502, and the processing by the CPU 12 proceeds to step S510. Specifically, the CPU 12 acquires the forced response trigger time from the NS response management table 23 using a namespace ID designated in the write command received in step S502. Then, the CPU 12 adds an entry which includes the command ID designated in the write command and the acquired forced response trigger time, to the command response management table 24. The forced response trigger time in the added entry is used as an initial value of time until processing in accordance with the write command is forcibly started (that is, the time until triggering). The CPU 12 decreases the time until triggering included in each entry in the command response management table 24, for example, according to a lapse of time measured by the timer 16. When the SQ Head pointer is equal to the SQ Tail pointer (YES in step S501), the processing by the CPU 12 proceeds to step S510. Next, the CPU 12 determines whether there is a write command whose elapsed time since reception has reached the forced response trigger time (step S510). Specifically, for example, when the command response management table 24 includes an entry indicating a write command in which the time until triggering becomes zero, the CPU 12 determines that there is a write command whose elapsed time since reception has reached the forced response trigger time. When there is no write command whose elapsed time since reception has reached the forced response trigger time (NO in step S510), the processing by the CPU 12 proceeds to step S501.
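Steps S501 to S510 form a polling loop; the sketch below condenses that control flow, with every method on the ctrl object an assumed placeholder for an operation named in the text rather than the controller's real interface:

```python
# Condensed sketch of the write control loop of FIG. 18 (steps S501-S510
# plus the two write paths). Every ctrl method is an illustrative
# placeholder for an operation described in the text.
def write_control_loop(ctrl) -> None:
    while True:
        if ctrl.sq_head != ctrl.sq_tail:            # S501: command waiting?
            cmd = ctrl.fetch_command()              # S502: command reception process
            if ctrl.write_unit_ready(cmd.zone):     # S503: full write unit buffered?
                ctrl.program_zone(cmd.zone)         # S504/S505: normal zone write
                ctrl.respond_all(cmd.zone)          # S506-S508: response process
                continue
            ctrl.arm_forced_response(cmd)           # S509: start the countdown
        for cmd in ctrl.expired_commands():         # S510: trigger time reached
            ctrl.flush_to_shared_write_buffer(cmd)  # S511-S513: forced write path
            ctrl.respond_all_forced(cmd)            # S514-S516
            ctrl.close_zone(cmd.zone)               # S517/S518
```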
When there is a write command whose elapsed time since reception has reached the forced response trigger time (YES in step S510), the CPU 12 specifies a zone into which user data corresponding to the write command is to be written (hereinafter, referred to as a second target zone) (step S511). The CPU 12 specifies the second target zone using, for example, the write management table 26. Then, the CPU 12 transfers the user data to be written into the second target zone from the data buffer 403 of the host 2 to the DRAM 6 of the memory system 3 (step S512). More specifically, the CPU 12 specifies at least one piece of user data to be written into the second target zone, for example, using the write management table 26. The at least one piece of user data corresponds to at least one write command, respectively. The CPU 12 transfers the at least one specified piece of user data from the data buffer 403 to the DRAM 6. The size of the transferred user data is smaller than the write unit to the NAND flash memory 5. Next, the CPU 12 writes the transferred user data with padding into the shared write buffer 52 (step S513). That is, the CPU 12 adds padding data to the user data, thereby writing the data of the write unit to the shared write buffer 52. The shared write buffer 52 into which the user data is written is associated with the NAND zone 51 corresponding to the second target zone. Note that the CPU 12 may transfer the user data from the data buffer 403 to the NAND flash memory 5 without buffering the user data in the DRAM 6. That is, the CPU 12 may skip step S512. Next, the CPU 12 determines whether there is a write command for which corresponding user data becomes readable (step S514). When there is no write command for which corresponding user data becomes readable (NO in step S514), the processing by the CPU 12 returns to step S514. When there is a write command for which corresponding user data becomes readable (YES in step S514), the CPU 12 executes the response process (step S515). That is, the CPU 12 notifies the host 2 that the writing of the user data in accordance with the write command has been completed, and updates information for managing the write command. Then, in step S516, the CPU 12 determines whether responses have been transmitted for all write commands corresponding to the user data written in step S513. When no response has been transmitted for at least one of the write commands corresponding to the written user data (NO in step S516), the processing by the CPU 12 returns to step S514. On the other hand, when the responses have been transmitted for all the write commands corresponding to the written user data (YES in step S516), the CPU 12 transitions the second target zone to the closed state (step S517). Then, the CPU 12 notifies the host 2 of the change of the state of the second target zone (step S518), and the processing by the CPU 12 proceeds to step S501. Note that the CPU 12 may transition the second target zone to the full state instead of the closed state. For example, in step S513, the CPU 12 writes the transferred user data and padding data into the shared write buffer 52 so that the second target zone enters a state in which data has been written to the whole of the second target zone (i.e., the full state). In this case, after responding to all the write commands corresponding to the written user data, the CPU 12 notifies the host 2 that, for example, the second target zone is in the full state. Then, the processing by the CPU 12 proceeds to step S501.
Through the above write control process, the CPU 12 can receive a write command from the host 2 and control writing of user data corresponding to write commands that have been received. Specifically, when user data of the write unit to be written into the first target zone is stored in the data buffer 403, the CPU 12 writes the user data of the write unit to the first target NAND zone 51. Then, the CPU 12 transmits, to the host 2, a completion notification for the write command corresponding to the user data that has been written. Further, when there is a write command whose elapsed time since reception has reached the forced response trigger time, the CPU 12 specifies the second target zone into which the corresponding user data is to be written. The CPU 12 writes the user data, which is to be written into the second target zone and is stored in the data buffer 403, not into the NAND zone 51 but into the shared write buffer 52. Then, the CPU 12 transmits, to the host 2, a completion notification for the write command corresponding to the user data that has been written. As a result, the memory system 3 can efficiently use the storage area of the NAND flash memory 5 while responding to a write command within the time expected by the host 2. FIG. 19 is a flowchart illustrating an example of the procedure of the command reception process executed by the CPU 12. The command reception process is a process for receiving a write command and acquiring information for managing the received write command. The command reception process corresponds to step S502 of the write control process described above with reference to FIG. 18. First, the CPU 12 fetches a write command from a location in the submission queue 401 indicated by an SQ Head pointer (step S601). The CPU 12 adds one to the SQ Head pointer (step S602). When the value obtained by adding one to the SQ Head pointer reaches the number of slots in the submission queue 401, the CPU 12 sets the SQ Head pointer to zero. Next, the CPU 12 acquires a command ID, an LBA, a data length, and data buffer information from the fetched write command (step S603). The CPU 12 specifies a zone that includes the acquired LBA (step S604). That is, the CPU 12 specifies a zone to which an LBA range including the acquired LBA has been allocated. Then, the CPU 12 updates the write management table 26 (step S605), and ends the command reception process. Specifically, the CPU 12 adds, to the write management table 26, an entry indicating the acquired command ID, LBA, data length, and data buffer information, and the specified zone. Through the above command reception process, the CPU 12 can receive a write command from the host 2 and acquire information for managing the received write command. The CPU 12 updates the write management table 26 using the acquired information for managing the write command. The CPU 12 can manage user data to be written into the NAND flash memory 5 in accordance with each write command by using the write management table 26. FIG. 20 is a flowchart illustrating an example of the procedure of the response process executed by the CPU 12. The response process is a process for notifying the host 2 that writing of user data in accordance with a write command has been completed and updating information for managing the write command. The response process corresponds to each of steps S507 and S515 of the write control process described above with reference to FIG. 18.
First, the CPU 12 writes a completion notification of a target write command into a location in the completion queue 402 that is indicated by a CQ Tail pointer (step S701). The target write command is a write command for which writing of the corresponding user data to the NAND flash memory 5 has been completed. Next, the CPU 12 adds one to the CQ Tail pointer (step S702). When the value obtained by adding one to the CQ Tail pointer reaches the number of slots in the completion queue 402, the CPU 12 sets the CQ Tail pointer to zero. Then, the CPU 12 issues an interrupt to the host 2 (step S703). The CPU 12 issues the interrupt to notify the host 2 that there is a new completion notification to be processed in the completion queue 402. The CPU 12 updates the write management table 26 (step S704). Specifically, the CPU 12 deletes an entry corresponding to the target write command from the write management table 26. Then, the CPU 12 updates the command response management table 24 (step S705), and ends the response process. Specifically, the CPU 12 deletes an entry corresponding to the target write command from the command response management table 24. When there is no entry corresponding to the target write command in the command response management table 24, the CPU 12 skips the procedure of step S705 and ends the response process. For example, when the CPU 12 determines that user data of the write unit for a zone is stored in the data buffer 403 in accordance with reception of the target write command, an entry corresponding to the target write command is not added to the command response management table 24. In this case, the CPU 12 skips the procedure of step S705. Through the above response process, the CPU 12 can notify the host 2 that writing of user data in accordance with a write command has been completed, and update information for managing the write command. As described above, according to the embodiment, the memory system 3 can respond to a write request within the time expected by the host 2. The controller 4 receives, from the host 2, a first write request associated with first data having a size less than a first data unit which is a write unit to the NAND flash memory 5. In response to a lapse of a first time since the reception of the first write request, the controller 4 starts a write process of second data including at least the first data to the NAND flash memory 5. The controller 4 transmits a first response to the first write request to the host 2 in response to completion of the write process. The first time is a time obtained by subtracting a second time from a third time. The third time is designated by the host 2 as a time limit of the transmission of the first response since the reception of the first write request. In this manner, the controller 4 forcibly starts the write process of the second data, which includes at least the first data associated with the first write request, to the NAND flash memory 5 when the first time has elapsed since the reception of the first write request, and thus can respond to the first write request within the time expected by the host 2. Each of various functions described in the embodiment may be realized by a circuit (e.g., a processing circuit). An exemplary processing circuit may be a programmed processor such as a central processing unit (CPU). The processor executes computer programs (instructions) stored in a memory, thereby performing the described functions. The processor may be a microprocessor including an electric circuit.
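The response process (steps S701 to S705) pairs the completion-queue update with cleanup of both management tables; a compact sketch under the same illustrative names used in the earlier sketches:

```python
# Sketch of the response process (steps S701-S705). The queue and the two
# dicts stand in for the completion queue 402 and tables 26 and 24; all
# are illustrative assumptions.
completion_queue: list[int | None] = [None] * 64  # illustrative CQ with 64 slots
write_table: dict[int, object] = {}               # stands in for write management table 26
response_table: dict[int, int] = {}               # stands in for command response table 24
cq_tail = 0

def issue_interrupt() -> None:
    print("interrupt issued to host")             # placeholder for the real interrupt

def response_process(cmd_id: int) -> None:
    global cq_tail
    completion_queue[cq_tail] = cmd_id                   # S701: write completion notification
    cq_tail = (cq_tail + 1) % len(completion_queue)      # S702: wrap at the slot count
    issue_interrupt()                                    # S703: notify the host
    write_table.pop(cmd_id, None)                        # S704: drop write management entry
    response_table.pop(cmd_id, None)                     # S705: drop countdown entry, if any
```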
An exemplary processing circuit may be a digital signal processor (DSP), an application specific integrated circuit (ASIC), a microcontroller, a controller, or other electric circuit components. The components other than the CPU described in the embodiment may also be realized in a processing circuit. While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions. | 97,789
11861203 | DETAILED DESCRIPTION Many specific details are set forth in the following description to facilitate a full understanding of the disclosure. However, the disclosure can be implemented in many manners other than those described herein, and those skilled in the art can make similar derivations without departing from the essence of the disclosure; thus, the disclosure is not limited by the specific implementations described below. The disclosure provides methods, apparatuses, and electronic devices for cloud service. The drawings in the following are some embodiments of the disclosure, which are described in detail below. FIG. 1 illustrates a process for cloud service migration according to some embodiments of the disclosure. As shown in FIG. 1, the process of cloud service migration includes the following steps. In the illustrated embodiment, the methods described herein may be performed by a cloud computing platform or similar system. Step S101: obtain a migration request related to a cloud service hosted in a source cluster. As used herein, the cloud service refers to a cloud server (e.g., Elastic Compute Service (ECS)) that provides a basic cloud computing service. A user using the cloud server does not need to purchase hardware devices in advance. Instead, the user creates a required number of cloud server instances based on the service needs. During the use of the cloud server, the user may expand or reduce disk volumes, increase or decrease network bandwidth, and the like with regard to the cloud server based on the changes in the actual service. In some scenarios, the user may release the resources associated with the cloud servers that are no longer in use. As a virtual computing environment, a cloud server instance includes necessary server resource components such as a CPU, a memory, an operating system, a disk, a network bandwidth, etc. A cloud server instance is an operating entity provided by the cloud server to each user, each cloud server instance being a virtual machine. The user may, via management permissions associated with the cloud server instance, perform operations such as disk mounting, caching, image mirroring, environment deployment, and the like, on the cloud server instance. In general, a geographic region is configured with one or more availability zones. As used herein, an availability zone refers to one or more data centers within the same region, where infrastructure such as electricity supply, communication network, and so on is isolated from that of other zones. As such, faults are isolated between availability zones, and the network delay between cloud server instances is lower within the same availability zone. Whether to place cloud server instances in the same availability zone depends on the requirements for disaster tolerance as well as network delay sensitivity. If there are heightened requirements for disaster tolerance, cloud server instances are deployed in different availability zones of the same geographic region. If there are heightened requirements for network delay sensitivity, cloud server instances are deployed in the same availability zone of the same geographic region. Generally, a data center in the availability zone is a cloud server cluster of a plurality of cloud servers. In one example, the plurality of cloud server clusters deployed in isolation from each other is in the same availability zone.
According to various embodiments, the method for cloud service migration may be for migrating a cloud server across availability zones within the same geographic region, or for migrating a cloud server from an availability zone in one geographic region to an availability zone in another geographic region. In some embodiments, between two availability zones, a cloud server in a first cloud server cluster in a first availability zone is migrated to a second cloud server cluster in a second availability zone. In other words, the cloud server is migrated from a source cloud server cluster (source cluster) to a target cloud server cluster (target cluster). In migrating a cloud server from the source cloud server cluster to the target cloud server cluster, the number of cloud servers for migration is not fixed but is determined based on the actual service requirements. For simplicity, the migration of only a single cloud server is illustrated as an example; the migration of multiple cloud servers is similar and not described in detail herein. In this step, a migration request related to a cloud server hosted in the source cloud server cluster is obtained. In one embodiment, the migration request includes a scheduled migration time to migrate the cloud server from the source cloud server cluster to the target cloud server cluster. In other words, the cloud server is migrated in a pre-scheduled manner. For example, the scheduled migration time can be scheduled using a timer. Upon the timer being triggered (e.g., when the present time according to the timer reaches the scheduled point of time for migration), the migration operation is performed to migrate the cloud server from the source cloud server cluster to the target cloud server cluster. As shown in FIG. 2, to migrate the cloud server from the source cloud server cluster to the target cloud server cluster at a specific point of time, the user schedules a migration operation (201) for the cloud server to be migrated. In one embodiment, a timer is configured to expire at a scheduled migration time based on the time specified by the user. When the timer is triggered (207) (e.g., the present time according to the timer reaches the scheduled migration time), the migration operation is started and performed (209, 211) to migrate the cloud server. In some embodiments, after the migration operation related to the cloud server is scheduled (203) (e.g., the migration time is specified by the user), the scheduled migration time may need to be modified (205) due to actual service requirements. For example, given that a user service provided by the cloud service hosted by the cloud server instance is busy, the scheduled migration time needs to be postponed. In this scenario, the scheduled migration time can be accordingly modified to a later time specified by the user. Similarly, given that the promotion on the user service provided by the cloud service hosted by the cloud server instance exceeds the expected results, the original plan of deploying a cloud server in one new availability zone can be accordingly adjusted to deploying the cloud server in a plurality of new availability zones.
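One way to realize the timer-driven schedule above is a small scheduler that stores, and allows modification of, a migration time per cloud server; the sketch below is an assumption-level illustration (the class and function names are not from the disclosure):

```python
# Sketch of timer-based migration scheduling with a modifiable scheduled
# time, as in FIG. 2. The scheduler and callback are illustrative.
import threading

class MigrationScheduler:
    def __init__(self) -> None:
        self.timers: dict[str, threading.Timer] = {}

    def schedule(self, server_id: str, delay_s: float, migrate) -> None:
        self.cancel(server_id)  # modifying a schedule replaces the old timer
        t = threading.Timer(delay_s, migrate, args=(server_id,))
        self.timers[server_id] = t
        t.start()

    def cancel(self, server_id: str) -> None:
        if server_id in self.timers:
            self.timers.pop(server_id).cancel()

def migrate(server_id: str) -> None:
    print(f"starting migration of {server_id}")  # placeholder migration operation

scheduler = MigrationScheduler()
scheduler.schedule("ecs-01", 3600, migrate)  # migrate in one hour
scheduler.schedule("ecs-01", 7200, migrate)  # postpone: reschedule to two hours
```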
Further, the migration task can also be modified to change the target cloud server cluster corresponding to the migration task from one target cloud server cluster to a plurality of target cloud server clusters, which is equivalent to adding a migration task of migrating the cloud server to the cloud server clusters in the newly allocated availability zones. Step S102: migrate, based on the scheduled migration time, disk data associated with an original instance of the cloud service to a disk for servicing a new instance of the cloud service instantiated in the target cluster. In the above-described step S101, a migration request for migrating a cloud server from the source cloud server cluster to the target cloud server cluster is obtained, and a migration time is determined based on a scheduled migration time included in the migration request. In step S102, disk data associated with an original instance of the cloud server in the source cloud server cluster is migrated to a disk for servicing a new instance of the cloud server instantiated in the target cloud server cluster, based on the scheduled migration time. In some embodiments of the disclosure, before the disk data associated with the original instance of the cloud server in the source cloud server cluster is migrated to a disk for servicing a new instance of the cloud server instantiated in the target cloud server cluster, the following operations are performed. 1) Instantiate the New Instance of the Cloud Server in the Target Cloud Server Cluster After obtaining a migration request for migrating a cloud server in the source cloud server cluster to the target cloud server cluster via the above-described step S101, and during the process of migrating the cloud server from the source cloud server cluster to the target cloud server cluster, an instance (e.g., a new instance) of the cloud server is first instantiated in the target cloud server cluster. As shown in FIG. 2, to migrate the cloud server from the source cloud server cluster to the target cloud server cluster at a specific scheduled migration time, after the migration request for the cloud server is obtained, the migration operation for the cloud server is performed (209, 211) based on a trigger (207) associated with the scheduled migration time included in the migration request. In this example, the new instance of the cloud server is instantiated in the target cloud server cluster. 2) Configure the New Instance of the Cloud Server and the Resource Components Associated With the New Instance As described above, the cloud server instance is a virtual computing environment, including necessary server resource components such as a CPU, a memory, an operating system, a disk, network bandwidth, and the like. In one embodiment, resources used by the cloud server are classified into three categories: an instance, a network connection, and storage (e.g., disk). The following illustrates a migration process of the instance resources and network resources as an example to describe the configuration of the new instance of the cloud server instantiated in the target cloud server cluster, and the configuration of resource components for the new instance. a. New Instance of the Cloud Server As used herein, an instance of a cloud server refers to a collected utilization of CPU and memory resources, which is described by its configuration information.
For example, a configuration of {core: 2, memory: 2048} indicates an instance configured with a CPU of 2 cores and a memory of a size of 2 GB. The management of the cloud server is implemented by an upper-layer cloud control system, which includes migration management of cloud servers as a necessary function. As such, migration of the instance of the cloud server is implemented by the cloud control system releasing the original instance of the cloud server in the source cloud server cluster, transmitting an instantiation request with the same configuration to the target cloud server cluster based on the configuration information of the original instance of the cloud server, and configuring the instantiated new instance with the same configuration in the target cloud server cluster. The migration of the instance of the cloud server is completed (213) upon completion of the configuration. In some embodiments, after the migration is completed, the scheduled migration time is deleted (215). Further, in implementations, with the increasing scale of the user service to be provided by the cloud service hosted by the cloud server instance, the user intends the cloud service to support the user service on a larger scale. In other words, the new instance of the cloud server providing the cloud service needs to be configured with more resources. In such a case, where the configuration of the new instance of the cloud server differs from that of the original instance, a configuration interface for configuring the new instance of the cloud server is provided. This way, the user submits self-defined configuration information for the new instance via the configuration interface, after which the configuration information submitted by the user is obtained at the configuration interface. The new instance is configured based on the obtained configuration information to better accommodate the requirements of the user service.

b. Network of the Cloud Server

In one embodiment, network migration is implemented by a routing advertisement process, in which the cloud control system allocates and releases the network port(s) associated with the cloud server. In one example, such a process includes allocating a network port for the cloud server in the target cloud server cluster and, based on the network configuration information transmitted by the cloud control system, releasing the cloud service from the network port at the source cloud server cluster, configuring the cloud service to the network port allocated in the target cloud server cluster, and completing the routing update and Address Resolution Protocol (ARP) advertisement. This way, terminal ports are notified via broadcasting that the network port associated with the cloud server changes from the source cloud server cluster to the target cloud server cluster. In one embodiment, during network migration, the user keeps the same IP address before and after the migration. As such, in a geographic region of a proprietary network, the IP address is reachable in all availability zones within the geographic region. Alternatively, the user may choose to change the IP address, for which only a new mapping relationship needs to be established between the IP address before the migration and the IP address after the migration. However, accessing an IP address across geographic regions sometimes leads to degraded network access quality.
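The instance migration and routing-advertisement steps above can be summarized in one short sketch. The `control_api` object and every method on it are assumptions introduced for illustration; this is not the cloud control system's actual API, only a minimal sequence matching the steps the text describes.

```python
def migrate_instance(control_api, server_id, source_cluster, target_cluster,
                     new_config=None):
    """Hypothetical sketch of instance plus network migration by a cloud
    control system. All control_api calls are assumed names."""
    original = control_api.describe_instance(source_cluster, server_id)
    # Reuse the original configuration, e.g. {"core": 2, "memory": 2048},
    # unless the user supplied a different one via the configuration interface.
    config = new_config or original["config"]

    # Instantiate and configure the new instance in the target cluster.
    new_id = control_api.instantiate(target_cluster, config)

    # Network migration: allocate a port in the target cluster, release the
    # old port, then complete the routing update and ARP advertisement so
    # peers learn the new location while the IP address stays the same
    # within the geographic region.
    port = control_api.allocate_network_port(target_cluster, new_id)
    control_api.release_network_port(source_cluster, server_id)
    control_api.update_routing_and_advertise_arp(port)

    # Release the original instance once the new one is fully configured.
    control_api.release_instance(source_cluster, server_id)
    return new_id
```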
Thus, to ensure network access quality, in one example, the migration of the IP address is only performed when the IP addresses belong within the same geographic region, so as to ensure high efficiency in accessing the service.

3) Terminate the Original Instance in the Source Cloud Server Cluster, and Start the New Instance Instantiated in the Target Cloud Server Cluster Based on the Resource Components

In this step, after the new instance and the resource components of the new instance are configured in the above-described operations, the original instance of the cloud server is terminated at the source server cluster. The new instance of the cloud server is then started based on the resource components, the new instance of the cloud server being able to provide the service to users. In one embodiment, before the disk data associated with the original instance of the cloud server in the source cloud server cluster is migrated, based on the scheduled migration time, to the disk for servicing the new instance of the cloud server instantiated in the target cloud server cluster, the following operations are performed. The operations include stopping data operation(s) (including read and write operations) of the cloud service for the disk for servicing the original instance. The operations further include allocating the disk for servicing the new instance at the target cloud server cluster, based on which the disk data associated with the original instance of the cloud server in the source cloud server cluster is migrated to the disk for servicing the new instance of the cloud server instantiated in the target cloud server cluster. In implementations, the original instance of the cloud server and the new instance of the cloud server each have their own respective processing queues. Such processing queues include an original instance processing queue, to which a data request (including a read or write request) including the data operation(s) of the cloud server for the disk for servicing the original instance is enqueued for processing, as well as a new instance processing queue, to which a read or write request including the data operation(s) of the cloud server for the disk for servicing the new instance is enqueued for processing. Turning to FIG. 3, a flow diagram is shown illustrating the migration of a disk for servicing cloud services according to some embodiments of the disclosure. As shown in FIG. 3, disk A (301) is allocated in the source server cluster to service the cloud server that is to be migrated. The cloud server is configured to read data from and write data to disk A. When an upper layer of the cloud system issues a migration instruction related to disk A (301), disk A (301) is configured to stop all of the read and write operations. Further, the processing queue of the original instance stops processing the read and write requests enqueued therein, and disk A (301) is marked with a to-be-deleted state. A disk B (303) for servicing the cloud server is allocated in the target cloud server cluster, after which disk B (303) is configured to start providing a read and write service. For example, a read or write request including a read or write operation for disk B (303) is enqueued to the processing queue of the new instance, and read and write operations of the new instance are all performed for disk B (303).
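The dual processing-queue arrangement can be sketched as follows. This is a minimal illustration under assumed structures; the class and field names are hypothetical, not the patent's data structures.

```python
from collections import deque

class DiskMigrationQueues:
    """Sketch of the per-instance processing queues (hypothetical names)."""

    def __init__(self):
        self.original_queue = deque()   # requests targeting disk A (301)
        self.new_queue = deque()        # requests targeting disk B (303)
        self.original_stopped = False

    def enqueue(self, request):
        # Before the migration instruction, read/write requests go to the
        # original instance queue; afterwards, all new requests go to the
        # new instance queue and are performed for disk B (303).
        if self.original_stopped:
            self.new_queue.append(request)
        else:
            self.original_queue.append(request)

    def stop_original_and_transfer(self):
        # Stop processing on the original queue (disk A is marked
        # to-be-deleted elsewhere) and move any not-yet-completed requests
        # over to the new instance queue for processing.
        self.original_stopped = True
        while self.original_queue:
            self.new_queue.append(self.original_queue.popleft())
```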
Once the processing queue of the original instance stops dequeuing the read/write requests therein for processing, any read/write request that is still in the processing queue of the original instance and whose processing is not completed is transferred to the processing queue of the new instance for processing. In some embodiments, the disk data associated with the original instance is copied to the disk for servicing the new instance via the execution of a background thread. In one example, a migration priority is configured for the disk data during migration, and migration of the disk data associated with the original instance is initiated with a configuration of a low migration priority. If a read/write operation is performed by the new instance during the migration, it is determined whether the data involved in the data request exists in the disk for servicing the new instance. In response to determining that the data does not exist on the disk for servicing the new instance of the cloud server, the migration priority of the data involved in the data request is set to a high migration priority. This way, in the process of migrating the disk data associated with the original instance to the disk for servicing the new instance of the cloud service instantiated in the target cluster, the migration is performed in descending order of the migration priorities. As shown in FIG. 3, data on disk A (301) is copied to disk B (303) by the execution of a background thread. When a read or write operation is performed on the new instance during such copying, it is determined whether the data involved in a read or write request, including the read or write operation, exists on disk B (303). In response to the determination that the data does exist on disk B (303), the read or write operation included in the read or write request is performed for disk B (303). Otherwise, in response to the determination that the data associated with the new instance does not exist on disk B (303) (e.g., the required data has not yet been copied to disk B (303)), the migration priority of the data associated with the new instance is changed from a low migration priority to a high migration priority. Based on this, the data of the high migration priority (e.g., the data associated with the new instance) is copied from disk A (301) to disk B (303), upon the completion of which the read or write operation included in the read or write request is performed for disk B (303). This implementation is a basis for asynchronous data copying: there is no need to wait for the data on disk A (301) to be copied entirely to disk B (303) before the new instance of the cloud server can start providing the service, thereby reducing interruption time (e.g., downtime) associated with the user service. In one embodiment, the copying of the data on disk A (301) to disk B (303) is performed by splitting the data on disk A (301) associated with the original instance into at least one data block, and the data on disk A (301) associated with the original instance is copied to disk B (303) for servicing the new instance in the form of such data blocks. Additionally, the data on disk A (301) may be copied to disk B (303) by image mirroring. In one example, the data on disk A (301) is split into at least one data block, a data mirroring image corresponding to each data block is then created, and the data on disk A (301) is copied to disk B (303) via data mirroring. Further, a flag bit marking whether a data block has been copied can be configured for the data block.
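The priority-boosted background copy can be sketched with a small priority queue. This is an illustrative design under assumptions: the `disk_a`/`disk_b` objects with `read`/`write` methods are hypothetical, and the two-level priority scheme mirrors the low/high priorities in the text.

```python
import heapq

HIGH, LOW = 0, 1   # smaller value is dequeued earlier

class PrioritizedCopier:
    """Sketch of priority-ordered background copying (assumed interfaces)."""

    def __init__(self, disk_a, disk_b, block_ids):
        self.disk_a, self.disk_b = disk_a, disk_b
        self.copied = set()
        # Migration is initiated with every block at low migration priority.
        self.heap = [(LOW, block) for block in block_ids]
        heapq.heapify(self.heap)

    def boost(self, block):
        # A read/write on the new instance needs data not yet on disk B:
        # raise that block to high migration priority so it is copied next.
        if block not in self.copied:
            heapq.heappush(self.heap, (HIGH, block))

    def copy_next(self):
        # Background-thread body: copy blocks in descending priority order.
        while self.heap:
            _, block = heapq.heappop(self.heap)
            if block in self.copied:
                continue          # skip duplicates left by boost()
            self.disk_b.write(block, self.disk_a.read(block))
            self.copied.add(block)
            return block
        return None               # copying complete
```

Because demand-fetched blocks jump the queue, the new instance can serve requests while the bulk of the data is still being copied asynchronously, which is what keeps the interruption time short.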
In one example, if the data block has been copied to disk B (303), the flag bit of the data block is marked as 1; if the data block has not been copied, the flag bit of the data block is marked as 0. Based on this, in the process of copying the data on disk A (301) to disk B (303), the already-copied data is compared with the data on disk A (301) based on the flag bits of the data blocks to determine a copying progress of the data on disk A (301). In implementations, the comparison used to determine the copying progress of the data on disk A (301) is encapsulated into a migration progress query interface, at which the user can obtain the real-time progress of copying the data on disk A (301) to disk B (303). In general, the size of the data on a disk is on the order of hundreds of gigabytes or terabytes. On the assumption that disk data can be copied at a speed of about 100 GB per hour, techniques utilized by current systems, in which the disk data is mirrored into mirror images for copying, take hours to copy the disk data. In contrast, the technical solution of the disclosure provides a service interruption time of about two to three minutes, counting from the termination of the service provided by the original instance until the resumption of the provided service after the new instance of the cloud service is started. Thus, the technical solution of the disclosure is more efficient compared to the hours of or longer service interruption time associated with current migration techniques.

Now referring back to FIG. 1, the process further includes step S103: configure a data operation of the cloud service for the disk for servicing the original instance as the data operation for the disk for servicing the new instance. As described above, the original instance of the cloud service and the new instance of the cloud server are each configured with their respective processing queues. In one example, a data request (e.g., a read or write request) including a data operation of the cloud server for the disk for servicing the original instance is enqueued to the processing queue of the original instance for processing. A read or write request that includes the data operation of the cloud server for the disk for servicing the new instance is enqueued to the processing queue of the new instance for processing. Based on the above-described migrating of the disk data associated with the original instance of the cloud server from the source cloud server cluster to the disk for servicing the new instance of the cloud server instantiated in the target cloud server cluster based on the scheduled migration time, the data operation(s) of the cloud server for the disk for servicing the original instance are configured as the data operation(s) for the disk for servicing the new instance, so as to migrate the cloud service provided by the cloud server from the source cloud server cluster to the target cloud server cluster. Referring to FIG. 3, the cloud server (305) (e.g., the original instance of the cloud server providing the cloud service) reads data from disk A (301) and writes data to disk A (301). When an upper layer of the cloud system issues a migration instruction relating to disk A (301), disk A (301) stops all the read and write operations, and the processing queue of the original instance stops processing the read requests and write requests enqueued therein.
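The flag-bit bookkeeping and the progress query interface reduce to a few lines. The sketch below is a minimal illustration, assuming one flag per data block as described above; class and method names are hypothetical.

```python
class CopyProgress:
    """Sketch of per-block copied flags and the progress query (assumed)."""

    def __init__(self, num_blocks):
        self.flags = [0] * num_blocks   # 0 = not yet copied, 1 = copied

    def mark_copied(self, block_index):
        self.flags[block_index] = 1

    def progress(self):
        # Compare copied blocks against the total, as a migration progress
        # query interface might report real-time copying progress.
        return sum(self.flags) / len(self.flags) if self.flags else 1.0
```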
Further, disk A (301) is marked with a to-be-deleted state, and disk B (303) for servicing the cloud server is allocated in the target cloud server cluster. Next, disk B (303) starts servicing read and write requests after being allocated, and a read request or a write request including a read operation or a write operation for disk B (303) is enqueued to the processing queue of the new instance. The read operations and write operations of the new instance are all performed for disk B (303). After the processing queue of the original instance stops processing the read requests and write requests enqueued therein, the read requests and write requests that are still in the processing queue of the original instance and whose processing is not yet complete are transferred to the processing queue of the new instance for processing. In some embodiments, the cloud server is migrated in a manner of "live migration" from the source cloud server cluster to the target cloud server cluster. In one example, compared to the above-described migration, before a read or write operation of the cloud server for the disk for servicing the original instance is stopped (e.g., before the service of the original instance is stopped), the new instance is instantiated, and the disk for servicing the new instance is allocated in the target cloud server cluster. Also, the new instance and the disk for servicing the new instance are correspondingly configured, after the completion of which the new instance is started. After the new instance is started, the memory data associated with the original instance is copied to the memory associated with the execution of the new instance, and the execution state data related to the execution state(s) (e.g., CPU state, register state) of the original instance is further copied to the new instance until the remaining execution state data related to the execution state of the original instance is less than a pre-configured threshold. Afterward, the read operations and write operations of the cloud server for the disk for servicing the original instance (e.g., disk A (301)) are stopped, the remaining execution state data is copied to the new instance, and the data operations of the cloud server for the disk for servicing the original instance (e.g., disk A (301)) are configured as data operations for the disk for servicing the new instance (e.g., disk B (303)), thereby realizing the migration of the cloud server. When live migration techniques are used to migrate the cloud server, service interruption times are shorter and can be controlled within the order of milliseconds or even microseconds, achieving more efficient migration of the cloud server. According to various embodiments, the cloud service migration solution provides that, in the process of migrating the cloud server from the source cloud server cluster to the target cloud server cluster, the cloud server is migrated based on a scheduled migration time that is specified by a user and included in the migration request. Data operation(s) of the cloud service for a disk in the source cloud server cluster are re-configured as data operation(s) for a disk in the target cloud server cluster to provide the service by a new instance in the target cloud server cluster.
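The live-migration variant described above follows the familiar pre-copy pattern, which can be sketched as follows. The `source_vm`/`target_vm` objects and all their methods are hypothetical placeholders, not the patent's interfaces; the loop structure is what matters: copy state while the source keeps running, repeat on the state dirtied in the meantime, and only stop the source for the small remainder.

```python
def live_migrate(source_vm, target_vm, remaining_threshold_bytes):
    """Sketch of the pre-copy live-migration loop (assumed API)."""
    # Copy memory while the original instance keeps serving requests, then
    # repeat with memory and execution state (CPU state, register state)
    # dirtied in the meantime, until the remainder is below the threshold.
    pending = source_vm.snapshot_memory()
    while pending.size() >= remaining_threshold_bytes:
        target_vm.apply_state(pending)
        pending = source_vm.collect_dirty_state()

    # Brief stop-and-copy phase: halt read/write operations on the source
    # disk, transfer the small remainder, switch disk operations over to
    # the new instance's disk, and start the new instance.
    source_vm.stop_io()
    target_vm.apply_state(pending)
    target_vm.apply_state(source_vm.final_execution_state())
    target_vm.start()
```

Because only the final stop-and-copy phase interrupts service, the interruption can be kept on the order of milliseconds, consistent with the figures quoted above.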
As such, the cloud server is migrated from the source cloud server cluster to the target cloud server cluster with a simple implementation, as well as reduced service interruption times caused by the interruption of the service provided by a cloud service instance, achieving more efficient and convenient migration.

FIG. 4 is a block diagram illustrating an example apparatus for cloud service migration according to some embodiments of the disclosure. The cloud service migration apparatus is substantially similar to the method for cloud service migration described above, and the details are not repeated herein for simplicity. As shown in FIG. 4, the apparatus (400) for service migration includes a migration request obtaining unit (401), a disk migrating unit (402), and a data operation configuring unit (403). The migration request obtaining unit (401) is configured to obtain a migration request related to a cloud service hosted in a source cluster, the migration request including a scheduled migration time to migrate the cloud service from the source cluster to a target cluster. The disk migrating unit (402) is configured to migrate, based on the scheduled migration time, the disk data associated with an original instance of the cloud service to a disk for servicing a new instance of the cloud service instantiated in the target cluster, based on a migration priority order of the disk data. The data operation configuring unit (403) is configured to configure data operation(s) of the cloud service for a disk for servicing the original instance as data operation(s) for the disk for servicing the new instance. In one embodiment, the apparatus (400) for cloud service migration further includes a new instance instantiating unit, a configuration unit, and a new instance starting unit. The new instance instantiating unit is configured to instantiate the new instance of the cloud service in the target server cluster. The configuration unit is configured to configure the new instance and resource components associated with the new instance. The new instance starting unit is configured to stop the original instance at the source server cluster, and start the new instance based on the resource components. In some embodiments, the configuration unit includes one or both of a first new instance configuring subunit and a second new instance configuring subunit. The first new instance configuring subunit is configured to configure, based on the configuration information of the original instance, the new instance in the target cluster using the same configuration as the configuration information. The second new instance configuring subunit is configured to configure the new instance based on the configuration information obtained through a pre-configured configuration interface. In some embodiments, the resource components include a disk. In some other embodiments, the resource components further include at least one of the following: a CPU, a memory, an operating system, and a network connection. In some embodiments, the configuration unit includes a network port allocating subunit and a network port migrating subunit. The network port allocating subunit is configured to allocate a network port in the target cluster. The network port migrating subunit is configured to release the cloud service from a network port at the source cluster, configure the cloud service to the network port allocated at the target cluster, and perform a routing update.
In some embodiments, the configuration unit includes a data operation stopping subunit and a disk allocating subunit. The data operation stopping subunit is configured to stop the data operations of the cloud service for the disk for servicing the original instance. The disk allocating subunit is configured to allocate the disk for servicing the new instance in the target cluster. In some embodiments, the data operation configuring unit (403) includes one or both of a first copying subunit and a second copying subunit. The first copying subunit is configured to split the disk data associated with the original instance into at least one data block, and to copy the disk data associated with the original instance to the disk for servicing the new instance in the form of data blocks. The second copying subunit is configured to split the disk data associated with the original instance into at least one data block, create a data mirroring image corresponding to each data block, and copy the disk data associated with the original instance to the disk for servicing the new instance via data mirroring. In some embodiments, the data block is configured with a flag bit for marking whether the data block has been copied. In the process of copying the disk data associated with the original instance to the disk for servicing the new instance, already-copied data is compared with the disk data associated with the original instance based on the flag bit to determine a copying progress for the disk data associated with the original instance. In some embodiments, the original instance and the new instance are each configured with their respective processing queues. A data request including a data operation of the cloud service for the disk for servicing the original instance is enqueued to the processing queue of the original instance; and a data request including a data operation of the cloud service for the disk for servicing the new instance is enqueued to the processing queue of the new instance. In some embodiments, after the data operation configuring unit (403) starts execution, a data request including a data operation of the cloud service for the disk for servicing the new instance is enqueued to the processing queue of the new instance. Further, a data request that remains in the processing queue of the original instance and the processing of which is not completed is transferred to the processing queue of the new instance for processing. In some embodiments, a migration priority of the disk data is determined by the following steps. First, it is determined whether the data involved in the data request exists in the disk for servicing the new instance. In response to determining that the data involved in the data request does not exist in the disk for servicing the new instance, a migration priority of the data involved in the data request is configured to a high migration priority. Second, in the process of migrating the disk data associated with the original instance to the disk for servicing the new instance of the cloud service instantiated in the target cluster, the migration is performed in descending order of migration priorities. In some embodiments, the apparatus (400) further includes a determining subunit, which is executed in the process of processing a data request in the processing queue of the new instance. The determining subunit is configured to determine whether the data involved in the data request exists in the disk for servicing the new instance.
In response to determining that the data involved in the data request exists in the disk for servicing the new instance, the determining subunit is configured to perform a data operation included in the data request for the disk for servicing the new instance. Otherwise, in response to determining that the data involved in the data request does not exist in the disk for servicing the new instance, the determining subunit is configured to preferentially migrate the data of the high migration priority from the disk for servicing the original instance to the disk for servicing the new instance, and perform a data operation included in the data request for the disk for servicing the new instance upon the completion of the migration. In some embodiments, the source cluster and the target cluster are configured in different availability zones. In some embodiments, the cloud service is configured as a cloud server for providing a cloud computing service. In some embodiments, the source cluster is configured as a cloud server cluster of at least one cloud server, and the target cluster is configured as a cloud server cluster of at least one cloud server.

FIG. 5 is a block diagram of an example electronic device for cloud service migration according to some embodiments of the present disclosure. As the functionalities of the electronic device are substantially similar to the above-described methods for cloud service migration, the details are not repeated herein. As shown in FIG. 5, the electronic device for cloud service migration includes a memory (501) and a processor (502). The memory (501) is configured to store computer-executable instructions. The processor (502) is configured to read and execute the computer-executable instructions stored in the memory (501) to cause the electronic device (500) to perform the operations including the following steps. Step one: obtaining a migration request related to a cloud service hosted in a source cluster, the migration request including a scheduled migration time to migrate the cloud service from the source cluster to a target cluster. Step two: migrating, based on the scheduled migration time, the disk data associated with an original instance of the cloud service to a disk for servicing a new instance of the cloud service instantiated in the target cluster, the migration of the disk data being performed based on a migration priority order of the disk data. Step three: configuring data operation(s) of the cloud service for a disk for servicing the original instance as data operation(s) for the disk for servicing the new instance. In some embodiments, before configuring the data operation(s) of the cloud service for the disk for servicing the original instance as the data operation(s) for the disk for servicing the new instance, the processor (502) is further configured to execute the following computer-readable instructions: instantiating the new instance in the target cluster; configuring the new instance and resource components associated with the new instance; stopping the original instance at the source cluster; and starting the new instance based on the resource components.
In some embodiments, the configuring of the new instance and resource components associated with the new instance is implemented by configuring, based on the configuration information of the original instance, the new instance in the target cluster using the same configuration as the configuration information; or configuring the new instance based on the configuration information obtained through a pre-configured configuration interface. In some embodiments, the resource components include a disk. In other embodiments, the resource components further include at least one of the following: a CPU, a memory, an operating system, and a network connection. In some embodiments, the configuring of the new instance and resource components associated with the new instance includes: allocating a network port in the target cluster; releasing the cloud service from a network port of the source cluster; configuring the cloud service to the network port allocated at the target cluster; and performing a routing update. In some embodiments, the configuring of the new instance and resource components associated with the new instance includes: stopping the data operation of the cloud service for the disk for servicing the original instance; allocating the disk for servicing the new instance in the target cluster; and, after the allocation of the disk for servicing the new instance is completed, executing the instructions to migrate, based on the scheduled migration time, the disk data associated with the original instance of the cloud service to the disk for servicing the new instance of the cloud service instantiated in the target cluster, and to configure the data operation(s) of the cloud service for the disk for servicing the original instance as the data operation(s) for the disk for servicing the new instance. In some embodiments, the migrating of the disk data associated with the original instance of the cloud service to the disk for servicing the new instance of the cloud service instantiated in the target cluster is implemented by splitting the disk data associated with the original instance into at least one data block, and copying the disk data associated with the original instance to the disk for servicing the new instance in the form of data blocks; or splitting the disk data associated with the original instance into at least one data block, creating a data mirroring image corresponding to each data block, and copying the disk data of the original instance to the disk of the new instance via data mirroring. In some embodiments, the data block is configured with a flag bit for marking whether the data block has been copied. In the process of copying the disk data associated with the original instance to the disk for servicing the new instance, already-copied data is compared with the disk data associated with the original instance based on the flag bit to determine copying progress for the disk data associated with the original instance. In some embodiments, the original instance and the new instance are each configured with their respective processing queues. A data request including a data operation of the cloud service for the disk for servicing the original instance is enqueued to the processing queue of the original instance, and a data request including a data operation of the cloud service for the disk for servicing the new instance is enqueued to the processing queue of the new instance.
In some embodiments, after the configuring of the data operation(s) of the cloud service for the disk for servicing the original instance as the data operation(s) for the disk for servicing the new instance, a data request including a data operation of the cloud service for the disk for servicing the new instance is enqueued to the processing queue of the new instance, and a data request that remains in the processing queue of the original instance and the processing of which is not yet complete is transferred to the processing queue of the new instance for processing. In some embodiments, a migration priority of the disk data is determined by determining whether the data involved in the data request exists in the disk for servicing the new instance; in response to determining that the data involved in the data request does not exist in the disk for servicing the new instance, configuring a migration priority of the data involved in the data request to a high migration priority; and, in the process of migrating the disk data associated with the original instance to the disk for servicing the new instance of the cloud service instantiated in the target cluster, performing the migration in descending order of migration priorities. In some embodiments, the processing of a data request in the processing queue of the new instance includes the following operations: determining whether the data involved in the data request exists in the disk for servicing the new instance; in response to determining that the data involved in the data request exists in the disk for servicing the new instance, performing a data operation included in the data request for the disk for servicing the new instance; and, in response to determining that the data involved in the data request does not exist in the disk for servicing the new instance, migrating the data of the high migration priority from the disk for servicing the original instance to the disk for servicing the new instance, and performing a data operation included in the data request for the disk for servicing the new instance upon the completion of the migration. In some embodiments, the source cluster and the target cluster are configured in different availability zones. In some embodiments, the cloud service is configured as a cloud server for providing a cloud computing service. In other embodiments, the source cluster includes a cloud server cluster of at least one cloud server, and the target cluster includes a cloud server cluster of at least one cloud server. The disclosure has been described above through preferred embodiments but is not intended to be limited thereto. Possible variations and modifications can be made by those skilled in the art without departing from the spirit and scope of the disclosure. Therefore, the protection scope of the disclosure shall be defined by the claims of the disclosure. In a typical configuration, the computing device includes one or a plurality of processors (CPUs), input/output interfaces, network interfaces, and memories. The memory may include a computer-readable medium in the form of a non-permanent memory, a random access memory (RAM), or a non-volatile memory or the like, such as a read-only memory (ROM) or a flash memory (flash RAM). The memory is an example of a computer-readable medium. The computer-readable medium includes permanent and non-permanent, movable and non-movable media that can achieve information storage by means of any methods or techniques.
The information may be computer-readable instructions, data structures, modules of programs, or other data. Examples of a storage medium of a computer include, but are not limited to, a phase change memory (PRAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), other types of random access memories (RAMs), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory or other memory technologies, a compact disk read-only memory (CD-ROM), a digital versatile disc (DVD) or other optical storages, a cassette tape, a magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, and can be used to store information accessible by a computing device. According to the definitions herein, the computer-readable medium does not include transitory computer-readable media (transitory media), such as a modulated data signal and a carrier wave. Those skilled in the art should understand that embodiments of the disclosure may be provided as a method, a system, or a computer program product. Therefore, the disclosure may use the form of a full hardware embodiment, a full software embodiment, or an embodiment combining software and hardware. Moreover, the disclosure may use the form of a computer program product implemented on one or a plurality of computer-usable storage media (including, but not limited to, a magnetic disk memory, a CD-ROM, an optical memory, and so on) containing computer-usable program code therein.
DESCRIPTION OF EMBODIMENTS

The following describes technical solutions in the embodiments of the application with reference to the accompanying drawings. Network architectures and service scenarios described in the embodiments of the application are intended to describe the technical solutions in the embodiments of the application more clearly, and constitute no limitation to the technical solutions provided in the embodiments of the application. A person of ordinary skill in the art may know that, with evolution of the network architectures and the emergence of new service scenarios, the technical solutions provided in the embodiments of the application are also applicable to similar technical problems. A storage system provided in an embodiment includes a computing node cluster and a storage node cluster. The computing node cluster includes one or more computing nodes 100 (FIG. 1 shows two computing nodes 100, but the cluster is not limited to two computing nodes 100). The computing node 100 is a computing device on a user side, such as a server or a desktop computer. In terms of hardware, a processor and a memory (which are not shown in FIG. 1) are disposed in the computing node 100. In terms of software, an application 101 (app for short) and a client program 102 (client for short) run on the computing node 100. The application 101 is a general term for various applications presented to a user. The client 102 is configured to: receive a data access request triggered by the application 101, interact with a storage node 20, and send the data access request to the storage node 20. The client 102 is further configured to: receive data from the storage node, and forward the data to the application 101. It may be understood that, when the client 102 is a software program, a function of the client 102 is implemented by a processor included in the computing node 100 by running the program in a memory. The client 102 may alternatively be implemented by a hardware component located inside the computing node 100. Any client 102 in the computing node cluster may access any storage node 20 in the storage node cluster. The storage node cluster includes one or more storage nodes 20 (FIG. 1 shows three storage nodes 20, but the cluster is not limited to three storage nodes 20), and the storage nodes 20 may be interconnected. The storage node may be, for example, a server, a desktop computer, a controller of a storage array, or a disk enclosure. Functionally, the storage node 20 is mainly configured to calculate or process data. In addition, the storage node cluster further includes a management node (which is not shown in FIG. 1). The management node is configured to create and manage a memory pool. One storage node is selected from the storage nodes 20 to serve as the management node. The management node may communicate with any storage node 20. In terms of hardware, as shown in FIG. 1, the storage node 20 includes at least a processor, a storage device, and an IO controller 201. The processor 202 is a central processing unit (CPU), and is configured to process data from a device outside the storage node 20 or data generated inside the storage node 20. The storage device is an apparatus configured to store data, and may be a memory or a hard disk. The memory is an internal storage device that directly exchanges data with the processor. The memory may read and write data at any time at a very high speed, and serves as a temporary data storage device of an operating system or another running program. Memories have at least two types.
For example, the memory may be a random access memory, or may be a read-only memory (ROM). For example, the random access memory may be a dynamic random access memory (DRAM), or may be a storage class memory (SCM). The DRAM is a semiconductor memory. Like most random access memories (RAMs), the DRAM is a volatile memory device. The SCM is a compound storage technology that combines features of a conventional storage apparatus and a memory. The storage class memory can provide a higher read/write speed than a hard disk, but provides a lower operation speed and lower costs than the DRAM. However, the DRAM and the SCM are merely examples for description in this embodiment, and the memory may further include another random access memory, for example, a static random access memory (SRAM). For example, the read-only memory may be a programmable read-only memory (PROM), or an erasable programmable read-only memory (EPROM). In addition, the memory may also be a dual in-line memory module (DIMM), that is, a module including a dynamic random access memory (DRAM). In FIG. 2 and the following descriptions, a DRAM and an SCM are used as an example for description, but this does not indicate that the storage node 20 does not include another type of storage device. The storage device in this embodiment may alternatively be a hard disk. A difference from the memory 203 lies in that a data read/write speed of the hard disk is lower than that of the memory, and the hard disk is usually configured to persistently store data. A storage node 20a is used as an example, and one or more hard disks may be disposed inside the storage node 20a. Alternatively, a disk enclosure (which is shown in FIG. 2) may be mounted outside the storage node 20a, and a plurality of hard disks are disposed in the disk enclosure. Regardless of the deployment manner, these hard disks may be considered as hard disks included in the storage node 20a. The hard disk is a solid-state disk, a mechanical hard disk, or another type of hard disk. Similarly, other storage nodes in the storage node cluster, such as a storage node 20b and a storage node 20c, may also include various types of hard disks. A storage node 20 may include one or more storage devices of a same type. The hard disk included in the memory pool in this embodiment may also have a memory interface, and the processor may directly access the memory interface.

FIG. 2 is a schematic diagram of an internal structure of a storage node 20. In an actual application, the storage node 20 may be a server or a storage array. As shown in FIG. 2, in addition to a processor and a storage device, the storage node 20 further includes an IO controller. Because a memory access latency is very low, operating system scheduling overheads and software overheads may become a bottleneck of data processing. To reduce software overheads, a hardware component, namely, the IO controller, is introduced in this embodiment to implement hardware-based IO access and reduce the impact of CPU scheduling and the software stack. Firstly, the storage node 20 has its own IO controller 22 that is configured to communicate with a computing node 100 and is further configured to communicate with another storage node. Specifically, the storage node 20 may receive a request from the computing node 100 through the IO controller 22 or send a request to the computing node 100 through the IO controller 22, or the storage node 20 may send a request to a storage node 30 through the IO controller 22, or receive a request from a storage node 30 through the IO controller 22.
Secondly, memories in the storage node 20 may communicate with each other through the IO controller 22, or may communicate with the computing node 100 through the IO controller 22. Finally, if the hard disks included in the storage node 20 are located inside the storage node 20, these hard disks may communicate with each other through the IO controller 22, or may communicate with the computing node 100 through the IO controller 22. If the hard disks are located in a disk enclosure externally connected to the storage node 20, an IO controller 24 is disposed in the disk enclosure. The IO controller 24 is configured to communicate with the IO controller 22. The hard disk may send data or an instruction to the IO controller 22 through the IO controller 24, or receive, through the IO controller 24, data or an instruction sent by the IO controller 22. In addition, the storage node 20 may further include a bus (which is not shown in FIG. 2) configured for communication between components in the storage node 20.

FIG. 3 is a schematic structural diagram of an IO controller. An IO controller 22 is used as an example; the IO controller 22 includes a communication unit 220 and a computing unit 221. The communication unit provides a high network transmission capability and is configured for external or internal communication. Herein, a network interface controller (NIC) is used as an example. The computing unit is a programmable electronic part, and is configured to perform calculation processing and the like on data. In this embodiment, a data processing unit (DPU) is used as an example for description. The DPU has the universality and programmability of a CPU, but is more dedicated than the CPU. The DPU can run efficiently for a network data packet, a storage request, or an analysis request. The DPU is distinguished from the CPU by a larger degree of parallelism (which requires a large quantity of requests to be processed). Optionally, the DPU herein may alternatively be replaced by a processing chip such as a graphics processing unit (GPU) or an embedded neural-network processing unit (NPU). The DPU is configured to provide data offloading services specific to the memory pool, for example, an address indexing or query function, a partitioning function, and performing an operation such as filtering or scanning on data. After an IO request enters the storage node 20 through the NIC, the IO request is directly processed by the computing unit 221 without using the CPU and the operating system that are inside the storage node 20. In this way, the depth of the software stack is thinned, and the impact of CPU scheduling is reduced. An example in which an IO read request is executed is used. After the NIC receives the IO read request sent by the computing node 100, the DPU may directly query an index table for information corresponding to the IO read request. In addition, the IO controller 22 further includes a DRAM 222. The DRAM 222 is physically consistent with the DRAM described in FIG. 2, but the DRAM 222 herein is a memory belonging to the IO controller 22. The DRAM 222 is configured to temporarily store data or an instruction that passes through the IO controller 22, and is not a part of the memory pool. In addition, the IO controller 22 may further map the DRAM 222 to the computing node 100, so that the space of the DRAM 222 is visible to the computing node 100. In this way, IO access is converted into memory semantic access. A structure and a function of the IO controller 24 are similar to those of the IO controller 22, and details are not described herein again.
The following describes the memory pool provided in this embodiment. FIG. 4 is a schematic architectural diagram of a memory pool. The memory pool includes a plurality of different types of storage devices, and each type of storage device may be considered as a tier of storage devices. Performance of each tier of storage devices is different from performance of another tier of storage devices. Performance of a storage device in this application is mainly considered from aspects such as an operation speed and/or an access latency. FIG. 5 is a schematic diagram of each tier of storage devices included in a memory pool according to an embodiment. As shown in FIG. 5, the memory pool includes storage devices in storage nodes 20. A DRAM of each storage node is located at a first tier in the memory pool because the DRAM has the highest performance among all types of storage devices. Performance of an SCM is lower than that of the DRAM. Therefore, an SCM of each storage node is located at a second tier in the memory pool. Further, performance of a hard disk is lower than that of the SCM. Therefore, a hard disk in each storage node is located at a third tier in the memory pool. FIG. 5 shows only three types of storage devices. However, a plurality of different types of storage devices may be deployed inside the storage node 20 in product practice based on the foregoing description; that is, all types of memories or hard disks may become a part of the memory pool, and a same type of storage devices located in different storage nodes belong to a same tier in the memory pool. A type of storage device included in the memory pool and a quantity of tiers are not limited in this application. The tiers of the memory pool are an internal division and are not sensed by an upper-layer application. It should be noted that, although a same type of storage devices in each storage node are at a same tier, for a specific storage node, performance of a local DRAM of the storage node is higher than that of a DRAM of another storage node. Similarly, performance of a local SCM of the storage node is higher than that of an SCM of another storage node, and so on. Therefore, memory space is preferentially allocated from the local space of the storage node at the requested tier; only when the local space is insufficient is space allocated from another storage node at the same tier. The memory pool shown in FIG. 4 or FIG. 5 includes all types of storage devices in the storage node. However, in another implementation, as shown in FIG. 6, the memory pool may include only some types of storage devices. For example, the memory pool includes only storage devices with relatively high performance such as a DRAM and an SCM, and excludes storage devices with relatively low performance such as a hard disk. For a network architecture of another memory pool provided in an embodiment, refer to FIG. 7. As shown in FIG. 7, in this network architecture, a storage node and a computing node are integrated into a same physical device. In this embodiment, the integrated device is collectively referred to as a storage node. Applications are deployed inside the storage node 20. Therefore, the applications may directly trigger a data write request or a data read request through a client in the storage node 20, to be processed by the storage node 20 or sent to another storage node 20 for processing. In this case, the data read/write request sent by the client to the local storage node 20 is specifically a data access request sent by the client to the processor.
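The local-first, same-tier-fallback allocation policy described above can be illustrated with a short sketch. The data layout (`pool[tier][node]` holding free page counts) and the function name are assumptions introduced for this example, not the embodiment's actual structures.

```python
TIERS = ("dram", "scm", "hdd")   # tier 1 = highest performance

def allocate_pages(pool, node_id, tier, num_pages):
    """Sketch of tier-aware, local-first page allocation (assumed layout).

    pool[tier][node] is the number of free pages of that tier on that node.
    Returns the node that supplied the space, or None if the tier is full.
    """
    free = pool[tier]
    # Prefer the local storage node's space at the requested tier, since
    # local DRAM/SCM outperforms the same tier on a remote node.
    if free.get(node_id, 0) >= num_pages:
        free[node_id] -= num_pages
        return node_id
    # Otherwise fall back to another storage node at the same tier.
    for node, pages in free.items():
        if node != node_id and pages >= num_pages:
            free[node] -= num_pages
            return node
    return None
```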
In addition, components included in the storage node 20 and functions of the components are similar to those of the storage node 20 in FIG. 6. Details are not described herein again. Similar to the memory pool shown in any one of FIG. 4 to FIG. 6, the memory pool in this network architecture may include all types of storage devices in the storage node, or may include only some types of storage devices. For example, the memory pool includes only storage devices with relatively high performance such as a DRAM and an SCM, and excludes storage devices with relatively low performance such as a hard disk (as shown in FIG. 7). In addition, in the memory pools shown in FIG. 4 to FIG. 7, not every storage node in a storage node cluster needs to contribute storage space to the memory pool, and the memory pool may cover only some storage nodes in the cluster. In some application scenarios, two or more memory pools may further be created in the storage node cluster. Each memory pool covers a plurality of storage nodes, and these storage nodes provide storage space for the memory pool. The storage nodes occupied by different memory pools may or may not be the same. In conclusion, the memory pool in this embodiment is established in at least two storage nodes, and the storage space included in the memory pool comes from at least two different types of storage devices. When the memory pool includes storage devices with relatively high performance (such as a DRAM and an SCM) in the storage cluster, the management node may further use storage devices with relatively low performance (such as a hard disk) in the storage node cluster to construct a storage pool. In FIG. 8, the network architecture shown in FIG. 6 is used as an example to describe a storage pool. Similar to a memory pool, the storage pool shown in FIG. 8 also crosses at least two storage nodes, and the storage space of the storage pool includes one or more types of hard disks in the at least two storage nodes. When a storage node cluster includes both a memory pool and a storage pool, the storage pool is configured to persistently store data, especially less frequently accessed data, and the memory pool is configured to temporarily store data, especially more frequently accessed data. Specifically, when a data volume of the data stored in the memory pool reaches a specified threshold, some data in the memory pool is written into the storage pool for storage. It may be understood that the storage pool may also be established in the network architecture shown in FIG. 7, and an implementation principle of the storage pool is similar to that described above. However, the storage pool is not a focus of discussion in this application, and the memory pool continues to be discussed in the following. Creation of the memory pool is described next. Each storage node 20 periodically reports status information of its storage devices to a management node through a heartbeat channel. One or more management nodes may be deployed. The management node may be deployed as an independent node in the storage node cluster, or may be deployed together with a storage node 20. In other words, one or more specific storage nodes 20 serve as the management node. The status information of the storage devices includes but is not limited to: a type and a health status of each type of storage device included in the storage node, and a total capacity and an available capacity of each type of storage device. The management node creates the memory pool based on the collected information.
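A plausible shape for the reported status information and the pool-building step is sketched below. The record fields and the `build_memory_pool` helper are hypothetical, chosen only to mirror the items the text lists (device type, health status, total capacity, available capacity).

```python
from dataclasses import dataclass, field

@dataclass
class DeviceStatus:
    device_type: str      # e.g. "dram", "scm", "hdd"
    healthy: bool
    total_capacity: int   # bytes
    free_capacity: int    # bytes

@dataclass
class NodeHeartbeat:
    node_id: str
    devices: list = field(default_factory=list)   # list of DeviceStatus

def build_memory_pool(heartbeats, pooled_types=("dram", "scm")):
    # The management node gathers per-node status over the heartbeat channel
    # and admits only healthy devices of the selected types into the pool;
    # unhealthy devices (or excluded types) stay out of the memory pool.
    pool = {}
    for hb in heartbeats:
        for dev in hb.devices:
            if dev.healthy and dev.device_type in pooled_types:
                pool.setdefault(dev.device_type, {})[hb.node_id] = dev.free_capacity
    return pool
```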
The creation means that storage space provided by the storage nodes 20 is gathered as the memory pool for unified management. Therefore, the physical space of the memory pool comes from the various types of storage devices included in the storage nodes. However, in some scenarios, the storage node 20 may selectively provide a storage device for the memory pool based on a status of the storage node 20, for example, a health status of the storage device. In other words, some storage devices in some storage nodes are not a part of the memory pool. After collecting the information, the management node needs to perform unified addressing on the storage space that is included in the memory pool. Through unified addressing, each segment of space in the memory pool has a unique global address. The space indicated by the global address is unique in the memory pool, and each storage node 20 knows the meaning of the address. After physical space is allocated to a segment of space in the memory pool, the global address of the space has a corresponding physical address, and the physical address indicates the specific storage device of the specific storage node in which the space indicated by the global address is actually located, as well as an offset of the space in the storage device, that is, the location of the physical space. Each segment of space refers to a "page", which will be described in detail in the following. In an actual application, to ensure data reliability, an erasure coding (EC) parity mechanism or a multi-copy mechanism is usually used to implement data redundancy. The EC parity mechanism means that data is divided into at least two data fragments, and parity fragments of the at least two data fragments are calculated according to a specific parity algorithm. When one data fragment is lost, another data fragment and the parity fragments may be used for data restoration. Therefore, a global address of the data is a set of a plurality of fine-grained global addresses, and each fine-grained global address corresponds to a physical address of one data fragment/parity fragment. The multi-copy mechanism means that at least two identical data copies are stored, and the at least two data copies are stored in two different physical addresses. When one data copy is lost, another data copy can be used for restoration. Therefore, the global address of the data is also a set of a plurality of finer-grained global addresses, and each finer-grained global address corresponds to a physical address of one data copy. The management node may allocate physical space to each global address after creating the memory pool, or may allocate, when receiving a data write request, physical space to a global address corresponding to the data write request. A correspondence between each global address and the physical address of the global address is recorded in an index table, and the management node synchronizes the index table to each storage node 20. Each storage node 20 stores the index table, so that the physical address corresponding to a global address is queried according to the index table when data is subsequently read or written. In some application scenarios, the memory pool does not directly expose its storage space to the computing node 100, but virtualizes the storage space into a logical unit (LU) for the computing node 100 to use. Each logical unit has a unique logical unit number (LUN).
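The global-to-physical index table can be sketched as a simple mapping. This is an illustration under assumptions: the `PhysicalAddress` record and the example entries are hypothetical, and the point is only that one global address resolves to a set of fine-grained physical locations (fragments under EC, or copies under multi-copy).

```python
from dataclasses import dataclass

@dataclass
class PhysicalAddress:
    node: str      # which storage node holds the space
    device: str    # which storage device on that node (e.g. "scm0")
    offset: int    # offset of the page within that device

# Illustrative index table: a global address maps to several fine-grained
# physical addresses, one per data/parity fragment (EC) or per copy
# (multi-copy). The entries here are invented for the example.
index_table = {
    "000001": [PhysicalAddress("node_20a", "scm0", 0x1000),
               PhysicalAddress("node_20b", "scm0", 0x2000)],   # two copies
}

def resolve(global_address):
    # Every storage node holds a synchronized copy of the index table, so
    # any node can translate a global address when data is read or written.
    return index_table[global_address]
```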
Because the computing node100can directly sense the logical unit number, a person skilled in the art usually directly uses the LUN to refer to the logical unit. Each LUN has a LUN ID, where the LUN ID is used to identify the LUN. In this case, the memory pool provides storage space for the LUN with a granularity of a page. In other words, when the storage node20applies to the memory pool for space, the memory pool allocates the space to the storage node20by a page or an integer multiple of a page. A size of a page may be 4 KB, 8 KB, or the like. A size of a page is not limited in this application. A specific location of data in a LUN may be determined based on a start address and a length of the data. A person skilled in the art usually refers to the start address as a logical block address (LBA). It can be understood that three factors such as the LUN ID, the LBA, and the length identify a determined address segment, and an address segment can be indexed to a global address. To ensure that data is evenly stored in each storage node20, the computing node100usually performs routing in a distributed hash table (DHT) manner, and evenly divides a hash ring into several parts in the distributed hash table manner. Each part is referred to as a partition, and the partition corresponds to one of the foregoing address segments. All data access requests sent by the computing node100to the storage node20are located to an address segment, for example, data is read from the address segment, or data is written into the address segment. In the foregoing application scenario, the computing node100and the storage node20communicate with each other by using LUN semantics. In another application scenario, the computing node100communicates with the storage node20by using memory semantics. In this case, the IO controller22maps DRAM space of the IO controller22to the computing node100, so that the computing node100can sense the DRAM space (which is referred to as virtual space in this embodiment), and access the virtual space. In this scenario, a data read/write request sent by the computing node100to the storage node20no longer carries a LUN ID, an LBA, and a length, but carries other logical addresses, for example, a virtual space ID, and a start address and a length of the virtual space. In another application scenario, the IO controller22may map space in a memory pool managed by the IO controller22to the computing node100, so that the computing node100can sense the space and obtain a global address corresponding to the space. For example, an IO controller22in a storage node20ais configured to manage storage space, provided by the storage node20a, in the memory pool. An IO controller22in a storage node20bis configured to manage storage space, provided by the storage node20b, in the memory pool. An IO controller22in a storage node20cis configured to manage storage space, provided by the storage node20c, in the memory pool. Therefore, the entire memory pool is visible to the computing node100. In this case, when sending the to-be-written data to the storage node, the computing node100may directly specify a global address of the data. The following describes a space allocation process by using an example in which an application applies to a memory pool for storage space. In a case, the application refers to an internal service of a storage node. 
For example, a memory application instruction is generated inside the storage node 20a, and the memory application instruction includes a size of applied space and a type of a memory. For ease of understanding, it is assumed herein that the applied space is 16 KB, and the memory is an SCM. Generally, the size of the applied space is determined by a size of the stored data, and the type of the applied memory is determined by frequency information of the data. The storage node 20a obtains a segment of free global addresses from the stored index table, for example, an address range [000001-000004], where the space whose address is 000001 is one page. A free global address means that the global address is not occupied by any data. Then, the storage node 20a queries whether a local SCM has 16 KB of free space. If the local SCM has 16 KB of free space, the storage node 20a allocates space locally to the global address; or if the local SCM does not have 16 KB of free space, the storage node 20a continues to query whether an SCM of another storage node 20 includes 16 KB of free space. This step may be implemented by sending a query instruction to the other storage node 20. Because there is a distance between the other storage node 20 and the storage node 20a, to reduce a latency, when the storage node 20a cannot allocate 16 KB of free space locally, the storage node 20a may preferentially perform a query on a closer storage node 20. After obtaining the physical address, the storage node 20a records a correspondence between the global address and the physical address in the index table, and synchronizes the correspondence to another storage node. After determining the physical address, the storage node 20a may use space corresponding to the physical address to store data. In another case, the application refers to an application 101 in the computing node 100. In this case, a memory application instruction is generated by the computing node 100 and then sent to the storage node 20a. In this case, the user may specify, by using the computing node 100, a size of applied space and a type of a storage device. A function of the foregoing index table is mainly to record a correspondence between a global address and a partition ID, and a correspondence between a global address and a physical address. In addition, the index table may be further used to record attribute information of data, for example, frequency information or a data residence policy of the data whose global address is 000001. Subsequently, data may be migrated between the various storage devices, or attributes may be set, based on the attribute information. It should be understood that the attribute information of the data is merely an option of the index table, and is not necessarily recorded. When a new storage node is added to the storage node cluster, the management node collects node update information, adds the new storage node into the memory pool, performs addressing on storage space included in the storage node to generate new global addresses, and updates a correspondence between a partition and a global address (because a total quantity of partitions remains unchanged regardless of scaling out or scaling in). Scaling out is also applicable to a case in which a memory or a hard disk is added to some storage nodes. The management node periodically collects status information of the storage devices included in each storage node.
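Returning to the space application example above, the local-first, nearest-next allocation flow can be sketched as follows. This is a simplified model under stated assumptions: the node names, the distance field standing in for network proximity, and the try_alloc helper are all hypothetical.

```python
from typing import List, Optional

class StorageNode:
    def __init__(self, name: str, scm_free_kb: int, distance: int = 0):
        self.name = name
        self.scm_free_kb = scm_free_kb
        self.distance = distance  # stand-in for network proximity to the requester

    def try_alloc(self, size_kb: int) -> Optional[str]:
        # Returns a physical address (node:device:offset) if the local SCM
        # has enough free space, otherwise None.
        if self.scm_free_kb >= size_kb:
            self.scm_free_kb -= size_kb
            return f"{self.name}:scm:0"
        return None

def allocate_space(local: StorageNode, peers: List[StorageNode],
                   size_kb: int = 16) -> str:
    # Try the local SCM first; otherwise query other nodes, closer ones
    # first, to keep the allocation latency low.
    phys = local.try_alloc(size_kb)
    for peer in sorted(peers, key=lambda p: p.distance):
        if phys:
            break
        phys = peer.try_alloc(size_kb)
    if phys is None:
        raise RuntimeError("no SCM free space available in the cluster")
    return phys  # the caller then records global->physical in the index table

node_a = StorageNode("node-a", scm_free_kb=8)                 # not enough locally
peers = [StorageNode("node-b", 64, 1), StorageNode("node-c", 64, 2)]
print(allocate_space(node_a, peers))                          # -> node-b:scm:0
```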
On the basis of the periodically collected status information, if a new storage device is added, the new storage device is added to the memory pool, addressing is performed on the new storage space to generate new global addresses, and the correspondence between the partition and the global address is updated. Similarly, the memory pool provided in this embodiment also supports scaling in, provided that the correspondence between the global address and the partition is updated. Each storage device in the memory pool provided in this embodiment provides a memory interface for the processor, so that the processor senses a segment of continuous space and can directly perform a read/write operation on the storage device in the memory pool. In the storage system in this embodiment, a memory pool is created based on storage devices with a plurality of types of performance, and these storage devices with the plurality of types of performance are located in different storage nodes, thereby implementing a cross-node memory pool that integrates storage devices with different performance. In this way, various types of storage devices (regardless of memories or hard disks) can serve as storage resources to provide storage services for upper-layer applications, thereby better using their performance advantages. Because the memory pool includes storage devices with different performance, data can be controlled to be migrated between the storage devices with different performance based on an access frequency of the data. The data can be migrated to a high-performance storage device when the access frequency of the data is relatively high, so as to improve data reading efficiency; and the data can be migrated to a low-performance storage device when the access frequency of the data is relatively low, so as to save storage space of the high-performance storage device. In addition, the memory pool in this application provides storage space for a computing node or a LUN, and changes a processor-centric architecture of a memory resource. The following describes a process of performing a data writing method. FIG. 9 is a schematic flowchart of performing the method according to an embodiment. As shown in FIG. 9, the method includes the following steps: S101: A computing node 100 sends a data write request to a storage node, where the data write request carries to-be-written data and a logical address of the to-be-written data. In an application scenario of LUN semantics, the logical address includes a LUN ID, an LBA, and a length. In an application scenario of memory semantics, the logical address includes an ID, a start address, and a length of virtual space. After receiving the data write request, a communication unit 220 of the storage node stores the data write request in a DRAM 222. S102: A computing unit 221 obtains the data write request from the DRAM 222, uses the logical address as an input, and outputs a key according to a specific algorithm, where the key can be used to uniquely locate a partition ID. S103: The computing unit 221 queries an index table for a global address corresponding to the partition ID. S104: The computing unit 221 determines whether a physical address has been allocated to the global address; and if no physical address has been allocated, performs S105: allocating physical space to the global address, and creating a correspondence between the global address and the physical address. For a specific allocation manner, refer to the foregoing space allocation procedure.
If a determining result is that a physical address has been allocated to the global address, S106is performed. If a multi-copy mechanism is used to ensure data reliability, it indicates that a plurality of copies of the to-be-written data need to be stored in a storage node cluster, and each copy is stored in a different physical address. Processes of writing all the copies are similar. Therefore, an example in which one copy is written is used for description herein. S106: The computing unit221writes the to-be-written data into a location of the physical space indicated by the physical address. The physical address indicates a storage node in which the physical space is located, a storage device in the storage node, and an offset in the storage device. Therefore, an IO controller22can directly store the to-be-written data according to the address. For example, if the physical space indicated by the physical address is located in an SCM of the storage node, the IO controller22performs a data write action. If the physical space indicated by the physical address is located in a hard disk in the storage node, the computing unit221indicates the communication unit220to send the data write request to an IO controller24. The IO controller24performs a data write action. If the physical space indicated by the physical address is located in another storage node, the computing unit221indicates the communication unit220to send the to-be-written data to the other storage node, and indicates the node to write the to-be-written data into the location of the physical space indicated by the physical address. If an EC parity mechanism is used, in the foregoing procedure, the computing unit221obtains the to-be-written data in the data write request from the DRAM222, divides the to-be-written data into a plurality of data fragments, and calculates and generates parity fragments of the plurality of data fragments. Each data fragment or parity fragment has its own logical address, and the logical address is a subset of a logical address carried in the data write request. The computing unit221uses a logical address of each data fragment/parity fragment as an input, and outputs a key according to a specific algorithm. The key can be used to uniquely locate a partition ID. The computing unit221queries the index table for a global address corresponding to the partition ID, further obtains a physical address corresponding to the global address, and then stores each data fragment or parity fragment in the location of the space indicated by the physical address. An embodiment provides another data write method. In this method, an IO controller22in each storage node20provides a global address of a memory pool managed by the IO controller22to a computing node100, so that the computing node100can sense space of the memory pool and access the storage node20by using the global address. In this case, a data write request sent by the computing node100to the storage node20carries the global address instead of a logical address.FIG.10is a schematic flowchart of performing the method according to an embodiment. As shown inFIG.10, the method includes the following steps: S301: A computing node100sends a data write request to a storage node20, where the data write request carries to-be-written data and a global address of the to-be-written data. A bitmap about global addresses of a memory pool is stored in the computing node100. 
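Pausing between the FIG. 9 flow just completed and the bitmap detail that follows, the write path of S101 to S106 amounts to a lookup pipeline (logical address to key, to partition ID, to global address, to physical address) followed by routing on the physical address. The sketch below models both stages; the hash choice, the 4096-partition count, and the node and device names are assumptions for illustration, since the embodiment only says a key is produced "according to a specific algorithm".

```python
import hashlib
from dataclasses import dataclass

PARTITION_COUNT = 4096          # assumed fixed total number of partitions
LOCAL_NODE = "node-a"           # assumed identity of the receiving storage node

def partition_id(lun_id: int, lba: int, length: int) -> int:
    # S102: logical address in, key out; the key uniquely locates a partition.
    key = hashlib.sha256(f"{lun_id}:{lba}:{length}".encode()).digest()
    return int.from_bytes(key[:8], "big") % PARTITION_COUNT

@dataclass
class PhysAddr:
    node: str
    device: str                 # "scm" or "hdd"
    offset: int

def route_write(addr: PhysAddr, data: bytes) -> str:
    # S106: write locally via the memory-side IO controller for an SCM,
    # hand off to the drive-side IO controller for a local hard disk, and
    # forward to the owning node in every other case.
    if addr.node != LOCAL_NODE:
        return f"forward {len(data)} bytes to {addr.node}"
    if addr.device == "scm":
        return "IO controller 22 writes to the local SCM"
    return "IO controller 24 writes to the local hard disk"

pid = partition_id(lun_id=7, lba=2048, length=4096)
print(pid, route_write(PhysAddr("node-b", "hdd", 0), b"\x00" * 4096))
```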
As noted above, the bitmap records global addresses corresponding to several pages in the memory pool and usage of the pages. For example, if a record corresponding to a global address of a specific page is "1", it indicates that the page has stored data. If a record corresponding to a global address of a page is "0", it indicates that the page has not stored data and is a free page. Therefore, the computing node may learn, based on the bitmap, which storage space indicated by specific global addresses has stored data and which storage space indicated by specific global addresses is free. When sending the data write request, the computing node may select a global address of a free page, and include the global address in the data write request. Specifically, after completing execution of a data write request, the storage node 20 sends a response message to the computing node 100. The computing node 100 may then mark, in the bitmap based on the response message, the global address of the page corresponding to the request (set the record to "1"). After receiving the data write request, a communication unit 220 of the storage node 20 stores the data write request in a DRAM 222. In addition, as shown in FIG. 1, the storage node cluster includes a plurality of storage nodes 20. In this case, when sending the data write request, the computing node 100 needs to select a specific storage node 20 according to the global address. It can be learned from the foregoing description that the global address corresponds to a physical address. The physical address indicates a specific storage device of a specific storage node in which the space indicated by the global address is located. Therefore, a specific storage node 20 can manage only a global address corresponding to a storage device of the storage node 20, and perform a data write or read operation on the global address. If the storage node 20 receives data to be written into another storage node, the storage node 20 may forward the data to the other storage node. However, in this case, a processing latency is relatively large. To reduce the access latency, when addressing the memory pool, the management node may embed one or more bytes into the global address, where the bytes indicate a specific storage node in which the space indicated by the global address is located. Alternatively, addressing is performed according to a specific algorithm, so that each global address corresponds to a unique storage node. Therefore, the computing node 100 may identify the storage node corresponding to the global address, and directly send the data write request to that storage node for processing. S302: A computing unit 221 obtains the data write request from the DRAM 222, and determines whether a physical address has been allocated to the global address; if no physical address has been allocated, the computing unit 221 performs S303: allocating physical space to the global address, and creating a correspondence between the global address and the physical address. For a specific allocation manner, refer to the foregoing space allocation procedure. If a determining result is that a physical address has been allocated to the global address, S304 is performed. S304: The computing unit 221 writes the to-be-written data into the physical space indicated by the physical address. For this step, refer to the description of S106 in FIG. 9. Details are not described herein again. In addition, similar to the process described in FIG. 9, in this embodiment, a multi-copy mechanism or an EC parity mechanism may also be used to ensure data reliability.
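A compact sketch of the two mechanisms just described, the free-page bitmap kept on the computing node and a node identifier embedded in the global address, is given below. The 8-bit node field and 64-bit address width are assumptions; the embodiment only states that one or more bytes may be embedded.

```python
NODE_BITS = 8          # assumed width of the embedded node identifier
ADDR_BITS = 64         # assumed width of a global address

def owner_node(global_addr: int) -> int:
    # The high-order byte names the storage node that owns the space, so
    # the computing node can send the request to the right node directly.
    return global_addr >> (ADDR_BITS - NODE_BITS)

class PageBitmap:
    """Per-computing-node usage bitmap over page global addresses."""

    def __init__(self, pages: int) -> None:
        self.bits = bytearray(pages)      # 0 = free page, 1 = data stored

    def pick_free(self) -> int:
        return self.bits.index(0)         # raises ValueError when the pool is full

    def mark_written(self, page: int) -> None:
        self.bits[page] = 1               # set when the write response arrives

bm = PageBitmap(1024)
page = bm.pick_free()
bm.mark_written(page)
print(page, owner_node(0x03 << (ADDR_BITS - NODE_BITS)))      # -> 0 3
```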
For this part, refer to the description inFIG.9. When data is initially written into a storage node cluster, the data is usually stored at a DRAM tier in a memory pool. As a data access frequency decreases or a space capacity of the DRAM tier decreases, the storage node cluster triggers data migration internally. A computing node cluster is unaware of this process. A data migration policy is stored in the management node, and the management node controls a data migration operation according to the data migration policy. The data migration policy includes but is not limited to: a trigger condition for performing the data migration operation, for example, periodically performing the data migration operation, or performing the data migration operation when a specific condition is met. The specific condition herein may be that an access frequency of the data is higher than or lower than a specified threshold, or may be that an available capacity of a storage device in which the data is located is higher than or lower than a specified threshold, or the like. The “control” means that the management node indicates each storage node20to monitor an access frequency of data stored in the storage node20and indicates the storage node20to migrate, between the storage devices, the data stored in the storage node20. In addition to periodically triggering the data migration operation, when the computing node100sends a data write request to the storage node, frequency information of the data (which is used to indicate an access frequency of the data) may be carried in the data write request. When the storage node executes the data write request, an execution manner is: first writing the data into a DRAM, and then immediately performing a data migration operation based on the frequency information, to migrate the data from the DRAM into a storage device matching the frequency information of the data. Alternatively, the storage node may obtain frequency information of the data based on a metadata structure, a logical address, or the like of the data, and then perform a data migration operation based on the frequency information. In another execution manner, the storage node directly determines, based on the frequency information, a storage device matching the frequency information of the data, and directly writes the data into the storage device through the IO controller. In addition, the computing node100may also specify a residence policy of the data in the data write request. The residence policy means that data of a specific type needs to be stored in a specific type of storage device for a long time. Once such data is stored in a specified storage device, no data migration operation is performed on the data regardless of whether an access frequency of the data is increased or decreased. Target data located at the DRAM tier is used as an example. Assuming that the target data is located in a storage node20a, the storage node20aperiodically collects statistics on an access frequency of the target data, and migrates the target data to an SCM tier or another tier when the access frequency is lower than an access threshold of the DRAM tier. In an optional solution, each tier of storage devices in the memory pool has an access threshold interval. When an access frequency of data is higher than a highest value of the interval or an access frequency of the data is lower than a lowest value of the interval, it indicates that the data needs to be migrated to a tier matching the access frequency of the data. 
In another optional solution, an access threshold interval of each tier of storage devices is not set; instead, the access frequency is compared only with a specified access threshold. When the access frequency is lower than the access threshold, it indicates that the data needs to be migrated to a tier with lower performance. The target data is still used as an example. If a current access frequency of the target data falls within an access frequency range of a hard disk tier, it is first determined whether a local hard disk of the storage node 20a has free space. If the local hard disk of the storage node 20a has free space, the target data is migrated to the local hard disk of the storage node 20a; otherwise, the target data is sent to another storage node, for example, a storage node 20b, and the storage node 20b is indicated to write the target data into a hard disk of the storage node 20b. Before and after the migration, the global address of the target data does not change, so an upper-layer application is unaware of the migration; only the physical address of the target data changes. After the migration is completed, each storage node 20 updates the correspondence between the global address and the physical address of the target data in the index table of the storage node 20. In addition to the data migration between the tiers based on the data access frequency (also referred to as a frequency), another migration policy is to migrate data based on an available capacity of each tier. It is known that a higher-tier storage device has better performance and higher costs, and its storage space is more precious than that of a lower-tier storage device. For example, when an available capacity of the DRAM tier is lower than a specified capacity threshold, the DRAM tier needs to migrate a part of the data stored in the DRAM tier to the SCM tier or another tier, to release more space to accommodate newly written data. As for which part of the data is selected and migrated to the lower-tier storage device, refer to an existing cache replacement algorithm. Details are not described herein. Similarly, the SCM tier or the other tier also has its own capacity threshold. When an available capacity of the tier is lower than the capacity threshold, a part of the stored data is migrated to another tier. As mentioned above, the memory pool provides storage space externally with a granularity of a page. Therefore, statistics on an access frequency of the data may also be collected in pages; and correspondingly, the data migration between the tiers is also implemented in pages. However, in product practice, an application often needs to allocate objects of a finer granularity than a page, for example, data items. If the size of a page is 4 KB, the size of a data item may be 1 KB, 2 KB, or 3 KB (any size less than the size of the page). In this case, an access frequency collected at a granularity of a page is not accurate: some data items on a page may be frequently accessed, while other data items on the same page are scarcely accessed. If access frequencies were collected only at a granularity of a page, such a page would reside on a DRAM or SCM medium, thereby wasting a large amount of space. Therefore, in this embodiment, statistics on access frequencies are further collected at a granularity of a data item, data migration is performed at a granularity of a data item, and cold and hot pages are then aggregated. In this way, more efficient swap-in and swap-out performance can be implemented.
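The frequency-based and capacity-based policies above reduce to a small decision function. The sketch below combines both; the concrete thresholds are assumptions, since the embodiment leaves the access thresholds and capacity watermarks open.

```python
# Tiers ordered fastest to slowest; floor = assumed lower bound of the
# access-frequency interval that keeps data on the tier.
TIER_FLOORS = [("dram", 1000), ("scm", 100), ("hdd", 0)]

def tier_by_frequency(access_freq: int) -> str:
    # Migrate data to the fastest tier whose interval contains its frequency;
    # leaving the interval in either direction triggers a migration.
    for name, floor in TIER_FLOORS:
        if access_freq >= floor:
            return name
    return "hdd"

def must_evict(available_ratio: float, watermark: float = 0.10) -> bool:
    # Capacity-based policy: when a tier's available capacity drops below
    # its watermark, some of its data is pushed one tier down (which data
    # is chosen by an existing cache replacement algorithm).
    return available_ratio < watermark

assert tier_by_frequency(5000) == "dram"
assert tier_by_frequency(250) == "scm"
assert tier_by_frequency(3) == "hdd"
assert must_evict(0.05) and not must_evict(0.5)
```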
The following describes a process of performing a data read request method. FIG. 11 is a schematic flowchart of performing a data read request method according to an embodiment. As shown in FIG. 11, the method includes the following steps: S201: A computing node 100 sends a data read request to a storage node, where the data read request carries a logical address of to-be-read data, and an IO controller 22 of the storage node receives the data read request. In an application scenario of LUN semantics, the logical address includes a LUN ID, an LBA, and a length. In an application scenario of memory semantics, the logical address includes an ID, a start address, and a length of virtual space. After receiving the data read request, a communication unit 220 of the storage node stores the data read request in a DRAM 222. S202: A computing unit 221 obtains the data read request from the DRAM 222, uses the logical address as an input, and outputs a key according to a specific algorithm, where the key can be used to uniquely locate a partition ID. S203: The computing unit 221 queries an index table for a global address corresponding to the partition ID. S204: The computing unit 221 queries the index table for a physical address corresponding to the global address. S205: The computing unit 221 reads the to-be-read data from the physical space indicated by the physical address, and the communication unit 220 returns the to-be-read data to the computing node 100. The physical address indicates a storage node in which the physical space is located, a storage device in the storage node, and an offset in the storage device. Therefore, the computing unit 221 can directly read the to-be-read data according to the address. If the physical space indicated by the physical address is located in another storage node, the data read request is sent to the other storage node, and that node is indicated to read the data from the physical space indicated by the physical address. If a multi-copy mechanism is used to store data, the storage node may read any data copy according to the foregoing procedure, and send the data copy to the computing node 100. If an EC parity mechanism is used, the storage node needs to read each data fragment and each parity fragment according to the foregoing procedure, combine the data fragments and the parity fragments to obtain the to-be-read data, verify the to-be-read data, and return the to-be-read data to the computing node 100 after verifying that the to-be-read data is correct. It may be understood that the data read method shown in FIG. 11 corresponds to the data write method shown in FIG. 9. Therefore, the data read request in the method carries the logical address of the to-be-read data. An embodiment further provides another data read method. The method corresponds to the data write method shown in FIG. 10. In the method, the data read request carries a global address of the to-be-read data, and a computing unit 221 can directly query a physical address according to the global address, to obtain the to-be-read data. In addition, the memory pools shown in FIG. 1 to FIG. 8 further support a data prefetching mechanism. A person skilled in the art may understand that a speed of reading data from a storage device with relatively high performance is higher than a speed of reading data from a storage device with relatively low performance. Therefore, if the to-be-read data is hit in the storage device with relatively high performance, the to-be-read data does not need to be read from the storage device with relatively low performance.
In this way, data reading efficiency is relatively high. To increase a data hit rate of a cache, a common practice is to pre-read a segment of data from the storage device with relatively low performance, and write the segment of data into the storage device with relatively high performance. In this case, when the computing node100sends a data read request to request to read the segment of data, because the data has been read in advance to the storage device with relatively high performance, the IO controller can directly read the data from the storage device with relatively high performance. For a segment of data with consecutive logical addresses, there is a relatively high possibility that the data is to be read together. Therefore, in practice, data is usually prefetched according to a logical address. Data prefetching modes include synchronous prefetching and asynchronous prefetching. Synchronous prefetching means that when a data read request is executed and to-be-read data is not hit in a higher-tier storage device, data whose logical address and a logical address of the to-be-read data are consecutive is read from a lower-tier storage device according to a logical address of the to-be-read data, and is written into a higher-tier storage device. Asynchronous prefetching means that when a data read request is executed and to-be-read data is hit in a higher-tier storage device, data whose logical address and a logical address of the to-be-read data are consecutive is read from a lower-tier storage device according to a logical address of the to-be-read data, and is written into a higher-tier storage device. With reference toFIG.11, the method for executing the data read request may further include: S206: The computing unit221migrates, to the higher-tier storage device, other data whose logical address and the logical address of the to-be-read data are consecutive. In S205, the computing unit221reads the to-be-read data from the physical space indicated by the physical address. The to-be-read data may be stored in a higher-tier storage device (for example, a DRAM), or may be stored in a lower-tier storage device (for example, an SCM). If the to-be-read data is stored in the DRAM, the computing unit221hits the to-be-read data in the DRAM. If the to-be-read data is stored in the SCM, the computing unit221does not hit the to-be-read data in the DRAM. In either case, the computing unit221can prefetch, to the DRAM, other data whose logical address and the logical address of the to-be-read data are consecutive. Specifically, the computing unit221first obtains a logical address whose logical address and the logical address of the to-be-read data are consecutive. For ease of description, the logical address of the to-be-read data is referred to as a logical address1, and a logical address consecutive to the logical address1is referred to as a logical address2. The computing unit221uses the logical address2as an input, and outputs a key according to a specific algorithm. The key can be used to uniquely locate a partition ID. Then, the computing unit221queries the index table for a global address corresponding to the partition ID and the physical address corresponding to the global address. Finally, the computing unit221reads the other data from the physical space indicated by the physical address. The other data may be located in a local storage node of the computing unit221, or may be located in another storage node. 
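Both prefetching modes stage the logically consecutive data into the higher tier; they differ only in whether the trigger is a miss (synchronous) or a hit (asynchronous). A minimal sketch follows, with plain dictionaries standing in for the DRAM tier and a lower tier, and page-granularity logical addresses assumed:

```python
def read_with_prefetch(dram: dict, lower: dict, addr: int) -> bytes:
    hit = addr in dram
    data = dram[addr] if hit else lower[addr]
    if not hit:
        dram[addr] = data                 # stage the missed data itself
    nxt = addr + 1                        # the consecutive logical address
    if nxt in lower and nxt not in dram:
        # Synchronous prefetch when the read missed, asynchronous prefetch
        # when it hit; either way the consecutive data is copied up-tier so
        # the next read can be served from the DRAM tier.
        dram[nxt] = lower[nxt]
    return data

lower = {0: b"page0", 1: b"page1"}
dram: dict = {}
read_with_prefetch(dram, lower, 0)        # miss -> synchronous prefetch of page 1
assert 1 in dram
```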
If the physical space indicated by the physical address is located in another storage node, that node reads the data from the physical space indicated by the physical address. Similarly, if the data read request sent by the computing node 100 to the storage node 20 carries a global address, during data prefetching, data stored at an address consecutive to the global address is read in advance to a higher-tier storage device according to the global address. FIG. 12 is a schematic structural diagram of a management node according to an embodiment. The management node includes a processor 401 and a storage device 402. The storage device 402 stores a program 403. The processor 401, the storage device 402, and an interface 404 are connected to and communicate with each other through a bus 405. The processor 401 is a single-core or multi-core central processing unit or an application-specific integrated circuit, or may be configured as one or more integrated circuits for implementing this embodiment of the application. The storage device 402 may be a random access memory (RAM), or may be a non-volatile memory, for example, at least one hard disk memory. The storage device 402 is configured to store a computer-executable instruction. Specifically, the computer-executable instruction may include the program 403. When the management node runs, the processor 401 runs the program 403 to perform the following method. For example, a memory pool is created to provide a service for storing data. The memory pool includes a first storage device and at least two second storage devices. The processor is configured to control the data to be migrated from the first storage device to the second storage device, or to be migrated from the second storage device to the first storage device. Optionally, the method further includes: obtaining status information of the storage devices, where the status information includes a type and a capacity of the first storage device and a type and a capacity of the second storage device. Therefore, when creating the memory pool, the management node is specifically configured to create the memory pool based on the status information. Optionally, when controlling the data to be migrated from the first storage device to the second storage device, the management node is specifically configured to: indicate a first storage node to obtain an access frequency of the data, and indicate the first storage node to migrate the data to the second storage device when the access frequency is lower than a specified frequency threshold. FIG. 13 is another schematic structural diagram of a management node according to an embodiment. The management node includes a creation module 501 and a control module 502. The creation module 501 is configured to create a memory pool to provide a service for storing data. The memory pool includes the first storage device and the at least two second storage devices. The control module 502 is configured to control the data to be migrated from the first storage device to the second storage device, or to be migrated from the second storage device to the first storage device. Optionally, the creation module 501 is further configured to obtain status information of the storage devices, where the status information includes a type and a capacity of the first storage device and a type and a capacity of the second storage device.
When creating the memory pool, the creation module501is specifically configured to create the memory pool based on the status information. Optionally, when controlling the data to be migrated from the first storage device to the second storage device, the control module502is specifically configured to: indicate a first storage node to obtain an access frequency of the data, and indicate the first storage node to migrate the data to the second storage device when the access frequency is lower than a specified frequency threshold. In practice, functions of both the creation module501and the control module502may be implemented by the processor401shown inFIG.12by executing the program403, or may be independently implemented by the processor401. All or some of the foregoing embodiments may be implemented by using software, hardware, firmware, or any combination thereof. When software is used for implementation, all or some of the embodiments may be implemented in a form of computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, all or some of the procedures or functions according to the embodiments of the application are generated. The computer may be a general-purpose computer, a dedicated computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or may be transmitted from a computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, a coaxial cable, an optical fiber, or a digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital versatile disc (DVD)), or a semiconductor medium (for example, a solid-state disk (SSD)), or the like. A person of ordinary skill in the art may understand that all or some of the steps of the embodiments may be implemented by hardware or a program instructing related hardware. The program may be stored in a computer-readable storage medium. The storage medium may be a read-only memory, a magnetic disk, or an optical disc. In the embodiments of this application, unless otherwise stated or there is a logical conflict, terms and/or descriptions between different embodiments are consistent and may be mutually referenced, and technical features in different embodiments may be combined based on an internal logical relationship thereof, to form a new embodiment. In this application, “at least one” means one or more, and “a plurality of” means two or more. The term “and/or” describes an association relationship for describing associated objects and represents that three relationships may exist. For example, A and/or B may represent the following three cases: Only A exists, both A and B exist, and only B exists. A and B each may be singular or plural. In the text descriptions of this application, the character “/” indicates an “or” relationship between the associated objects. 
In a formula in this application, the character "/" indicates a "division" relationship between the associated objects. It may be understood that various numbers in the embodiments of this application are merely used for differentiation for ease of description, and are not used to limit the scope of the embodiments of this application. Sequence numbers of the foregoing processes do not mean execution sequences. The execution sequences of the processes should be determined based on functions and internal logic of the processes. The foregoing descriptions are embodiments provided in this application, but are not intended to limit this application. Any modification, equivalent replacement, or improvement made without departing from the spirit and principle of this application should fall within the protection scope of this application.
11861205 | DETAILED DESCRIPTION In the following description, “communication interface device” may be one or more communication interface device. The one or more communication interface devices may be one or more identical communication interface device (for example, one or more network interface cards (NICs)) or two or more different communication interface devices (for example, NIC and host bus adapter (HBA)). In the following description, “memory” is at least one memory device as one example of one or more storage device, and typically may be a main storage device. The at least one memory device of the memory may be a volatile memory device or a nonvolatile memory device. In the following description, “storage unit” is one example of a unit including one or more physical storage device. The physical storage device may be a persistent storage device. The persistent storage device may be typically a nonvolatile storage device (for example, auxiliary storage device), specifically, for example, may be a hard disk drive (HDD), a solid state drive (SSD), a non-volatile memory express (NVMe) drive, or a storage class memory (SCM). In the following description, “one or more drive box” means one example of the storage unit, and “drive” means one example of the physical storage device. In the following description, “processor” may be one or more processor device. The one or more processor device may typically be a microprocessor device such as a central processing unit (CPU), but may be another type of processor device such as a graphics processing unit (GPU). The one or more processor device may be a single-core or multi-core processor device. The one or more processor device may be a processor core. The one or more processor device may be a processor device in a broad sense, such as a circuit as an assembly of gate arrays with a hardware description language performing part or all of processing (for example, field-programmable gate array (FPGA), complex programmable logic device (CPLD), or application specific integrated circuit (ASIC)). In the following description, although such information that output is produced in response to input is sometimes described with expression of “xxx table”, the information may be data of any structure (for example, may be structured data or unstructured data), a neural network that produces an output in response to an input, or a learning model typified by a genetic algorithm or random forests. Hence, “xxx table” can be mentioned as “xxx information”. In the following description, a configuration of each table is merely one example, and one table may be divided into two or more tables, or all or some of two or more tables may be included in one table. In the following description, although processing may be described with “program” as the subject, since the program is executed by a processor to perform determined processing while appropriately using a memory and/or a communication interface device, the subject of the processing may be mentioned as a processor (or a device such as a controller having the processor). The program may be installed from a program source into a device such as a computer. The program source may be a program distribution server or a computer readable (for example, non-transitory) recording medium. In the following description, two or more programs may be implemented as one program, or one program may be implemented as two or more programs. 
In the following description, when identical elements are described interchangeably, a common sign (or reference sign) may be used among reference signs, and when identical elements are discriminated from each other, reference signs (or identifiers of the relevant elements) may be used. FIG.1illustrates an outline of a distributed storage system according to one embodiment of the invention. The distributed storage system of this embodiment has “separate-drive distributed storage configuration” in which a direct attached storage (DAS) for SDS or HCI is integrated in a drive box106such as FBOF connected to a general-purpose network104. Storage performance and storage capacity can be independently scaled by integrating data into the drive box106. In such a configuration, each server101can directly access a drive mounted in the drive box106, and each drive is shared between servers101. Hence, each server101can individually perform data protection for its data in charge (data written by the server101) without cooperation with another server101. Further, the servers101share metadata on a data protection method (for example, an RAID (Redundant Array of Independent Disk) configuration or a data arrangement pattern (arrangement pattern of data and parity) for each chunk group (group configured of two or more chunks each chunk being a drive region within a drive box (as described in detail later)). Consequently, when charge of data to be charged is changed between the servers101, information that maps data in charge to a chunk group as storage destination of the data in charge is copied to a change destination server101, thereby data protection can be continued without data copy via a network104. In this embodiment, one of the servers101configuring the distributed storage system is a representative server101, the representative server101determines an RAID configuration or a data arrangement pattern on each chunk of an expanded drive at drive expansion, the relevant metadata is shared between the servers101, and at least a chunk in the expanded drive is included in at least one chunk group (for example, one or more new chunk group and/or one or more existing chunk groups). When writing data into the chunk group, each server101associates data with a chunk group and independently performs data protection based on the above metadata without cooperation with another server101. When a server in charge of data to be charged is changed between the servers101, information indicating an association of the data to be charged with a chunk group, the information being owned by a source server101(server101having been in charge of the data to be charged) is copied to a destination server101(server101to be in charge of the data to be charged). After that, the destination server101individually performs data protection without cooperation with another server101based on the metadata indicating the chunk group of the data to be charged. The distributed storage system of this embodiment is configured of the plurality of servers101(for example,101A to101E) connected to the network104, the plurality of drive boxes106(for example,106A to106C) connected to the network104, and a management server105connected to the network104. The distributed storage system of this embodiment may be one example of a SDS/HCI system. A single storage control program103and a plurality of apps102(or a single app102) operatively coexist in each server101. 
However, all the servers 101 in the distributed storage system need not include both the apps 102 and the storage control program 103, and some of the servers 101 may not include either the apps 102 or the storage control program 103. Even if a server 101 including the apps 102 and no storage control program 103, or a server 101 including the storage control program 103 and no app 102, exists in a distributed storage system, such a distributed storage system is effective as the distributed storage system of this embodiment. The term "app" is an abbreviation of application program. The term "storage control program" may be referred to as storage control software. The term "server 101" may be an abbreviation of node server 101. A plurality of general-purpose computers may be established as software-defined anything (SDx) by each of the computers executing predetermined software. For example, software-defined storage (SDS) or software-defined datacenter (SDDC) may be used as the SDx. The server 101 is one example of the computer. The drive box 106 is one example of the storage unit. Execution frameworks of the app 102 may include, but are not limited to, a virtual machine and a container. Data written from the app 102 is stored in any one of the drive boxes 106A to 106C connected to the network 104 via the storage control program 103. A versatile network technique such as Ethernet or Fibre Channel can be used for the network 104. The network 104 may connect the server 101 to the drive box 106 directly or via one or more switches. A versatile technique such as Internet SCSI (iSCSI) or NVMe over Fabrics (NVMe-oF) can be used for a communication protocol. The storage control programs 103 of the respective servers 101 configure a distributed storage system with a plurality of servers 101 being bundled through cooperative operation. Thus, when a failure occurs in one server 101, the storage control program 103 of another server 101 substitutes for the relevant processing, and thus I/O can be continued. Each storage control program 103 can have a data protection function and a storage function such as snapshot. The management server 105 has a management program 51. The management program 51 may be referred to as management software. For example, the management program 51 includes information indicating a configuration of a chunk group in the metadata. Processing performed by the management program 51 is described later. According to the distributed storage system of this embodiment, data as a protection object need not be transferred for protection between the servers 101 via the network 104. When a failure occurs in a storage control program 103, another storage control program 103 sharing the metadata may access data stored in a chunk. When a failure occurs in a drive, the storage control program 103 may restore data in the failed drive using data that is stored, while being made redundant, in another drive without failure. FIG. 2 illustrates an exemplary hardware configuration including the server 101, the management server 105, and the drive box 106 in this embodiment. The server 101 includes a memory 202, a network I/F 203 (one example of a communication interface device), and a processor 201 connected to them. The memory 202, the network I/F 203, and/or the processor 201 may be multiplexed (for example, duplexed). The memory 202 stores the app 102 and the storage control program 103, and the processor 201 executes the app 102 and the storage control program 103.
Similarly, the management server 105 includes a memory 222, a network I/F 223 (one example of a communication interface device), and a processor 221 connected to them. The memory 222, the network I/F 223, and/or the processor 221 may be multiplexed (for example, duplexed). The memory 222 stores a management program 51, and the processor 221 executes the management program 51. The drive box 106 includes a memory 212, a network I/F 213, a drive I/F 214, and a processor 211 connected to them. The network I/F 213 and the drive I/F 214 are each one example of a communication interface device. The drive I/F 214 is connected to a plurality of drives 204. The server 101, the management server 105, and the drive box 106 are connected to the network 104 via the network I/Fs 203, 223, and 213, and thus can communicate with one another. The drive 204 may be a versatile drive such as a hard disk drive (HDD) or a solid state drive (SSD). Naturally, the invention may use another type of drive without depending on a drive type or a form factor. FIG. 3 illustrates one example of partitioning of the distributed storage system of this embodiment. The distributed storage system may be partitioned into a plurality of domains 301. In other words, the server 101 and the drive box 106 may be managed in units called "domains". In this configuration, data written to a volume by the app 102 is stored, via the storage control program 103, in any one of the drive boxes 106 belonging to the same domain 301 as the server 101 in which the app 102 operates. For example, data as a write object generated in servers 101 (#000) and 101 (#001) belonging to a domain 301 (#000) is stored in one or both of drive boxes 106 (#000) and 106 (#001) via a subnetwork 54A, and data as a write object generated in servers 101 (#002) and 101 (#003) belonging to a domain 301 (#001) is stored in a drive box 106 (#002). The distributed storage system is thus configured using the domains, so that when a failure occurs in the drive box 106 or the drive 204, influence on server performance can be separated between the domains 301. For example, according to the example shown in FIG. 3, the network 104 includes the subnetworks 54A and 54B (one example of a plurality of sub communication networks). The domain 301 (#000) (one example of each of the plurality of domains) includes the servers 101 (#000) and 101 (#001) and the drive boxes 106 (#000) and 106 (#001) connected to the subnetwork 54A corresponding to the domain 301 (#000), and does not include the servers 101 (#002) and 101 (#003) and the drive box 106 (#002), which are connected to the subnetwork 54A via another subnetwork 54B. Consequently, even if the subnetwork 54A is disconnected from the subnetwork 54B, data written to the drive box 106 can still be read in each of the regions of the domains 301 (#000) and 301 (#001). FIG. 4 illustrates an exemplary configuration of a domain management table 400. The domain management table 400 is to manage, for each domain 301, a server group and a drive box group configuring the domain 301. The domain management table 400 has a record for each domain 301. Each record holds information including a domain #401, a server #402, and a drive box #403. One domain 301 is exemplified ("object domain 301" in description of FIG. 4). The domain #401 indicates an identifier of the object domain 301. The server #402 indicates an identifier of a server 101 belonging to the object domain. The drive box #403 indicates an identifier of a drive box 106 belonging to the object domain. FIG. 5 illustrates one example of drive region management of this embodiment.
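Before turning to drive region management, the domain management table 400 just described can be rendered directly in code. A minimal sketch, with the identifiers taken from the FIG. 3 example; the row layout and lookup helper are illustrative assumptions.

```python
# Rows of domain management table 400: domain #401, server #402, drive box #403.
domain_table = [
    {"domain": "#000", "servers": ["#000", "#001"], "drive_boxes": ["#000", "#001"]},
    {"domain": "#001", "servers": ["#002", "#003"], "drive_boxes": ["#002"]},
]

def drive_boxes_for_server(server: str) -> list:
    # Data written by an app on `server` may be stored only in drive boxes
    # belonging to the same domain, which confines failure impact per domain.
    for row in domain_table:
        if server in row["servers"]:
            return row["drive_boxes"]
    return []

assert drive_boxes_for_server("#001") == ["#000", "#001"]
assert drive_boxes_for_server("#003") == ["#002"]
```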
In this embodiment, a plurality of drives 204 mounted in the drive box 106 are managed while being divided into a plurality of fixed-size regions called "chunks" 501. In this embodiment, a chunk group, which is a storage region as a combination of a plurality of chunks belonging to a plurality of different drives, has the RAID configuration. A plurality of data elements configuring a redundant data set are written into a relevant chunk group according to an RAID level (data redundancy or a data arrangement pattern) in accordance with the RAID configuration of the relevant chunk group. Data protection is performed using a typical RAID/EC technique according to the RAID configuration of the relevant chunk group. In description of this embodiment, terms on the storage region are defined as follows.
- "Volume region" is a partial storage region in the volume.
- "Chunk" is part of the entire storage region provided by one drive 204, where one drive 204 provides a plurality of chunks.
- "Chunk group" is a storage region configured of two or more respective different chunks provided by two or more different drives 204. Here, "two or more different drives 204" providing one chunk group may be closed in one drive box 106, or may straddle two or more drive boxes 106.
- "Page" is a storage region configured of parts of the respective two or more chunks configuring the chunk group. Although the page may be the chunk group itself, one chunk group is configured of a plurality of pages in this embodiment.
- "Strip" is part of the entire storage region provided by one drive 204. One strip stores one data element (user data element or parity). The strip may be a storage region of the minimum unit provided by one drive 204. That is, one chunk may be configured of a plurality of strips.
- "Stripe" is a storage region configured of two or more different strips (for example, two or more strips of the same logical address) provided by two or more different drives 204. One redundant data set may be written to one stripe. That is, two or more respective data elements configuring one redundant data set may be written to two or more strips configuring one stripe. The stripe may be the whole or part of a page. The stripe may be the whole or part of the chunk group. In this embodiment, one chunk group may be configured of a plurality of pages and one page may be configured of a plurality of stripes. The stripes configuring a chunk group may have the same RAID configuration as that of the chunk group.
- "Redundant configuration region" may be one example of the stripe, the page, or the chunk group.
- "Drive region" may be one example of a device region, specifically, for example, may be one example of the strip or the chunk.
- "Redundant data set" includes data made redundant, and may be configured of a plurality of data elements. Here, "data element" may be either "user data element" as at least part of data from the app 102 or "parity" generated based on two or more user data elements. For example, when data associated with a write request is made redundant according to the RAID level 5 (3D+1P), the redundant data set may be configured of four data elements (three user data elements and one parity). For example, the respective four data elements may be written to four different chunks existing in respective four different drives.

FIG. 6 illustrates an exemplary configuration of a chunk group management table 600. The chunk group management table 600 is to manage a configuration and a data protection method (RAID level) of each chunk group.
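As a companion to the definitions above and the table that follows, here is a minimal sketch of a chunk group as a data structure. The class names and the invariant check are illustrative assumptions; the example row mirrors chunk group #000 from FIG. 6.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class Chunk:
    chunk_id: str    # e.g. "C11"
    drive_id: str    # drive 204 providing this fixed-size region

@dataclass
class ChunkGroup:
    group_id: str
    redundancy: str          # data redundancy 602, e.g. "RAID5(3D+1P)"
    chunks: List[Chunk]      # chunk configuration 603

    def __post_init__(self) -> None:
        # A chunk group combines chunks from mutually different drives;
        # otherwise one drive failure could take out several data elements
        # of the same redundant data set.
        drives = [c.drive_id for c in self.chunks]
        assert len(set(drives)) == len(drives), "chunks must be on different drives"

g = ChunkGroup("#000", "RAID5(3D+1P)",
               [Chunk("C11", "drive-1"), Chunk("C21", "drive-2"),
                Chunk("C31", "drive-3"), Chunk("C41", "drive-4")])
print(len(g.chunks))   # -> 4 (three user data elements plus one parity per stripe)
```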
The chunk group management table600is at least part of metadata170as described later. The chunk group management table600has a record for each group. Each record holds information including a chunk group #601, data redundancy602, and a chunk configuration603. One chunk group is exemplified (“object chunk group” in description ofFIG.6). The chunk group #601indicates an identifier of the object chunk group. The data redundancy602indicates data redundancy (data protection method) of the object chunk group. The chunk #603indicates an identifier of a chunk as a component of the object chunk group. As shown in the example ofFIG.6, a chunk group #000 is configured of four chunks (C11, C21, C31, and C41) and protected by RAID 5 (3D+1P). Such a chunk group management table600is shared as at least part of the metadata170by a plurality of servers101. Hence, even when any server101writes data to any chunk group, data protection can be performed in accordance with data redundancy of that chunk group. The data arrangement pattern is often determined depending on the data redundancy and thus not described. In this embodiment, at least one storage control program103(for example, a storage control program103in the representative server101) may dynamically (for example, depending on write quantity into the drive, i.e., depending on the amount of empty space of one or more configured chunk group), newly configure a chunk group and may add information of the newly configured chunk group to the chunk group management table600. Consequently, a chunk group of the optimum data redundancy is expectably configured in correspondence to a situation of the distribute storage system, i.e., data redundancy of the chunk group is expectably optimized. Specifically, for example, the following may be acceptable.A chunk management table may be prepared. The chunk management table may be shared by a plurality of storage control programs103. The chunk management table may indicate, for each chunk, a drive providing a relevant chunk, a drive box having the drive, and a state of the chunk (for example, whether the chunk is in an empty state in which the chunk is not a component of any chunk group).When a condition, under which a chunk group is newly created, is satisfied (for example, when the amount of the empty space of one or more created chunk group becomes less than a predetermined value), the storage control program103(or management program51) may newly create a chunk group configured of two or more different empty chunks provided by respective two or more different drives204. The storage control program103(or management program51) may additionally write information indicating a configuration of the chunk group to the chunk group management table600. The storage control program103may write one or more redundant data set according to data as a write object to the newly created chunk group. Consequently, a chunk group with the optimum data redundancy is expectably created while avoiding depletion of the chunk group.The storage control program103(or management program51) may determine data redundancy (RAID level) of a chunk group to be created according to a predetermined policy. For example, when the amount of empty space in the drive box is equal to or larger than a predetermined value, the storage control program103(or management program51) may determine the data redundancy of a chunk group, which is to be newly created, to be RAID 6 (3D+2P). 
When the amount of empty space in the drive box is less than the predetermined value, the storage control program 103 (or management program 51) may determine the data redundancy of a chunk group, which is to be newly created, to be a data redundancy (for example, RAID 5 (3D+1P)) that is enabled by fewer chunks than in the case where the amount of empty space in the drive box is equal to or larger than the predetermined value. In this embodiment, a plurality of chunk groups may be configured beforehand based on all drives 204 in all drive boxes 106. In this embodiment, as described later, a chunk group on the entire region of a drive may be configured when the drive is added. Such drive addition may be performed on a per-drive basis or on a per-drive-box basis.
FIG. 7A illustrates an exemplary configuration of a page mapping table 700. As described above, in this embodiment, a write region is provided to the app 102 by a unit called a volume. The region of each chunk group is managed by pages, each being a fixed-size region smaller than the chunk group, and is made to correspond to the volume region. The page mapping table 700 is to manage a correspondence relationship between the volume region and the page (a partial region of the chunk group). Although a page is allocated to every region of a volume when the volume is created in this embodiment, a page may be dynamically allocated to a volume region as a write destination using a technique called Thin Provisioning. The page mapping table 700 has a record for each volume region. Each record holds information including a volume #701, a volume region beginning address 702, a chunk group #703, and an offset-within-chunk group 704. One volume region is exemplified ("object volume region" in the description of FIG. 7A). The volume #701 indicates an identifier of the volume containing the object volume region. The volume region beginning address 702 indicates a beginning address of the object volume region. The chunk group #703 indicates an identifier of the chunk group containing the page allocated to the object volume region. The offset-within-chunk group 704 indicates a position of the page allocated to the object volume region (a difference from a beginning address of the chunk group containing the page to a beginning address of the page).
FIG. 7B illustrates an exemplary configuration of an empty page management table 710. The empty page management table 710 is for each server 101 to manage empty pages allocatable to a volume without communication with another server 101. The empty page management table 710 has a record for each empty page. Each record holds information including a chunk group #711 and an offset-within-chunk group 712. One empty page is exemplified ("object empty page" in the description of FIG. 7B). The chunk group #711 indicates an identifier of the chunk group containing the object empty page. The offset-within-chunk group 712 indicates a position of the object empty page (a difference from a beginning address of the chunk group containing the object empty page to a beginning address of the object empty page). Empty pages are allocated to each server 101 by a representative server 101 (or management server 105), and information of the allocated empty pages is added to the table 710. A record of an empty page that is allocated to a volume at volume creation is deleted from the table 710. When a server 101 has insufficient empty pages, the representative server 101 (or management server 105) forms a new chunk group, and a region in the chunk group is added as a new empty page to such a server 101.
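The empty page lifecycle just described lends itself to a short illustration. The following is a minimal sketch under stated assumptions, not the patent's implementation: the names (Page, Server, allocate, representative_refill) are invented for this example, and the tables of FIG. 7A and FIG. 7B are modeled as plain Python collections.

```python
# Minimal sketch (illustrative assumptions, not the patent's API): each server
# consumes empty pages from its own table 710 without talking to other
# servers; a representative server refills the table when it runs low.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass(frozen=True)
class Page:
    chunk_group_id: int   # chunk group # (columns 703/711)
    offset: int           # offset-within-chunk group (columns 704/712)

@dataclass
class Server:
    # empty page management table 710: pages this server may allocate freely
    empty_pages: List[Page] = field(default_factory=list)
    # page mapping table 700: (volume #, region beginning address) -> page
    page_mapping: Dict[Tuple[int, int], Page] = field(default_factory=dict)

    def allocate(self, volume_id: int, region_addr: int) -> Page:
        """Allocate an empty page to a volume region: delete the table 710
        record and add a table 700 record, with no inter-server messages."""
        page = self.empty_pages.pop()
        self.page_mapping[(volume_id, region_addr)] = page
        return page

def representative_refill(server: Server, new_chunk_group_id: int,
                          pages_per_chunk_group: int) -> None:
    """Representative server forms a new chunk group and grants its pages to
    a server whose empty pages ran short."""
    server.empty_pages.extend(
        Page(new_chunk_group_id, offset)
        for offset in range(pages_per_chunk_group))
```

Because each server draws only from pages it has been granted, two servers never race to allocate the same page, which is why page allocation needs no coordination on the I/O path.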
Specifically, in this embodiment, for each server 101, the empty page management table 710 held by the server 101 holds information on pages allocated to the server 101 as pages allocatable to a volume provided in the server 101, among a plurality of pages provided by all drive boxes 106 that can be accessed by the server 101. Page allocation control at volume creation and details of a sequence of empty page control are not described.
FIG. 8 illustrates an example of table arrangement in this embodiment. The server 101A is exemplarily described as one server. The description on the server 101A can be applied to any other server 101 (for example, the server 101B). First, the server 101A may hold a domain management table 400A indicating a plurality of domains as a plurality of partitions of the distributed storage system. The server 101A has a page mapping table 700A related to a volume used by an app 102 operating in the server 101A, and an empty page management table 710A holding information on empty pages allocated to the server 101A as empty pages allocatable to a volume. In other words, the server 101A may not have a full page mapping table of all the servers 101. This is because, if the full page mapping table of all the servers 101 were shared by all the servers 101, the amount of management data owned by each server 101 would be enlarged, and scalability would be affected thereby. However, the page mapping table 700A may be backed up by another server 101 partially configuring the distributed storage system in order to respond to management data loss at a server failure. In this embodiment, "management data" is held by the storage control program 103, and may include the domain management table 400A, the page mapping table 700A, the empty page management table 710A, and the metadata 170A. The metadata 170A may include a chunk group management table 600A. The page mapping table 700A may exist for each volume while having information on one or more volumes provided by a storage control program 103A. Hereinafter, for a certain volume, a server having a page mapping table portion of the volume is referred to as the owner server. The owner server can perform high-speed access to metadata on a volume, and can perform high-speed I/O. This embodiment is therefore described with a configuration where an app using the volume is located in the owner server. However, it is possible to locate the app in a server other than the owner server and perform I/O to/from the owner server. The chunk group management table 600A synchronizes between the servers 101 in each of which the storage control program operates. As a result, the same configuration information (the same content) can be referred to in all the servers 101. This eliminates the need to reconfigure a user data element or parity (in other words, the need for data copy via the network 104) when an app and a volume are migrated from the server 101A to another server 101B. Data protection can also be continued in a destination server of the app and the volume even without such a reconstruction (data copy). The storage control program 103 may refer to the domain management table 400A and the chunk group management table 600A and specify a chunk group, as a data write destination, provided from one or more drive boxes 106 in the same domain.
The storage control program 103 may refer to the domain management table 400A and the chunk group management table 600A, specify two or more empty chunks provided from one or more drive boxes 106 in the same domain (two or more empty chunks provided from two or more different drives), configure a chunk group of such two or more empty chunks (at this time, for example, determine the data redundancy of that chunk group depending on a situation of the distributed storage system), and add information of the chunk group to the chunk group management table 600A. Which chunk is provided, and which drive box 106 has the drive 204 that provides that chunk, may be specified, for example, according to either of the following.
Information of the drive 204 providing a relevant chunk and information of the drive box 106 having the drive 204 are added to the chunk group management table 600 for each chunk.
An identifier of a chunk includes an identifier of the drive 204 providing the chunk and an identifier of the drive box 106 having the drive 204.
The storage control program 103A (one example of each of two or more storage control programs 103) manages the page mapping table 700A (one example of mapping data) on a volume provided by the storage control program 103A itself. For example, in this embodiment, when a volume is newly created, the storage control program 103 may specify empty pages in the number corresponding to the whole volume (pages that are still not allocated to any volume region, i.e., in an allocatable state) and allocate the empty pages to the volume. The storage control program 103A may register, in the page mapping table 700A, that the pages are allocated to the volume regions. The storage control program 103 may write a redundant data set of data associated with a write request into a chunk group containing a page allocated to a volume region as a write destination. Alternatively, even if a volume is newly created, the storage control program 103A may not allocate an empty page to that volume. When the storage control program 103A receives a write request specifying the volume and identifies from the page mapping table 700A that no page is allocated to the volume region specified by the write request, the storage control program 103A may allocate a page to the volume region, register in the page mapping table 700A that the page is allocated to the volume region, and write the redundant data set of the data associated with the write request to the chunk group containing the page. A failure is assumed to occur in any one of the servers 101, for example, in the server 101A. In such a case, for each of one or more volumes provided by the storage control program 103A in the server 101A, the storage control program 103B in the server 101B selected as a restoration destination of the relevant volume restores the volume based on a page mapping table 700B on the volume (for example, a page mapping table received (copied) from the storage control program 103A), and provides the restored volume to the app 102B. The storage control program 103B can refer to the page mapping table 700B to read data according to one or more redundant data sets from a page allocated to a volume region in the restored volume. In other words, for each of one or more volumes provided by the storage control program 103A, even if the owner server of the volume (the server in charge of I/O to/from the volume) is changed from the server 101A to the server 101B, the server 101B can access data of the volume without data migration via the network 104.
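The behavior just described, i.e., changing the owner server without moving data, can be summarized in a few lines. The sketch below continues the illustrative Server class from the earlier sketch and is likewise an assumption for illustration only: migrating ownership copies the page mapping records of one volume, never the pages' contents.

```python
# Sketch: ownership migration copies metadata (the page mapping table portion
# for one volume), not data; the redundant data sets stay in the drive boxes.
def migrate_ownership(src: Server, dst: Server, volume_id: int) -> None:
    portion = {key: page for key, page in src.page_mapping.items()
               if key[0] == volume_id}
    dst.page_mapping.update(portion)   # copied table portion (e.g., 700B)
    for key in portion:                # source drops its records; the pages
        del src.page_mapping[key]      # and their data are untouched

def read_after_migration(dst: Server, volume_id: int,
                         region_addr: int) -> Page:
    """The destination resolves volume regions locally, so no data crosses
    the network 104 when ownership changes."""
    return dst.page_mapping[(volume_id, region_addr)]
```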
The distributed storage system of this embodiment is one example of the SDS/HCI system. The chunk group management table 600 is shared by a plurality of servers 101, the page mapping table 700 (one example of domain mapping data) on a migration object volume is copied from a source server 101 (one example of a first computer) to a destination server 101 (one example of a second computer), and the ownership of the migration object volume (control, i.e., an authority to perform input/output of data to/from the migration object volume) is migrated from the source server 101 to the destination server 101. Consequently, a migration object volume can be migrated between the servers 101 without copying data written to the migration object volume between the servers 101. Although so-called thin provisioning, in which a page (one example of a storage region in a physical storage region provided by one or more drives 204) is dynamically allocated to a volume, is used in the above description, a storage region in a physical storage region may be allocated to a volume beforehand, and the domain mapping data, including the page mapping table 700 as one example, may be data indicating a correspondence relationship between a volume address and an address of the physical storage region. In this embodiment, not only for the above-described volume, but also for a volume to which a storage function is applied, a migration object volume can be migrated between the servers 101 without copying data written to the migration object volume between the servers 101, and such migration can be performed while maintaining functionality of the storage function. In this embodiment, a "storage function" is a function for which control data, being metadata other than the page mapping table 700, is required for I/O of data to/from a volume in place of, or in addition to, the page mapping table 700. One or more storage functions may be provided for the distributed storage system. In this embodiment, two or more storage functions are provided. Examples of the storage function include an auto tiering function, an asynchronous remote copy function, and a snapshot function. In this embodiment, each server 101 includes programs executed by the processor 201 of the server 101, including a hierarchy relocation program, a migration program, a journal transfer program, a snapshot acquisition program, and a restoration program as described later, in place of, or in addition to, the app 102 and/or the storage control program 103. The hierarchy relocation program is for the auto tiering function. The journal transfer program is for the asynchronous remote copy function. The snapshot acquisition program and the restoration program are for the snapshot function. The migration program is required for any of the auto tiering function, the asynchronous remote copy function, and the snapshot function. When any one of the storage functions does not exist, the program for such a storage function may also not exist. At least one of the hierarchy relocation program, the migration program, the journal transfer program, the snapshot acquisition program, and the restoration program may be contained in the storage control program. As illustrated in FIG. 27, a migration program 2700 identifies a storage function applied to a volume specified as a migration object.
According to a result of such identification, the migration program 2700 determines a volume to be a migration object and control data to be copied to a destination server, in addition to the specified storage owner volume (a volume of which the ownership is owned by a source server and to which the storage function is applied). For example, when the storage function is the auto tiering function, the migration program 2700 performs processing as illustrated in FIG. 12. When the storage function is the asynchronous remote copy function, the migration program 2700 performs processing as illustrated in FIG. 17. When the storage function is the snapshot function, the migration program 2700 performs processing as illustrated in FIG. 24. Consequently, the ownership of a volume can be promptly migrated between the servers 101 (volume migration) without copying data written to the migration object volume between the servers 101, while maintaining functionality of the storage function appropriately depending on the type of the storage function. The storage functions are now classified for detailed description of this embodiment. In the following description, the source server is referred to as "server 101A" while the destination server is referred to as "server 101B" for ease of explanation. In the following description, the source server 101A is typically exemplified.
Case of Storage Function Being Auto Tiering Function
FIG. 9 illustrates an outline of an exemplary migration of an auto tiering owner volume. The term "auto tiering owner volume" means a storage owner volume to which the auto tiering function is applied (a volume of which the ownership is owned by the server 101A and to which the storage function is applied). The auto tiering function is a storage function of performing auto tiering on the storage owner volume of the server 101A. The term "auto tiering" means that the drive 204 as a location destination of data written to a volume region is changed depending on an I/O frequency of the volume region, at regular intervals or every time a predetermined event occurs. Specifically, the auto tiering function is the following function: when multiple types of drives 204 having different I/O performances, such as the SSD 204S and the HDD 204H, exist in the drive box 106, on the basis of an I/O frequency of a volume region (the page allocated to the volume region) of each volume, data in that page is relocated to a page based on a drive 204 appropriate for the I/O frequency of the volume region, to optimize cost performance of the entire distributed storage system. In a configuration of a distributed storage system in which each server 101 manages only a volume the ownership of which is owned by the server 101 itself, information indicating I/O statistics (statistics of I/O frequencies) of each volume to which the auto tiering function is applied is owned only by the owner server 101 of that volume. In such a configuration, while update of an I/O statistical table portion (part of a table showing the I/O statistics) or relocation necessity determination based on the I/O statistical table portion can be immediately performed at low cost, when a volume is migrated to another server 101, such an I/O statistical table portion (the collection of records corresponding to the volume as a migration object) also needs to be appropriately copied. When that I/O statistical table portion is not copied, a destination server 101 must collect I/O statistics again to determine an appropriate drive, which temporarily impairs functionality.
In the example shown in FIG. 9, a plurality of SSDs 204S and a plurality of HDDs 204H are located in the drive box 106. Each server 101 uses, for each volume, an I/O statistical table portion (one example of I/O statistical data) corresponding to the volume to appropriately relocate data in the SSD 204S or the HDD 204H. However, the arrangement of the drives 204 usable in this embodiment is not limited to this. For example, in a usable configuration, one type of drive 204 (for example, the SSD 204S) is located in the drive box 106, while another type of drive 204 (for example, the HDD 204H) is a built-in drive of the server 101. In another usable configuration, the respective different types of drives 204 are located in different drive boxes 106. Although SSD and HDD are each used as a drive type in this embodiment, any types of drives having different I/O performances may be used in the case of using the auto tiering function. For example, in one configuration, auto tiering can be performed between an NVMe device and a serial ATA (SATA) SSD device. Further, although auto tiering between two types of drives is exemplified in this embodiment, the same effects can be exhibited in the case of auto tiering between three or more types of drives.
FIG. 10 illustrates an exemplary configuration of an I/O statistical table 1000. The server 101A holds the I/O statistical table 1000 in the memory 202 of the server 101A. The I/O statistical table 1000 has a record (I/O statistical record) for each volume. Each record holds information including a volume #1001, a page-within-volume #1002, IOPS 1003, and a hierarchy determination result 1004. One volume is exemplified ("object volume" in the description of FIG. 10). The volume #1001 indicates an identifier of the object volume. The page-within-volume #1002 indicates an identifier of a page within the object volume (a volume region corresponding to the size of a page in the object volume). The IOPS 1003 indicates I/O per second (IOPS) as one example of I/O frequency in a page within the object volume; specifically, it indicates IOPS of the page within the object volume, calculated from the number of I/Os received by the page within the object volume for a certain period in the past. The hierarchy determination result 1004 indicates the type of drive 204 appropriate as a location destination of data within the page allocated to the page within the object volume. IOPS can be acquired by a typical method.
FIG. 11 illustrates one example of a procedure of processing executed by the hierarchy relocation program 1100. The hierarchy relocation program 1100, based on IOPS of each page-within-volume of a volume, determines the drive type of an appropriate location destination of data in the page allocated to the page-within-volume, and can perform relocation of data between the drives 204 as necessary. One auto tiering owner volume in the server 101A is exemplified in the description of FIG. 11. The hierarchy relocation program 1100 regularly executes the processing exemplified in FIG. 11. First, the hierarchy relocation program 1100 specifies IOPS of all pages-within-volume of the auto tiering owner volume from all records corresponding to the auto tiering owner volume in the I/O statistical table 1000, and sorts the pages-within-volume in descending order of IOPS (S1101). The maximum number of SSD pages (pages based on the SSD 204S) allocatable to a relevant server 101 is assumed to be determined for each server 101.
The hierarchy relocation program 1100 is assumed to allocate SSD pages within the range of the maximum number of SSD pages beforehand allocated to its own server 101 (server 101A) in descending order of IOPS, and thus sets the hierarchy determination result 1004 of each page-within-volume, to which an SSD page is to be allocated, to "SSD" (S1102). The hierarchy relocation program 1100 sets the hierarchy determination result 1004 to "HDD" for each page-within-volume other than the respective pages-within-volume to which the SSD pages corresponding to the maximum number of SSD pages are to be allocated (S1103). That is, an HDD page (a page based on the HDD 204H) is allocated to a page-within-volume to which an SSD page is not allocated. Finally, the hierarchy relocation program 1100 determines, for each page-within-volume of the auto tiering owner volume, whether the type of the drive 204 as a base of the page allocated to the page-within-volume is the same as the drive type indicated by the hierarchy determination result 1004, based on the page mapping table 700A (S1104). The hierarchy relocation program 1100 performs data relocation for a page-within-volume of which the determination result of S1104 is false (S1105). Specifically, the hierarchy relocation program 1100 reads data from the page allocated to such a page-within-volume, writes the data to a page based on a drive of the drive type indicated by the hierarchy determination result 1004, and allocates such a destination page to the page-within-volume (updates the page mapping table 700A). Consequently, for example, when the page in which the data is stored is an SSD page and the hierarchy determination result 1004 is "HDD", the data in such an SSD page is relocated to an HDD page. Such data relocation is unnecessary for a page-within-volume of which the determination result of S1104 is true.
FIG. 12 illustrates one example of a procedure of processing executed by the migration program 2700 to migrate the auto tiering owner volume. The migration program 2700 can migrate, for a specified auto tiering owner volume, the ownership between the servers 101 without data copy while maintaining functionality of the auto tiering function. The migration program 2700 first determines a destination server 101 of the specified auto tiering owner volume (S1201). At this time, as the destination server 101 for the migration object volume, the migration program 2700 may preferentially select a server 101 whose number of empty pages is similar to the sum of the total number of pages of which the hierarchy determination result 1004 is "SSD" and the total number of pages of which the hierarchy determination result 1004 is "HDD" (i.e., a number of empty pages similar to the maximum number of allocatable pages for each drive type). The server 101B is assumed to be selected as the destination of the migration object volume. Subsequently, the migration program 2700 copies the table portion (record group) corresponding to the migration object volume (the specified auto tiering owner volume) in the page mapping table 700A to the destination server 101B, and copies the table portion (record group) corresponding to the migration object volume in the I/O statistical table 1000 to the destination server 101B (S1202). The migration program 2700 migrates the ownership of the migration object volume from the server 101A to the server 101B (S1203). As described above, with the auto tiering owner volume, the I/O statistical table portion corresponding to that volume is copied to the destination server 101B.
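The hierarchy determination of S1101 through S1103 can be illustrated compactly. The sketch below is an illustration under assumptions (the function name and the dictionary shapes are invented for this example), not the patent's code: it sorts pages-within-volume by IOPS and assigns "SSD" up to the server's SSD page budget.

```python
# Sketch of the hierarchy determination of FIG. 11 (S1101-S1103).
from typing import Dict

def determine_tiers(iops_by_page: Dict[int, float],
                    max_ssd_pages: int) -> Dict[int, str]:
    """Return a hierarchy determination result 1004 per page-within-volume #."""
    ranked = sorted(iops_by_page, key=iops_by_page.get, reverse=True)  # S1101
    return {page_no: ("SSD" if rank < max_ssd_pages else "HDD")   # S1102 and
            for rank, page_no in enumerate(ranked)}               # S1103

# Example: with a budget of two SSD pages, the two hottest pages get "SSD".
assert determine_tiers({0: 500.0, 1: 20.0, 2: 800.0}, max_ssd_pages=2) == \
    {2: "SSD", 0: "SSD", 1: "HDD"}
```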
As a result of copying the I/O statistical table portion, the destination server 101B can reduce the warmup time required for determining an appropriate drive type (the time for acquiring appropriate I/O frequency statistics) for each page-within-volume of a volume migrated to the server 101B, and can migrate the ownership of the volume without copying data written to the volume between the servers 101 while maintaining functionality of auto tiering.
Case of Storage Function Being Asynchronous Remote Copy Function
FIG. 13 illustrates an outline of an exemplary migration of a primary volume. The remote copy function creates a duplicate of a volume on a primary site 1301P in a server 101 of another storage cluster on a secondary site 1301S. A plurality of volume duplicates can be created in servers 101 of different storage clusters. The term "primary volume" means a copy source volume, and "secondary volume" means a copy destination volume. For example, in the primary site 1301P, a plurality of servers 101P are connected to the drive box 106A via a network 104A. Similarly, in the secondary site 1301S, a plurality of servers 101S are connected to the drive box 106B via a network 104B. The remote copy function includes a "synchronous remote copy function", meaning that when a write request to a primary volume 130P occurs, data is written to both the primary volume 130P and a secondary volume 130S and then a response is returned to the write request, and an "asynchronous remote copy function", meaning that data is written to the primary volume 130P and then a response is returned to the write request regardless of whether the data has been written to the secondary volume 130S. Herein, the asynchronous remote copy function is focused on. In the asynchronous remote copy function, a journal containing data to be written to the secondary volume on the secondary site 1301S is written into a buffer region called a journal volume 130J, and then the data is transferred to the secondary site 1301S asynchronously to processing of the write request. In a distributed storage system in which each server 101 manages only a volume the ownership of which is owned by the server 101 itself, the server 101A needs to migrate the primary volume 130P, to which the asynchronous remote copy function is applied, to the server 101B while maintaining functionality of the asynchronous remote copy function without data copy between the servers 101. To this end, the server 101B needs to take over from the server 101A the journal containing the data written to the primary volume 130P, and transfer the data in the journal to the server 101S having the secondary volume 130S (the server 101S on the secondary site 1301S). In this embodiment, each server 101 has one or more journal volumes 130J. The server 101 stores a journal, which contains data written to a primary volume the ownership of which is owned by the server 101 itself, in the journal volume 130J of the server 101. With the journal volume 130J, a relationship between a volume region (page-within-volume) and a page is managed by the page mapping table as shown in FIG. 7A, and the journal is stored in a drive within the drive box 106A. The journal written to the journal volume 130JA (a journal written to a drive through the journal volume 130JA) can be read from the server 101A having the ownership of the journal volume 130JA. In other words, such a journal cannot be read from a server 101 having no ownership of the journal volume 130JA.
FIG. 14 illustrates an exemplary configuration of a remote copy management table 1400. The server 101A holds the remote copy management table 1400 in the memory 202 of the server 101A.
The remote copy management table 1400 is one example of remote copy management data indicating a relationship between the primary volume and the secondary volume, and has a record for each primary volume. Each record holds information including a primary volume #1401, a storage cluster #1402, a server #1403, a secondary volume #1404, a remote copy state 1405, and a journal volume #1406. One primary volume is exemplified ("object primary volume" in the description of FIG. 14). The primary volume #1401 indicates an identifier of the object primary volume. The storage cluster #1402, the server #1403, and the secondary volume #1404 each show an identifier related to the secondary volume pairing with the object primary volume. Specifically, the storage cluster #1402 shows an identifier of the storage cluster (volume group) containing that secondary volume. The server #1403 shows an identifier of the server having the secondary volume pairing with the object primary volume. The secondary volume #1404 shows an identifier of the secondary volume within that storage cluster. The remote copy state 1405 indicates a state of remote copy in the pair of the object primary volume and the secondary volume. The remote copy state 1405 shows values including "Copy" (meaning that copy is being performed), "Pair" (meaning that copy is completed), and "Suspend" (meaning that copy is suspended). The journal volume #1406 indicates an identifier of the journal volume with which the object primary volume is associated (the journal volume as a write destination of a journal containing data to be written to the object primary volume).
FIG. 15 illustrates an exemplary configuration of a journal mapping table 1500. The server 101A holds a journal mapping table 1500 (one example of journal mapping data) in the memory 202 of the server 101A for each journal volume 130JA owned by the server 101A. One journal volume 130JA is exemplified. The journal mapping table 1500 manages which secondary volume address on the secondary site a journal written to the journal volume 130JA is to be written to. The journal mapping table 1500 has a record for each volume region configuring the journal volume 130J. Each record holds information including a volume region #1501, a secondary volume #1502, a reflection destination address 1503, and a transfer state 1504. One volume region is exemplified ("object volume region" in the description of FIG. 15). The volume region #1501 indicates an identifier (address) of the object volume region. The secondary volume #1502 indicates an identifier of the secondary volume as a reflection destination of the journal stored in the object volume region, and the reflection destination address 1503 indicates an address of a volume region in the secondary volume. In place of or in addition to the secondary volume #1502 and the reflection destination address 1503, an identifier of the primary volume, to which the data contained in the journal written to the object volume region is written, and an address of the write destination volume region in the primary volume may be stored in the journal mapping table 1500. A reflection destination (copy destination) of the data in the journal written to the object volume region may be specified from the identifier of the primary volume, the address of the write destination volume region in the primary volume, and the remote copy management table 1400. The transfer state 1504 indicates a state of the object volume region.
Values of the transfer state 1504 include "invalid", "valid", "handover/server identifier", and "proxy/server identifier". The term "invalid" means a state (empty state) where a journal can be written to the object volume region. The term "valid" means a state where a journal has been written to the object volume region, and the data in the journal should be transferred to a secondary volume. The term "handover/server identifier" means a state where reflection of a journal written to the object volume region is handed over to the server 101 of that server identifier. The term "proxy/server identifier" means a state where reflection of a journal written to the object volume region is handed over from the server 101 of that server identifier. Here, "reflection of a journal" may mean that a journal containing data written to a primary volume (for example, the data in the journal) is transmitted to a computer having a secondary volume pairing with the primary volume; specifically, for example, it may mean that the data in the journal is written to the secondary volume, or that the journal is transmitted to a computer having the secondary volume and the computer stores the journal in a journal volume of the computer and writes the data in the journal to the secondary volume, or that the computer receives the data in the journal and writes the data to the secondary volume. The journal may contain not only the data written to the primary volume, but also journal metadata (for example, data containing information, such as a sequence number or a timestamp, by which the write order can be specified). The data in the journals may be written to the secondary volume in the write order of the journals.
FIG. 16 illustrates an example of a procedure of processing executed by the storage control program 103A. The storage control program 103A can store received write data in the primary volume 130P and create a journal containing such data. The storage control program 103A receives a write request to the primary volume 130P and write data (S1601). The storage control program 103A refers to the page mapping table 700A, and writes the received write data to a drive as a base of the page corresponding to the volume region as a write destination (S1602). Subsequently, the storage control program 103A refers to the remote copy management table 1400, and searches for the journal volume 130J corresponding to the primary volume 130P of the write destination (S1603). The storage control program 103A refers to the journal mapping table 1500 corresponding to the found journal volume 130J and searches for a record of which the transfer state 1504 is "invalid" (a writable volume region) (S1604). The storage control program 103A writes a journal containing the data written in S1602 to the found volume region (S1605). Finally, the storage control program 103A updates the journal mapping table 1500 referred to in S1604; specifically, it writes "valid" into the transfer state 1504 of the record corresponding to the found volume region (S1606).
FIG. 17 illustrates an example of a procedure of processing executed by the migration program 2700 to migrate the primary volume 130P. The migration program 2700 can migrate the ownership of a specified primary volume between the servers 101 without data copy between the servers 101 while maintaining functionality of the asynchronous remote copy function. The migration program 2700 first determines a destination server 101 of the specified primary volume 130P (S1701).
For example, the destination server 101 is determined from the servers in the primary site 1301P in which the server 101A having the ownership of the primary volume 130P exists. For example, the server 101B is determined as the destination server. Subsequently, the migration program 2700 specifies the volume regions (regions within a journal volume) in which journals containing the data written to the primary volume 130P are written, based on the remote copy management table 1400 and/or the journal mapping table 1500, and copies the journal mapping table portion (a record group of the journal mapping table 1500, one example of region control data) corresponding to the specified volume regions to a journal mapping table of the destination server 101B (S1702). The migration program 2700 copies the record (remote copy management portion) corresponding to the primary volume 130P as the migration object in the remote copy management table 1400 to the destination server 101B (S1703). The migration program 2700 copies the page mapping table portion in the page mapping table 700A corresponding to the primary volume as the migration object (and to the volume regions indicated by the journal mapping table portion copied in S1702) to the destination server 101B (S1704), and migrates the ownership of the primary volume 130P as the migration object to the server 101B (S1705). In S1702, the migration program 2700 writes the transfer state 1504 "handover/server 101B identifier" to each source record (record of the journal mapping table 1500), and writes the transfer state 1504 "proxy/server 101A identifier" to each destination record (record of the journal mapping table 1500 of the destination server 101B). Consequently, the authority to reflect the journals is handed over from the server 101A to the server 101B, and the server 101B reflects the journals by proxy of the server 101A.
FIG. 18 illustrates an example of a procedure of processing executed by the journal transfer program 1800. The journal transfer program 1800 can refer to the transfer state 1504 of a journal region of a journal volume and reflect (transfer) an unreflected journal to a server 101 as a reflection destination. This program 1800 is executed asynchronously to the processing performed in response to reception of a write request, and is continuously executed until any unreflected journal is eliminated (until the transfer state 1504 becomes "invalid" in every volume region in the journal volume), for example. The journal transfer program 1800 first refers to the journal mapping table 1500, and searches for a record in which the transfer state 1504 is a non-transferred state ("valid" or "proxy") (S1801). If a record is found (S1802: YES), processing is passed to S1803. If no record is found (S1802: NO), processing is ended. Subsequently, the journal transfer program 1800 reads a journal from the volume region (volume region in the journal volume 130JA) indicated by the found record, and transfers the data in the journal to the reflection destination (the storage cluster #1402, the server #1403) indicated by the record while designating the secondary volume #1404 and a reflection destination address (an address specified from the journal mapping table 1500) (S1803). At this time, when the transfer state 1504 indicated by the record is "proxy/server identifier" (S1804: YES), the journal transfer program 1800 transmits a transfer completion notice designating the volume region #1501 indicated by the record to the server 101 (the server 101 of that server identifier) as the handover source of the journal (S1805).
Upon reception of the transfer completion notice, the handover source server 101 sets the transfer state 1504 to "invalid" in the record of the volume region #1501 designated by the transfer completion notice. The journal transfer program 1800 cancels the record that has been transferred (sets the transfer state 1504 of that record to "invalid"), and sleeps for a certain time (S1806). When another record of which the transfer state 1504 is "valid" or "proxy" exists, the journal transfer program 1800 performs S1801 on that record. As described above, with a primary volume to which the asynchronous remote copy function is applied, the location information (journal mapping table portion) of the temporary buffer data (journals) to be reflected to the secondary volume is copied between the servers 101. Even if the source server has the ownership of the journal volume in which a journal is stored, the authority of read and reflection (transfer) of a journal, which contains data written to the migrated primary volume, is handed over to the destination server of the primary volume. In place of the source server, the destination server can read such a journal and reflect the journal based on the copied journal mapping table portion (based on the handed-over authority) through a journal volume of the destination server. As a result, the ownership of a primary volume can be migrated between the servers 101, without copying data written to the primary volume as a migration object between the servers 101, while maintaining functionality of the asynchronous remote copy function. Further, since the transfer completion notice, which specifies the volume region # of the reflected journal, is transmitted from the destination server to the source server, the source server can release a record by changing its transfer state 1504 from "handover" to "invalid".
Case of Storage Function Being Snapshot Function
FIG. 19 illustrates an outline of an exemplary migration of a member volume. The snapshot function is to acquire a duplicate (snapshot) of a volume at a certain past time. The snapshot function manages a difference between the data at the present time and the data at the snapshot acquisition time of the acquisition source volume (parent volume) of a snapshot. The snapshot data can be read/written by a host or the like as a volume (a snapshot volume, being a volume as a snapshot of the parent volume) different from the parent volume of the snapshot. In the example of FIG. 19, the server 101A can acquire a plurality of snapshot volumes 130Y (for example, 130Y1 and 130Y2), assuming that an owner volume (a volume the ownership of which is owned by the server 101A) is a base volume 130X. The server 101A can further acquire a snapshot volume 130Y3 from the acquired snapshot volume 130Y2. As a result, the inter-relation (configuration) of a volume group 1901 including the base volume 130X and one or more snapshot volumes 130Y can be represented in the form of a tree structure (snapshot tree), in which the one or more snapshot volumes are each a node (intermediate node or leaf node) other than the root node while the base volume is defined as the root node. The snapshot volume 130Y can be subjected to a snapshot operation or a restoration operation depending on its snapshot state. In this embodiment, "member volume" means a volume corresponding to a node in the snapshot tree, i.e., the base volume 130X or a snapshot volume 130Y. In the snapshot tree, the base volume (the volume corresponding to the root node) is a volume as a direct or indirect base of one or more snapshot volumes.
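Before turning to the tables that implement it, the snapshot tree itself can be sketched in a few lines. SnapshotRecord and members_of_tree below are hypothetical names introduced for illustration only; the walk mirrors how an overall migration would gather every member volume in a dependence relationship with a specified volume (see the selection between single and overall migration described later).

```python
# Sketch of the snapshot tree of FIG. 19: every member volume records its
# parent (management table 2000, column 2002), the base volume being the root.
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class SnapshotRecord:                 # one record of management table 2000
    volume_id: int                    # volume # 2001
    parent_id: Optional[int]          # parent volume # 2002 (None for base)
    state: str = "acquired"           # snapshot state 2004

def members_of_tree(records: Dict[int, SnapshotRecord],
                    volume_id: int) -> List[int]:
    """All member volumes of the tree containing volume_id: climb to the
    base volume, then collect its direct and indirect children."""
    root = volume_id
    while records[root].parent_id is not None:
        root = records[root].parent_id
    members, frontier = [], [root]
    while frontier:
        vol = frontier.pop()
        members.append(vol)
        frontier.extend(r.volume_id for r in records.values()
                        if r.parent_id == vol)
    return members

# Example tree like FIG. 19: base 0 -> snapshots 1 and 2; 2 -> 3.
tree = {v.volume_id: v for v in (
    SnapshotRecord(0, None), SnapshotRecord(1, 0),
    SnapshotRecord(2, 0), SnapshotRecord(3, 2))}
assert sorted(members_of_tree(tree, 3)) == [0, 1, 2, 3]
```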
In a configuration of the distributed storage system in which each server 101 manages only a volume the ownership of which is owned by the server 101 itself, the differential data (differential data between the snapshot volume and the parent volume) managed by the snapshot function and the address information (information indicating a relationship between a volume region in a snapshot volume and a region as a reference destination of the volume region), i.e., the control data (metadata) on the base volume 130X or the snapshot volume 130Y, are owned only by the owner server 101. In such a configuration, another server 101 cannot refer to such control data (for example, the differential information and the address information). In one comparative example, therefore, if the ownership of some snapshot volume is migrated between servers, while the ownership of the snapshot volume is owned by a destination server, the ownership of a volume as a reference destination of that snapshot volume is owned by a source server, and thus the snapshot volume cannot be restored. As described above, migration of the ownership of a snapshot volume between servers impairs functionality of the snapshot function. This embodiment makes it possible to solve such a problem, i.e., to migrate the ownership of a snapshot volume between the servers 101 while maintaining functionality of the snapshot function.
FIG. 20 illustrates an exemplary configuration of a snapshot management table 2000. The server 101A holds the snapshot management table 2000 in the memory 202 of the server 101A. The snapshot management table 2000 is one example of snapshot management data, and indicates a snapshot tree (i.e., dependencies between member volumes). The snapshot management table 2000 has a record for each member volume. Each record holds information including a volume #2001, a parent volume #2002, a snapshot time 2003, and a snapshot state 2004. One member volume is exemplified ("object member volume" in the description of FIG. 20). The volume #2001 indicates an identifier of the object member volume. The parent volume #2002 indicates an identifier of the parent volume of the object member volume (the volume of the snapshot acquisition source). A member volume for the parent volume (a snapshot of the parent volume) is a child volume. The snapshot time 2003 indicates the time at which the snapshot as the object member volume was created. The snapshot state 2004 indicates a state of the snapshot as the object member volume. A value of the snapshot state 2004 includes "being acquired" (meaning that a snapshot is being acquired) or "acquired" (meaning that a snapshot has been acquired).
FIG. 21 illustrates an exemplary configuration of a snapshot mapping table 2100. The snapshot mapping table 2100 is one example of snapshot mapping data, and indicates a correspondence relationship between a volume and snapshot data (data as a volume snapshot). The snapshot mapping table 2100 has a record for each volume region in the snapshot volume. Each record holds information including a volume #2101, an address-within-volume 2102, a storage device #2103, and an address-within-storage device 2104. One volume region is exemplified ("object volume region" in the description of FIG. 21). The volume #2101 indicates an identifier of the snapshot volume containing the object volume region. The address-within-volume 2102 indicates an address of the object volume region.
The storage device #2103 indicates an identifier of the storage device having the reference destination region of the object volume region (the parent volume, a volume higher than the parent volume, or the drive 214). The address-within-storage device 2104 indicates an address (a volume region address or a page address) of the reference destination region of the object volume region.
FIG. 22 illustrates an example of a procedure of processing executed by the snapshot acquisition program 2200. The snapshot acquisition program 2200 can acquire a snapshot of a specified volume. First, upon receiving a snapshot acquisition request (S2201), the snapshot acquisition program 2200 adds a record, of which the parent volume is the specified volume, to the snapshot management table 2000, and sets the snapshot state 2004 of the record to "being acquired" (S2202). Subsequently, the snapshot acquisition program 2200 copies the snapshot mapping table portion (record group) corresponding to the parent volume of the specified volume (S2203). That is, a reference destination of a volume region in the specified volume is a volume region in the parent volume. Finally, the snapshot acquisition program 2200 sets the snapshot state 2004 of the added record in the snapshot management table 2000 to "acquired" (S2204).
FIG. 23 illustrates an example of a procedure of processing executed by the storage control program 103A. The storage control program 103A can manage a difference between write data for a volume region in a snapshot volume and the data in a reference destination region of the volume region. First, when receiving a write request specifying a member volume (S2301), the storage control program 103A refers to the snapshot mapping table 2100 (and the page mapping table 700A), and determines an empty drive region (a storage region in a physical storage region) (S2302). The storage control program 103A then stores the data associated with the write request in the drive region (S2303), registers the drive region, as a reference destination of the write destination volume region, in the snapshot mapping table 2100, and returns a completion response to the write request (S2304).
FIG. 24 illustrates an example of a procedure of processing executed by the migration program 2700 to migrate a member volume. For a specified member volume, the migration program 2700 can migrate the ownership of a member volume as a migration object between the servers 101, without copying data written to the member volume as the migration object, while maintaining functionality of the snapshot function. For the specified member volume, the migration program 2700 selects either migration of the overall snapshot tree (overall migration) or migration of the ownership of only the specified member volume (single migration). Such selection may be performed according to a user instruction (for example, an instruction from the management server 105) or according to a previously set certain policy, for example. It is to be noted that "single migration" may be not only migration of the ownership of only the specified member volume but also migration of the ownerships of the specified member volume and member volumes lower than the specified member volume. First, the migration program 2700 determines a destination server 101 of the specified member volume (S2401). In this case, the server 101B is assumed to be determined. Subsequently, the migration program 2700 selects the range to which the member volumes as migration objects belong (i.e., selects either single migration or overall migration) (S2402).
When single migration is selected, the migration program 2700 defines only the specified member volume as the migration object. When overall migration is selected, the migration program 2700 refers to the snapshot management table 2000, and defines every member volume in a dependence relationship with the specified member volume as a migration object (S2403). For each migration-object member volume, the migration program 2700 copies, to the destination server 101B, the snapshot management table portion and the snapshot mapping table portion corresponding to the member volume (and the page mapping table portion as necessary) (S2404). Finally, the migration program 2700 migrates the ownership of every migration-object member volume to the destination server 101B (S2405).
FIG. 25 illustrates an example of a procedure of restoration processing. A restoration program of the destination server 101B receives a restoration instruction specifying a snapshot volume as a restoration object (S2501). The restoration program of the destination server 101B refers to the snapshot management table 2000 in the destination server 101B, and determines whether the parent volume of the snapshot volume as the restoration object exists in the destination server 101B (S2502). In the case of single migration, since a reference destination of the snapshot volume remains in the source server, the determination result is false in S2502. When the determination result of S2502 is false (S2502: NO), the restoration program of the destination server 101B causes the migration program 2700 of the destination server 101B to return the restoration object volume to the server 101A (the source server 101A of the restoration object volume) in which the parent volume (reference destination volume) of the restoration object volume exists (S2503). This means migration of the snapshot volume as the restoration object (migration of the ownership of the volume) from the server 101B to the server 101A. In the server 101A or 101B in which the snapshot volume as the restoration object exists, the restoration program restores the snapshot volume as the restoration object to a normal volume (S2504). The term "normal volume" means a volume in which the reference destination region of each volume region is a drive region. Specifically, in S2504, the restoration program copies, to the record (record in the snapshot mapping table) of each volume region in the restoration object volume, the record indicating a drive region as a reference destination in the parent volume of the volume, so that the reference destination region of each volume region in the snapshot volume as the restoration object is a drive region. After S2504, if S2503 has been performed (S2505: YES), the restoration program of the server 101A causes the migration program of the server 101A to return (migrate) the restored volume to the server 101B (S2506). This means migration of the restored volume (migration of the ownership of that volume) from the server 101A to the server 101B. Not only for restoration but also for another operation associated with a snapshot, a single-migrated volume may be returned to the source server 101A, subjected to predetermined processing in the source server 101A, and then returned to the destination server 101B.
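The restoration step S2504 can be illustrated with a small sketch. It assumes, purely for illustration, that the snapshot mapping table 2100 is a dictionary whose reference destinations are either a drive region (a storage device identifier plus an address) or another volume's region; resolve and restore_to_normal_volume below are hypothetical helpers, not the patent's routines.

```python
# Sketch of S2504: make every region of the restoration object reference a
# drive region by chasing parent references in the mapping table 2100.
from typing import Dict, Tuple, Union

DriveRegion = Tuple[str, int]    # (storage device # 2103 as a string,
                                 #  address-within-storage device 2104)
VolumeRegion = Tuple[int, int]   # (referenced volume #, address)
Ref = Union[DriveRegion, VolumeRegion]

def resolve(mapping: Dict[VolumeRegion, Ref],
            vol: int, addr: int) -> DriveRegion:
    """Follow references upward until a drive region is reached."""
    ref = mapping[(vol, addr)]
    while not isinstance(ref[0], str):       # still a volume reference
        ref = mapping[(ref[0], ref[1])]
    return ref

def restore_to_normal_volume(mapping: Dict[VolumeRegion, Ref],
                             vol: int, addresses) -> None:
    """After this, each region of the volume references a drive region,
    i.e., the volume is a "normal volume" in the sense used above."""
    for addr in addresses:
        mapping[(vol, addr)] = resolve(mapping, vol, addr)

# Example: volume 3's region 0 references volume 0, which references a drive.
m: Dict[VolumeRegion, Ref] = {(0, 0): ("drive-A", 42), (3, 0): (0, 0)}
restore_to_normal_volume(m, 3, [0])
assert m[(3, 0)] == ("drive-A", 42)
```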
As described above, when the server 101A identifies that any one of the member volumes is specified as a volume of the migration object from the server 101A to the server 101B, the server 101A copies, to the server 101B, portions of the snapshot mapping table 2100, i.e., a snapshot mapping table portion (one example of region control data) on at least the specified member volume among all the member volumes (all volumes represented by the snapshot tree) including the specified member volume, and a snapshot management table portion (one example of region control data). This makes it possible to migrate the ownership of a volume while maintaining functionality of the snapshot function. When overall migration is selected, the server 101A defines all member volumes, which are represented by the snapshot tree including the node corresponding to the specified member volume, as migration objects, and copies, to the server 101B, the snapshot mapping table portion and the snapshot management table portion for each of all the member volumes. Since the ownership of every member volume is migrated to the destination server 101B, the server 101B can refer to any of the snapshot volumes thereafter. When single migration is selected, the server 101A defines, as migration objects, only some snapshot volumes containing the specified snapshot volume among all the member volumes represented by the snapshot tree including the node corresponding to the specified snapshot volume, and copies, to the server 101B, the snapshot mapping table portion and the snapshot management table portion for each of those snapshot volumes. Such snapshot volumes include the specified snapshot volume, or the specified snapshot volume and snapshot volumes lower than the specified snapshot volume in the snapshot tree. Since the snapshot relationship is relayed between the servers, the ownership of a volume can be migrated while maintaining functionality of the snapshot function. When at least one snapshot volume among those snapshot volumes is a restoration object, and when a reference destination of at least one volume region in the one snapshot volume is a volume in the server 101A, the server 101B returns the at least one snapshot volume to the server 101A. When the reference destination of a volume region in the returned at least one snapshot volume is a volume region in a volume of the server 101A (a volume of which the ownership is owned by the server 101A), the server 101A changes the reference destination to a drive region based on the snapshot mapping table 2100. Subsequently, the server 101A returns the restored volume to the server 101B. As a result, restoration of a snapshot volume can be achieved through migration of some of the snapshot volumes, even if the server 101 having the ownership of a migrated snapshot volume is different from the server having the ownership of a volume as a reference source of such a snapshot volume. Although one embodiment of the invention has been described hereinbefore, the invention is not limited thereto. Those skilled in the art can easily modify, add, or transform each element of the above embodiment within the scope of the invention. For example, as illustrated in FIG. 26, each server 101 may be configured of duplexed controllers 2501, each of which executes the storage control program 103. The above-described configurations, functions, processing sections, and/or processing units may be partially or entirely implemented by hardware, for example, through design with an integrated circuit.
Information of a program, a table, or a file enabling each function can be stored in a storage device such as a nonvolatile semiconductor memory, a hard disk drive, or a solid state drive (SSD), or in a computer-readable non-transitory data storage medium such as an IC card, a secure digital (SD) card, or a digital versatile disc (DVD).
LIST OF REFERENCE SIGNS
101: Server
106: Drive box
11861206
DETAILED DESCRIPTION
Disclosed are various approaches for performing garbage collection for data stored in object-based storage systems, such as those provided by public cloud storage systems. These approaches involve intelligently archiving or otherwise removing unused files or objects from object-based storage systems, such as cloud-based storage systems. As entities store data in cloud-based storage systems, costs increase as more data is stored. Accordingly, failure to remove data from the cloud-based storage system when there is no longer a need to store it in the cloud-based storage system consumes resources and incurs costs unnecessarily. Moreover, failing to remove data from the cloud-based storage system when there is no longer a need to store it creates security liabilities, as the data could be accidentally disclosed in the event that the cloud-based storage provider suffers a security or data breach. Although individual objects stored in a cloud-based storage system can be deleted or archived automatically if they have not been accessed within a predefined period of time, this can corrupt larger data structures formed from collections of individual objects. Indeed, the larger data structure could still be actively in use, even if individual objects that form the data structure have not been accessed within the predefined period of time. Accordingly, various embodiments of the present disclosure identify objects that may be candidates for deletion or archival based on the last time they were accessed, but are related to larger data structures that are still in active use. As a result, various embodiments of the present disclosure are able to delete or archive unused objects stored in a cloud-based storage system without corrupting larger data structures that may include unused objects. This can both improve the efficiency of cloud-based storage systems by minimizing the amount of data stored on the cloud-based storage systems and improve the security of data stored in cloud-based storage systems by limiting the amount of time the data is stored in the cloud-based storage systems. In the following discussion, a general description of the system and its components is provided, followed by a discussion of the operation of the same. Although the following discussion provides illustrative examples of the operation of various components of the present disclosure, the use of the following illustrative examples does not exclude other implementations that are consistent with the principles disclosed by the following illustrative examples.
FIG. 1 provides a visual illustration of the resulting operation of various embodiments of the present disclosure. A number of objects 103 can be stored in an object-based storage system. Each object 103 can include the data itself, a variable amount of metadata, and an object identifier 104 such as a globally unique identifier or universally unique identifier (GUID or UUID). Objects 103 can represent any type of data. For example, an object 103 could represent an individual file. As another example, an object 103 could represent a row, column, or cell in a table of a database. Although individual objects 103 can be stored in an object-based storage system as unstructured data, collections of objects 103 could together form a larger data structure, such as a projected data structure 106. A projected data structure 106 is a data structure that is formed from, represented by, or mapped to a collection of individual objects 103.
This larger data structure can therefore be viewed as being projected onto the collection of objects103that underlie the projected data structure106. As an example, if individual objects103represented individual rows in a table, then a respective projected data structure106would be the table formed by or represented by the collection of objects103. As another example, if individual objects103represented individual rows in a partition of a table (e.g., a partition of an APACHE HIVE® table), then a respective projected data structure106could be the partition of the table or the table itself (including any objects103within other partitions of the table). As an object-based storage system is used, individual objects103may be read or otherwise accessed, as represented by individual objects103a. Other objects103, once stored in the object-based storage system, may not be accessed again or accessed for longer than a predefined period of time, such as objects103band objects103c. Accordingly, objects103band objects103ccould be candidates for removal from an object-based storage system. However, objects103b, though not recently accessed, could form a projected data structure106ain combination with recently accessed objects103a. As a result, removal of objects103bwould corrupt the projected data structure106a. In contrast, projected data structure106bexclusively comprises objects103c. Accordingly, objects103ccould all be safely removed from the object-based storage system without corrupting the projected data structure106b. FIG.2depicts a computing environment200according to various embodiments of the present disclosure. The computing environment200can include one or more applications that are hosted by or executed in the computing environment200. Examples of these applications include the retention application203and the object storage service206. The computing environment200can also be configured to host one or more data stores, such as the object metadata data store209, the retention data store213, the log data store216, and the object data store219. Each data store (e.g., the object metadata data store209, the retention data store213, the log data store216, and the object data store219) can be representative of a plurality of data stores, which can include relational databases or non-relational databases such as object-oriented databases, hierarchical databases, hash tables or similar key value data stores, as well as other data storage applications or data structures. Moreover, combinations of these databases, data storage applications, and/or data structures may be used together to provide a single, logical, data store. The computing environment200can include one or more computing devices that include a processor, a memory, and/or a network interface. For example, the computing devices can be configured to perform computations on behalf of other computing devices or applications. As another example, such computing devices can host and/or provide content to other computing devices in response to requests for content. Moreover, the computing environment200can employ a plurality of computing devices that can be arranged in one or more server banks or computer banks or other arrangements. Such computing devices can be located in a single installation or can be distributed among many different geographical locations.
For example, the computing environment200can include a plurality of computing devices that together can include a hosted computing resource, a grid computing resource or any other distributed computing arrangement. In some cases, the computing environment200can correspond to an elastic computing resource where the allotted capacity of processing, network, storage, or other computing-related resources can vary over time. The individual computing devices can communicate with each other within the computing environment200using a network. The network can include wide area networks (WANs), local area networks (LANs), personal area networks (PANs), or a combination thereof. These networks can include wired or wireless components or a combination thereof. Wired networks can include Ethernet networks, cable networks, fiber optic networks, and telephone networks such as dial-up, digital subscriber line (DSL), and integrated services digital network (ISDN) networks. Wireless networks can include cellular networks, satellite networks, Institute of Electrical and Electronic Engineers (IEEE) 802.11 wireless networks (e.g., WI-FI®), BLUETOOTH® networks, microwave transmission networks, as well as other networks relying on radio broadcasts. The network can also include a combination of two or more networks. Examples of networks can include the Internet, intranets, extranets, virtual private networks (VPNs), and similar networks. The retention application203can be executed to evaluate objects103managed by the object storage service206and stored in the object data store219to determine what action, if any, should be taken with respect to the objects103. This could include determining whether individual objects103should be deleted, retained, moved, and/or archived as specified by one or more applicable retention policies221. The object storage service206can be executed to provide an object store that manages data as objects103. Each object103stored by the object storage service206can include data about itself and an object identifier104that uniquely identifies the object103from other objects103stored by the object storage service206. Examples of object identifiers104include globally unique identifiers (GUIDs) and universally unique identifiers (UUIDs). Due to the object-based nature of the object storage service206, users can often store large amounts of unstructured data in a cost-effective manner. The object storage service206can offer object storage, access, and retrieval through the network (e.g., through the use of a web or network-based API). Moreover, the object storage service206can provide different levels, tiers, or classifications of storage for objects103. For instance, some objects103could be stored within a class of objects103that are always available and can be queried or retrieved immediately or nearly immediately in response to a request for the object103. Other objects103could be stored within another class of objects103that may require longer periods of time to retrieve (e.g., minutes or hours), but for a reduced cost. Examples of object storage services206can include AMAZON WEB SERVICES S3, AMAZON GLACIER, MICROSOFT AZURE BLOB STORAGE, GOOGLE CLOUD STORAGE, and RACKSPACE FILES. A retention policy221can represent a policy to be applied to individual objects103by the retention application203to determine how to process individual objects103stored in the object data store219and managed by the object storage service206. 
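Purely as an illustration, one plausible in-memory encoding of such a retention policy221is sketched below in Python. The field names, the units, and the action strings are assumptions made for this example rather than a format defined by this disclosure; the criteria and actions they stand for are described in the paragraphs that follow.

    from dataclasses import dataclass
    from datetime import date, timedelta
    from typing import Optional

    @dataclass
    class RetentionPolicy:
        # Hypothetical encoding of a retention policy 221.
        max_idle: timedelta               # unread this long => candidate for action
        action: str                       # "delete", "move", or "retain"
        created_before: Optional[date]    # static cutoff date, if any
        rolling_age: Optional[timedelta]  # rolling cutoff, e.g. older than 90 days
        evaluation_interval: timedelta    # how often objects are evaluated

        def cutoff_date(self, today: date) -> date:
            # Prefer the static date; otherwise derive the rolling cutoff.
            return self.created_before or (today - self.rolling_age)

    policy = RetentionPolicy(max_idle=timedelta(days=30), action="delete",
                             created_before=None, rolling_age=timedelta(days=90),
                             evaluation_interval=timedelta(days=1))
    print(policy.cutoff_date(date(2021, 6, 1)))  # 2021-03-03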
For example, a retention policy221could specify a retention action to be performed on an object103if the object103meets one or more criteria specified by the retention policy221. Examples of retention actions include deleting the object103, moving the object103(e.g., to another location, another object storage service206, or another tier or level of storage offered by the object storage service206), or retaining the object103in the object data store219(which may be characterized as “retaining” the object103or taking no action with respect to the object103). In some implementations, one or more retention policies221can be stored in the retention data store213for use by the retention application203. The retention policy221could also specify how frequently objects103stored in the object data store219should be processed or evaluated by the retention application203. For example, the retention policy221could specify that read or accessed objects103should be identified by the retention application203on a periodic basis or at predefined intervals (e.g., daily, every other day, weekly, etc.). As another example, the retention policy221could specify how frequently retention actions should be performed on objects103(e.g., every day, every other day, every third day, every week, every other week, every month, etc.). The accessed objects223can represent a set of objects103that the retention application203has identified as having been read or accessed within a predefined period of time based on an analysis of one or more access logs226. The accessed objects223can be represented as a list, array, set, or other data structure that includes the individual object identifiers104of the objects103identified by the retention application203. An access log226can represent a log file created by the object storage service206that stores a record of operations performed on individual objects103in the object data store219by the object storage service206. For example, each time the object storage service206reads an object103in the object data store219or writes an object103to the object data store219, a record of the read or the write could be saved to an access log226. Such a record could include the date and time that the read or write was performed. Other operations on objects103by the object storage service206could also be recorded in an access log226. The object metadata data store209can be used to store information about the relationships between individual objects103stored in the object data store219. For example, for each projected data structure106, the object metadata data store209could include or associate the object identifiers104of the objects103that are components of the projected data structure106. Information about projected data structures106and the object identifiers104of objects103included in or associated with the projected data structures106can be stored in the object metadata data store209from a variety of sources. For example, some applications may implement and manage their own object metadata data store209(e.g., the metastore implemented by an instance of APACHE HIVE®). In other examples, the object metadata data store209may be maintained and managed by the retention application203, and information about projected data structures106may be stored in the object metadata data store209by third-party applications. Next, a general description of the operation of the various components of the computing environment200is provided.
Although the following description provides an example of the interactions between the various components of the computing environment200, other interactions are also possible. More detailed descriptions about the operation of the individual components of the computing environment200are provided in the discussion ofFIGS.3and4. To begin, one or more objects103are stored in the object data store219. These objects103could be used or processed as part of various workflows, applications, etc. Over time, the workflows or applications that utilize the objects103may cease to be executed or implemented. For instance, a data scientist training a machine-learning model may no longer use the training data uploaded to the object data store219. Likewise, an application that relies on a database that is implemented using multiple objects103may cease operation. While, in some instances, the user or application may remove the objects103that they used, there is no guarantee that the user or the application will remove the objects103that they have used after the user or application is finished with them. Moreover, this requires the user or the application to operate in a cooperative manner with the object data store219and/or the other users or applications. This cooperation can be incredibly burdensome for the user or the application depending on the number of other users or applications that are using or sharing the objects103. Using the example of a data scientist training a machine-learning model, the data scientist could upload a large amount of data into the object data store219. The data could be used to train a machine-learning model, such as a neural network. As the neural network classifies individual objects103in the object data store219, the neural network can receive feedback regarding whether its classification or prediction was correct. The neural network can then update the weights of individual perceptrons within the neural network to improve the accuracy of future predictions. The neural network could then be presented with the objects103in the training data set again to further refine its predictions or classifications. After multiple rounds of training using the objects103in the training data set, the resultant neural network would be trained to accurately make predictions or classifications about individual objects103that it has not previously evaluated. Moreover, the same objects103in the training data set could be used to train different machine-learning models (e.g., different neural networks) to see which machine-learning model, once trained, offered the best accuracy or the best performance. Similarly, the objects103in the training data set could be used to train multiple revisions of the same machine-learning model, such as when layers of perceptrons or connections between perceptrons are adjusted, added, or removed in order to refine the underlying neural network of a machine-learning model. Accordingly, objects103may be accessed repeatedly within the training data to train multiple machine-learning models or multiple revisions of the same machine-learning model, but the individual objects103or groups of objects103may be accessed at different times or different intervals. Therefore, premature or inadvertent deletion of an object103from the training data set could impact the performance or accuracy of the machine-learning models being trained.
Accordingly, the retention application203can analyze the objects103stored in the object data store219to determine which objects are candidates for archival (e.g., deletion, relocation, reclassification, etc.). To do this, the retention application203can identify candidate objects103that have not been read or otherwise accessed for more than an amount of time specified by a retention policy221and/or were created prior to a date specified by the retention policy221. The retention application203can then identify the projected data structures106that the objects103are members of, if any. If none of the objects103that are components of a projected data structure106have been accessed or read within the specified period of time, then the retention application203can delete, move, reclassify, or otherwise perform a retention action on the objects103. However, if an object103that has not been accessed within the specified period of time is a member of a projected data structure106that contains other objects103that have been accessed within the specified period of time, then the retention application203can determine that the object103is still in use based on its relationship to the recently accessed object103. Such an object103would be retained in the object data store219. For example, if an object103had not been accessed within a specified period of time, but the object103was part of a table or partition of a table (e.g., an APACHE HIVE® partition) that contained objects103that had been accessed within the specified period of time, then the object103could be retained in the object data store219. Referring next toFIG.3, shown is a flowchart that provides one example of the operation of a portion of the retention application203. The flowchart ofFIG.3provides merely an example of the many different types of functional arrangements that can be employed to implement the operation of the depicted portion of the retention application203. As an alternative, the flowchart ofFIG.3can be viewed as depicting an example of elements of a method implemented within the computing environment200. Beginning with block303, the retention application203obtains copies of access logs226(e.g., log files) from the log data store216. This could be done on a periodic basis (e.g., daily, weekly, monthly, etc.). Upon obtaining the copies of the access logs226, the retention application203may also perform an action to prevent it from obtaining the same access logs226again. For example, the retention application203, after obtaining copies of the access logs226, could mark the access logs226as read, delete the access logs226, or perform some other action. In other examples, the retention application203could instead copy or obtain access logs226or portions of access logs226that corresponded to a predefined period or interval of time. For example, if the process depicted inFIG.3were performed on a daily basis, then the retention application203could obtain or request access logs226, or portions of access logs226, for the current or previous day. Then at block306, the retention application203can analyze the access logs226to determine which objects103in the object data store219have been read. For example, the retention application203could search, sort, or otherwise filter the access logs226according to one or more criteria. For instance, the retention application203could use a regular expression to parse the access logs226to identify individual objects103which had been read.
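As a sketch of blocks303and306, the snippet below parses an access log226with a regular expression and collects the objects103that were read. The log line format shown here is invented for the example; a real object storage service206would define its own log schema.

    import re
    from datetime import datetime

    # Assumed log format: "<ISO-8601 timestamp> <operation> <object identifier>"
    LOG_LINE = re.compile(
        r"^(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})\s+(?P<op>READ|WRITE)\s+(?P<oid>\S+)$")

    def reads_from_log(lines):
        # Yield (object identifier 104, timestamp) for every READ record.
        for line in lines:
            m = LOG_LINE.match(line.strip())
            if m and m.group("op") == "READ":
                yield m.group("oid"), datetime.fromisoformat(m.group("ts"))

    sample = ["2021-06-01T12:00:00 READ obj-42",
              "2021-06-01T12:00:05 WRITE obj-7"]
    print(list(reads_from_log(sample)))
    # [('obj-42', datetime.datetime(2021, 6, 1, 12, 0))]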
Next at block309, the retention application203can store the object identifiers104of objects103that have been identified at block306as having been read or otherwise accessed, as well as the date and/or time that the objects103were read or otherwise accessed. For example, the retention application203could store the object identifiers104(e.g., GUID or UUID) of the objects103that had been read or accessed in a list or set of accessed objects223in the retention data store213. These accessed objects223could then be used to identify projected data structures106when determining whether to perform a retention action on individual objects103in the object data store219or to retain the individual objects103in the object data store219for future use. Referring next toFIG.4, shown is a flowchart that provides one example of the operation of a portion of the retention application203. The flowchart ofFIG.4provides merely an example of the many different types of functional arrangements that can be employed to implement the operation of the depicted portion of the retention application203. As an alternative, the flowchart ofFIG.4can be viewed as depicting an example of elements of a method implemented within the computing environment200. Beginning with block403, the retention application203can identify objects103stored in the object data store219that were accessed within a predefined date range. For example, the retention application203could analyze the accessed objects223to identify all objects103with a respective date or timestamp that fell within the predefined date range. The predefined date range could be specified in a retention policy221, could be provided to the retention application203manually (e.g., as an argument provided at the beginning of the process depicted inFIG.4), or through other approaches. Then at block406, the retention application203can search for related objects103. For example, for each object103identified at block403, the retention application203could search the object metadata data store209for a projected data structure106that includes the identified object103. This could be done by searching for any projected data structures106that include an object identifier104that matches an object identifier104among the accessed objects223. The retention application203could then identify any other objects103in the projected data structure106as being related objects103. For example, if the projected data structure106were a table, then objects103in the list of accessed objects223(e.g., objects103a) could represent rows of the table that had been recently accessed (e.g., in response to a query), while the related objects103(e.g., objects103b) could include those rows in the table that had not been recently accessed (e.g., because they failed to match the parameters of the query). Similarly, if the projected data structure106were a partition of a table, then the objects103in the list of accessed objects223could represent rows of the partition of the table that had been recently accessed, while the related objects103could include those rows in the partition of the table or in other partitions of the table that had not been recently accessed (e.g., because they failed to match the parameters of the query). Next at block409, the retention application203can tag, mark, flag, or otherwise record the related objects103as having been accessed. As one example, the retention application203could add the object identifiers104of related objects103to the list of object identifiers104of the accessed objects223.
As another example, the retention application203could create a temporary data structure that includes the object identifiers104of both the accessed objects223and the related objects103from the projected data structure106that were previously identified at block406. Moving on to block413, the retention application203can identify all objects103in the object data store219that were created prior to a specified date. The specified date could be specified in the retention policy221, could be provided to the retention application203manually (e.g., as an argument provided at the beginning of the process depicted inFIG.4), or through other approaches. The specified date could also be a static date (e.g., Jan. 1, 2021) or could be a rolling date (e.g., anything older than one week, one month, three months, six months, one year, etc.). To identify the objects103, the retention application203could provide the specified date to the object storage service206as part of a request for matching objects103. In response, the retention application203could receive from the object storage service206the object identifiers104of all objects103in the object data store219that were created prior to the specified date. Proceeding to block416, the retention application203can evaluate each object103identified at block413to determine whether that object103had been previously accessed. For example, the retention application203could determine whether the unique identifier of the object103identified at block413matches the unique identifier of an object103identified at block403(e.g., is a previously accessed object223) or at block409(e.g., is included in the same projected data structure106as a previously accessed object223). If the object103has been previously accessed or is part of the same projected data structure106as a previously accessed object223, then the process for that object103skips to block426. Otherwise, the process continues to block419for the object103. If the process continues to block419, then the retention application203can determine whether to retain the object103according to the retention policy221. For example, a retention policy221could specify that any object103that was identified at block413(e.g., older than a specified date) and has neither been recently accessed, as determined at block403, nor is a related object103that is part of a projected data structure106that includes an accessed object223, as determined at blocks406and409, should not be retained. As another example, the retention policy221could further specify other criteria (e.g., size, object type, owner, etc.) that provide a basis for the retention application203to determine whether to retain the object103in the object data store219. For each object103that the retention application203determines should not be retained, the process proceeds to block423for that object103. However, for each object103that the retention application203determines should be retained in the object data store219, the process proceeds to block426. If the process proceeds to block423, then the retention application203can perform a retention action as specified by the retention policy221. For example, the retention application203could instruct the object storage service206to delete the object103from the object data store219. As another example, the retention application203could instruct the object storage service206to move the object103to a different level or tier of storage (e.g., a lower-cost storage tier that has significantly longer access times).
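Taken together, blocks403through426reduce to the two small functions sketched below. This is a simplified model under stated assumptions: the dictionary standing in for the object metadata data store209, the function names, and the action strings are all hypothetical, and a real implementation would query the object storage service206rather than operate on in-memory sets.

    def expand_with_related(accessed_ids, projected_structures):
        # Blocks 403-409: treat every member of a projected data structure 106
        # that contains a recently accessed object 223 as effectively accessed.
        effectively_accessed = set(accessed_ids)
        for members in projected_structures.values():
            if members & effectively_accessed:   # structure is in active use
                effectively_accessed |= members  # protect all of its members
        return effectively_accessed

    def evaluate(stale_object_ids, effectively_accessed, action="delete"):
        # Blocks 413-426: for each object created before the cutoff, either
        # retain it (block 426) or apply the retention action (block 423).
        return {oid: ("retain" if oid in effectively_accessed else action)
                for oid in stale_object_ids}

    structures = {"table_a": {"o1", "o2", "o3"},  # o1 was recently read
                  "table_b": {"o4", "o5"}}        # no member was read
    protected = expand_with_related({"o1"}, structures)
    print(evaluate({"o2", "o4", "o5"}, protected))
    # {'o2': 'retain', 'o4': 'delete', 'o5': 'delete'} (key order may vary)

Note that a single pass over the structures suffices only if projected data structures106do not share members; if an object103could belong to several structures at once, the expansion would need to be repeated until it reaches a fixed point.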
Once the retention application203causes the object storage service206to perform the retention action, then the process can end. However, if the process proceeds to block426, then the retention application203can determine that the object103is to be retained. For example, the object103is either currently in active use or is part of an actively used projected data structure106. Accordingly, no further action is taken by the retention application203on the object103and the process can end. A number of software components previously discussed are stored in the memory of the respective computing devices and are executable by the processor of the respective computing devices. In this respect, the term “executable” means a program file that is in a form that can ultimately be run by the processor. Examples of executable programs can be a compiled program that can be translated into machine code in a format that can be loaded into a random access portion of the memory and run by the processor, source code that can be expressed in proper format such as object code that is capable of being loaded into a random access portion of the memory and executed by the processor, or source code that can be interpreted by another executable program to generate instructions in a random access portion of the memory to be executed by the processor. An executable program can be stored in any portion or component of the memory, including random access memory (RAM), read-only memory (ROM), hard drive, solid-state drive, Universal Serial Bus (USB) flash drive, memory card, optical disc such as compact disc (CD) or digital versatile disc (DVD), floppy disk, magnetic tape, or other memory components. The memory includes both volatile and nonvolatile memory and data storage components. Volatile components are those that do not retain data values upon loss of power. Nonvolatile components are those that retain data upon a loss of power. Thus, the memory can include random access memory (RAM), read-only memory (ROM), hard disk drives, solid-state drives, USB flash drives, memory cards accessed via a memory card reader, floppy disks accessed via an associated floppy disk drive, optical discs accessed via an optical disc drive, magnetic tapes accessed via an appropriate tape drive, or other memory components, or a combination of any two or more of these memory components. In addition, the RAM can include static random access memory (SRAM), dynamic random access memory (DRAM), or magnetic random access memory (MRAM) and other such devices. The ROM can include a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or other like memory device. Although the applications and systems described herein can be embodied in software or code executed by general purpose hardware as discussed above, as an alternative the same can also be embodied in dedicated hardware or a combination of software/general purpose hardware and dedicated hardware. If embodied in dedicated hardware, each can be implemented as a circuit or state machine that employs any one of or a combination of a number of technologies. These technologies can include, but are not limited to, discrete logic circuits having logic gates for implementing various logic functions upon an application of one or more data signals, application specific integrated circuits (ASICs) having appropriate logic gates, field-programmable gate arrays (FPGAs), or other components, etc. 
Such technologies are generally well known by those skilled in the art and, consequently, are not described in detail herein. The flowcharts show the functionality and operation of an implementation of portions of the various embodiments of the present disclosure. If embodied in software, each block can represent a module, segment, or portion of code that includes program instructions to implement the specified logical function(s). The program instructions can be embodied in the form of source code that includes human-readable statements written in a programming language or machine code that includes numerical instructions recognizable by a suitable execution system such as a processor in a computer system. The machine code can be converted from the source code through various processes. For example, the machine code can be generated from the source code with a compiler prior to execution of the corresponding application. As another example, the machine code can be generated from the source code concurrently with execution by an interpreter. Other approaches can also be used. If embodied in hardware, each block can represent a circuit or a number of interconnected circuits to implement the specified logical function or functions. Although the flowcharts show a specific order of execution, it is understood that the order of execution can differ from that which is depicted. For example, the order of execution of two or more blocks can be scrambled relative to the order shown. Also, two or more blocks shown in succession can be executed concurrently or with partial concurrence. Further, in some embodiments, one or more of the blocks shown in the flowcharts can be skipped or omitted. In addition, any number of counters, state variables, warning semaphores, or messages might be added to the logical flow described herein, for purposes of enhanced utility, accounting, performance measurement, or providing troubleshooting aids, etc. It is understood that all such variations are within the scope of the present disclosure. Also, any logic or application described herein that includes software or code can be embodied in any non-transitory computer-readable medium for use by or in connection with an instruction execution system such as a processor in a computer system or other system. In this sense, the logic can include statements including instructions and declarations that can be fetched from the computer readable medium and executed by the instruction execution system. In the context of the present disclosure, a “computer-readable medium” can be any medium that can contain, store, or maintain the logic or application described herein for use by or in connection with the instruction execution system. Moreover, a collection of distributed computer-readable media located across a plurality of computing devices (e.g., storage area networks or distributed or clustered filesystems or databases) may also be collectively considered as a single non-transitory computer-readable medium. The computer-readable medium can include any one of many physical media such as magnetic, optical, or semiconductor media. More specific examples of a suitable computer-readable medium would include, but are not limited to, magnetic tapes, magnetic floppy diskettes, magnetic hard drives, memory cards, solid-state drives, USB flash drives, or optical discs.
Also, the computer-readable medium can be a random access memory (RAM) including static random access memory (SRAM) and dynamic random access memory (DRAM), or magnetic random access memory (MRAM). In addition, the computer-readable medium can be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or other type of memory device. Further, any logic or application described herein can be implemented and structured in a variety of ways. For example, one or more applications described can be implemented as modules or components of a single application. Further, one or more applications described herein can be executed in shared or separate computing devices or a combination thereof. For example, a plurality of the applications described herein can execute in the same computing device, or in multiple computing devices in the same computing environment200. Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., can be either X, Y, or Z, or any combination thereof (e.g., X; Y; Z; X and/or Y; X and/or Z; Y and/or Z; X, Y, and/or Z, etc.). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present. It should be emphasized that the above-described embodiments of the present disclosure are merely possible examples of implementations set forth for a clear understanding of the principles of the disclosure. Many variations and modifications can be made to the above-described embodiments without departing substantially from the spirit and principles of the disclosure. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.
11861207 | DETAILED DESCRIPTION Aspects of the present disclosure are directed to the management of erase suspend and resume operations in memory devices of a memory sub-system. A memory sub-system can be a storage device, a memory module, or a hybrid of a storage device and memory module. Examples of storage devices and memory modules are described below in conjunction withFIG.1. In general, a host system can utilize a memory sub-system that includes one or more memory components, such as memory devices that store data. The host system can provide data to be stored at the memory sub-system and can request data to be retrieved from the memory sub-system. A conventional memory sub-system can perform an erase operation to remove the data stored at the memory devices included in the memory sub-system. For example, in order to rewrite a set of pages contained in a block, the conventional memory sub-system can perform an erase operation that erases the data stored at the whole block. For some memory devices, blocks are the smallest area that can be erased. Each block is comprised of a set of pages. A die consists of a number of blocks. When an erase operation is in progress, no other operations are allowed to the same die. Performing such an erase operation for the whole block every time pages of memory are rewritten can utilize a large amount of time and can cause increased latency for other internal operations, as well as host-initiated operations such as read operations or write operations that will not begin until the erase operation for the whole block has completed. The increased latency of the overall memory device can adversely impact the level of quality of service (QoS) of the memory device and can result in inconsistent performance of the memory device due to the unpredictable latency that can be introduced by erase operations. Conventional memory sub-systems utilize a backend sub-system flash controller (e.g. hardware or firmware controller) to generate erase suspend and erase resume operations to a memory device to enable execution of input/output (IO) operations on the memory device between an erase suspend and an erase resume operation. However, the repeated execution of erase suspend and erase resume operations by the controller can incur further latencies as the number of outstanding commands waiting on the erase completion increases. Further, the performance overhead of generating numerous suspend and resume operations on the firmware of the memory sub-system can be significant, which can further degrade the QoS of the memory sub-system. Aspects of the present disclosure address the above and other deficiencies by having a memory sub-system that enables deterministic execution of an erase operation by providing sequences of time periods for executing the erase operation (e.g. erase service time periods), interleaved with alternate time periods for executing memory access commands (e.g. command service time periods) on a memory device. In implementations, erase service time periods can be measured in terms of erase pulses. An erase pulse is the smallest operation under the erase lifecycle. The execution of erase service time periods and command service time periods can be managed internally by the memory device. In this case, the flash controller is only responsible for issuing read commands and write commands within the command service time periods, thus relieving the controller of the overhead of generating erase suspend and resume operations.
An erase suspend operation can be executed within the erase service time periods to enable the memory device to execute the memory access commands while the erase operation is suspended. Similarly, an erase resume operation can be executed within the erase service time periods to resume the erase operation on the memory device when the execution of the memory access commands is complete. In one implementation, the memory sub-system receives a request to perform an erase operation on a memory device. The memory sub-system can execute a sequence of predetermined time periods to alternately service the erase operation and memory access commands until the execution of the erase operation is complete. In implementations, the erase service time periods can be measured in terms of erase pulses. In some implementations, the memory sub-system can define the number of erase service time periods within the sequence, the number of command service time periods within the sequence, the duration of the erase service time period, as well as the duration of the command service time period. In one example, the sequence can include an erase service time period of T1duration, followed by a command service time period T2, followed by a next erase service time period T1, and so on, as explained in more detail herein below. During the erase service time period, the memory sub-system can execute sub-commands including an erase resume operation (for example, to resume an erase operation that has been previously suspended), followed by executing at least a portion of the erase operation, followed by an erase suspend operation or an erase complete operation. The memory sub-system can receive a notification when the suspend operation completes, thus indicating that the next time period for servicing commands can start. The memory sub-system can then start the command service time period, during which the memory device can receive memory access commands and execute them. When the execution of the erase operation completes, the memory sub-system can determine the actual execution time of the erase operation and the memory access commands. The memory sub-system can then adjust the erase service time period and the command service time period accordingly. Advantages of the present disclosure include, but are not limited to, an improved quality of service for read operations and write operations for the memory sub-system, as an erase operation can be suspended during predetermined intervals to perform the read and write operations. For example, techniques of defining and executing erase service time periods and command service time periods described herein allow a memory sub-system to define deterministic slices of time for erase operations and command operations, thus enabling a deterministic QoS for the memory sub-system. The techniques further allow the memory sub-system to manage the erase operation lifecycle (e.g. resume, erase, and suspend) internally within the memory device, thus reducing the overhead of managing this lifecycle by the memory sub-system. Moreover, the techniques provide for a more predictable overall latency of the memory sub-system because of the ability to continue to service memory access commands (e.g. read, write, garbage collection commands) during adjustable time periods to accommodate variable queue depth of commands, while executing an erase operation. Additional details of these techniques are provided below with respect toFIGS.1-7.
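Before turning to the figures, the interleaving described above can be pictured with a short simulation. It is a sketch only: the function name, the fixed resume and suspend overheads, and the example durations are assumptions for illustration, not values defined by this disclosure, and it presumes that T1always exceeds the combined overheads.

    def erase_schedule(total_erase_us, t1_us, t2_us, resume_us, suspend_us):
        # Interleave erase service time periods (duration T1) with command
        # service time periods (duration T2) until the erase work is exhausted.
        remaining, schedule, first = total_erase_us, [], True
        while remaining > 0:
            overhead = (0 if first else resume_us) + suspend_us
            work = min(remaining, t1_us - overhead)  # erase pulses this period
            schedule.append(("erase_service", work))
            remaining -= work
            if remaining > 0:
                schedule.append(("command_service", t2_us))
            first = False
        return schedule

    print(erase_schedule(total_erase_us=10000, t1_us=5000, t2_us=3000,
                         resume_us=200, suspend_us=400))
    # [('erase_service', 4600), ('command_service', 3000),
    #  ('erase_service', 4400), ('command_service', 3000),
    #  ('erase_service', 1000)]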
FIG.1illustrates an example computing system100that includes a memory sub-system110in accordance with some embodiments of the present disclosure. The memory sub-system110can include media, such as one or more volatile memory devices (e.g., memory device140), one or more non-volatile memory devices (e.g., memory device130), or a combination of such. A memory sub-system110can be a storage device, a memory module, or a hybrid of a storage device and memory module. Examples of a storage device include a solid-state drive (SSD), a flash drive, a universal serial bus (USB) flash drive, an embedded Multi-Media Controller (eMMC) drive, a Universal Flash Storage (UFS) drive, a secure digital (SD) card, and a hard disk drive (HDD). Examples of memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), and a non-volatile dual in-line memory module (NVDIMM). The computing system100can be a computing device such as a desktop computer, laptop computer, network server, mobile device, a vehicle (e.g., airplane, drone, train, automobile, or other conveyance), Internet of Things (IoT) enabled device, embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or such computing device that includes memory and a processing device. The computing system100can include a host system120that is coupled to one or more memory sub-systems110. In some embodiments, the host system120is coupled to different types of memory sub-system110.FIG.1illustrates one example of a host system120coupled to one memory sub-system110. As used herein, “coupled to” generally refers to a connection between components, which can be an indirect communicative connection or direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc. The host system120can include a processor chipset and a software stack executed by the processor chipset. The processor chipset can include one or more cores, one or more caches, a memory controller (e.g., NVDIMM controller), and a storage protocol controller (e.g., PCIe controller, SATA controller). The host system120uses the memory sub-system110, for example, to write data to the memory sub-system110and read data from the memory sub-system110. The host system120can be coupled to the memory sub-system110via a physical host interface. Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), a dual in-line memory module (DIMM) interface (e.g., DIMM socket interface that supports Double Data Rate (DDR)), etc. The physical host interface can be used to transmit data between the host system120and the memory sub-system110. The host system120can further utilize an NVM Express (NVMe) interface to access the memory components (e.g., memory devices130) when the memory sub-system110is coupled with the host system120by the PCIe interface. The physical host interface can provide an interface for passing control, address, data, and other signals between the memory sub-system110and the host system120. The memory devices can include any combination of the different types of non-volatile memory devices and/or volatile memory devices. 
The volatile memory devices (e.g., memory device140) can be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM). Some examples of non-volatile memory devices (e.g., memory device130) include a negative-and (NAND) type flash memory and write-in-place memory, such as three-dimensional cross-point (“3D cross-point”) memory. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. Each of the memory devices130can include one or more arrays of memory cells. One type of memory cell, for example, single level cells (SLC), can store one bit per cell. Other types of memory cells, such as multi-level cells (MLCs), triple level cells (TLCs), and quad-level cells (QLCs), can store multiple bits per cell. In some embodiments, each of the memory devices130can include one or more arrays of memory cells such as SLCs, MLCs, TLCs, QLCs, or any combination of such. In some embodiments, a particular memory device can include an SLC portion, an MLC portion, a TLC portion, or a QLC portion of memory cells. The memory cells of the memory devices130can be grouped as pages that can refer to a logical unit of the memory device used to store data. With some types of memory (e.g., NAND), pages can be grouped to form blocks. Although non-volatile memory components such as 3D cross-point type and NAND type flash memory are described, the memory device130can be based on any other type of non-volatile memory, such as read-only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide based memories, ferroelectric random access memory (FeRAM), magneto random access memory (MRAM), negative-or (NOR) flash memory, and electrically erasable programmable read-only memory (EEPROM). The memory sub-system controller115(or controller115for simplicity) can communicate with the memory devices130to perform operations such as reading data, writing data, or erasing data at the memory devices130and other such operations. The memory sub-system controller115can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The hardware can include a digital circuitry with dedicated (i.e., hard-coded) logic to perform the operations described herein. The memory sub-system controller115can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or other suitable processor. The memory sub-system controller115can include a processor (processing device)117configured to execute instructions stored in local memory119. In the illustrated example, the local memory119of the memory sub-system controller115includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory sub-system110, including handling communications between the memory sub-system110and the host system120. In some embodiments, the local memory119can include memory registers storing memory pointers, fetched data, etc.
The local memory119can also include read-only memory (ROM) for storing micro-code. While the example memory sub-system110inFIG.1has been illustrated as including the memory sub-system controller115, in another embodiment of the present disclosure, a memory sub-system110may not include a memory sub-system controller115, and may instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory sub-system). In general, the memory sub-system controller115can receive commands or operations from the host system120and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory devices130. The memory sub-system controller115can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical address (e.g., logical block address (LBA), namespace) and a physical address (e.g., physical block address) that are associated with the memory devices130. The memory sub-system controller115can further include host interface circuitry to communicate with the host system120via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access the memory devices130as well as convert responses associated with the memory devices130into information for the host system120. The memory sub-system110can also include additional circuitry or components that are not illustrated. In some embodiments, the memory sub-system110can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the memory sub-system controller115and decode the address to access the memory devices130. In some embodiments, the memory devices130include local media controllers135that operate in conjunction with memory sub-system controller115to execute operations on one or more memory cells of the memory devices130. An external controller (e.g., memory sub-system controller115) can externally manage the memory device130(e.g., perform media management operations on the memory device130). In some embodiments, a memory device130is a managed memory device, which is a raw memory device combined with a local controller (e.g., local controller135) for media management within the same memory device package. An example of a managed memory device is a managed NAND (MNAND) device. The memory sub-system110includes erase operation management component113that can be used for managing the erase suspend and erase resume time periods in the memory sub-system. In certain implementations, erase operation management component113can execute a sequence of predetermined time periods to service the erase operation as well as alternate time periods to service memory access commands until the execution of the erase operation is complete. In implementations, erase service time periods can be measured in terms of erase pulses. An erase pulse is the smallest operation under the erase lifecycle. In some implementations, erase operation management component113can define the number of erase service time periods within the sequence, the number of command service time periods within the sequence, the duration of the erase service time period, as well as the duration of the command service time period.
In one example, the sequence can include an erase service time period of T1duration, followed by a command service time period T2, followed by a next erase service time period T1, and so on. Within the erase service time period, erase operation management component113can execute sub-commands including an erase resume operation (for example, to resume an erase operation that was previously suspended), followed by executing at least a portion of the erase operation, followed by an erase suspend operation. Erase operation management component113can then receive a notification from the memory device when the suspend operation completes, thus indicating that the next time period for servicing commands can start. Erase operation management component113can start the command service time period. The memory device can receive memory access commands from the controller and can execute the commands within the command service time period. When the command service time period elapses, the next erase service time period is initiated. Erase operation management component113can then execute an erase resume operation to resume the erase operation that was previously suspended. When the execution of the erase operation completes, the memory sub-system can send data related to the actual execution time of the erase operation and the memory access commands to the controller. The controller can then adjust the erase service time period and the command service time period accordingly, for example, to allow for a higher degree of reclamation on the memory device or to accommodate a change in the command queue depth of the memory sub-system. Further details with regards to the operations of erase operation management component113are described below. FIG.2AandFIG.2Billustrate examples of an erase service time period in memory sub-system110for erase operation management in accordance with some embodiments of the present disclosure. Memory sub-system110can execute an erase operation over a number of time periods interleaved with alternate time periods for executing internal or external memory access operations. Memory access operations initiated external to the memory sub-system may refer to input/output (IO) operations (e.g. read, write). Internal memory access operations may refer to garbage collection and wear leveling, which initiate read or write operations that are internal to the memory sub-system. In one embodiment, erase service230A-B can contain sub-commands including resume, erase, and suspend if a portion of the erase operation is performed but the full erase operation has not completed yet. In other embodiments, erase service230A-B can contain sub-commands including resume, erase, and complete if the full erase operation is completed during the current erase service230A-B. In yet another embodiment, erase service230A-B can contain sub-commands including erase and suspend if the current erase service is the first erase service time period to be executed, since there is no suspended erase operation to resume. FIG.2Aillustrates an example of an erase service time period for executing resume, erase, and suspend operations in memory sub-system110for erase operation management, in accordance with some embodiments of the present disclosure. In an embodiment, erase service230A can be executed by the processing logic within erase service time period235A to perform at least a portion of an erase operation.
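Before walking throughFIG.2Astep by step, the three sub-command variants just described can be summarized in a few lines of Python; the function name and the sub-command strings are illustrative assumptions only.

    def erase_service_subcommands(first_period: bool, will_complete: bool):
        # Sub-command sequence inside one erase service 230A-B.
        cmds = [] if first_period else ["resume"]  # nothing to resume at first
        cmds.append("erase")
        cmds.append("complete" if will_complete else "suspend")
        return cmds

    print(erase_service_subcommands(first_period=True, will_complete=False))
    # ['erase', 'suspend']
    print(erase_service_subcommands(first_period=False, will_complete=False))
    # ['resume', 'erase', 'suspend']
    print(erase_service_subcommands(first_period=False, will_complete=True))
    # ['resume', 'erase', 'complete']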
Erase service time period235A can contain erase resume operation231to resume an erase operation that was previously suspended. On the other hand, if erase service time period235A is the first time period to be executed for performing the erase operation, erase resume operation231can be omitted. In one implementation, if erase resume operation231is omitted, the processing logic can adjust erase service time period235A (e.g. by making it shorter than an erase service time period where an erase resume operation is performed), to improve the overall time spent in processing the erase operation. In another implementation, if erase resume operation231is omitted, the processing logic can allow the erase operation232to be performed for a longer time, thus keeping erase service time period235A unchanged. When erase resume operation231completes, memory sub-system110can execute at least a portion of the erase operation at232. In one implementation, the portion of time allocated for performing the erase operation232can be determined based on erase service time period235A, the time needed for erase resume operation231, if needed, and the time needed for erase suspend operation233. For example, if erase service time period235A is 5000 microseconds, the time needed for erase resume operation231is 200 microseconds, and the time needed for erase suspend operation233is 400 microseconds, then the time allocated for erase operation232is 5000−200−400=4400 microseconds. When the time allocated to erase operation232elapses and the erase operation has not completed yet, memory sub-system110can execute erase suspend operation233in order to enable the next command service time period to be executed on memory device130. Erase suspend operation233can be performed by memory sub-system110to temporarily stop the execution of the current erase operation, in order to free memory device130for other memory access commands to be executed. In implementations, because memory device130can perform certain preparation tasks to go into erase suspend mode, the time erase suspend operation233takes can vary based on, for example, the age and health of memory device130. Accordingly, memory sub-system110can set the erase service time period235A to include the maximum time an erase suspend operation can take. Therefore, erase suspend operation233can send notification236to the controller when the suspend operation actually completes, to signal that the next time period can start. The suspend complete notification236can enable memory sub-system110to start the next command service time period without waiting for erase service time period235A to elapse, thus eliminating potential time waste between the actual completion of the suspend operation and the end of erase service time period235A. When memory sub-system110receives the suspend complete notification236, memory sub-system110can end erase service time period235A and can start the following command service time period, as explained in more detail herein below. In implementations, the suspend complete notification236can be a message sent from memory device130to the controller, a change in a predetermined memory location that can be monitored by the controller, etc. FIG.2Billustrates an example of an erase service time period for executing resume, erase, and erase complete operations in memory sub-system110for erase operation management, in accordance with some embodiments of the present disclosure.
In an embodiment, erase service230B can be executed by the processing logic within erase service time period235B to perform at least a portion of an erase operation. Erase service time period235B can contain resume erase operation241to resume an erase operation that was previously suspended. On the other hand, if erase service time period235B is the first time period to be executed for performing the erase operation, erase resume operation241can be omitted. In one implementation, if erase resume operation241is omitted, the processing logic can adjust erase service time period235B (e.g., by making it shorter than an erase service time period where an erase resume operation is performed), to improve the overall time spent in processing the erase operation. In another implementation, if erase resume operation241is omitted, the processing logic can allow the erase operation242to be performed for a longer time, thus keeping erase service time period235B unchanged. When erase resume operation241completes, memory sub-system110can execute at least a portion of the erase operation at242. In this case, the erase operation242can execute the last portion of the full erase operation, followed by an erase complete operation243. In one implementation, the portion of time allocated for performing the erase operation242can be determined based on erase service time period235B, the time needed for erase resume operation241, if needed, and the time needed to complete the rest of the erase operation at erase complete operation243. For example, if erase service time period235B is 5000 microseconds, time needed for erase resume241is 200 microseconds, and time needed for erase complete243is 200 microseconds, then time allocated for erase operation242is 5000−200−200=4600 microseconds. In this example, erase operation242can complete the full erase operation within the 4600 microseconds, thus triggering the execution of erase complete operation243. In certain implementations, erase operation242can complete the overall erase operation in less time than the time allowed within erase service time period235B because the last portion of executing the overall erase operation can be shorter. In this case, memory sub-system110can trigger the execution of erase complete operation243sooner, causing the erase service time period235B to be shorter than the original time period that was configured by the controller, thus enabling improved latency of the overall execution of the full erase operation. Erase complete operation243can be performed by memory sub-system110to finalize the erase operation (e.g., to do any cleanup operations that can be required following an erase operation). After completing erase complete operation243, memory device130can be available for other memory access commands to be executed. In implementations, when erase complete operation243completes, it can send notification246to the controller, signaling that the current erase service time period235B can be terminated. When memory sub-system110receives the erase complete notification246, memory sub-system110can end erase service time period235B. In certain implementations, after the full erase operation is completed, memory sub-system110can send information related to the actual execution times of each time period to further optimize the configurable erase service time period and command service time period, as explained in more details herein below.
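A minimal sketch of the time-budget arithmetic in the two examples above, assuming the fixed sub-command costs are known to the processing logic (the function and parameter names are illustrative):

```python
# Time remaining for the erase portion of an erase service time period, after
# the fixed-cost sub-commands: an optional resume at the start, then either a
# suspend (FIG.2A) or an erase complete (FIG.2B) at the end.
def erase_slice_us(t_service_us, t_resume_us, t_tail_us):
    return t_service_us - t_resume_us - t_tail_us

assert erase_slice_us(5000, 200, 400) == 4400  # resume + suspend case (FIG.2A)
assert erase_slice_us(5000, 200, 200) == 4600  # resume + complete case (FIG.2B)
```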
In implementations, the erase complete notification246can be a message sent from memory device130to the controller, a change in a predetermined memory location that can be monitored by the controller, etc. FIG.3illustrates an example of an erase operation execution sequence in support of erase suspend and resume operations in memory device management, in accordance with some embodiments of the present disclosure. Memory sub-system110can enable deterministic execution of an erase operation by providing sequences of erase service time periods310A-C for executing the erase operation, interleaved with command service time periods320A-B for executing internal or external memory access commands on memory device130. The execution of erase service time periods310A-C and command service time periods320A-B can be managed internally by memory device130, whereas the duration of each time period and the total number of time periods can be managed by controller330. Controller330can refer to memory sub-system controller115ofFIG.1. In the illustrated example, controller330can define three erase service time periods310A-C and two command service time periods320A-B. In other examples, controller330can define other numbers of erase service time periods and command service time periods, based on factors including the number of times an erase operation can be suspended and resumed, the total time consumed in a full erase operation execution, the health of memory device130, etc. In certain implementations, memory sub-system110can define a new command and include the sequence of erase service time periods310A-C and command service time periods320A-B within the new command. In this case, controller330can send the new command to memory device130for processing and memory device130can manage the execution of the time periods within the new command. In other implementations, controller330can send configuration data to memory device130indicating, for example, the number of erase service time periods310, the number of command service time periods320, the duration of an erase service time period T1, and the duration of a command service time period T2. In this case, memory device130can manage the execution of alternating erase service time periods and command service time periods without intervention from controller330. In implementations, when memory sub-system110receives a request to perform an erase operation on memory device130, controller330can initiate an erase service time period310A at operation361. Erase service time period310A can have a predetermined duration T1assigned by the controller. Because this is the first erase service time period to be executed, erase service time period310A can start executing erase operation311, without first executing an erase resume operation. Erase operation311can execute at least a portion of the full erase operation. Following the execution of erase operation311, memory sub-system110can then execute erase suspend operation315within T1in order to enable the following command service time period320A to start. In one implementation, erase suspend operation315can send notification351when completed to controller330, to signal the end of erase service time period310A and that the following command service time period320A can start. At operation362, controller330can initiate a command service time period320A with duration T2to execute memory access commands C321-322.
Commands C321-322can be internal memory access commands (e.g., garbage collection operation, wear leveling operation, etc.). Commands C321-322can also be IO operations received by controller330(e.g., read operation, write operation). In implementations, memory device130can receive commands C321-322from controller330and can execute the commands within command service time period320A. In certain implementations, command C321can be queued during erase service time period310A so that when notification351is received, C321can start executing. In implementations, if memory device130does not receive commands from controller330and does not have internal memory access commands to execute within a configurable time period (e.g., 20 microseconds), command service time period320A can be aborted and the next erase service time period310B can be initiated. Otherwise, memory device130can execute commands C321-322within T2. When T2elapses, controller330can initiate erase service time period310B at operation363. Similar to erase service time period310A, erase service time period310B can have a predetermined duration T1assigned by controller330. Erase service time period310B can start with erase resume operation318to resume the previously suspended erase operation. When erase service time period310B starts, internal and external memory access commands can no longer be processed until the following command service time period is initiated. Memory sub-system110can then execute erase operation312to execute at least another portion of the full erase operation. Following the execution of erase operation312, memory sub-system110can then execute erase suspend operation316within T1, in order to enable the following command service time period320B to start. In one implementation, erase suspend operation316can send notification352to controller330when completed, to signal the end of erase service time period310B and that the following command service time period320B can start. At operation364, controller330can initiate a command service time period320B with duration T2to execute memory access commands C323-324. Commands C323-324can be internal memory access commands (e.g., garbage collection operation, wear leveling operation, etc.). Commands C323-324can also be IO operations initiated by controller330(e.g., read operation, write operation). Controller330determines the number of commands to be processed within command service time period320B based on a number of factors including the time it takes to complete each type of memory access command, the age and health of memory device130, the quality of service level assigned to memory sub-system110, etc. In implementations, memory device130can receive commands C323-324from controller330and can execute the commands within T2. When T2elapses, controller330can initiate the last erase service time period310C at operation365. At operation365, erase service time period310C can have a predetermined duration T1assigned by controller330. Erase service time period310C can start with erase resume operation319to resume the previously suspended erase operation. When erase resume operation319completes, memory sub-system110can execute erase operation313to execute the last portion of the full erase operation. When erase operation313completes, memory sub-system110can detect that the full erase operation is complete and can execute erase complete operation331within T1, signaling the completion of the full erase operation to controller330.
In one implementation, erase complete operation331can send notification353to controller330when completed, to signal the completion of the full erase operation. Further, in an implementation, memory sub-system110can send feedback355to controller330upon the completion of the erase operation. Feedback355can be data related to the actual execution time of the erase service time periods and the command service time periods. Controller330can then compare configured versus actual values for the erase service time period and the command service time period and adjust the configured values accordingly. For example, if configured T1for erase service time periods310A-C is 10 milliseconds but the actual time for completing erase service time periods310A-C was between 8 to 8.5 milliseconds, controller330can adjust the configured T1to be 8.5 milliseconds. The next erase operation can then be executed using the new T1configured value of 8.5 milliseconds. The ability to adjust service time periods enables the solution to accommodate the deteriorating physical characteristics of memory device130, or to accommodate a change in the command queue depth of memory sub-system110, for example. FIG.4is a flow diagram of an example method of executing an erase operation and memory access commands in support of erase operation management in a memory sub-system, in accordance with some embodiments of the present disclosure. The method400can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method400is performed by erase operation management component113ofFIG.1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible. At operation410, the processing logic receives a request to perform an erase operation on memory device130. The erase operation can be suspended to enable the execution of memory access commands and then later resumed, as explained in more details above. At operation420, the processing logic can execute a portion of the erase operation during a first time period. As explained above, the first time period can be an erase service time period that is configurable by the controller. The portion of the erase operation and an erase suspend operation can be performed within the erase service time period. At operation430, the processing logic can execute an erase suspend operation to suspend the erase operation within the first time period. In certain implementations, the first time period can be configured to account for the maximum time an erase suspend operation can take, as explained in more details herein above. In implementations, when the suspend operation completes, the processing logic can send a notification to the controller indicating that the suspend operation is complete. At operation440, upon detecting that the suspend operation is complete, the processing logic can start processing a command service time period.
During a command service time period, the processing logic can receive memory access commands directed to the memory device from the controller. At operation450, the processing logic can execute the memory access commands at the memory device during a second time period (e.g., a command service time period). In an implementation, the memory access commands can be read commands, write commands, garbage collection operations, etc., as explained in more details herein above. At operation460, the processing logic can detect that the second time period has expired. Consequently, the processing logic can execute an erase resume operation to resume execution of the suspended erase operation. In certain implementations, the processing logic can start an erase service time period, execute the erase resume operation, then execute the erase operation, as explained in more details herein above. FIG.5is a flow diagram of another example method of executing an erase operation and memory access commands in support of erase operation management in a memory sub-system, in accordance with some embodiments of the present disclosure. The method500can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method500is performed by erase operation management component113ofFIG.1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible. At operation510, the processing logic executes a first erase service time period. In embodiments, the processing logic can execute at least a portion of the full erase operation during the first erase service time period, followed by an erase suspend operation, as explained in more details herein above. At operation520, when the processing logic detects that the erase suspend operation is complete, the processing logic can start a command service time period to execute one or more memory access commands. At operation530, if there are no internal commands for processing (e.g., garbage collection, wear leveling, etc.), the processing logic can wait for IO commands to be received from the controller. In implementations, the processing logic can wait for commands for a predetermined duration of time, as explained in more details herein above. At operation535, the processing logic can determine whether IO commands have been received from the controller for processing during the predetermined duration of time. If the duration of time elapses and no commands have been received, the processing logic, at operation550, can determine that no memory access commands are ready to be performed and can terminate the command service time period to preserve processing time. At operation560, the processing logic can further execute the next erase service time period, such that the next portion of the erase operation can be executed.
On the other hand, at operation540, if the processing logic determines that IO commands have been received from the controller, the processing logic can execute the IO commands within the command service time period, as explained in more details herein above. FIG.6illustrates an example machine of a computer system600within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed. In some embodiments, the computer system600can correspond to a host system (e.g., the host system120ofFIG.1) that includes, is coupled to, or utilizes a memory sub-system (e.g., the memory sub-system110ofFIG.1) or can be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to erase operation management component113ofFIG.1). In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment. The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The example computer system600includes a processing device602, a main memory604(e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory606(e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system618, which communicate with each other via a bus630. Processing device602represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device602can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device602is configured to execute instructions626for performing the operations and steps discussed herein. The computer system600can further include a network interface device608to communicate over the network620. The data storage system618can include a machine-readable storage medium624(also known as a computer-readable medium) on which is stored one or more sets of instructions626or software embodying any one or more of the methodologies or functions described herein. 
The instructions626can also reside, completely or at least partially, within the main memory604and/or within the processing device602during execution thereof by the computer system600, the main memory604and the processing device602also constituting machine-readable storage media. The machine-readable storage medium624, data storage system618, and/or main memory604can correspond to the memory sub-system110ofFIG.1. In one embodiment, the instructions626include instructions to implement functionality corresponding to erase operation management component113ofFIG.1. While the machine-readable storage medium624is shown in an example embodiment to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems. The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. 
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein. The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory components, etc. In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. | 50,618 |
11861208 | DETAILED DESCRIPTION Aspects of the present disclosure are directed to performing data operations on grouped memory cells, which can be part of a memory sub-system. A memory sub-system can be a storage device, a memory module, or a hybrid of a storage device and memory module. Examples of storage devices and memory modules are described below in conjunction withFIG.1. In general, a host system can utilize a memory sub-system that includes one or more memory components (also hereinafter referred to as “memory devices”). The host system can provide data to be stored at the memory sub-system and can request data to be retrieved from the memory sub-system. A memory device can be a non-volatile memory device. One example of a non-volatile memory device is a negative-and (NAND) memory device. Other examples of non-volatile memory devices are described below in conjunction withFIG.1. Some memory devices, such as NAND memory devices, include an array of memory cells (e.g., flash cells) to store data. Each cell includes a transistor, and within each cell, data is stored as the threshold voltage of the transistor, based on the logical value of the cell (e.g., 0 or 1). Memory cells in these devices can be grouped as pages that can refer to a logical unit of the memory device used to store data. For example, memory cells in NAND memory devices are connected horizontally at their control gates to a word line to form a page. With some types of memory devices (e.g., NAND), pages are grouped to form blocks (also referred to herein as “memory blocks”). The host system can send access requests (e.g., write command, read command) to the memory sub-system, such as to store data on a memory device at the memory sub-system, read data from the memory device on the memory sub-system, or read/write constructs with respect to a memory device on the memory sub-system. The data to be read or written, as specified by a host request, is hereinafter referred to as “host data.” A host request can include logical address information (e.g., logical block address (LBA), namespace) for the host data, which is the location the host system associates with the host data. The logical address information (e.g., LBA, namespace) can be part of metadata for the host data. Metadata can include error handling data (e.g., error-correcting code (ECC) codeword, parity code), data version (e.g., used to distinguish age of data written), valid bitmap (which LBAs or logical transfer units contain valid data), and so forth. Data operations can be performed by the memory sub-system. The data operations can be host-initiated operations. For example, the host system can initiate a data operation (e.g., write, read, erase, etc.) on a memory sub-system. The host system can send access requests (e.g., write command, read command) to the memory sub-system, such as to store data on a memory device at the memory sub-system and to read data from the memory device on the memory sub-system. The memory sub-system can initiate media management operations, such as a write operation, on host data that is stored on a memory device. For example, firmware of the memory sub-system can re-write previously written host data from a location of a memory device to a new location as part of garbage collection management operations. The data that is re-written, for example, as initiated by firmware, is hereinafter referred to as “garbage collection data.” “User data” hereinafter generally refers to host data and garbage collection data. 
“System data” hereinafter refers to data that is created and/or maintained by the memory sub-system for performing operations in response to host request and for media management. Examples of system data include, and are not limited to, system tables (e.g., logical-to-physical (L2P) memory address mapping table (also referred to herein as a L2P table)), data from logging, scratch pad data, and so forth. A memory device can be a non-volatile memory device. A non-volatile memory device is a package of one or more die. Each die can be comprised of one or more planes. For some types of non-volatile memory devices (e.g., negative-and (NAND)-type devices), each plane is comprised of a set of physical blocks. For some memory devices, blocks are the smallest areas that can be erased. Each block is comprised of a set of pages. Each page is comprised of a set of memory cells, which store bits of data. The memory devices can be raw memory devices (e.g., NAND), which are managed externally, for example, by an external controller. The memory devices can be managed memory devices (e.g., managed NAND), which are a raw memory device combined with a local embedded controller for memory management within the same device package. A traditional computer system, such as a conventional supercomputer, can perform operations on memory units storing integer numbers of bits of data. Memory cells (e.g., flash memory cells) store data by applying a specified voltage or charge level to the memory cell. The stored charge level is indicative of a bit representation of the memory cell. A single-layer cell can store two charge levels indicating either a 0 or a 1. The single-layer cell can thus store one bit of data. As memory cells become more complex to store more bits of data, the number of charge levels increases by a power of 2. Physical limitations of memory cells make it difficult to reliably increase the number of charge levels to store greater numbers of bits. For example, a multi-level cell (MLC) has four charge levels and can store two bits of data. A triple-level cell (TLC) has eight charge levels and can store three bits of data. A quarto-level cell (QLC) has sixteen charge levels and can store four bits of data. The greater the number of charge levels per cell and the greater number of bit representations, the cell density increases. However, physical limitations of a memory cell make it difficult to differentiate between the charge levels and the memory cells wear out faster. Due to the increase of data density, electrical charge leakage may occur and cause data corruption. For a memory cell such as a penta-level cell (PLC), it is incredibly difficult to differentiate between thirty-two charge levels. Although it is desired to have a singular memory cell storing four, five, or more bits of data, conventional memory cells do not have the reliability needed for such cells to be useful. Parts of the present disclosure address the above and other issues by performing various data operations on a grouped memory cell. In particular, various embodiments enable the memory device to store an integer number of bits of data without sacrificing reliability based on a high number of charge levels per individual memory cell. By use of various embodiments, performing data operations on grouped memory cells can be performed on a memory device or a memory sub-system. Accordingly, some embodiments can provide the ability to store higher volumes of data without needing to add physical memory cells. 
With respect to transactional memory, a data operation mechanism can be used to enable a memory device or a memory sub-system to virtually group two or more memory cells together to create a grouped cell with the ability to store an integer number of bits of data. The integer number of bits of data is higher than the capacity of each individual memory cell prior to grouping. In this way, a memory device of various embodiments can store more data without sacrificing reliability. Though various embodiments are described herein with respect to a memory sub-system controller, some embodiments implement features described herein (e.g., operations for reading data, writing data) as part of a memory device (e.g., a controller, processor, or state machine of a memory die). For instance, various embodiments implement read operations as part of a controller, processor, or state machine for each bank within a memory device. Benefits include the ability to leverage the stable memory cell charge level capacities to create a group that can store a higher integer number of bits than each of the individual memory cells alone. FIG.1illustrates an example computing environment100that includes a memory sub-system110in accordance with some embodiments of the present disclosure. The memory sub-system110can include media, such as one or more volatile memory devices (e.g., memory device140), one or more non-volatile memory devices (e.g., memory device140), or a combination of such. A memory sub-system110can be a storage device, a memory module, or a hybrid of a storage device and memory module. Examples of a storage device include a solid-state drive (SSD), a flash drive, a universal serial bus (USB) flash drive, an embedded Multi-Media Controller (eMMC) drive, a Universal Flash Storage (UFS) drive, and a hard disk drive (HDD). Examples of memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), and a non-volatile dual in-line memory module (NVDIMM). The computing environment100can include a host system120that is coupled to one or more memory sub-systems110. In some embodiments, the host system120is coupled to different types of memory sub-system110.FIG.1illustrates one example of a host system120coupled to one memory sub-system110. The host system120uses the memory sub-system110, for example, to write data to the memory sub-system110and read data from the memory sub-system110. As used herein, "coupled to" generally refers to a connection between components, which can be an indirect communicative connection or direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc. The host system120can include a processor chipset and a software stack executed by the processor chipset. The processor chipset can include one or more cores, one or more caches, a memory controller (e.g., NVDIMM controller), and a storage protocol controller (e.g., a peripheral component interconnect express (PCIe) controller, serial advanced technology attachment (SATA) controller). The host system120can be a computing device such as a desktop computer, laptop computer, network server, mobile device, or such computing device that includes a memory and a processing device. The host system120can be coupled to the memory sub-system110via a physical host interface.
Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, universal serial bus (USB) interface, Fiber Channel, Serial Attached SCSI (SAS), Small Computer System Interface (SCSI), a double data rate (DDR) memory bus, a dual in-line memory module (DIMM) interface (e.g., DIMM socket interface that supports Double Data Rate (DDR)), Open NAND Flash Interface (ONFI), Low Power Double Data Rate (LPDDR), or any other suitable interface. The physical host interface can be used to transmit data between the host system120and the memory sub-system110. The host system120can further utilize an NVM Express (NVMe) interface to access the memory components (e.g., memory device140) when the memory sub-system110is coupled with the host system120by the PCIe interface. The physical host interface can provide an interface for passing control, address, data, and other signals between the memory sub-system110and the host system120.FIG.1illustrates a memory sub-system110as an example. In general, the host system120can access multiple memory sub-systems via a same communication connection, multiple separate communication connections, and/or a combination of communication connections. The memory device140can include any combination of the different types of non-volatile memory devices and/or volatile memory devices. The volatile memory devices (e.g., memory device140) can be, but are not limited to, random access memory (RAM), such as dynamic random-access memory (DRAM) and synchronous dynamic random-access memory (SDRAM). Some examples of non-volatile memory devices (e.g., memory device140) include a negative-and (NAND) type flash memory and write-in-place memory, such as a three-dimensional cross-point ("3D cross-point") memory device, which is a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. NAND type flash memory includes, for example, two-dimensional NAND (2D NAND) and three-dimensional NAND (3D NAND). Each of the memory devices140can include one or more arrays of memory cells such as single level cells (SLCs) or multi-level cells (MLCs) (e.g., triple level cells (TLCs), quad-level cells (QLCs), and penta-level cells (PLCs)), which can store multiple bits per cell. In some embodiments, each of the memory devices140can include one or more arrays of memory cells such as SLCs, MLCs, TLCs, QLCs, or any combination of such. In some embodiments, a particular memory component can include an SLC portion, and an MLC portion, a TLC portion, or a QLC portion of memory cells. The memory cells of the memory devices140can be grouped as pages or memory blocks that can refer to a unit of the memory component used to store data. With some types of memory (e.g., NAND), pages can be grouped to form blocks.
Although non-volatile memory components such as NAND type flash memory (e.g., 2D NAND, 3D NAND) and 3D cross-point array of non-volatile memory cells are described, the memory device140can be based on any other type of non-volatile memory, such as read-only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide-based memories, ferroelectric transistor random-access memory (FeTRAM), ferroelectric random access memory (FeRAM), magneto random access memory (MRAM), Spin Transfer Torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide-based RRAM (OxRAM), negative-or (NOR) flash memory, and electrically erasable programmable read-only memory (EEPROM). The memory sub-system controller115can communicate with the memory devices140and/or memory component130to perform operations such as reading data, writing data, or erasing data at the memory devices140and other such operations. The memory sub-system controller115can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The memory sub-system controller115can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor. The memory sub-system controller115can include a processor (processing device)117configured to execute instructions stored in local memory119. In the illustrated example, the local memory119of the memory sub-system controller115includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory sub-system110, including handling communications between the memory sub-system110and the host system120. In some embodiments, the local memory119can include memory registers storing memory pointers, fetched data, and so forth. The local memory119can also include read-only memory (ROM) for storing micro-code. While the example memory sub-system110inFIG.1has been illustrated as including the memory sub-system controller115, in another embodiment of the present disclosure, a memory sub-system110may not include a memory sub-system controller115, and can instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory sub-system). In general, the memory sub-system controller115can receive commands or operations from the host system120and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory component130and/or the memory device140. The memory sub-system controller115can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical block address and a physical block address that are associated with the memory devices140. The memory sub-system controller115can further include host interface circuitry to communicate with the host system120via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access the memory devices140as well as convert responses associated with the memory devices140into information for the host system120. 
The memory sub-system110can also include additional circuitry or components that are not illustrated. In some embodiments, the memory sub-system110can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the memory sub-system controller115and decode the address to access the memory devices140. In some embodiments, the memory devices140include local media controllers135that operate in conjunction with memory sub-system controller115to execute operations on one or more memory cells of the memory devices140. An external controller (e.g., memory sub-system controller115) can externally manage the memory device140(e.g., perform media management operations on the memory device140). In some embodiments, a memory device140is a managed memory device, which is a raw memory device combined with a local controller (e.g., local media controller135) for media management within the same memory device package. An example of a managed memory device is a managed NAND (MNAND) device. The memory sub-system controller115includes mapping matrix component113that can provide and/or generate mapping information of charge levels for grouped memory cells corresponding to their bit representations on a memory device (e.g., memory device140). The mapping matrix component113can enable the memory sub-system110(via the memory sub-system controller115) to perform operations such as read and write memory operations. The memory sub-system can maintain matrices storing assignments of charge levels to bit representations for numerous groups of memory cells. By utilizing the groups of memory cells and the matrix representation of a mapping of charge levels to bit representations, more data can be stored with the same number of memory cells than in a conventional memory storage device. Additionally, each memory cell can effectively store (via a charge level of the memory cell) a non-integer number of bits (e.g., X.5 number of bits). The mapping matrix component113can store some or all mapping information for grouped memory cells of an individual memory device. Mapping matrix component113can correspond with memory cell group component109to locate, read from, or program to the requested memory cell group. Additionally, the memory cell group and mapping matrix can store more data in the given physical memory cells than conventional systems having the same number of physical memory cells. Further details with regards to the operations of the mapping matrix component113and memory cell group component109are described below. An example of this is illustrated and described herein with respect toFIG.2. FIG.2is an illustrative matrix mapping200of a first group of cells. As shown,FIG.2depicts a matrix mapping200of a first cell204(e.g., X-cell) and a second cell206(e.g., Y-cell) in a group of memory cells202, which may also be referred to herein as a "supercell". In the example, both the first cell204and the second cell206support three charge levels (0, 1, 2). The combined charge levels can be used to represent three bits (e.g., columns bit0, bit1, and bit2;208). The charge levels for the first cell204and the second cell206are combined into a "supercell," capable of nine distinct charge levels (0 through 8), allowing the "supercell" to represent three bits. The last row of the matrix mapping200is shown in a gray shade.
This row can be deemed a "don't care" charge level as the number of charge levels of the group of memory cells202exceeds the number of charge levels needed to represent three bits. In some embodiments, charge level 8 of the group of cells202is deemed the "don't care" level. In some embodiments, any of the charge levels of the group of cells is deemed to be the "don't care" level. The number of "don't care" levels can correspond to the number of bits represented and the number of charge levels in the group of cells. FIG.3A-3Billustrates a matrix mapping300for a group of cells. As shown inFIG.3A, a first matrix302is a 9×9 matrix for a mapping of 3.5 bits per cell, or 7 bits per group. This first matrix302can be manipulated to create a mapping of 4.5 bits per cell, or 9 bits per group. By applying one or more matrix functions, the second matrix304can be generated to represent a mapping for a group of cells with a higher bit level storage capacity. For example, the first matrix302is organized into equal quadrants of 3×3 matrices. A matrix multiplication by an integer number is applied to the top left302-1matrix. In succession, the same matrix multiplication by the integer number is applied to the top right302-2matrix with an addition of a first integer number. The same matrix multiplication by the integer number is applied to the bottom left302-3matrix with an addition of a second integer number. The same matrix multiplication by the integer number is applied to the bottom right302-4matrix with an addition of a third integer number. In the matrix mapping300, the top left302-1matrix is multiplied by four, the top right302-2matrix is multiplied by four with one added, the bottom left302-3matrix is multiplied by four with two added, and the bottom right302-4matrix is multiplied by four with three added. The resulting second matrix304is a mapping of 4.5 bits per cell, or 9 bits per group of memory cells (e.g., per supercell). As described with greater detail inFIGS.5A-5C, matrix mappings such as these can be manipulated to create new matrix mappings as well as mappings for compatibility with read and/or write operations. FIG.3Billustrates the second matrix304(e.g., second matrix304,FIG.3A) that is further manipulated into a third matrix306. InFIG.3B, the second matrix and the third matrix both represent a mapping of 4.5 bits per cell, or 9 bits per group of memory cells. A matrix operation such as a circular shift operation by 1 is applied to the rows and columns of the second matrix304to generate the third matrix306. In some embodiments, the mapping assignment matrix satisfies the constraint {0, 1, 2, . . . , 2^k−1} ⊂ unique(Σ_p 2^p M_p), where each M_p is a matrix having a size of L×L, L is the number of charge levels, and k is the number of assignments. FIG.4A-4Cis a flow diagram of an example method400to perform data operations on a group of memory cells in accordance with some embodiments of the present disclosure. The method400can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method400is performed by the mapping matrix component113ofFIG.1alone or in combination with memory cell group component109ofFIG.1.
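The quadrant construction and the constraint just described can be checked with a short script. The sketch below is illustrative only: it applies the multiply-by-four block operation to the small three-level mapping given later in connection with FIG.5 (rather than to the 9×9 matrix of FIG.3A), treats the mapping as a single matrix of values rather than per-bit-plane matrices M_p, and assumes numpy; the helper names are hypothetical.

```python
import numpy as np

# Block construction: from a mapping matrix m for L-level cells encoding k bits
# per two-cell group, build a 2L-level mapping encoding k+2 bits by tiling the
# four quadrants 4m, 4m+1, 4m+2, and 4m+3.
def expand(m):
    return np.block([[4 * m,     4 * m + 1],
                     [4 * m + 2, 4 * m + 3]])

# Constraint check: every bit pattern 0 .. 2^k - 1 must appear in the matrix;
# surplus entries are "don't care" assignments.
def satisfies_constraint(m, k):
    return set(range(2 ** k)) <= set(int(v) for v in np.unique(m))

m3 = np.array([[7, 6, 4],   # 3-level cells: 1.5 bits/cell, 3 bits/group
               [5, 2, 0],
               [1, 3, 2]])
m6 = expand(m3)             # 6-level cells: 2.5 bits/cell, 5 bits/group
m12 = expand(m6)            # 12-level cells: 3.5 bits/cell, 7 bits/group
assert satisfies_constraint(m3, 3)
assert satisfies_constraint(m6, 5)
assert satisfies_constraint(m12, 7)
```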
Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible. At operation405, the processing device receives a request to perform a data operation associated with at least one memory unit. The memory device can include a plurality of memory units, each memory unit including a first group of memory cells. Each memory cell in the first group of memory cells supports a specified number of charge levels such that each charge level represents a non-integer number of bits. The first group of memory cells represents a first sequence of bits based on a first sequence of charge levels stored by the first group of memory cells, and the first sequence of bits has an integer number of bits. In some embodiments of the present disclosure, the first group of memory cells includes two or more memory cells. Each memory cell supports the same number of charge levels. For example, a first and a second memory cell can each support 23 charge levels, which can enable the first and the second memory cells to each represent a non-integer number of bits (e.g., 4.5 bits per each of the first and second memory cells). For example, where a first group of memory cells comprises a first and a second memory cell and each of the first and second memory cells supports 23 charge levels, the first group of memory cells supports 529 different sequences of charge levels. The first sequence of bits comprises 9 bits of data. The memory device can include 16 KB word lines of data, and each word line is represented by 9 pages of 8 KB of data. In some embodiments, each of the first and the second memory cells supports 24 charge levels and the first group of memory cells supports 576 different sequences of charge levels. In some embodiments, the first group of memory cells includes a first and a second memory cell, where each of the first and second memory cells supports 3 charge levels and represents an individual sequence of 1.5 bits of data. In some other embodiments, each of the first and second memory cells supports 6 charge levels and represents 2.5 bits of data. In some other embodiments, each of the first and second memory cells supports 12 charge levels and represents 3.5 bits of data. In some other embodiments, each of the first and second memory cells supports 23 or 24 charge levels and represents 4.5 bits of data. At operation406-A ofFIG.4B, the processing device reads the sequence of charge levels from the first group of memory cells. The processing device determines, at operation407-A, the first sequence of bits corresponding to the first sequence of charge levels based on the mapping stored on the memory device. The processing device may include numerous mappings stored on the system for distinct groupings of memory cells. It is understood that although memory cells are referred to as grouped memory cells, the memory cells are not necessarily in the same physical vicinity of each other. At operation408-A, the processing device performs the data operation at least in part by providing the first sequence of bits in response to the request.
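A minimal sketch of this read-side lookup for a two-cell group, with the inverse write-side lookup (described next for FIG.4C) included for symmetry. The mapping is assumed to be an L×L matrix of the kind shown inFIG.2; the function and parameter names are illustrative.

```python
# Read path (FIG.4B): the pair of charge levels read from the two cells indexes
# the stored mapping matrix to recover the k-bit sequence.
def read_group(mapping, x_level, y_level, k):
    value = mapping[x_level][y_level]
    return format(value, f"0{k}b")          # k-bit sequence returned to the host

# Write path (FIG.4C): an inverse lookup finds a charge-level pair assigned to
# the requested bit sequence; those levels are then programmed into the cells.
def write_group(mapping, bits):
    value = int(bits, 2)
    for x, row in enumerate(mapping):
        for y, v in enumerate(row):
            if v == value:
                return (x, y)
    raise ValueError("no charge-level pair assigned to this bit sequence")

# For example, with a 23x23 mapping (529 level pairs), k = 9 bits per group,
# i.e., 4.5 bits per cell.
```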
InFIG.4C, the processing device determines, at operation406-B, a second sequence of charge levels corresponding to a second sequence of bits to be written to a second group of memory cells of the at least one memory unit based on the mapping stored on the system. Each memory cell in the second group of memory cells supports the specified number of charge levels such that each charge level can represent the non-integer number of bits. At operation407-B, the processing device performs the data operation at least in part by causing the second group of memory cells to store the second sequence of charge levels. In some embodiments, the second group of memory cells stores the second sequence of charge levels by applying a voltage to the memory cells as indicated by the second sequence of charge levels. Returning to the final operation410inFIG.4A, the processing device performs the data operation with respect to at least one memory unit based on the mapping stored on the system. The mapping assigns an individual sequence of charge levels, stored by an individual group of memory cells, to an individual sequence of bits represented by the individual group of memory cells. In some embodiments, the mapping assigns the individual sequence of charge levels to the individual sequence of bits such that it satisfies a specified Gray code constraint, or falls below a specified Gray code penalty. Gray code is a particular mapping of bits to symbols (e.g., charge levels) that minimizes the Hamming distance (the number of differing bits) between two adjacent symbols (e.g., charge levels). The Gray code constraint is the number of bit flips per symbol error, with an error rate (e.g., Gray code penalty) of 1. Operations405through410may be repeated as desired. FIG.5A-5Cis a flow diagram of an example method500to prepare mapping matrices containing a mapping of charge levels to bit representations on a group of memory cells in accordance with some embodiments of the present disclosure. The method500can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method500is performed by the mapping matrix component113ofFIG.1alone or in combination with memory cell group component109ofFIG.1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible. At operation505, the processing device generates a first matrix representing a first mapping of sequences of charge levels to sequences of bits for a first group of two memory cells. Each memory cell of the first group supports two charge levels and represents one bit of data. In some embodiments, the generated first matrix representation is: [0 1; 2 3]. At operation510, the processing device generates a second matrix representing a second mapping of sequences of charge levels to sequences of bits for a second group of two memory cells.
Each memory cell of the second group supports three charge levels and represents 1.5 bits of data. In some embodiments, the generated second matrix representation is:

[7 6 4]
[5 2 0]
[1 3 2]

At operation515, the processing device applies a matrix operation to the first and second matrices. At operation516ofFIG.5B, the processing device generates a fourth matrix representing a fourth mapping of sequences of charge levels to sequences of bits for a third group of two memory cells. Each memory cell of the third group supports 12 charge levels and represents 3 bits of data. At operation517, the processing device applies a second matrix operation to the second and fourth matrices. For example, a matrix operation such as a Kronecker Product operation is applied between the second and fourth matrices to generate the fifth matrix (see operation518); a sketch of such a composition is given below. At operation518, the processing device generates a fifth matrix representing a fifth mapping of 529 sequences of charge levels representing 9 bits of data. FIG.5Cdescribes operations pertaining to the fifth matrix generated at operation518inFIG.5B. At operation519-1ofFIG.5C, the processing device separates the fifth matrix into equal quadrants. For example, a matrix such as matrix302shown inFIG.3Ais separated into four equal quadrants of 3×3 matrices. At operation519-2, the processing device applies one or more matrix operations to each quadrant. In some embodiments, the second matrix is a mapping of 1 bit per cell and the fourth matrix is a mapping of 1.5 bits per cell, or 3 bits per group. The resulting fifth matrix is a mapping of 2.5 bits per cell, or 5 bits per group. In another example, the second matrix is a mapping of a conventional 3 bits per cell and the fourth matrix is a mapping of 1.5 bits per cell, or 3 bits per group. The resulting fifth matrix is a mapping of 4.5 bits per cell, or 9 bits per group. In other words, a matrix mapping for X.5 bits per cell can be generated by combining a mapping of A bits per cell with a mapping of B.5 bits per cell, where A+B=X. At operation519-3, the processing device generates a new version of the fifth matrix. In some embodiments, the mapping of the fifth matrix is manipulated to create a new version of the fifth matrix that includes features allowing the processing device to quickly read data stored in the fifth group of cells. In some embodiments, a circular shift operation is applied to the fifth matrix to generate the new version of the fifth matrix. The processing device generates, at operation520, a third matrix representing a third mapping of sequences of charge levels to sequences of bits for a third group of two memory cells. Each memory cell of the third group supports six charge levels and represents 2.5 bits of data. In some embodiments, the third mapping of sequences represents Gray code for the third group of memory cells. At operation525, the processing device stores each matrix on a memory device coupled to the processing device. In some embodiments, the processing device stores any combination of the first, the second, the third, the fourth, and the fifth matrices. FIG.6provides an interaction diagram illustrating interactions between components of a computing environment in the context of some embodiments in which a method that uses allocation techniques of data on a memory device as described herein is performed.
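Returning to the matrix composition of operations515through518and to the Gray code penalty discussed above, the following sketch illustrates one plausible construction under assumed structures: the composition mirrors the block structure of a Kronecker product by concatenating bit patterns, and the penalty function counts bit flips beyond one between adjacent charge levels. The example matrices are the illustrative first and second matrices shown above, not the patent's exact data.

```python
# Hedged sketch: compose two mapping matrices (Kronecker-style block
# structure, concatenating bit patterns) and score the result against
# the Gray code constraint. All matrices here are illustrative.

import numpy as np

def combine_mappings(m_high, m_low, low_bits):
    """Entry (i1*r2+i2, j1*c2+j2) concatenates m_high[i1,j1] and m_low[i2,j2]."""
    ones_low = np.ones(m_low.shape, dtype=int)
    ones_high = np.ones(m_high.shape, dtype=int)
    return np.kron(m_high << low_bits, ones_low) + np.kron(ones_high, m_low)

def gray_penalty(m):
    """Extra bit flips (beyond one) between adjacent charge levels."""
    flips = lambda a, b: bin(a ^ b).count("1")
    rows, cols = m.shape
    return sum(max(0, flips(m[r][c], m[r][c + 1]) - 1)
               for r in range(rows) for c in range(cols - 1)) + \
           sum(max(0, flips(m[r][c], m[r + 1][c]) - 1)
               for r in range(rows - 1) for c in range(cols))

one_bit = np.array([[0, 1], [2, 3]])                        # 1 bit per cell
three_level = np.array([[7, 6, 4], [5, 2, 0], [1, 3, 2]])   # 1.5 bits per cell
fifth = combine_mappings(one_bit, three_level, 3)
print(fifth.shape, gray_penalty(fifth))  # shape (6, 6): 6 levels/cell, 2.5 bits/cell
```

With the mapping matrices prepared, the interaction among the host system, the memory sub-system controller, and the memory device shown inFIG.6can be described.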
The operations of the method can be performed by processing logic that can include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method is performed by a host system (e.g., host system120), a memory sub-system controller (e.g., memory sub-system controller115), a memory device (e.g., memory device140), or some combination thereof. Although the operations are shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, and/or alternatively, one or more processes can be omitted in various embodiments. Thus, not all processes are used in every embodiment. In the context of the example illustrated inFIG.6, the host system can include the host system120, the memory sub-system controller can include the memory sub-system controller115, and the memory device can include the memory device140. As shown inFIG.6, at operation602, the host system sends a memory command to the memory sub-system110in association with a memory unit. At operation604, the memory sub-system controller115receives the memory command associated with a request to perform a data operation. The memory device, at operation606, provides a mapping assigning individual sequences of charge levels to individual sequences of bit representations for memory units of the memory device. In response to the memory command received at operation604, the memory sub-system controller115performs the requested data operation. The data operation is performed based on the mapping assignment of the memory device. Based on the mapping, the first sequence of bits corresponding to the first sequence of charge levels (or, for a write, the first sequence of charge levels corresponding to the first sequence of bits) is determined. In accordance with a received read memory command, the memory device at operation610reads the sequence of charge levels from the group of memory cells associated with the memory command and provides the first sequence of bits corresponding to the first sequence of charge levels. In accordance with a received write memory command, the memory device at operation612causes the group of memory cells to store the first sequence of charge levels. At operation614, after the performance of the requested data operation is completed by the memory sub-system controller115, the host system receives an indication associated with performance of the memory command. The host system can decide to repeat the steps of602-614by providing one or more memory commands associated with memory units to the memory sub-system controller115. FIG.7illustrates an example machine of a computer system700within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed. In some embodiments, the computer system700can correspond to a host system (e.g., the host system120ofFIG.1) that includes, is coupled to, or utilizes a memory sub-system (e.g., the memory sub-system110ofFIG.1) or can be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to the mapping matrix component113ofFIG.1).
In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment. The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The example computer system700includes a processing device702, a main memory704(e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or RDRAM, etc.), a static memory706(e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system718, which communicate with each other via a bus730. Processing device702represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device702can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device702is configured to execute instructions726for performing the operations and steps discussed herein. The computer system700can further include a network interface device708to communicate over the network720. The data storage system718can include a machine-readable storage medium724(also known as a computer-readable medium) on which is stored one or more sets of instructions728or software embodying any one or more of the methodologies or functions described herein. The instructions728can also reside, completely or at least partially, within the main memory704and/or within the processing device702during execution thereof by the computer system700, the main memory704and the processing device702also constituting machine-readable storage media. The machine-readable storage medium724, data storage system718, and/or main memory704can correspond to the memory sub-system110ofFIG.1. In one embodiment, the instructions726include instructions to implement functionality corresponding to a memory cell group component (e.g., the memory cell group component109and/or the mapping matrix component113ofFIG.1). While the machine-readable storage medium724is shown in an example embodiment to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media that store the one or more sets of instructions.
The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems. The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein. 
The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory components, etc. In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
11861209 | DETAILED DESCRIPTION Specific structural or functional descriptions of embodiments according to the concept which are disclosed in the present specification or application are illustrated only to describe the embodiments according to the concept of the present disclosure. The embodiments according to the concept of the present disclosure may be carried out in various forms and should not be construed as being limited to the embodiments described in the present specification or application. FIG.1illustrates a memory system1000according to an embodiment of the present disclosure. The memory system1000may be configured to store data, output the stored data to a host2000, or erase the stored data, in response to a request of the host2000. The host2000may be an electronic device such as a mobile phone or a computer, or may be a processor used in an electronic device. The memory system1000may include a memory device1100in which data is stored, a system memory1200configured to store information used in the memory system1000, and a controller1300configured to control the memory device1100and the system memory1200. The memory device1100may include a memory cell array1110configured to store data, and may further include peripheral circuits (not shown) configured to perform a program, read, or erase operation under control of the controller1300. The system memory1200may be configured to temporarily store information used in the memory system1000. For example, the system memory1200may store mapping information between addresses used in the host2000and addresses used in the memory device1100, and temporarily store data transmitted between the controller1300and the memory device1100. In embodiments of the memory system1000, the system memory1200may include a volatile memory for fast operation, a nonvolatile memory, or both. For example, the volatile memory may be a dynamic random access memory (DRAM) or a static random access memory (SRAM), and the nonvolatile memory may be an electrically erasable and programmable ROM (EEPROM), a NAND flash memory, a NOR flash memory, a phase-change RAM (PRAM), a resistive RAM (ReRAM), a ferroelectric RAM (FRAM), or a spin transfer torque-magnetic RAM (STT-MRAM). The controller1300may be configured to control the memory device1100and the system memory1200according to requests of the host2000. The controller1300may be configured to control transmission of data between the host2000, the memory device1100, and the system memory1200. The controller1300may map a logical address used in the host2000and a physical address used in the memory device1100to each other, and may change the mapped addresses. The controller1300may store the mapped address in the system memory1200and may find or change the address stored in the system memory1200. In addition, the controller1300may be configured to activate a background mode for efficiently managing the memory device1100, and execute various operations such as garbage collection or wear leveling in the background mode, when a request of the host2000is not being processed. The controller1300may be configured to transmit a read command to the memory device1100, and detect and correct an error in data output from the memory device1100, when performing the read operation according to a read request of the host2000, garbage collection, wear leveling, or read reclaim.
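As a hedged, minimal sketch of the logical-to-physical address mapping that the controller1300keeps in the system memory1200as described above, consider the following; the dictionary layout and names are illustrative assumptions rather than the actual data structures.

```python
# Hedged sketch of logical-to-physical address mapping kept in system
# memory: the controller records where each host (logical) address
# lives in the memory device, and can update or look up entries.
# Structure and names are illustrative assumptions.

address_map = {}  # logical address -> physical location

def map_address(logical, block, page):
    """Record (or change) the physical location of a logical address."""
    address_map[logical] = {"block": block, "page": page}

def lookup(logical):
    """Return the physical location for a host logical address."""
    return address_map.get(logical)

map_address(0x10, block=2, page=7)
map_address(0x10, block=5, page=0)   # remap after the data is rewritten
print(lookup(0x10))                  # {'block': 5, 'page': 0}
```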
In general, when the number of error bits in data read from the memory device1100is greater than an allowed number of error bits for that data, the controller may determine that the read operation of the read data has failed and process the read data as invalid data. That is, in general, when the read operation has failed, the controller does not output the read data. For example, in a memory system of the related art, a page of data in the memory device1100includes a plurality of chunks, the allowed number of error bits applies to each chunk, and a read of the page of memory will be determined to have failed if any chunk of the page has more error bits than the allowed number of error bits. Such a memory system may not output any data from the page when the read operation is determined to have failed. The controller1300according to the present embodiment may be configured to use usable data from among the data read from the selected page even though the read operation of the selected page is determined to have failed when performing garbage collection or wear leveling. FIG.2illustrates the memory cell array1110. The memory cell array1110may include first to j-th memory blocks BLK1to BLKj (j is a positive integer greater than 1). Each of the first to j-th memory blocks BLK1to BLKj may include a plurality of memory cells in which data may be stored. The first to j-th memory blocks BLK1to BLKj may be respectively associated with first to j-th physical addresses. During the program, read, or erase operation, one memory block may be selected from among the first to j-th memory blocks BLK1to BLKj according to the associated physical address, and the program, read, or erase operation on the selected memory block may be performed. FIG.3illustrates the j-th memory block BLKj. Because the first to j-th memory blocks BLK1to BLKj shown inFIG.2are configured identically to each other, the j-th memory block BLKj is shown inFIG.3as an example. The j-th memory block BLKj may include a plurality of strings ST connected between respective first to m-th bit lines BL1to BLm and a source line SL. Each of the strings ST may include a source select transistor SST, first to n-th memory cells C1to Cn, and a drain select transistor DST connected in series between the source line SL and the respective bit line of the first to m-th bit lines BL1to BLm (m is a positive integer). While the j-th memory block BLKj shown inFIG.3illustrates one possible configuration of the memory block, the numbers of source select transistors SST, first to n-th memory cells C1to Cn, and drain select transistors DST are not limited to the numbers shown inFIG.3. Gates of the source select transistors SST connected to different strings may be commonly connected to a source select line SSL, gates of each of the first to n-th memory cells C1to Cn of the different strings may be commonly connected to the first to n-th word lines WL1to WLn, and gates of the drain select transistors DST connected to the different strings may be commonly connected to a drain select line DSL. Memory cells connected to the same word line and included in different strings ST may constitute a page PG, and the program operation and the read operation may be performed in a unit of the page PG. For example, during the read operation, the read operation may be performed on a selected page included in a selected memory block according to a physical address. FIG.4illustrates the page PG included in the memory block.
Because the page PG includes the plurality of memory cells connected to the same word line, a storage capacity of the page PG may be determined according to the number of memory cells. During the read operation, the memory cells may be selected in the unit of the page PG, but data of the memory cells may be read in a chunk unit as the number of memory cells included in the page PG increases. The chunk may be defined as a group of memory cells having a number of bits less than that of the page PG. Therefore, the memory device may output data read from the page PG to the controller in the chunk unit, and the controller may perform error detection and correction of the data received on a chunk-by-chunk basis. For example, the memory cells included in the page PG may be divided into first to i-th chunks CK1to CKi. User data U_DATA and meta data M_DATA may be stored in each of the first to i-th chunks CK1to CKi. The user data U_DATA may be data transmitted by the host during the program operation and may be unique to each chunk, and the meta data M_DATA may be data generated by the controller to manage the memory system. For example, the controller may divide the meta data M_DATA generated during the program operation according to a meta slice unit, and store the divided meta slices in the first to i-th chunks CK1to CKi, respectively. For example, user data U_DATA for the first chunk CK1of the page PG and a first meta slice (1st meta slice) may be stored in the first chunk CK1, user data U_DATA for the second chunk CK2and a second meta slice (2nd meta slice) may be stored in the second chunk CK2, and user data U_DATA for the third chunk CK3and a third meta slice (3rd meta slice) may be stored in the third chunk CK3. Accordingly, the user data U_DATA and the meta data M_DATA divided into i meta slices may be stored in the first to i-th chunks CK1to CKi, respectively. In embodiments, the first to i-th chunks CK1to CKi may also include respective Error Correction Code (ECC) bits (not shown) configured for the detection and correction of bit errors in the respective chunk. In other embodiments, the ECC bits for each chunk may be included in a spare area (not shown) of the page PG. The different meta slices may include different information. The meta data M_DATA including the meta slices is described in more detail with reference toFIGS.5A and5B. FIGS.5A and5Billustrate the meta data stored in the chunks. Referring toFIG.5A, when the page is divided into the first to i-th chunks CK1to CKi, the meta data M_DATA may be divided into i meta slices and stored in the respective chunks. For example, the first meta slice stored as the meta data M_DATA in the first chunk CK1may include a logical address LADD of the data stored in the selected page. The logical address LADD may be the address used in the host and may be an address transmitted by the host to the controller during the program operation. For example, the address used in the host may be the logical address LADD, and the address used in the memory system may be the physical address. The logical address LADD may be stored in a format such as a logical page number designated by the host. The meta slices stored as the meta data M_DATA in the second to i-th chunks CK2to CKi may include various information related to the program operation of the selected page or information related to the data stored in the selected page.
For example, the meta slices stored as the meta data M_DATA in the second to i-th chunks CK2to CKi may include a bitmap related to the data stored in the selected page. The bitmap may indicate a data structure of the data stored in the selected page. The chunk in which the logical address LADD is stored may be preset according to the memory system. For example, as shown inFIG.5A, in a memory system in which the chunk storing the logical address LADD is set as the first chunk CK1, the logical address LADD associated with a page may be stored in the first chunk CK1of that page during the program operation for that page. As shown inFIG.5B, the chunk in which the logical address LADD is stored may be a chunk other than the first chunk CK1. For example, in a memory system configured to store the logical address LADD in the sixth chunk CK6, the logical address LADD associated with a page may be stored in the sixth chunk CK6of that page during the program operation for that page. FIG.6illustrates a controller according to an embodiment of the present disclosure. Referring toFIG.6, the controller1300may include a host interface410, a flash translation layer (FTL)420, a central processing unit (CPU)430, a recovery manager440, a memory interface450, an error correction circuit460, and a buffer interface470. The host interface410, the flash translation layer420, the CPU430, the recovery manager440, the memory interface450, the error correction circuit460, and the buffer interface470may exchange information through a bus. The CPU430may control the host interface410, the flash translation layer420, the recovery manager440, the memory interface450, the error correction circuit460, and the buffer interface470, in response to a request of the host2000or in the background mode. The host interface410may be configured to transmit information between the host2000and the controller1300. For example, during the program operation, the host interface410may receive a program request, a logical address, and data output from the host2000, transmit the program request to the CPU430, and transmit the logical address and the data to the buffer interface470under control of the CPU430. During the read operation, the host interface410may receive a read request and the logical address from the host2000and may output data read from the memory device to the host2000. The flash translation layer420may be configured to perform various functions for optimizing the memory device1100. For example, the flash translation layer420may be configured to perform address mapping, garbage collection, wear leveling, or the like. Address mapping is a function of mapping the logical address used in the host2000and the physical address used in the memory device1100to each other. Garbage collection is a function for collecting valid data stored in a plurality of memory blocks of the memory device1100into one memory block and erasing invalid data stored in remaining memory blocks. Wear leveling is a function for evenly distributing a use frequency of the memory blocks included in the memory device1100. For example, because the erase operation of the memory blocks in which the invalid data is stored is not performed simultaneously when wear leveling is performed, garbage collection may be performed when the number of free blocks decreases due to wear leveling being performed. In addition, the flash translation layer420may be configured to further perform functions such as trimming or over provisioning in order to optimize the memory device1100.
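As a hedged, minimal illustration of the garbage collection function defined above (valid data from several memory blocks is gathered into one block and the source blocks are erased), consider the following sketch; the block structures and validity flags are illustrative assumptions.

```python
# Hedged sketch of garbage collection: copy the valid pages of several
# source blocks into one free block, then erase the sources so they can
# be reused. Data structures here are illustrative assumptions.

def garbage_collect(source_blocks, free_block):
    """source_blocks: lists of (data, is_valid) pages; free_block: list."""
    for block in source_blocks:
        for data, is_valid in block:
            if is_valid:
                free_block.append((data, True))  # relocate valid data
        block.clear()  # erase the source block in one operation
    return free_block

blocks = [[("a", True), ("b", False)], [("c", False), ("d", True)]]
print(garbage_collect(blocks, []))  # [('a', True), ('d', True)]
```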
The CPU430may be configured to generally control devices included in the controller1300, and may perform various operations required by the memory system during the program, read, or erase operation. In addition, the CPU430may convert various requests of the host2000into commands and transmit the commands to the memory device1100to control the memory device1100. For example, when the host2000transmits the program request, the CPU430may convert the program request into a program command and transmit the program command to the memory device1100. When the host2000transmits the read request, the CPU430may convert the read request into a read command and transmit the read command to the memory device1100. The CPU430may also generate a program, read, or erase command according to a request of the flash translation layer420in the background mode, and transmit the generated program, read, or erase command to the memory device1100. In addition, the CPU430may generate the commands for performing the program, read, or erase operation, even when no request of the host2000exists, if a condition governing whether wear leveling or garbage collection is to be performed is satisfied. The recovery manager440may be configured to determine whether the read operation has passed or failed. For example, the recovery manager440may determine whether the read operation has passed or failed according to a signal output from the error correction circuit460. The recovery manager440may be configured to determine whether recovery is possible according to the meta data among the data received from the memory device1100when it is determined that the read operation has failed. For example, the recovery manager440may be configured to recover the meta data using the information stored in the system memory1200when the meta data of the chunk on which the read operation has failed includes the logical address. More specifically, the recovery manager440may determine whether the meta data of the chunk on which the error correction operation has failed includes the logical address, and when it is determined that the logical address is included, the recovery manager440may find the logical address mapped to the physical address of the chunk on which the error correction operation has failed, using the information stored in the system memory1200. The recovery manager440may thereby recover the entire meta data of the data stored in the selected page by combining the found logical address with the meta data of the chunks on which the error correction operation has passed. Subsequently, the recovery manager440may output the data stored in the chunks on which the error correction operation has passed, using the recovered meta data. Since the recovery manager440performs a function of managing the memory device1100, the recovery manager440may be included in the flash translation layer420, depending on the implementation of the controller1300. The memory interface450may be configured to transmit information between the controller1300and the memory device1100. For example, the memory interface450may be configured to transmit the program command, the physical address, and the data received through the bus to the memory device1100during the program operation. The data transmitted to the memory device1100may include the user data and the meta data. The memory interface450may be configured to transmit the data received from the memory device1100during the read operation to the buffer interface470through the bus.
The error correction circuit460may be configured to perform an error correction encoding operation on the data received from the host interface410during the program operation and perform an error correction decoding operation on the data received from the memory device1100through the memory interface450during the read operation. During the error correction decoding operation, the error correction circuit460may perform the error correction decoding operation on the data received on a per-chunk basis; that is, separately for each chunk of the received data. When an error is detected in a chunk of the received data, the error correction circuit460may compare the number of error bits detected in that chunk with the allowed number of error bits, and when the number of error bits detected is less than or equal to the allowed number of error bits, the error correction circuit460may output a pass signal indicating the read operation of that chunk has passed. When the number of error bits detected in a chunk is greater than the allowed number of error bits, the error correction circuit460may output a fail signal indicating that the read operation of that chunk has failed. The recovery manager440may determine that the read operation has passed in response to the pass signal output for the chunks of the received data, and may perform a recovery operation in response to the fail signal output for one of the chunks of the received data. The buffer interface470may be configured to transmit information between the controller1300and the system memory1200. For example, the buffer interface470may be configured to transmit addresses or data transmitted through the bus to the system memory1200, and configured to transmit addresses or data stored in the system memory1200to the bus. FIG.7illustrates a system memory according to an embodiment of the present disclosure. Referring toFIG.7, the system memory1200may include an address map buffer71, a data buffer72, and an error information buffer73. The address map buffer71may store an address map table for logical addresses LADD1to LADDn and physical addresses1PADD to nPADD. For example, assuming that the first physical address1PADD is mapped to the first logical address LADD1by the flash translation layer420ofFIG.6during the program operation, the mapping of the first physical address1PADD to the first logical address LADD1may be stored in the address map buffer71. The physical address stored in the address map buffer71may include the address of the memory block in which data is stored and the address of the page, and (in some embodiments) the address of the chunk in which the logical address is stored in each page. For example, the first physical address1PADD may include an address of the first memory block BLK1in which data is stored, an address of a first page1PG, and an address of the first chunk CK1in which the first logical address is stored among the chunks included in the first page1PG. That is, the n-th physical address nPADD mapped to the n-th logical address LADDn may include an address of the n-th memory block BLKn in which data corresponding to the n-th logical address LADDn is stored, an address of an n-th page nPG included in the n-th memory block BLKn, and an address of the first chunk CK1in which the n-th logical address LADDn is stored among the chunks included in the n-th page nPG. In other embodiments wherein the chunk of the page that stores the logical address is predetermined, chunk addresses may not be included in the address map table.
During the program operation, when a capacity of data corresponding to the logical address is greater than a capacity of one memory block, the physical address may include addresses of a plurality of memory blocks, addresses of a plurality of pages, and addresses of the chunks in which the logical address is stored in each page, as shown in physical address701. For example, when a capacity of the data corresponding to the n-th logical address LADDn is a capacity stored in two memory blocks, the n-th physical address nPADD may include an address of the (n−1)-th memory block BLK(n−1) and an address of the n-th memory block BLKn in which data is stored, addresses of first to n-th pages1PG to nPG included in each of the (n−1)-th and n-th memory blocks BLK(n−1) and BLKn, and an address of the first chunk CK1in which the n-th logical address LADDn is stored in each of the first to n-th pages1PG to nPG. In the diagram shown inFIG.7, all chunks in which the logical address is stored in each of the pages are at the same relative address within their respective pages (here, the first chunk), but the chunks in which the logical address is stored may differ according to the memory block, the page, or both. The address of the chunk in which the logical address is stored may be determined according to an algorithm set in the flash translation layer420. Therefore, in some embodiments, during the program operation, the flash translation layer420may store, in the address map buffer71, the address of the chunk in which the logical address is stored for each page according to the physical address in which data is stored. The data buffer72may be configured to temporarily store the data read from the memory device. The data buffer72may divide and store the data read from the selected page according to the chunk unit. For example, assuming that first to third chunks are included in the selected page during the read operation, in the data buffer72, first data DATA1read from the first chunk CK1, second data DATA2read from the second chunk CK2, and third data DATA3read from the third chunk CK3may be stored to be divided according to each chunk. The data stored in the data buffer72may include the user data U_DATA and the meta data M_DATA read from each chunk, as shown for chunk702. The error information buffer73may be configured to store addresses of the page and the chunks on which the read operation has failed during the read operation of the selected memory block. For example, when a failure occurs in the second chunk CK2of the third page3PG during the read operation of the second memory block BLK2, the recovery manager440ofFIG.6may store the addresses of the second memory block BLK2in which the failure occurred, the third page3PG of the second memory block BLK2, and the second chunk CK2of the third page3PG in the error information buffer73. The addresses stored in the error information buffer73may be used in a next program operation or read operation. For example, during the next program operation, the controller may control the program operation so that dummy data is stored in a page corresponding to the address stored in the error information buffer73. Alternatively, the controller may control the read operation so that a read voltage is adjusted during the next read operation corresponding to the address stored in the error information buffer73.
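The per-chunk pass/fail comparison and the bookkeeping of failed addresses in the error information buffer73can be summarized in the following hedged sketch; the threshold value and data structures are illustrative assumptions, not the actual firmware.

```python
# Hedged sketch: compare each chunk's detected error bits against the
# allowed number, and record the (block, page, chunk) address of any
# failed chunk in an error information buffer for later use (dummy
# programming, read-voltage adjustment, garbage collection, etc.).

ALLOWED_ERROR_BITS = 72   # assumed per-chunk ECC correction capability
error_info_buffer = []    # stand-in for error information buffer 73

def check_chunks(block, page, error_bits_per_chunk):
    """Return per-chunk pass/fail and log failing chunk addresses."""
    results = []
    for chunk, error_bits in enumerate(error_bits_per_chunk):
        passed = error_bits <= ALLOWED_ERROR_BITS
        if not passed:
            error_info_buffer.append((block, page, chunk))
        results.append(passed)
    return results

print(check_chunks(block=2, page=3, error_bits_per_chunk=[3, 80]))
print(error_info_buffer)  # [(2, 3, 1)]: the second chunk of page 3 failed
```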
In addition, the controller may perform garbage collection or wear leveling using the information stored in the error information buffer73when the controller is in the background mode. FIG.8illustrates operation of a memory system according to an embodiment of the present disclosure. Referring toFIGS.6and8, during an operation performed without being provided with the logical address corresponding to the data stored in the selected page, the controller1300may recover the logical address corresponding to the selected page through the meta data stored in the selected chunk of the selected page, and perform the operation using the recovered logical address. For example, when wear leveling, garbage collection, or read reclaim is performed, the controller1300may output a read command CMDr to the memory device1100(S81). When outputting the read command CMDr, the controller1300may also output the selected physical address. The memory device1100may perform the read operation in response to the read command CMDr and the physical address (S82). For example, the memory device1100may output the data read from the selected page to the controller1300in the chunk unit. The controller1300may perform the error correction operation using the data of each chunk unit received from the memory device1100and an error correction code (ECC) corresponding to each chunk unit to determine the number of error bits in each chunk unit (S83). Here, the error correction operation may be the error correction decoding operation. The controller1300may determine whether a chunk for which the error correction operation has failed exists according to a result of the error correction operation (S84). For example, when the number of error bits detected in a selected chunk is less than or equal to the allowed number of error bits (path NO out of S84), the controller1300may determine that the error correction operation of the selected chunk has passed. When the error correction operations of all chunks included in the selected page have passed, the data of the selected page may be output after correcting any error detected in each of the chunks (S85). In step S84, when the number of error bits detected in the selected chunk is greater than the allowed number of error bits (path YES out of S84), the controller1300may determine whether the meta data may be recovered based on the data of the chunk in which the failure occurs (S86). For example, when the logical address corresponding to the selected page is not included in the meta data of the chunk on which the error correction operation has failed (path NO out of S86), the controller1300may process the read operation of the selected page as having failed (S87). In step S86, when the logical address of the selected page is included in the meta data of the chunk on which the error correction operation has failed (path YES out of S86), the controller1300may recover the meta data of the selected page (S88). For example, the controller1300may recover the meta data by finding the logical address mapped to the physical address of the failed chunk in the address map table stored in the system memory1200. More specifically, when a failure occurs in the chunk in which the logical address is stored among the plurality of chunks of the selected page, the logical address information is lost, and thus the meta data for the selected page is not complete.
In this case, since the logical address mapped to the physical address for the failed chunk may be found in the system memory1200, when the meta slices of the passed chunks and the logical address found in the system memory1200are combined, the entire meta data for the selected page may be recovered. When the meta data including the logical address is recovered, the controller1300may output the data corresponding to the logical address and the chunks on which the error correction operation has passed, from among the chunks stored in the system memory1200(S89). That is, in memory systems of the related art, during the read operation of a selected page, when the error correction operation has failed in any one of the chunks included in the selected page, the entire read operation of the selected page may be processed as having failed, and thus the data stored in the selected page may not be used. However, according to the present embodiment, even when a chunk on which the error correction operation has failed exists among the chunks included in the selected page, if logical address data is included in the meta data of the failed chunk, the controller1300may use data corresponding to the chunks of the selected page on which the error correction operation has passed. For example, the controller1300may recover the logical address based on the information stored in the system memory1200and use the data corresponding to the chunks on which the error correction operation has passed among the data corresponding to the recovered logical address. In addition, the controller1300may store the physical address of the chunk on which the error correction operation has failed in the system memory1200and may use the physical address when setting operation conditions during the next program or read operation. FIG.9illustrates operations on a chunk unit according to an embodiment of the present disclosure, and steps S82to S88described with reference toFIG.8are shown more specifically. Referring toFIGS.8and9, during a read operation of a selected page Sel_PG, the memory device1100may perform the read operation of the selected page Sel_PG and output read data in units of the first to i-th chunks CK1to CKi, respectively (S82). The controller1300may perform an error correction operation on data in each of the chunk units received from the memory device1100(S83). For example, the error correction circuit460ofFIG.6included in the controller1300may perform an error correction operation on the data read from the first chunk CK1, and output a result of the error correction operation as a pass signal or a fail signal. In this way, a result of an error correction operation on data received from each of the first to i-th chunks CK1to CKi may be output. When the error correction operation on the data of the fourth chunk CK4among the data of the first to i-th chunks CK1to CKi has failed, the recovery manager440ofFIG.6included in the controller1300may determine whether the logical address LADD is stored in the meta slice included in the fourth chunk CK4among the meta data of the selected page, and determine whether recovery is possible according to a result of that determination (S86). For example, when the logical address LADD is not included and a bitmap is included in the meta slice of the fourth chunk CK4, the recovery manager440may determine that the read operation of the selected page has failed (S87), as summarized in the sketch below.
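The sketch below condenses the S84-S89 decision flow together with the physical-to-logical lookup ofFIG.10; every structure and name is an illustrative assumption, not the actual controller firmware.

```python
# Hedged sketch of the S84-S89 recovery flow: a failed chunk normally
# fails the whole page read, unless the failed chunk is the one that
# held the logical address, which can be recovered by a reverse lookup
# in the address map and combined with the passed chunks' meta slices.

address_map = {"LADD3": {"block": 3, "page": 3}}  # assumed map table

def logical_for_physical(block, page):
    """Reverse lookup: find the logical address mapped to block/page."""
    for ladd, loc in address_map.items():
        if loc == {"block": block, "page": page}:
            return ladd
    return None

def read_with_recovery(chunks, ladd_chunk, block, page):
    """chunks: list of (data, ecc_passed). Return usable data or None."""
    failed = [i for i, (_, ok) in enumerate(chunks) if not ok]
    if not failed:
        return [data for data, _ in chunks]        # S85: all chunks passed
    if failed != [ladd_chunk]:
        return None                                # S87: cannot recover
    if logical_for_physical(block, page) is None:  # S88: recover the LADD
        return None
    return [d for i, (d, ok) in enumerate(chunks) if ok]  # S89: passed data

print(read_with_recovery([("U_DATA1", True), ("U_DATA2", False)],
                         ladd_chunk=1, block=3, page=3))  # ['U_DATA1']
```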
When, in contrast, the logical address LADD is included in the meta slice of the fourth chunk CK4, the recovery manager440may recover the meta data of the selected page (S88), and output the data corresponding to the chunks of the selected page on which the error correction operation has passed. FIG.10illustrates operations for recovering meta data according to an embodiment of the present disclosure, and a read operation of a page including two chunks is described as follows. In the example illustrated inFIG.10, the third page3PG is the selected page Sel_PG during the read operation and the third page3PG is divided into the first and second chunks CK1and CK2. First user data U_DATA1and the bitmap may be stored in the first chunk CK1, and second user data U_DATA2and a third logical address LADD3may be stored in the second chunk CK2. The bitmap and the third logical address LADD3stored in the third page3PG are meta data. During the read operation of the third page3PG, when the error correction operation of the first chunk CK1has passed and the error correction operation of the second chunk CK2has failed, the recovery manager440ofFIG.6may find the physical address of the second chunk CK2in the address map buffer71. For example, when the second chunk CK2corresponds to a third physical address3PADD including addresses of the third page3PG of the third memory block BLK3, the recovery manager440may, at step11, find an entry in the address map buffer71that corresponds to the third physical address3PADD, and then at step12find a third logical address LADD3mapped to the third physical address3PADD in that entry of the address map buffer71. Subsequently, at step13, the recovery manager440finds the first chunk CK1, which corresponds to the third logical address LADD3and on which the error correction operation has passed, in the data buffer72, and at step14outputs the first user data U_DATA1corresponding to the first chunk CK1. When the first user data U_DATA1is output, the recovery manager440may store, in the error information buffer73, the addresses of the third memory block BLK3, the third page3PG, and the second chunk CK2on which the error correction operation has failed. FIG.11illustrates a memory card system to which a controller of the present disclosure is applied. Referring toFIG.11, the memory card system3000may include a controller3100, a memory device3200, a connector3300, and a system memory3400. The controller3100may control overall operations of the memory card system3000and may be configured similarly to the controller1300shown inFIG.6. For example, the controller3100may be configured to control the memory device3200and the system memory3400. The controller3100may be configured to control a program, read, or erase operation of the memory device3200or control operations in a background mode. The controller3100is configured to provide an interface between the memory device3200and a host. The controller3100is configured to drive firmware for controlling the memory device3200. For example, the controller3100may include components such as a random access memory (RAM), a processor, a host interface, a memory interface, a flash translation layer, and a recovery manager. The controller3100may communicate with an external device through the connector3300. The controller3100may communicate with an external device (for example, the host) according to a specific communication standard.
For example, the controller3100is configured to communicate with an external device through at least one of various communication standards such as a universal serial bus (USB), a multimedia card (MMC), an embedded MMC (eMMC), a peripheral component interconnection (PCI), a PCI express (PCI-E), an advanced technology attachment (ATA), a serial-ATA, a parallel-ATA, a small computer system interface (SCSI), an enhanced small disk interface (ESDI), integrated drive electronics (IDE), FireWire, a universal flash storage (UFS), Wi-Fi, Bluetooth, and NVMe. For example, the connector3300may be defined by at least one of the various communication standards described above. For example, the memory device3200may include various nonvolatile memory elements such as an electrically erasable and programmable ROM (EEPROM), a NAND flash memory, a NOR flash memory, a phase-change RAM (PRAM), a resistive RAM (ReRAM), a ferroelectric RAM (FRAM), and a spin-transfer torque magnetic RAM (STT-MRAM). For example, the system memory3400may be configured to include the address map buffer71, the data buffer72, and the error information buffer73as shown inFIG.7. The controller3100, the memory device3200, and the system memory3400may be integrated into one semiconductor device to configure a memory card. For example, the controller3100, the memory device3200, and the system memory3400may be integrated into one semiconductor device to configure a memory card such as a PC card (personal computer memory card international association (PCMCIA)), a compact flash card (CF), a smart media card (SM or SMC), a memory stick, a multimedia card (MMC, RS-MMC, MMCmicro, or eMMC), an SD card (SD, miniSD, microSD, or SDHC), and a universal flash storage (UFS). FIG.12illustrates a solid state drive (SSD) system to which a controller of the present disclosure is applied. Referring toFIG.12, the SSD system4000includes a host4100and an SSD4200. The SSD4200exchanges a signal SIG with the host4100through a signal connector4001and receives power PWR through a power connector4002. The SSD4200includes a controller4210, a plurality of flash memories4221to422n, an auxiliary power supply4230, and a buffer memory4240. According to an embodiment of the present disclosure, the controller4210may perform a function of the controller1300described with reference toFIG.6. The controller4210may control the plurality of flash memories4221to422nin response to the signal received from the host4100. For example, the signal may be a signal based on an interface between the host4100and the SSD4200. For example, the signal may be a signal defined by at least one of interfaces such as a universal serial bus (USB), a multimedia card (MMC), an embedded MMC (eMMC), a peripheral component interconnection (PCI), a PCI express (PCI-E), an advanced technology attachment (ATA), a serial-ATA, a parallel-ATA, a small computer system interface (SCSI), an enhanced small disk interface (ESDI), integrated drive electronics (IDE), FireWire, a universal flash storage (UFS), Wi-Fi, Bluetooth, and NVMe. The auxiliary power supply4230is connected to the host4100through the power connector4002. The host4100may include a host buffer2100and may store a logical address and a physical address in the host buffer2100. The auxiliary power supply4230may be charged by receiving a power voltage from the host4100. The auxiliary power supply4230may provide a power voltage of the SSD4200when power supply from the host4100is not stable.
For example, the auxiliary power supply4230may be positioned in the SSD4200or may be positioned outside the SSD4200. For example, the auxiliary power supply4230may be positioned on a main board and may provide auxiliary power to the SSD4200. The buffer memory4240operates as a buffer memory of the SSD4200. For example, the buffer memory4240may temporarily store data received from the host4100or data received from the plurality of flash memories4221to422n, or may temporarily store meta data (for example, a mapping table) of the flash memories4221to422n. In addition, the buffer memory4240may temporarily store data read from the memory device, and may store a physical block address of a chunk on which an error correction operation has failed during a read operation. The buffer memory4240may include a volatile memory such as a DRAM, an SDRAM, a DDR SDRAM, and an LPDDR SDRAM, or a nonvolatile memory such as an FRAM, a ReRAM, an STT-MRAM, and a PRAM.
11861210 | DETAILED DESCRIPTION The present disclosure provides a data processing method and a device for a solid state drive, which are used to predict the I/O information of the next first time period. The solid state drive processor actively performs SSD management according to the predicted result, which can improve SSD performance and reduce the response time of data operations. The technical solutions in the embodiments of the present disclosure are clearly and completely described in the following with reference to the accompanying drawings in the embodiments of the present disclosure. It is obvious that the described embodiments are only a part of the embodiments of the present disclosure, but not all embodiments. All other embodiments obtained by a person skilled in the art based on the embodiments of the present disclosure without creative efforts are within the scope of the present disclosure. The terms “first”, “second”, “third”, “fourth”, etc. in the description and claims of the present disclosure and in the above drawings are used to distinguish similar objects without having to describe a specific order or sequence. It is to be understood that the data used in this way can be interchanged in appropriate circumstances, so that the embodiments described here can be implemented in an order other than what is illustrated or described here. Furthermore, the terms “including” and “having” and any variations thereof are intended to cover non-exclusive inclusion; for example, processes, methods, systems, products, or equipment that contain a series of steps or units are not necessarily limited to the steps or units that are clearly listed, and may include other steps or units that are not clearly listed or that are inherent to these processes, methods, products, or equipment. In the prior art, efficient SSD management is realized through delay mechanisms (passive buffer scheduling, passive garbage collection, etc.), which leads to the problem of longer data response times. A data processing method and device for a solid state drive are therefore proposed to proactively predict the I/O information of the next first time period, so that the processor of the SSD actively performs management according to the prediction result to improve the performance of the SSD and reduce the response time of data operations. For ease of understanding, the data processing method of the SSD in the present disclosure will be described below. For details, seeFIG.1; a data processing method for an SSD in an embodiment of this disclosure includes the following steps. Step101, acquiring the interface protocol command received by the solid state drive. The interface protocol command is a communication protocol command between the host and the memory (SSD), mainly over protocol interfaces such as SATA, PCIE, NVME, and so on. In some embodiments, the interface protocol command in the present embodiment is an NVME command. Specifically, NVME is an interface specification that connects the memory to the server through the PCIE bus, and NVME enables the SSD to communicate with the host system faster. In the present disclosure, the SSD interacts with the host system through the NVME protocol instruction in order to realize the rapid input and output of the data. In contrast to the present disclosure, the prior art realizes the configuration of hardware resources and instruction queue management in the SSD through passive SSD management (passive buffer scheduling, passive garbage collection, etc.).
The configuration of hardware resources includes buffer scheduling, garbage collection, and so on. In the embodiment of this disclosure, when the SSD receives the NVME protocol instruction sent by the host system, the data processing device of the SSD acquires the protocol instruction and performs step 102 according to the protocol instruction to achieve the prediction of I/O information in the first time period in the future, and the SSD processor performs SSD management (such as active buffer scheduling, or active garbage collection) based on the prediction result. The SSD processor in this embodiment can not only realize the function of the main control in an ordinary SSD, but can also further realize the function of data processing; it can be regarded as the combination of the master control unit and the data processing unit of the SSD. Specifically, the data processing device of the SSD can acquire the NVME protocol instruction from the SSD actively, or can receive the NVME protocol instruction transmitted by the SSD passively, which is not specifically limited. It should be noted that the data processing apparatus of the SSD in the present disclosure may specifically be a partial functional module of the SSD processor, or a data processing device independent of the SSD, which is not specifically limited. Step 102, parsing the interface protocol command to obtain I/O information in the protocol command, wherein the I/O information includes at least I/O timestamp, I/O type, and I/O size. After the data processing device of the solid state drive acquires the interface protocol command, it parses the interface protocol instruction to obtain the I/O information in the protocol instruction; the parsing action can specifically comprise reading out the I/O information in the protocol instruction according to a specific protocol specification (such as the NVME protocol specification), wherein the I/O information includes at least the I/O timestamp, I/O type, and I/O size. Specifically, the I/O timestamp is the generation time of the I/O operation, the I/O type indicates a reading operation or a writing operation, and the I/O size is the data length of the reading operation or writing operation. Step 103, performing machine learning on the I/O information to predict the I/O information in the first time period in the future, so that the processor of the solid state drive performs management actively according to the prediction result. After the data processing apparatus of the solid state drive acquires the I/O information of the current time period, it can learn the I/O information of the previous time period according to a machine learning algorithm to predict the I/O information of the future. The solid state drive processor performs solid state drive management based on the prediction results, thereby increasing the performance of the solid state drive and reducing the response time of data operations. Among them, the machine learning algorithm includes, but is not limited to, a neural network algorithm, linear regression, time series analysis, etc., which are not limited thereto. And the solid state drive management in this embodiment includes, but is not limited to, buffer scheduling, garbage collection, or instruction queue management.
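By way of illustration only, the parsed I/O information of Step 102 may be represented as a simple record. The following is a minimal sketch in Python, in which the IoRecord structure, the parse_command helper, and the assumed field names ("timestamp", "opcode", "length") are hypothetical conveniences and are not part of the NVME specification:

    from dataclasses import dataclass

    @dataclass
    class IoRecord:
        timestamp: float  # generation time of the I/O operation
        io_type: int      # 0 for a reading operation, 1 for a writing operation
        size: int         # data length of the operation, e.g., in bits

    def parse_command(cmd: dict) -> IoRecord:
        # Hypothetical parsing step: read the I/O information out of an
        # already-decoded protocol command per the protocol specification.
        return IoRecord(
            timestamp=cmd["timestamp"],
            io_type=0 if cmd["opcode"] == "READ" else 1,
            size=cmd["length"],
        )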
In the embodiment of the present disclosure, the method includes first obtaining the interface protocol instruction received by the SSD; then analyzing the interface protocol instruction to acquire the I/O information in the protocol instruction, where the I/O information includes at least I/O time stamp, I/O type and I/O size; and executing machine learning on the I/O information so as to predict the I/O information of a future first time period, such that the SSD processor actively executes SSD management according to the prediction result. In the present disclosure, through a machine learning algorithm, the I/O information of the first time period in the future can be predicted actively according to the current I/O information, so that the SSD processor can execute SSD management actively according to the predicted results, so as to improve the performance of the SSD and reduce the response time of data operations. Based on the embodiments described in FIG. 1, step 103 is described in detail below; see FIG. 2 for details, which is a schematic diagram of refinement steps of Step 103 in the embodiment of FIG. 1. Step 201, preprocessing the I/O information to obtain a first I/O information table. The data processing device of the SSD obtains the I/O information in the interface protocol instruction (such as an NVME protocol instruction) by parsing it, and then preprocesses the I/O information to get the first I/O information table. The specific process of the preprocessing operation will be described in detail in the following embodiments and will not be repeated here. Step 202, combining multiple adjacent I/O operations in the first I/O information table into one I/O operation. After obtaining the first I/O information table, the data processing device of the SSD combines several adjacent I/O operations in the first I/O information table into one I/O operation. Merging here can be understood as concatenating, for example, the eight adjacent I/Os processed in Table 2 below into one I/O input. In this way, compared with the existing one-by-one input of operations, multiple I/Os can be input and calculated at the same time, which greatly reduces the amount of calculation and meets the demand of real-time calculation. Specifically, four adjacent I/O operations may be combined into one I/O operation, or eight adjacent I/O operations may be combined into one I/O operation. The "multiple" in this embodiment is primarily determined by the processor's processing performance (that is, the processor's computing power): if the processor's computing power is weak, a larger number of adjacent I/O operations can be combined, and if the processor is computationally powerful, a smaller number of contiguous I/O operations can be combined, to fit the processor's processing performance. Step 203, using the characteristic values of the multiple combined I/O operations for model learning of a neural network LSTM to obtain an I/O prediction result in the first time period. After combining multiple I/O operations into one I/O operation, a set of I/O operation feature values can be obtained. The specific I/O operation feature values are variables related to the I/O timestamp, I/O type, and I/O size. In order to predict the I/O information in the first time period in the future, a machine learning algorithm is generally adopted for prediction. The machine learning algorithm in this embodiment can be a neural network learning algorithm.
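As a rough illustration of Step 202, the following Python sketch concatenates a fixed number of adjacent preprocessed rows into one merged input; it assumes each row already holds the four features (interval, ioTypeSum, rSize, wSize) of Table 2 below, and the function name and row layout are illustrative only:

    from typing import List

    def merge_rows(rows: List[List[float]], group: int = 8) -> List[List[float]]:
        # Concatenate `group` adjacent preprocessed rows into one input;
        # 8 rows x 4 features yields one 32-value feature vector.
        merged = []
        for i in range(0, len(rows) - group + 1, group):
            vec: List[float] = []
            for row in rows[i:i + group]:
                vec.extend(row)
            merged.append(vec)
        return merged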
It should be noted that when the neural network learning algorithm is used to predict the I/O information in the first time period in the future, the known I/O information needs to be trained by the neural network learning model to obtain the neural network training model. Then the I/O information in the first period of the future is predicted according to the neural network training model and the current I/O information. The training process of the neural network learning model based on the known I/O information is described in detail in the prior art and will not be repeated here. The specific implementation process of performing neural network model learning (i.e., model reasoning) on the multiple combined I/O operation feature values to obtain the I/O prediction information in the first time period will be described in the following embodiments and will not be repeated here. Step 204, performing post-processing on the I/O prediction result to adjust the prediction result. Because the prediction algorithm relies heavily on the selection of assumptions and features, the I/O prediction information may have a large deviation, and the prediction result may obviously deviate from reality. Therefore, post-processing of the I/O prediction information is required, such as deleting abnormal predicted values in the I/O prediction information to obtain normal prediction results. In this disclosure, the process of implementing machine learning based on I/O information and predicting the I/O information in the first time period in the future is described in detail, which improves the implementability of this disclosure. Based on the embodiments described in FIG. 2, the preprocessing of I/O information in Step 201 is described in detail below. For details, please refer to FIG. 3, a schematic diagram of refinement steps of Step 201 in the embodiments. Step 301, calculating the time interval between multiple sets of I/O operations that meet a first preset number according to the I/O timestamp. According to the description in Step 201, suppose that after parsing the NVME instruction, the data processing device of the SSD obtains the I/O information as shown in Table 1:

TABLE 1
timestamp       I/O type (R for reading,    Data size (bit)
                W for writing)
0.000025554     R                           4096
0.071675595     R                           12288
0.081838272     R                           12288
0.096054779     R                           12288
0.12846458      R                           12288

Then, the preprocessing of I/O information is described by taking Table 1 as an example. The preprocessing first executes Step 301, that is, calculating the time interval between multiple groups of I/O operations that meet the first preset number. Specifically, Step 301 can calculate the time interval between 32 I/O operations, or calculate the time interval between 16 I/O operations. The selection of the preset number depends on the processing capacity of the host system, which is not specifically limited here. The following is an example of 32 I/O operations. Specifically, the interval difference is calculated every 32 I/O timestamps, and the interval time is:

interval = timestep_{i+31} - timestep_i

wherein timestep_i represents the timestamp of the i-th I/O instruction. Step 302: counting a total number of read operations and write operations of the I/O operations in each group within the time interval and the corresponding data size to obtain the first I/O information table.
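A minimal sketch of the interval calculation of Step 301, in Python, assuming `timestamps` is the list of I/O timestamps of Table 1 and the first preset number is 32 (the function name is illustrative):

    def group_intervals(timestamps, group=32):
        # interval = timestep_{i+31} - timestep_i for each group of 32 I/Os
        return [
            timestamps[i + group - 1] - timestamps[i]
            for i in range(0, len(timestamps) - group + 1, group)
        ]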
After the time interval of multiple groups of I/O operations satisfying the first preset number is obtained, the sum of the numbers of read and write operations and the corresponding data size of the first preset number of I/O operations in each group within the said time interval are calculated. Further, digital conversion is executed on the read and write I/O data, with a write I/O indicated as 1 and a read I/O indicated as 0. Then each group of 32 I/Os is added up to figure out the number of read I/Os and the number of write I/Os; the formula is as follows:

ioTypeSum_j = Σ_{i=0}^{31} ioType_{32j+i}

where ioType_i = 0 when the i-th instruction is a read, and ioType_i = 1 when the i-th instruction is a write. Specifically, the total number and the corresponding data size of the read operations and write operations of the I/O operations in each group within the time interval can be calculated according to the above formula. Step 303, compressing the data size to obtain the first I/O information table. In order to facilitate regression prediction, the size of the combined data needs to be compressed to a reasonable range (for example, 0-500). Therefore, the size of the read I/Os corresponding to a set of 32 I/Os needs to be summed and then divided by 200K for compression. Similarly, the size of the write I/Os corresponding to a set of 32 I/Os is summed and then divided by 200K for compression, to obtain the first I/O information table as shown in Table 2. Specifically, the size of the compression can be customized (for example, the size of the read I/Os for a set of 32 I/Os can be added up and then divided by 400K or 500K for compression). As long as the size of the compressed data conforms to the data range for the regression analysis, there is no specific restriction here.

TABLE 2
Interval        I/O Type Sum    R compression size    W compression size
0.135241983     0               2.72                  0
0.001069606     0               4.32                  0
0.001458670     0               5.44                  0
0.004233801     0               3.12                  0
0.003147134     0               2.58                  0
0.000673062     4               2.24                  0.2
0.000637647     1               2.72                  0.08

The time intervals in Table 2 above represent the time intervals between 32 I/O operations. The I/O type sum represents the total number of write operations among the 32 I/O operations. For example, when the sum is 0, the 32 I/O operations are all read operations; when the sum is 4, there are 4 write operations and 28 read operations among the 32 I/O operations. The compressed data sizes of the read and write operations are the compressed data sizes of the read and write operations in the 32 I/Os. In the above embodiment, the preprocessing of I/O information is described in detail; that is, the generation process of the first I/O information table is described in detail, which improves the implementability of this disclosure. Based on FIG. 2, the process of neural network LSTM model learning (that is, model reasoning) on the multiple combined I/O feature values in Step 203 is described in detail below, to obtain the I/O prediction information in the first period. Refer to FIG. 4 for details, which is a schematic diagram of refinement steps of Step 203 in the embodiment of FIG. 2. Step 401, inputting the characteristic values of the multiple combined I/O operations into the input fully connected layer of the neural network to map the merged I/O operation characteristic values into a high-dimensional vector space through linear transformation; the I/O operation characteristic values are related to the I/O timestamp, the I/O type, and the I/O size.
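The preprocessing of Steps 301-303 may be sketched end to end as follows, in Python; the input is assumed to be (timestamp, io_type, size) tuples with io_type already digitally converted (0 for read, 1 for write), and the divisor of 200,000 is one reading of the "200K" in the text, both being assumptions for illustration:

    def preprocess(ios, group=32, divisor=200_000):
        rows = []
        for i in range(0, len(ios) - group + 1, group):
            chunk = ios[i:i + group]
            interval = chunk[-1][0] - chunk[0][0]            # Step 301
            type_sum = sum(t for _, t, _ in chunk)           # Step 302
            r_size = sum(s for _, t, s in chunk if t == 0)   # read sizes
            w_size = sum(s for _, t, s in chunk if t == 1)   # write sizes
            rows.append([interval, type_sum,
                         r_size / divisor, w_size / divisor])  # Step 303
        return rows

Each returned row corresponds to one line of Table 2.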
According to Step 202 of the embodiments described in FIG. 2, the multiple adjacent I/O operations in Table 2 are merged into one I/O operation, and then the multiple combined I/O operation feature values are input to the input fully connected layer of the neural network, so that the combined I/O operation feature values can be mapped into a high-dimensional vector space through linear transformation; that is, the feature values of low-dimensional I/O operations are mapped to the feature values of high-dimensional I/O operations. The I/O operation feature values are related to the I/O timestamp, I/O type, and I/O size. Specifically, the feature values of each merged I/O operation can be obtained by the following method. For example, the eight processed adjacent I/Os in Table 2 are merged into one I/O input (merging here can be understood as concatenating the eight processed adjacent I/Os in Table 2 into one I/O input). Because each I/O in Table 2 is the result of merging 32 I/Os in Table 1, merging eight adjacent I/Os in Table 2 into one I/O input is equivalent to merging 32*8=256 I/Os into one I/O input. The specific merge operation can be as shown below, to obtain a combined I/O operation feature value. Each I/O operation feature value is a variable related to the I/O timestamp, I/O type, and I/O size. [interval1, ioTypeSum1, rSize1, wSize1, interval2, ioTypeSum2, rSize2, wSize2, . . . , interval8, ioTypeSum8, rSize8, wSize8] After obtaining the multiple combined I/O feature values, the method includes inputting the multiple combined I/O feature values to the input fully connected layer of the neural network, so as to map the combined I/O operation feature values into a high-dimensional vector space through linear transformation. The feature values of low-dimensional I/O operations are mapped to the feature values of high-dimensional I/O operations in order to find more feature values. Specifically, a 32×1 vector can be transformed into a 128×1 vector through a weight matrix. The specific transformation process can be X_{128×1} = W_{128×32} × S_{32×1} + b_{128×1}, wherein W_{128×32} is the weight matrix and b_{128×1} is the offset, so as to map the 32-dimensional I/O operation feature values to 128-dimensional I/O operation feature values. Step 402, forming a neuron set from a plurality of neural network cells, so that the feature values after the high-dimensional vector space mapping are sequentially input into the neuron set to perform operations to obtain the operation result. After the feature values of the high-dimensional I/O operations are obtained, a neuron set is formed from multiple LSTM cells (as shown in FIG. 5; it is assumed that 128 LSTM cells are included in this disclosure), and the feature values of the high-dimensional I/O operations are cyclically input into this neuron set for calculation to get the result of the operation. Specifically, in practice, the more neurons there are within a given range, the more accurate the calculation will be. It is worth noting that too much computation will put too much load on the processor, which will affect the efficiency of real-time output. So the number of neuron cells is generally determined by the required accuracy of the result and the processing power of the processor. Taking a single neuron cell as an example, the calculation process of the high-dimensional feature values is described below. Specifically, the calculation process of each neuron cell is shown in FIG. 6.
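The linear transformation of Step 401 may be sketched in Python with NumPy as follows; the weight matrix W and offset b would in practice come from training, and are randomly initialized here purely for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((128, 32))  # W_{128x32}, the weight matrix
    b = rng.standard_normal((128, 1))   # b_{128x1}, the offset

    def input_fc(s: np.ndarray) -> np.ndarray:
        # X_{128x1} = W_{128x32} x S_{32x1} + b_{128x1}
        return W @ s + b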
The neural network architecture of each neuron cell includes an input gate, a forget gate and an output gate, as well as a new memory state and a past memory state. The function of each gate or state is described in detail in the prior art and will not be repeated here. In this embodiment, it is assumed that there are 128 neurons, and the calculation results are cycled 128 times in order to improve the accuracy of the calculation. Step 403, inputting the operation result to the output fully connected layer of the neural network to map the operation result to the feature values of the output dimension of the prediction result, where the feature values of the output dimension of the prediction result reflect the I/O prediction result of the first time period. After the high-dimensional feature vector values are calculated by the multiple neurons, the output data from the output gate is input to the output fully connected layer of the neural network, so as to map the operation result to the output dimension of the predicted result. Among them, the specific output dimension depends on the user's choice; that is, it can be an output feature value of any dimension, for example, feature values of 32 dimensions, feature values of 16 dimensions, or feature values of other dimensions. Specifically, the present disclosure illustrates the 32-dimensional output as an example. The process is as follows: Y_{32×1} = W_{32×128} × X_{128×1} + b_{32×1}. That is, the 128-dimensional feature values are mapped to 32-dimensional feature values through the weight matrix W_{32×128}, wherein b_{32×1} is the matrix offset and represents the influence of noise in data processing. The data processing process of FIG. 4 will be described in detail below with a specific embodiment. Assume that in the LSTM network model, a sample input is X={x1, x2 . . . , xT}, wherein T is the number of time steps of the input sequence (tentatively T=1024). Specifically, corresponding to Step 401, each xT corresponds to the feature values of 32*8=256 I/O operations. Since the duration of each I/O operation is about 3 us, the time interval corresponding to the 1024 x in the input sample is about 32*8*3 us*1024≈1 s. The output of the model after Step 403 is Y={y1, y2 . . . , y16, y17 . . . , y32}, wherein y1, y2 . . . , y16 are the predicted values of the size of future read operations at intervals of 50 ms; in other words, they are the predicted values of the intensity of read operations in the future, for 0˜50 ms, 50˜100 ms, . . . 750 ms˜800 ms respectively. And y17, y18 . . . , y32 are the predicted sizes of future write operations every 50 ms; in other words, they are the predicted values of the intensity of write operations in the future, for 0˜50 ms, 50˜100 ms, . . . 750 ms˜800 ms respectively. It should be noted that when the neural network model is used to forecast the I/O information in the first time period in the future, the amount of historical data is generally required to be greater than the amount of predicted data. For example, in this embodiment, the actual I/O information within 1 s is used to predict the I/O information in the future 800 ms, and, in order to improve the accuracy, it can also be used to predict the I/O information in the future 400 ms. Generally speaking, the larger the amount of historical data and the smaller the amount of forecast data, the higher the corresponding prediction accuracy will be, and vice versa.
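A minimal sketch of the overall network of Steps 401-403, assuming PyTorch is available; the dimensions follow the example above (32 merged features in, 128 LSTM hidden units, 32 predicted values out), and the class is an illustrative stand-in rather than the trained model of this disclosure:

    import torch.nn as nn

    class IoPredictor(nn.Module):
        def __init__(self, in_dim=32, hidden=128, out_dim=32):
            super().__init__()
            self.fc_in = nn.Linear(in_dim, hidden)    # input fully connected layer
            self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
            self.fc_out = nn.Linear(hidden, out_dim)  # output fully connected layer

        def forward(self, x):             # x: (batch, T=1024, 32)
            h = self.fc_in(x)             # map into high-dimensional space
            h, _ = self.lstm(h)           # cyclic calculation over the sequence
            return self.fc_out(h[:, -1])  # y1..y16 read sizes, y17..y32 write sizes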
For the Y value of the model output, y1, y2 . . . , y16 can also be the predicted values of the size of future write operations at every 50 ms interval, and y17, y18 . . . , y32 can be the predicted values of the size of future read operations at every 50 ms interval. In other words, the meaning of the output value Y mainly depends on the previous model training process: if, in the early model training process, y1, y2 . . . , y16 are the predicted values of the size of future read operations at intervals of 50 ms, then, when the model outputs later, y1, y2 . . . , y16 are the predicted values of the size of read operations at intervals of 50 ms. Based on the embodiments described in FIG. 1 to FIG. 4, the accuracy of the predicted I/O value can also be evaluated by a first formula after the predicted value is obtained. For details, see FIG. 7, another embodiment of the SSD data processing approach in this disclosure, which includes the following. Step 701: evaluating the prediction result through the first formula to evaluate the accuracy of the prediction result. The first formula includes:

r = Σ_{i=1}^{n} (x_i − x̄)(y_i − ȳ) / √( Σ_{i=1}^{n} (x_i − x̄)² · Σ_{i=1}^{n} (y_i − ȳ)² )

The r represents the accuracy of the prediction result, x_i indicates the actual intensity of the current I/O, y_i represents the predicted I/O intensity, x̄ represents the average value of the multiple x_i, and ȳ represents the average value of the multiple y_i. In some embodiments, the intensity of an I/O is represented by the size of the I/O. Specifically, after the predicted I/O value in the first period is obtained, the accuracy of the predicted I/O value can be evaluated through the first formula. In the present embodiment, the first formula for evaluating prediction results is described in detail, which improves the implementability of the present disclosure. The data processing method for a solid state drive in an embodiment of this disclosure is described above, and the data processing device for a solid state drive in an embodiment of this disclosure is described below. Referring to FIG. 8, an embodiment of a data processing device for a solid state drive in this disclosure includes the following: an obtaining unit 801, configured to acquire the interface protocol instruction received by the SSD; an analyzing unit 802, configured to parse the interface protocol instruction to acquire the I/O information in the protocol instruction, the I/O information including at least: I/O timestamp, I/O type, and I/O size; and a predicting unit 803, configured to perform machine learning on the I/O information to predict the I/O information for the first time period in the future, so that the processor of the SSD performs SSD management actively according to the prediction result. In some embodiments, the predicting unit 803 is specifically configured to: preprocess the I/O information to obtain a first I/O information table; combine multiple adjacent I/O operations in the first I/O information table into one I/O operation; use the characteristic values of the multiple combined I/O operations for model learning of the neural network LSTM to obtain an I/O prediction result in the first time period; and perform post-processing on the I/O prediction result to adjust the prediction result.
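Referring back to the first formula above, which has the form of a Pearson correlation coefficient, the evaluation of Step 701 may be sketched in Python with NumPy; here x holds the actual I/O intensities and y the predicted I/O intensities, each intensity represented by an I/O size:

    import numpy as np

    def prediction_accuracy(x: np.ndarray, y: np.ndarray) -> float:
        # r = sum((x_i - mean(x)) (y_i - mean(y))) /
        #     sqrt(sum((x_i - mean(x))^2) * sum((y_i - mean(y))^2))
        dx, dy = x - x.mean(), y - y.mean()
        return float((dx * dy).sum() / np.sqrt((dx ** 2).sum() * (dy ** 2).sum()))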
In some embodiments, the predicting unit 803 is specifically further configured to: calculate the time interval between multiple sets of I/O operations that meet a first preset number according to the I/O timestamp; and count a total number of read operations and write operations of the I/O operations in each group within the time interval and the corresponding data size to obtain the first I/O information table. In some embodiments, the predicting unit 803 is specifically further configured to input the characteristic values of the multiple combined I/O operations into the input fully connected layer of the neural network to map the merged I/O operation characteristic values into a high-dimensional vector space through linear transformation; the I/O operation characteristic values are related to the I/O timestamp, the I/O type, and the I/O size. The predicting unit 803 is also configured to form a neuron set with at least one LSTM cell, so that the feature values after the high-dimensional vector space mapping are sequentially input into the neuron set to perform operations to obtain the operation result. The predicting unit 803 is further configured to input the operation result to the output fully connected layer of the neural network to map the operation result to the feature values of the output dimension of the prediction result, where the feature values of the output dimension of the prediction result reflect the I/O prediction result of the first time period. In some embodiments, the predicting unit 803 is specifically configured to input the feature values after the mapping of the high-dimensional vector space sequentially into the network architecture of each neuron cell for calculation; the network architecture includes an input gate, a forget gate and an output gate. In some embodiments, the time interval corresponding to the multiple combined I/O operation feature values is greater than the first time period. In some embodiments, the solid-state disk management includes cache management, garbage collection, or instruction queue management. In some embodiments, the device further includes an assessing unit 804, which can be used to assess the prediction results through the first formula to evaluate the accuracy of the prediction result. The first formula includes:

r = Σ_{i=1}^{n} (x_i − x̄)(y_i − ȳ) / √( Σ_{i=1}^{n} (x_i − x̄)² · Σ_{i=1}^{n} (y_i − ȳ)² )

The r represents the accuracy of the prediction result, x_i indicates the actual intensity of the current I/O, y_i represents the predicted I/O intensity, x̄ represents the average value of the multiple x_i, and ȳ represents the average value of the multiple y_i. In some embodiments, the intensity of an I/O is represented by the size of the I/O. It should be noted that the functions of the above units are similar to those described in FIGS. 1-7, and details are not described herein again. In this disclosure, the interface protocol instruction received by the SSD is first obtained by the obtaining unit 801, and then the interface protocol instruction is parsed by the analyzing unit 802 to obtain the I/O information in the protocol instruction, the I/O information including at least I/O timestamp, I/O type, and I/O size. Machine learning is performed on the I/O information to predict the I/O information in the first time period in the future by the predicting unit 803, so that the SSD processor can actively execute and optimize the SSD management according to the prediction results.
In the present disclosure, through a machine learning algorithm, the I/O information of the first time period in the future can be predicted actively according to the current I/O information, so that the SSD processor can execute SSD management actively according to the predicted results, so as to improve the performance of the SSD and reduce the response time of data operations. The data processing device for a solid state drive in an embodiment of this disclosure has been described above from the perspective of a modular functional entity, and a computer device in an embodiment of this disclosure is described below from the perspective of hardware processing. The computer device is used to implement the data processing device of the SSD, and one embodiment of the computer device in the present disclosure includes a processor and a memory. The memory is used to store computer programs, and when the processor executes the computer programs stored in the memory, the following steps can be achieved: acquiring the interface protocol command received by the solid state drive; parsing the interface protocol command to obtain the I/O information in the protocol command, wherein the I/O information includes at least I/O timestamp, I/O type, and I/O size; and performing machine learning on the I/O information to predict the I/O information in the first time period in the future, so that the processor of the solid state drive performs management actively according to the prediction result. In some embodiments of the present disclosure, the processor can also be used to achieve the following steps: preprocessing the I/O information to obtain a first I/O information table; combining multiple adjacent I/O operations in the first I/O information table into one I/O operation; obtaining I/O prediction results of the first future time period by using at least one LSTM neural network with feature values extracted from the at least one combined I/O operation; and post-processing the I/O prediction results to adjust the I/O prediction results. In some embodiments of the present disclosure, the processor can also be used to achieve the following steps: calculating time intervals between multiple groups of a given number of I/O operations according to the I/O timestamps; and counting the total number of read and write operations for each group of the given number of I/O operations that occur in the time intervals and the corresponding data sizes to obtain the first I/O information table. In some embodiments of the present disclosure, the processor can also be used to achieve the following steps: inputting the feature values extracted from the at least one combined I/O operation to an input fully connected layer of the at least one LSTM neural network in order to map the feature values to a high-dimensional vector space using linear transformations, wherein the feature values are related to the I/O timestamp, the I/O type, and the I/O size; forming a neuron set with at least one LSTM cell and feeding the feature values transformed from the high-dimensional vector space to the neuron set to conduct computation; and inputting the computation result to an output fully connected layer of the at least one LSTM neural network and mapping the computation result to form feature values of an output dimension of a prediction result, wherein the feature values of the output dimension of the prediction result represent the I/O prediction results of the first future time period.
In some embodiments of the present disclosure, the processor can also be used to achieve the following steps: feeding the feature values transformed from the high-dimensional vector space to the network architecture of each LSTM cell to conduct computation, wherein the network architecture comprises an input gate, a forget gate and an output gate. In some embodiments of the present disclosure, the processor can also be used to achieve the following steps: assessing the prediction results through the first formula to evaluate the accuracy of the prediction results. The first formula includes:

r = Σ_{i=1}^{n} (x_i − x̄)(y_i − ȳ) / √( Σ_{i=1}^{n} (x_i − x̄)² · Σ_{i=1}^{n} (y_i − ȳ)² )

The r represents the accuracy of the prediction result, x_i indicates the actual intensity of the current I/O, y_i represents the predicted I/O intensity, x̄ represents the average value of the multiple x_i, and ȳ represents the average value of the multiple y_i. In some embodiments, the intensity of an I/O is represented by the size of the I/O. It will be appreciated that when the processor in the above-described computer device executes the computer program, the function of each unit in each of the corresponding device embodiments may be realized, and details are not described herein again. Exemplarily, the computer program can be split into one or more modules/units, the one or more modules/units being stored in the memory and executed by the processor to complete this application. The one or more modules/units may be a series of computer program instruction segments capable of completing a particular function, the instruction segments being used to describe the execution process of the computer program in the data processing apparatus of the solid state drive. For example, the computer program can be divided into the units of the data processing apparatus of the SSD, and each unit can implement a particular function of the corresponding data processing apparatus of the SSD. The computer device can be a computing device such as a desktop computer, a notebook, a handheld computer or a cloud server. The computer device can include, but is not limited to, the processor and a memory. Those skilled in the art will appreciate that the processor and memory are merely an example of a computer device and do not constitute a limitation to the computer device; the computer device can include more or fewer components, or a combination of certain components, or different components; for example, the computer device can also include input and output devices, a network access device, a bus, and so on. The processor can be a central processing unit (CPU), as well as another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. The general purpose processor can be a microprocessor, or the processor can be any conventional processor, etc. The processor is the control center of the computer device, using various interfaces and lines to connect the components of the entire computer device. The memory can be used to store the computer program and/or module; the processor implements various functions of the device by running or executing the computer program and/or module stored in the memory and invoking data stored in the memory.
The memory can primarily include a storage program area and a storage data area, wherein the storage program area can store the operating system, the application required for at least one function, etc., and the storage data area can store data created according to the use of the terminal. In addition, the memory can include high-speed random access memory, and can also include non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one disk storage device, a flash device, or other non-volatile solid state storage device. The disclosure also provides a computer readable storage medium, which is used to realize the functions of the data processing device of the solid state drive described above and on which a computer program is stored; when the computer program is executed by a processor, the processor can be used to perform the following steps: acquiring the interface protocol command received by the solid state drive; parsing the interface protocol command to obtain the I/O information in the protocol command, wherein the I/O information includes at least I/O timestamp, I/O type, and I/O size; and performing machine learning on the I/O information to predict the I/O information in the first time period in the future, so that the processor of the solid state drive performs management actively according to the prediction result. In some embodiments of this disclosure, when the computer program stored on the computer readable storage medium is executed by a processor, the processor can be specifically used to perform the following steps: preprocessing the I/O information to obtain a first I/O information table; combining multiple adjacent I/O operations in the first I/O information table into at least one I/O operation; obtaining I/O prediction results of the first future time period by using at least one LSTM neural network with feature values extracted from the at least one combined I/O operation; and post-processing the I/O prediction results to adjust the I/O prediction results. In some embodiments of this disclosure, when the computer program stored on the computer readable storage medium is executed by a processor, the processor can be specifically used to perform the following steps: calculating the time interval between multiple sets of I/O operations that meet a first preset number according to the I/O timestamp; and counting a total number of read operations and write operations of the I/O operations in each group within the time interval and the corresponding data size to obtain the first I/O information table.
In some embodiments of this disclosure, when the computer program stored on the computer readable storage medium is executed by a processor, the processor can be specifically used to perform the following steps: inputting the feature values extracted from the at least one combined I/O operation to an input fully connected layer of the at least one LSTM neural network in order to map the feature values to a high-dimensional vector space using linear transformations, wherein the feature values are related to the I/O timestamp, the I/O type, and the I/O size; forming a neuron set with at least one LSTM cell, and feeding the feature values transformed from the high-dimensional vector space to the neuron set to conduct computation; and inputting the computation result to the output fully connected layer of the neural network to map the computation result to the feature values of the output dimension of the prediction result, where the feature values of the output dimension of the prediction result reflect the I/O prediction result of the first time period. In some embodiments of this disclosure, when the computer program stored on the computer readable storage medium is executed by a processor, the processor can be specifically used to perform the following steps: feeding the feature values transformed from the high-dimensional vector space to the network architecture of each LSTM cell to conduct computation, the network architecture including an input gate, a forget gate and an output gate. In some embodiments of this disclosure, when the computer program stored on the computer readable storage medium is executed by a processor, the processor can be specifically used to perform the following steps: assessing the prediction result through the first formula to evaluate the accuracy of the prediction result. The first formula includes:

r = Σ_{i=1}^{n} (x_i − x̄)(y_i − ȳ) / √( Σ_{i=1}^{n} (x_i − x̄)² · Σ_{i=1}^{n} (y_i − ȳ)² )

The r represents the accuracy of the prediction result, x_i indicates the actual intensity of the current I/O, y_i represents the predicted I/O intensity, x̄ represents the average value of the multiple x_i, and ȳ represents the average value of the multiple y_i. In some embodiments, the intensity of an I/O is represented by the size of the I/O. It will be appreciated that if the integrated unit is implemented in the form of a software functional unit and sold or used as a separate product, it can be stored in a computer readable storage medium. Based on this understanding, all or part of the methods of the above-described embodiments of the present disclosure may be implemented by means of a computer program instructing the associated hardware; the computer program can be stored in a computer readable storage medium, and, when executed by the processor, the steps of the various method embodiments can be implemented. The computer program includes computer program code that can be in source code form, object code form, an executable file, or some intermediate form. The computer readable medium can include any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and software distribution media.
It should be noted that the contents of the computer readable medium can be appropriately increased or decreased according to the requirements of legislation and patent practice within the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer readable media do not include electrical carrier signals and telecommunications signals. Those skilled in the art will clearly understand that, for convenience and conciseness of description, for the specific operation of the above-described system, apparatus, and unit, reference can be made to the corresponding process in the foregoing method embodiments, and details will not be described herein. In the several embodiments provided herein, it should be understood that the disclosed systems, apparatus, and methods can be implemented in other ways. For example, the device embodiment described above is merely schematic; for example, the division of the units is only a logical function division, and there may be other division modes in actual implementation, such as a plurality of units or components being combined or integrated into another system, or some features being ignored or not executed. In addition, the coupling or direct coupling or communication connections displayed or discussed may be implemented in electrical, mechanical or other form. Units described as separate members may or may not be physically separated, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over a plurality of network elements. Some or all of the units can be selected according to actual needs to achieve the object of the present embodiment. Further, the functional units in the various embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above-described integrated units can be implemented in the form of hardware, or may be implemented in the form of a software functional unit. As described above, the above embodiments are intended to illustrate the technical solutions of the present disclosure, not to limit them; although the foregoing embodiments have been described in detail, those of ordinary skill in the art will understand that they can still modify the technical solutions described in the various embodiments, or replace part of the technical features; these modifications or replacements do not depart from the spirit and scope of the corresponding technical solutions of the present disclosure.
DETAILED DESCRIPTION OF CERTAIN EMBODIMENTS Multi-host technology, which is known, allows multiple compute or storage hosts to connect into a single interconnect adapter, for example by separating an adapter (PCIe, for example) bus into several independent interfaces. For example, Mellanox Multi-Host™ technology, first introduced with ConnectX®-4, is enabled in the Mellanox Socket Direct card. This technology allows plural hosts to be connected into a single adapter by separating the PCIe (for example) interface into plural independent interfaces. As described for example in the following https www link: nvidia.com/en-us/networking/multi-host/, each interface typically connects to a separate host CPU. Typically, in multi-host architectures, multiple hosts connect directly to a single network controller, yielding direct access to data with low capital and operating expenses. The multi-host architecture may include different CPU architectures (e.g., x86 and/or Power and/or ARM central processing units) where each host is independent of the others, yet all hosts may share the same network interface, which conserves switch ports, cables, real estate and power. A PCI (peripheral component interconnect) or PCIe device, when addressed, is typically enabled, e.g., by being mapped into a system's I/O port address space or memory-mapped address space. The system's firmware, device drivers, or the operating system may program the so-called "Base Address" Registers (aka BARs) to inform the device of the PCIe device's address mapping, e.g., by writing configuration commands to the PCIe controller. InfiniBand (IB) is a computer networking communications standard, used for data interconnect among and/or within computers, e.g., supercomputers, and/or as a direct or switched interconnect either between servers and storage systems, or among storage systems. Published InfiniBand specifications are available from the InfiniBand trade association. InfiniBand provides remote direct memory access (RDMA) capabilities for low CPU overhead. InfiniBand uses a switched fabric network topology, aka switching fabric, in which, typically, nodes interconnect via one or more network switches, such as crossbar switches. Mellanox's InfiniBand host bus adapters and network switches, for example, are used in many commercially available computer systems and databases. The following terms may be construed either in accordance with any appropriate definition thereof appearing in literature in the relevant field of technology, or in accordance with the specification, or to include in their respective scopes, the following: The term "Fabric" is intended to include, by way of non-limiting example, the fabric illustrated at the following https link: etherealmind.com/wp-content/uploads/2011/07/what-switch-fabric-pt2-6.jpg/. The term "privilege" refers to an individual attribute (e.g., of a computer process) that allows a "privileged" process to perform a security-related operation (such as a memory access operation, e.g., access of a DPU to host memory from the ARM side) and does not allow other ("non-privileged") processes to perform the same operation. A process that is running with a privilege or privileges is called a privileged process, and the program that the process is running is called a privileged or trusted program.
A privileged computer process is authorized and/or trusted to perform at least one given security-relevant function that other ("non-privileged") processes are not trusted to perform, and hence are not authorized to perform. Host: a networked computer, such as, by way of non-limiting example, a DPU. The terms "compute host" and "storage host" are defined in the following https www online location: nvidia.com/en-us/networking/multi-host/. A "key identifier" is an identifier, e.g., a number or other string, which identifies (is associated, typically uniquely, with) a given memory region and serves as a key (e.g., the key identifier must be given to the hardware in order to access the given memory region). "Memory isolation" occurs when a process A cannot access process B's memory (e.g., cannot access any memory which is associated with any process other than process A itself), except when process A is given the privilege, or is granted explicit permission, to access memory associated with some process other than process A itself. For example, "logical" memory isolation may be achieved by an NTB which provides processor domain partitioning and address translation between memory-mapped spaces of processor domains, such that devices on each side of the bridge are not visible from the other side, and, nonetheless, data transfer and status exchange between the processor domains is possible. The term "process" is intended to include, e.g., as described in this link: en.wikipedia.org/wiki/Process_(computing), an instance of a computer program that is being executed by one or many threads. The process typically contains program code and its activity. Depending on the operating system (OS), a process may include plural threads of execution, all executing instructions concurrently. Bridge chip: a device that connects one bus (e.g., PCIe bus), on which there may be a first host, to another (e.g., PCIe) bus, on which there may be a second host. A bridge chip may have on-board read and write queues, may have prefetching functionality, and may have caching functionality, either or both of which may be configurable by a user. Local/remote: given a computer process performed by a first host, which seeks access, via a bridge chip, to at least one local memory buffer residing on a second host, a "local" key is a key from the first host, whereas a "remote" key is a key from the second host. Read operation: includes reading data from remote host memory, and writing that data to the local host's memory. Write operation: includes reading data from the local host memory, and writing that data to the remote host's memory. "Offset" of a host is a term which defines where a memory transaction starts in a given memory area or region pointed to by a given key. For example, especially when the memory area comprises a very large memory region, the offset (e.g., from the start of the region) and a length parameter may be used to define a sub-region (within the given area) that a given transaction is to act on. Register: intended to include storage, e.g., for a key identifier; if this occurs in accordance with the PCIe specification, each register comprises a valid address on the device address space range or config space, from which a host may read and/or to which a host may write.
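The key-identifier and memory-isolation notions above may be illustrated by the following minimal Python sketch, under assumed semantics; the RegionTable class, its methods, and the permission strings are all hypothetical and do not describe any actual NTB register map:

    import secrets

    class RegionTable:
        def __init__(self):
            self._regions = {}  # key identifier -> (base, size, perms)

        def register(self, base: int, size: int, perms: str) -> int:
            key = secrets.randbits(32)  # key identifier for this region
            self._regions[key] = (base, size, perms)
            return key

        def check(self, key: int, offset: int, length: int, op: str) -> bool:
            region = self._regions.get(key)
            if region is None:
                return False            # unknown key: isolation enforced
            base, size, perms = region
            return op in perms and offset + length <= size

A process which never received the key identifier of a region fails the check, which is the sense in which the region is isolated from that process.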
An NTB (Non-Transparent Bridge) is a bridge chip which (e.g., as described in the following online https www link: kernel.org/doc/html/latest/driver-api/ntb.html) connects plural memory systems or memories, residing on plural computers respectively, to a single fabric (e.g., to a PCI-Express (aka PCIe) fabric, in which case the NTB may be termed a PCIe NTB chip). NTB hardware may support read-and-writable registers that are accessible from both sides of the device, which allows peers to exchange a certain amount of data at a fixed address. NTB client drivers register with the NTB core driver; registration may use the Linux Device framework, for example. NTB is useful when it is desired to share some memory between plural systems which are deployed on the 2 "sides" of the bridge respectively. Typically, the NTB has an API which supports two types of memory window interfaces: inbound translation and outbound translation. The former is typically configured on a local NTB port, and the latter (outbound translation) is typically configured by the peer or remote side of the bridge, on the peer NTB port. It may be useful, for a user of a multi-host environment, and specifically in an environment which includes a SmartNIC or a DPU, to offload some of the processing and computation to the DPU host itself (e.g., ARM cores on Bluefield®2). However, those offloaded processes may need access to data which resides on the main host itself (e.g., the x86). The problem to be addressed is how multiple processes in the DPU host can asynchronously access different parts of the memory of the main host with different granularities, while giving each process different access permissions and ensuring that memory isolation (e.g., as defined above) is provided between the various processes. It is appreciated that if plural processes are to be downloaded to a DPU host, each such process may need access to the host memory. Possible solutions to the problem of how to support asynchronous access of multiple processes in a host to various portions of a main host's memory having various granularities include: 1. Legacy NTB: conventional APIs of conventional NTBs may include fixed mapping of the host memory to the NTB address space. Here, accessing different portions or regions of memory typically involves reconfiguring the window each time, which can only be done by a privileged process. Moreover, according to the PCIe specification, reconfiguring the window typically cannot be done asynchronously. If it is desired or needed to support access to different address spaces, a window for each address space may be required, which would likely translate to an unattainable size of BAR. 2. RDMA: using RDMA read and RDMA write, one can overcome the problem. However, this solution requires an RDMA capable device and opening multiple connections between each offloaded process and the host. It is appreciated that when an RDMA capable device opens plural connections between plural offloaded processes and a host, this undesirably consumes network resources. 3. A client-server model of an RPC (remote procedure call) may be used. This involves the DPU host having a client daemon to send transactions to read/write a memory on the main host, with an API exposed to the other processes. The main host itself may have a server daemon to accept those transactions, execute them, and send a response, if and as needed. This solution requires additional utilization of the CPU, for both the DPU host and the main host.
Moreover, this solution has high latency overall, and the required software is complex. In contrast, certain embodiments of the invention include an NTB device API or IO API, e.g., as shown in FIG. 1, which, typically, asynchronously supports both read and write, typically provides memory isolation, and typically provides privileges for memory access. Generally, the term "asynchronous" is used to differentiate from "synchronous" operation of a system in which tasks or IO operations or transactions are performed one at a time, and only when one is completed is the next task unblocked, such that it is necessary to wait for a task to finish before moving to the next task. In asynchronous operation, a next task can begin before a previous task finishes. Thus, with asynchronous programming, multiple requests may be dealt with simultaneously, enabling many more tasks to be completed in a given period of time, thereby facilitating parallelism. For example, a conventional computer process typically needs to wait to get memory, since a conventional or legacy NTB typically uses conventional PCIe transactions which generate a response only after a certain time has elapsed; this is not the case for the embodiments herein, which are asynchronous. A-synchronicity of tasks, e.g., of NTB IO operations (e.g., read and write transactions between two hosts which may be interconnected by an NTB), and asynchronous I/O are defined in the following on-line entry: en.wikipedia.org/wiki/Asynchronous_I/O. Thus, FIG. 1 illustrates an API (Application Programming Interface) apparatus, which is operative in conjunction with a bridge chip, a first host, and a second host, and which provides plural processes in a host with asynchronous access to plural portions of the memory of another host. The apparatus of FIG. 1 typically provides non-privileged Non-Transparent Bridge control and IO, whereby plural processes in a host, e.g., a DPU host, may asynchronously access various portions of a main host's memory, e.g., in accordance with the method of FIG. 2. Typically, no privileged process is required to control the NTB, since, instead, pre-registration is provided, in which key identifiers of memory region/s may be registered, and, thereafter, NTB control can be provided by at least one (typically any) process which has previously registered a key identifier of a given memory region which is being used (accessed from the local host by another host via the NTB, for example) in a given transaction or IO session between the 2 hosts. The first host is, in some embodiments, external to the API (Application Programming Interface) apparatus, but in other embodiments the first host could be comprised therein. Similarly, the second host is, in some embodiments, external to the API (Application Programming Interface) apparatus, but in other embodiments the second host could be comprised therein. The bridge chip is, in some embodiments, external to the API (Application Programming Interface) apparatus, but in other embodiments the bridge chip could be comprised therein. The apparatus of FIG. 1 typically includes 2 APIs, on the two sides of the bridge chip, e.g., NTB, respectively; each API belongs to a different address space, since the first API belongs to host1's address space, and the second API belongs to host2's address space. According to certain embodiments, the hosts are connected to busses on either side of the NTB, and, when one of the hosts' CPUs uses the API, that host can, typically, access the other host's memory, via the bridge chip, e.g., NTB.
The non-transparent bridge or NTB may have, but does not necessarily have, scratchpad registers and/or doorbell registers and/or heartbeat messages. The NTB device API may be via PCIe or any other suitable standard for device connectors, and the transactions between the 2 hosts may or may not be PCIe transactions. The second host may comprise a DPU, e.g., a Bluefield®2 DPU, in which case the second host includes ARM processors, and the DPU host may also be termed "the ARM side" or "DPU side\host" of the bridge, as opposed to the other side of the bridge, which may be termed the "x86 side" (assuming the first or main host includes an x86 processor, as most do), or "main host side", of the bridge. In FIG. 1, dotted vs. dashed lines are used to differentiate between commands on the 2 sides of the bridge respectively. The NTB is typically symmetric, in that each NTB API generates a READ/WRITE to both hosts. It is appreciated that DPUs can gain non-privileged random accesses to host memory from the ARM side in an asynchronous way, e.g., via RDMA. However, the API shown and described in FIG. 1 may be a less complex solution; for example, creation of a valid RDMA connection for usage may include various commands and/or data structures which need not be provided when using the API shown and described herein. FIG. 2 illustrates a method, which may be performed by the system of FIG. 1, and which provides non-privileged Non-Transparent Bridge control and IO, whereby plural processes in a host, e.g., a DPU host, may asynchronously access various portions of a main host's memory, with various granularities. It is appreciated that the flow is typically asynchronous because the NTB is responsible for performing all the transactions, in contrast to legacy NTB architectures in which the process itself (not the bridge) performs read/write transactions. The method of FIG. 2 may be performed each time a process on a first host, host1, seeks to access a memory of a process on a second host, host2. Alternatively, or in addition, processes on host2 may seek to access a memory of a process on the first host. The method of FIG. 2 may include all or any suitable subset of the following operations, suitably ordered, e.g., as shown or as follows: Operation 210: host1 (or a process thereon) has a memory request to be issued to host2 memory (e.g., to write data of given size into host2 memory, or to read data of given size from host2 memory). Typically, host1 comprises the DPU host shown in FIG. 1, and host2 comprises the main host (e.g., x86) shown in FIG. 1. However, this is not intended to be limiting, since other alternatives are also possible. For example, host2 may comprise the DPU host shown in FIG. 1, and host1 may comprise the main host. Host1 could be either a compute host or a storage host, and the same is true of host2. Host1 may make any suitable use of the memory subject of host1's memory request, if host1's memory request is granted. Example use-case: host1 may be a DPU to which host2 offloaded some process, e.g., in order to use the DPU as an accelerator. However, as a result, the offloaded process may have to access host2 memory (e.g., a database which host2 uses); thus, in this use-case, host1 (e.g., the process offloaded thereto) may need access to host2 memory. Also, certain devices may be connected to a DPU and may not be connected directly to a host, such as BlueField®-2× SSDs.
If host1's request is granted, this could then enable the device to access the host memory and/or could enable the host (e.g., the main host) to access DPU device buffer/s. It is appreciated that the above are but examples of the many use-cases in which a DPU may need non-privileged random accesses to host memory from the ARM side, e.g., in an asynchronous way. According to certain embodiments, each process has a command gateway at its disposal (one gateway per process), e.g., as described herein. Assume that in Operation 210, a "process 1" on host1 seeks to issue the memory request to the memory of "process 2" on host2. Subsequently, the following operations may follow:
Operation 220: process 1 on host1 registers its local memory buffer in the NTB and gets its key identifier (typically using the NTB API of FIG. 1).
Operation 230: process 2 on host2 registers its local memory buffer in the NTB and gets its key identifier (typically using the NTB API of FIG. 1). Typically, host2 gives host1 the host2 key identifier through a method which is external to the NTB API. Any suitable technology may be employed for exchanging key identifiers, such as, by way of non-limiting example, via network connection (e.g., TCP), via a shared file from which both hosts can read and/or to which both hosts can write, or even by manual copying. It is appreciated that if the memory buffer being used is already registered, Operation 220 is typically omitted. Also, if the memory buffer being used is already registered, Operation 230 is typically omitted.
Operation 240: process 1 on host1 issues the secured NTB access command (which may include all or any suitable subset of: <host1_key, host2_key, host1_offset, host2_offset, size, read/write>). This command may, generally, include a local address to which the NTB writes a read response, or from which it takes write data; and/or a remote address serving as the target address for the read response or for the write data; and/or a transaction size; and/or an indicator bit stipulating whether the transaction is a read transaction or a write transaction (typically using the NTB API of FIG. 1). It is appreciated that Operation 240 onward can be done multiple times (depending on the number of accesses host1 needs to perform).
Operation 250: The NTB gets a command to perform the transaction.
Operation 260: The NTB checks whether the keys are valid and have permissions.
Operation 270: Determination of validity, e.g., yes/no.
Operation 280: Report error.
Operation 290: Issue the transaction.
Operation 300: Increment a command counter, or write a completion bit indicating that the command has been completed, e.g., in a local address which may be in host1's own local memory. The term "completion bit" is intended to include a bit whose value indicates whether or not a certain transaction has issued and been completed. When the bridge chip finishes a transaction, the "completion" bit is flipped (e.g., if the bit is 0, the bridge chip writes 1, or if the bit is 1, the NTB writes 0), and at least one (typically any) process can determine whether or not a transaction has been completed by tracking the completion bit's value. However, it is appreciated that implementation via a command counter or completion bit is described herein merely by way of non-limiting example. Alternatively, any other appropriate method to track when a transaction has finished or been completed may be used. Typically, the command counter is physically stored on an internal memory of the NTB.
The command counter may, logically, form part of the address space range indicated by the BAR of the NTB.
Operation 310: Done.
According to certain embodiments, each time a process in host1 wants to issue a memory request to a portion of host2's memory, the flow of FIG. 2 is performed. If, for example, the method of FIG. 2 is performed n times by n processes in host1, where the n processes each seek to issue a memory request to a respective portion of host2's memory, each of the n processes typically gains asynchronous access to the corresponding portion of host2's memory. Any suitable action may be taken, or not taken, responsive to an error having been reported in Operation 280. Typically, whichever entity, e.g., process in host1, commanded the NTB to perform the transaction is the entity which gets the error report. One action that may be taken is for the entity, e.g., process in host1, to issue a command to perform a valid transaction, if an error has been reported indicating that a command previously issued by that entity was not valid. For example, if the length field of a command issued by the process is greater than the size of the memory region, the NTB may fail the command, but no fatal error state is entered; instead, the method, after Operation 280, returns to Operation 250, and the process which issued the failed or invalid command may issue another command specifying a valid transaction, in which case no error would be reported in Operation 280, and the transaction may issue (Operation 290). "Fatal" error states, if any, may be defined using any suitable criterion, e.g., may result from the internal NTB HW implementation; for example, the NTB HW implementation may create scenarios of fatal errors. It is appreciated that incrementing a command counter or flipping a completion bit are two possible methods, mentioned by way of non-limiting example, for indicating to the process which issued the command that the asynchronous transaction issued in Operation 290 has been completed. According to certain embodiments, memory isolation is achieved, since only a process A which has a given key identifier can access a given memory, whereas at least one (typically any) process which does not have the given key identifier cannot access the given memory. Thus, the given memory is isolated from all processes other than process A. The NTB API may be characterized by a command structure which includes all or any suitable subset of the following command structure components: <local key, remote key, local offset, remote offset, transaction size e.g., in bytes, and a binary read\write bit>. The local and/or remote keys typically each uniquely identify a memory address range or memory region which the NTB can access; this allows security and/or isolation to be enforced. Typically, certain portions of each memory region or range, e.g., the first few bytes (say, 4) of the memory region, are reserved for NTB use, e.g., for provision of command gateways as described herein. The stored local/remote offset may indicate an offset from the start of the memory space identified, typically uniquely, by the key. The read/write bit indicates whether a given transaction is a read operation or a write operation. The NTB API command/s may be written, by at least one process, to a "command gateway" assigned thereto, typically uniquely, i.e., the command gateway is typically assigned to a given process, and to no other process. The NTB may then start processing the command, and a new command may be written to the gateway.
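By way of illustration only, the secured command structure just described might be modeled in C roughly as follows; the field names and widths are hypothetical assumptions, since the embodiments do not mandate any particular encoding.

    /* Hypothetical encoding of the secured NTB command; field names and
     * widths are illustrative assumptions, not mandated by the embodiments. */
    #include <stdint.h>

    typedef struct {
        uint32_t local_key;     /* pre-registered key of a local memory region  */
        uint32_t remote_key;    /* pre-registered key of a remote memory region */
        uint64_t local_offset;  /* offset from the start of the local region    */
        uint64_t remote_offset; /* offset from the start of the remote region   */
        uint32_t size;          /* transaction size, e.g., in bytes             */
        uint8_t  is_write;      /* binary read\write bit: 0 = read, 1 = write   */
    } ntb_cmd_t;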
Typically, each command gateway has a corresponding "command counter" which increments upon command completion; the command gateway and its command counter typically require no more than a few bytes of storage.
Multi-Host Environment
The API of the NTB may include a command in the following format: <local address, remote address, size, read\write>, where "local address" is the space to which the bridge chip may write the read response, or from which the bridge chip may take the write data; the remote address is also termed the "target" address in the host memory; size is the size of the transaction in bytes; and read\write is the transaction type. Each time a process seeks to issue a read from the host memory at address X for S bytes and seeks to store the result in the process's address Y, the process may issue the command: <Y, X, S, read>. The command for a write transaction may be analogous, e.g., each time a process seeks to issue a write of S bytes to the host memory at address X, and the S bytes are currently in the process's address Y, the process may issue the command: <Y, X, S, write>. After issuing this command, the process does not need to wait, since the NTB will handle the command asynchronously. The NTB typically notifies the process of command completion by incrementing a command counter which corresponds to the command gateway via which this command was issued, or by storing a completion bit, which may reside, say, in the first byte of the local address. The process may then proceed accordingly, e.g., the program may, depending on its own logic, take some action and/or refrain from taking some action, responsive to being notified of command completion. For example, if a process in a host on one side of a bridge is configured to read data from another host on the other side of the bridge, the process may begin using the data only upon receipt of notification of command completion. In contrast, it is appreciated that a conventional computer process typically needs to wait to get memory, since a conventional NTB uses conventional PCIe transactions which generate a response only after a certain time has elapsed; this is not the case for the embodiments herein. To allow multiple processes to access the NTB concurrently, each process may be given at least one "command gateway", where that process may write its commands. It is appreciated that plural command gateways may be assigned to a single process; however, to ensure isolation, plural processes typically are not assigned to a single command gateway. Since each process's command gateway/s need be only a few bytes in size, e.g., as long as the command itself, up to thousands of processes may be supported at the "cost" of but a few KB of BAR, which may be devoted to command gateways. The KB devoted to the command gateway may comprise a suitable memory address space or memory region which may be pointed to by the Base Address Register (BAR) and which typically is but a few KBs in size, sufficient to include the addresses of the command gateway. The KB devoted to the command gateway may be part of the device BAR address space (which may or may not be resizable), thus may simply be memory addresses the device exposes to hosts, and may not be connected to any actual memory. The NTB may have an internal queue which monitors the commands written to the gateway, aka GW (or may monitor the commands using any other suitable method or data structure).
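Continuing the sketch above, a process might exercise its command gateway and command counter roughly as follows; ntb_cmd_t is the hypothetical structure above, and the MMIO pointers stand in for addresses that would, under the assumptions here, be mapped from the NTB's BAR.

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical MMIO pointers, assumed mapped from the NTB BAR. */
    volatile ntb_cmd_t *gateway;     /* this process's command gateway      */
    volatile uint32_t  *cmd_counter; /* increments upon command completion  */

    /* Issue <Y, X, S, read> asynchronously; return a counter snapshot. */
    uint32_t issue_read(uint32_t lkey, uint32_t rkey,
                        uint64_t y, uint64_t x, uint32_t s)
    {
        ntb_cmd_t cmd = { lkey, rkey, y, x, s, 0 /* read */ };
        uint32_t before = *cmd_counter;
        memcpy((void *)gateway, &cmd, sizeof cmd); /* write to the gateway */
        return before;  /* the process need not wait; the NTB works async  */
    }

    /* Later, the process may poll for completion while doing other work. */
    int is_done(uint32_t before) { return *cmd_counter != before; }

The polling here corresponds to tracking the command counter described above; a completion bit could be tracked analogously.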
The NTB typically cannot distinguish between different processes, and, instead, simply enables a process to use the NTB if and only if that process has the right key. Thus, typically, the NTB need not know how many commands are in the gateway, nor which command belongs to which process. It is appreciated that even a conventional NTB may be configured to transfer a transaction (e.g., read or write) between two hosts. The command gateway provided in accordance with certain embodiments serves as an interface for commanding the NTB to transfer a transaction (read or write) between two hosts. Typically, key identifiers, e.g., as described herein, indicate to the NTB which memory region to operate on, and/or the size of the data to be read/written, and/or a read/write bit indicates to the NTB which type of transaction to issue. To support permissions and/or isolation, key identifiers may be employed. The secured command format may, for example, include all or any suitable subset of the following format components: <local key, remote key, local offset, remote offset, size, read/write>, where the local/remote key is an identifier, typically unique, for a local/remote memory address range, respectively, which the NTB can access; the local/remote offset is the offset from the start of the local/remote memory space, respectively; and the remaining parameters (size, read/write) characterize the transaction which the process seeks to issue, by respectively indicating the amount of data to be read/written, and whether the transaction is a read-type or write-type transaction. Typically, the key identifier is required to be pre-registered by the relevant process with all relevant permissions, to ensure that only a process which owns a previously registered key identifier, uniquely (typically) identifying a given region in memory, may access the given region. Typically, the keys are not one-time keys. A key identifier is typically registered once, and may then, from that point onward, be used repeatedly by the NTB (e.g., unless and until unregistered explicitly). Thus, typically, even given a memory region which is to be used (read from and/or written to) multiple times using a given key, a single registration of that key identifier is sufficient, assuming indeed that the same key is used for all the various uses of the memory region. Any suitable method may be used to register the memory, such as, by way of non-limiting example, conventional memory registration methods known in InfiniBand technology, e.g., as described at the following link: rdmamojo.com/2012/09/07/ibv_reg_mr/. It is appreciated that each process registers its key identifier, and, typically, such pre-registration must be in place whenever a memory region needs to be accessed by the NTB, because use of the NTB with the memory region typically requires that the memory region be registered, and that the key identifier which, typically uniquely, identifies the registered memory region be provided as a condition for accessing that region. The entity responsible for registering a specific memory region is typically the process which owns this memory region. It is appreciated that in conventional use of an NTB to share memory between two hosts, the NTB may create a region on its own address space which points to the memory region of the host.
If Host B wants to write to Host A, then, conventionally, all or any suitable subset of the following operations may be performed:
Operation 1a: Host A may configure the NTB window to point to the desired memory.
Operation 1b: Host A may notify Host B that Host B may write.
Operation 1c: Host B may write its actual data to the NTB address space. The NTB may forward this write to the memory that the NTB window points to (to the Host A memory).
Typically, this window allocation on the NTB address space limits the size and/or amount of memory which can be spanned using a single NTB. Alternatively, however, an NTB provided according to embodiments of the present invention does not include the above type of window, and instead copies from a region on Host A to a region on Host B (or vice versa). For example (using the above example), if Host B wants to write to Host A, then all or any suitable subset of the following operations may be performed (a sketch of this two-key flow appears below, after Example 2):
Operation 2a: Host A may register a memory region with write permission.
Operation 2b: Host B may register a memory region with read permission.
Operation 2c: Host A may notify Host B about the Host A key.
Operation 2d: Host B may write its data to its registered region.
Operation 2e: Host B may write a command which commands the NTB to copy from the Host B memory pointed to by the Host B key to the Host A memory pointed to by the Host A key.
Thus, typically, Host B does not write the actual data through the NTB, because the command typically does not include the actual data. It is appreciated that, typically, Host B needs the Host A key to proceed, which prevents other processes on Host B from gaining access to memory. Typically, an NTB command is valid if and only if it includes both keys, a local key and a remote key. It is appreciated that embodiments herein have many advantages in practice, such as, by way of non-limiting example, the following: processes in a first host accessing the other host's memory may be non-privileged; security and isolation between processes are maintained; multiple processes are supported; and asynchronous access to host memory, typically with more than one level of granularity, is supported. It is appreciated that granularity may define a relationship, e.g., a ratio, between amounts of computation and of communication. If parallelism is fine-grained, task code sizes and execution times are small, and small amounts of data (e.g., a few memory words or less) are communicated between processors frequently. Conversely, if parallelism is coarse-grained, task code sizes and execution times are large, and the amounts of data transferred among processors are also large, but are transferred infrequently. Thus, granularity defines how frequently data is communicated, and how much computation accompanies each communication. It is appreciated that determination of granularity typically is subject to the following tradeoff: fine granularity increases parallelism and speed, but also increases overheads of synchronization and communication; conversely, coarse granularity decreases parallelism and speed, but also decreases overheads of synchronization and communication. According to certain embodiments, use of the system of FIG. 1 and/or of the method of FIG. 2 allows multiple processes in a host, e.g., DPU host, to asynchronously access different parts of a main host's memory, with various granularities. Example 1: Consider a single offloaded process with a single granularity, e.g., table entry size. This offloaded process may need to access a few entries in a table in main host memory.
The table size is huge (e.g., tens of GB), whereas each entry therewithin is small (e.g., a few bytes each). The table cannot, due to its large size, be mapped all at once through a legacy NTB, so accessing the first entry and then the last would require costly re-configuring of the legacy NTB. In contrast, the API described herein, e.g., via the embodiments of FIG. 1 and/or FIG. 2, allows this access to take place with just one registration (of just one local memory buffer in the NTB, for example). It is appreciated that use of the embodiment of FIG. 2 in the above Example 1 typically reduces communication overhead and/or prevents load imbalance, where load indicates the number of processes which use the NTB and how NTB resources are allocated to each. Still with reference to Example 1, it is appreciated that on a legacy NTB, each window reconfiguration requires notifying the other host. Thus, accessing an entry in the table which is not mapped requires reconfiguring the window and letting the other host know about the window reconfiguration. In contrast, when using the API shown and described herein (e.g., when using the embodiment of FIG. 2), the entire table is mapped all at once. It is appreciated that load imbalance would result if, say, process A only requires accessing a few table entries, but maps the entire table. In this situation, only one mapping is required, but the mapping is coarse, since it includes considerable unused table entries/regions. In contrast, if the window only includes the memory needed each time, the window typically needs to be configured on each access; thus, each window configuration typically requires communication to inform the other host about the window reconfiguration. It is appreciated that references herein to memory regions may be interchanged with references to buffers, and vice versa. Example 2: The offloaded process needs to access big data residing on the main host. If a legacy NTB were to be used, the big data may all be mapped; however, using that big data would require the process to transfer or read this big data itself from the window, which would require the process to use its own resources for accessing the big data. In contrast, if the API described herein, e.g., as per the embodiments of FIG. 1 and/or FIG. 2, is employed, the NTB does the transactions and copies the transaction results to whichever memory buffer was registered, thereby freeing the offloaded process's resources for other tasks. It is appreciated that embodiments herein improve parallel performance, by enabling a better balance between load and communication overhead. Due to the fact that varying granularity is supported, performance is no longer strictly subject to the above tradeoff. Instead, according to certain embodiments, the eventuality of too fine granularity, in which performance suffers from communication overhead, may be reduced or eliminated, and the eventuality of too coarse granularity, in which performance suffers from load imbalance, may also be reduced or eliminated. It is appreciated that in Example 2, the CPU is used less (is used more parsimoniously), since the copying is performed by the NTB, rather than by the CPU. DPU use-cases, for example, may require, or benefit from, or be improved by, non-privileged random accesses to host memory from the ARM side, which are asynchronous. FIGS. 3a-3b show examples of multi-host environments in which embodiments herein may be implemented; as shown, each host is associated with a memory (aka MEM).
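Returning to the two-key flow of operations 2a-2e above, the Host B side might look roughly as sketched below; ntb_register() and receive_key_from_host_a() are hypothetical placeholders, and only the shape of the flow is taken from the description.

    /* Hypothetical sketch of operations 2b-2e as seen from Host B; the
     * registration and key-exchange helpers are illustrative placeholders. */
    #include <stdint.h>
    #include <string.h>

    uint32_t ntb_register(void *buf, uint32_t len, int perm); /* 2a/2b      */
    uint32_t receive_key_from_host_a(void); /* 2c: out-of-band key exchange */

    void host_b_write_to_host_a(uint8_t *buf_b, const uint8_t *payload,
                                uint32_t len)
    {
        uint32_t key_b = ntb_register(buf_b, len, /*read perm*/ 1);  /* 2b */
        uint32_t key_a = receive_key_from_host_a();                  /* 2c */
        memcpy(buf_b, payload, len);               /* 2d: stage the data   */
        ntb_cmd_t cmd = { key_b, key_a, 0, 0, len, 1 /* write */ };
        memcpy((void *)gateway, &cmd, sizeof cmd); /* 2e: NTB copies B->A  */
    }

Note that, consistent with the description above, the command carries only keys, offsets, and a size; the payload itself never passes through the gateway.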
The term “all” is used herein for simplicity, to describe example embodiments. It is appreciated however that, alternatively, whatever is said herein to be true of, or to characterize or to pertain to, “all” members of, or “each” member of, or “every” member of, a certain set can also, in other embodiments, be true of, or characterize or pertain to, most but not all members of that set, or all but a few members of that set, or at least one (but less than all) member/s of the set. It is appreciated that software components of the present invention may, if desired, be implemented in ROM (read only memory) form. The software components may, generally, be implemented in firmware or hardware, if desired, using conventional techniques. It is further appreciated that the software components may be instantiated, for example as a computer program product, or on a tangible medium. In some cases, it may be possible to instantiate the software components as a signal interpretable by an appropriate computer, although such an instantiation may be excluded in certain embodiments of the present invention. It is appreciated that various features of the invention which are, for clarity, described in the contexts of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention which are, for brevity, described in the context of a single embodiment, may also be provided separately, or in any suitable sub-combination. It will be appreciated by persons skilled in the art that the present invention is not limited by what has been particularly shown and described hereinabove. Rather, the scope of the invention includes, inter alia, the appended claims and equivalents thereof. | 39,917 |
11861212 | DETAILED DESCRIPTION The invention aims to provide technical solutions capable of reducing the number of issued command sequences, so that a single command sequence achieves the purpose of multiple command sequences, for the communication between a flash memory device and a flash memory controller 105, and for all possible access/processing operations, such as a data read operation, a copy back read operation, an erase operation, and a write/program operation under different write modes such as SLC (Single-Level Cell) mode, MLC (Multi-Level Cell) mode, TLC (Triple-Level Cell) mode, QLC (Quad-Level Cell) mode, and so on. A simplified command sequence, for example, comprises a starting command, intermediate information, and an ending command such as a confirm command, where the confirm command means that the flash memory device can start to execute or perform a corresponding operation associated with the simplified command sequence. It should be noted that the function of the simplified command sequence is different from that of a conventional command sequence, since the simplified command sequence is used to access (e.g., read, write/program, or erase) data unit(s) for multiple planes or all planes, while a conventional command sequence merely accesses data of only one plane; the data unit(s) may mean block/page/sector unit(s) or other data unit(s) defined by a different data size/amount. FIG. 1 is a block diagram of a storage device 100 according to an embodiment of the invention. The storage device 100, for example, is a flash memory storage device (but is not limited thereto) and comprises a memory controller such as a flash memory controller 105 and a memory device such as a flash memory device 110 having multiple flash memory chips/dies; each flash memory chip/die may comprise one or more different flash memory planes, such as four planes. However, the number of planes can be different in different products of the storage device 100, and this is not meant to be a limitation. The flash memory controller 105 at least comprises a processor 1051 and an input/output (I/O) circuit 1052. The processor 1051 is coupled to the I/O circuit 1052 and is arranged to control the I/O circuit 1052 sending access (e.g., read, write, or erase) commands through a specific communication interface to the flash memory device 110, to control and access the flash memory device 110. The flash memory device 110 comprises an I/O control circuit 1101, a logic control circuit 1102, a control circuit 1103, a counter circuit 1104, an address register 1105, a command register 1106, a memory cell array 1107, a row address decoder 1108, a column address decoder 1109, and a data register 1110. It should be noted that in different embodiments, the address data/information included in a command sequence transmitted from the flash memory controller 105 to the flash memory device 110 may be formed by only block address information, by a combination of page address information and block address information, or by a combination of page address information, block address information, and plane address information, for the different processing operations. The plane address information is optional. A block/page/plane address information, for example, is indicated by using a serial number or an index number which may be in a range from zero to a maximum number. The maximum numbers for block/page/plane addresses are different.
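Purely as an illustration of the structure just described, a simplified command sequence could be modeled in C as below; the field names and the choice of five address cycles are assumptions, since the text leaves the exact cycle counts open.

    #include <stdint.h>

    /* Hypothetical model of a simplified command sequence: a starting
     * command, intermediate information (address cycles, optionally
     * including a plane address), and an ending (confirm) command. */
    typedef struct {
        uint8_t start_cmd;    /* e.g., a read mode command                 */
        uint8_t addr[5];      /* page/block(/plane) address cycles;        */
                              /* five cycles is an illustrative assumption */
        uint8_t confirm_cmd;  /* tells the device to start the operation   */
    } cmd_sequence_t;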
For instance, the flash memory controller 105 sends a write command, data to be written, and address information comprising a specific page address, a specific block address, and a specific plane address, and the flash memory device 110, after receiving such communication signals, can correspondingly write the data into a page unit corresponding to the specific page address, specific block address, and specific plane address. The erase and read operations are similar. The flash memory controller 105 is coupled to the flash memory device 110 through the specific communication interface, and controls or accesses the flash memory device 110 by sending one or more commands to the flash memory device 110. The specific communication interface, for example, comprises at least signal ports/pins such as data pins DQ0-DQ7 or other data pins (not shown in FIG. 1), and logic control pins such as CLE (Command Latch Enable), ALE (Address Latch Enable), RE (Read Enable), and other logical control pins. The data pins are coupled to the I/O control circuit 1101, and the logic control pins are coupled to the logic control circuit 1102. The memory cell array 1107 has two or more planes, and for example at least has a first plane and a second plane, to respectively store a first block data or page data corresponding to the first plane and a second block data or page data corresponding to the second plane, different from the first plane. To obtain the block data or page data of one or more planes, which is stored in pages of the memory cell array 1107, the processor 1051 of the flash memory controller 105 sends a data read command or a data toggle command to the flash memory device 110 to control the memory cell array 1107 outputting the block data or page data from the pages into the data register 1110, and then controls the data register 1110 outputting the block data or page data to the I/O control circuit 1101, so that the I/O control circuit 1101 can transmit the block data or page data to the flash memory controller 105 through the pins DQ0-DQ7 of the specific communication interface. It should be noted that the selection of the data read command or data toggle command can be predetermined or pre-negotiated by the flash memory controller 105 and the flash memory device 110. FIG. 2 is a diagram showing an example of the flash memory controller 105 sequentially issuing and sending commands to the flash memory device 110 to control the memory cell array 1107 outputting the block data or page data to the data register 1110 within the flash memory device 110, according to an embodiment of the invention. As shown in FIG. 2, in this example (but not limited thereto), the flash memory device 110 comprises four different planes PLN0, PLN1, PLN2, and PLN3. The data register 1110 may allocate corresponding buffers, such as four buffers, which will be used for respectively storing block data or page data of the different planes PLN0, PLN1, PLN2, and PLN3; each allocated buffer can be used to buffer a corresponding block data or page data of a specific plane when a corresponding data read command or a corresponding data toggle command is received and stored by the command register 1106. It should be noted that the data sizes/amounts of the block data or page data for the different planes may be identical or different.
To read out the block data or page data of a particular plane from the flash memory device 110, the flash memory controller 105 (or the processor 1051 controlling the I/O circuit 1052), in a first sub-step as shown in FIG. 2, sequentially issues and sends four command sequences, each comprising a read mode command (or page read command) such as 00h (the trailing 'h' means hexadecimal), block addresses and page addresses of the block data or page data of the particular plane (e.g., Addr0, Addr1, Addr2, or Addr3 of the planes PLN0, PLN1, PLN2, and PLN3), and a program load command such as the multi-plane command 32h (but not limited thereto) or a confirm command such as the read second cycle command 30h (but not limited thereto), to the flash memory device 110 through the specific communication interface, by using the pins DQ0-DQ7, ALE, CLE, RE, and other pin(s). For instance, in the first sub-step of FIG. 2, the flash memory controller 105 sequentially issues and sends the read mode command 00h, address Addr0, and multi-plane command 32h; issues and sends the read mode command 00h, address Addr1, and multi-plane command 32h; issues and sends the read mode command 00h, address Addr2, and multi-plane command 32h; and then issues and sends the read mode command 00h, address Addr3, and the confirm command 30h (but not limited thereto). For example, in this embodiment, when receiving the confirm command 30h, the control circuit 1103 of the flash memory device 110 can know and confirm that the first sub-step is finished. In practice, when the flash memory controller 105 sends command data of the read mode command or program load command, the flash memory controller 105 controls the signal of the pin ALE at a low level and controls the signal of the pin CLE at a high level, so that the flash memory device 110 can know that the data received via the pins DQ0-DQ7 is command data, and can then store the received command data into the command register of FIG. 1. Similarly, when the flash memory controller 105 sends address data of one or more planes, the flash memory controller 105 controls the signal of the pin ALE at the high level and controls the signal of the pin CLE at the low level, so that the flash memory device 110 can know that the data received via the pins DQ0-DQ7 is address data, and can then store the received address data into the address register of FIG. 1. The operations associated with the pin RE and/or the other pin WE (not shown in FIG. 1) are not detailed for brevity. In a second sub-step of FIG. 2, the control circuit 1103 of the flash memory device 110 is arranged to control the row address decoder 1108 and the column address decoder 1109 to control the memory cell array 1107 outputting corresponding block data or page data to the data register 1110, based on the received address(es) buffered by the address register 1105 and the received command(s) buffered by the command register 1106, so that the corresponding block data or page data of the different planes PLN0, PLN1, PLN2, and PLN3 can be transmitted from the memory cell array 1107 to, and buffered in, the buffers of the data register 1110. In one embodiment, when the block data or page data is buffered in the data register 1110, the flash memory controller 105 can issue and send a specific indication command (e.g., a specific data read command or a data toggle command) to the flash memory device 110.
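The first sub-step of FIG. 2 might be driven by controller firmware roughly as follows; send_cmd() and send_addr() are hypothetical helpers standing in for the CLE/ALE-gated cycles on DQ0-DQ7 described above.

    #include <stdint.h>

    /* Hypothetical helpers: send_cmd() drives DQ0-DQ7 with CLE high and
     * ALE low; send_addr() drives DQ0-DQ7 with ALE high and CLE low. */
    void send_cmd(uint8_t op);
    void send_addr(const uint8_t *addr, int cycles);

    /* First sub-step of FIG. 2: queue reads for the four planes PLN0-PLN3. */
    void multi_plane_read(const uint8_t addr[4][5])  /* Addr0..Addr3 */
    {
        for (int plane = 0; plane < 4; ++plane) {
            send_cmd(0x00);              /* read mode command 00h          */
            send_addr(addr[plane], 5);   /* block and page address cycles  */
            /* 32h after each of the first three planes; 30h confirms the
             * last one and ends the first sub-step. */
            send_cmd(plane < 3 ? 0x32 : 0x30);
        }
    }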
Then, in a third sub-step of FIG. 2, the control circuit 1103 can obtain and move the corresponding block data or page data from the data register 1110 to the I/O control circuit 1101 of FIG. 1, so that the I/O control circuit 1101, in a fourth sub-step, can perform a data toggle operation to control the data register 1110 selecting and transferring the different block data or page data to the I/O control circuit 1101, to make the I/O control circuit 1101 sequentially transmit the selected different block data or page data to the flash memory controller 105 through the specific communication interface, in response to the specific data read command or the data toggle command, so as to return or output the corresponding block data or page data from the flash memory device 110 to the flash memory controller 105. For instance, the transmission of one block data or page data of a plane can be followed by the transmission of another block data or page data of a different plane. In this embodiment, the data toggle command may be determined and selected among multiple reserved commands; for example, it can be configured to be different from a standard command (or a vendor specific command) and may be implemented by using a reserved command such as 0Bh, 12h, 14h, 18h, 1Bh-1Ch, 62h-64h, AAh, 76h, 82h-83h, 86h, and 8Eh, wherein 'h' means hexadecimal. The following table shows the different examples of the reserved commands which can be used to implement the data toggle command:

Type: Standard Command Set
Opcode: 00h, 05h-06h, 10h-11h, 15h, 30h-32h, 35h, 3Fh, 60h, 70h, 78h, 80h-81h, 85h, 90h, D0h-D1h, D4h-D5h, D9h, E0h-E2h, ECh-EFh, F1h-F2h, F9h, FAh, FCh, FFh

Type: Vendor Specific
Opcode: 01h-04h, 07h-0Ah, 0Ch-0Fh, 13h, 16h-17h, 19h-1Ah, 1Dh-2Fh, 33h-34h, 36h-3Eh, 40h-5Fh, 61h, 65h-6Fh, 71h-75h, 77h, 79h-7Fh, 84h, 87h-8Dh, 8Fh, 91h-CFh, D2h-D3h, D6h-D8h, DAh-DFh, E3h-EBh, F0h, F3h-F8h, FBh, FDh-FEh

Type: Reserved
Opcode: 0Bh, 12h, 14h, 18h, 1Bh-1Ch, 62h-64h, 76h, 82h-83h, 86h, 8Eh

As shown in FIG. 1, the flash memory device 110 provides the data toggle operation/function, and can perform the data toggle operation to output and transmit the corresponding block data or page data of one or more selected planes stored in the data register 1110. In practice, after the corresponding block data or page data of the different planes PLN0, PLN1, PLN2, and PLN3 has been transmitted from the memory cell array 1107 to the data register 1110 and stored in the data register 1110, the control circuit 1103 of the flash memory device 110 can control the address register 1105 to make corresponding block data or page data stored in the data register 1110 be transmitted to the I/O control circuit 1101 and then outputted to the flash memory controller 105 in response to the requirement of the flash memory controller 105, according to the bit map information INFO and a mask value VM which are stored in the control circuit 1103. The bit map information INFO and the mask value VM can be predetermined by the processor 1051 of the flash memory controller 105. The bit map information INFO (or regarded as plane bit map information) may comprise multiple bits, each bit corresponding to a specific plane and used to indicate whether a block data or page data of such specific plane is transmitted when the data toggle operation is executed or performed. The number of the multiple bits, i.e., the number of bits in the sequence, is identical to the number of planes. The mask value VM is used to indicate the number (i.e., maximum) of bits/bytes of each block data or page data, and it may be different in each product/implementation of the flash memory device 110.
The bit map information INFO and the mask value VM can be determined by the processor 1051 of the flash memory controller 105 when the flash memory device 110 is supplied with power, before the specific data read command or the data toggle command is received by the flash memory device 110. The parameter(s) of the data toggle operation can be predetermined by the processor 1051 of the flash memory controller 105. FIG. 3 is a timing diagram showing an example of the flash memory controller 105 sending a data toggle set-feature signal to the flash memory device 110 to configure/set the feature information or parameter(s) of the data toggle operation of the flash memory device 110, according to an embodiment of the invention. As shown in FIG. 3, when the flash memory controller 105 or flash memory device 110 is supplied with power (or is powered on), the processor 1051 of the flash memory controller 105 can control the I/O circuit 1052 sending a data toggle set-feature signal to the flash memory device 110, to enable or disable the data toggle operation of the flash memory device 110 or to configure one or more parameters of the data toggle operation. For example, the data toggle set-feature signal may comprise a set-feature command (cycle type indicated by CMD) such as EFh (but not limited thereto) and data toggle control information which follows the set-feature command EFh. The data toggle control information is associated with transmissions of different planes of the flash memory device 110, and, for example, comprises feature information FA (cycle type indicated by ADDR) and/or one or more parameter data P1, P2, P3, and P4 (cycle type indicated by DIN); one parameter data can be implemented using one or more bits, or using one or more bytes if the number of totally used parameter data is less than four. The total data length of all parameter data can be configured to meet or match the requirements specified in the standards of flash memory controller/device products; for example (but not limited thereto), the total data length can be configured as four bytes. The number of parameter data is not intended to be a limitation. For setting the features or parameters of the data toggle operation, the content of feature information FA is determined to be associated with the data toggle operation, and thus, when receiving such feature information FA, the flash memory device can know that the following parameter data are used for setting the data toggle operation. In one embodiment, the feature information FA can be used to specify or define the different feature operations/functions to be configured, or to specify the page addresses of the block data or page data of the different planes to be read out by the data toggle operation. Equivalently, the feature information FA comprises a plurality of toggle parameters respectively corresponding to the different planes of the flash memory device 110; for example, a first toggle parameter of a first plane may be different from a second toggle parameter of a second plane. In addition, in another embodiment, the feature information FA can carry the above-mentioned bit map information INFO and/or mask value VM per die/chip, which are determined or dynamically adjusted by the processor 1051 and then transmitted to the flash memory device 110. Different mask values VM can be configured by using the feature information FA.
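Under the same assumptions, the FIG. 3 set-feature signal might be emitted as sketched here; the feature address value is an assumption (one of the reserved feature addresses could serve as FA), and send_data() is a hypothetical helper for the DIN cycles.

    #include <stdint.h>

    void send_cmd(uint8_t op);
    void send_addr(const uint8_t *addr, int cycles);
    void send_data(uint8_t byte);   /* hypothetical DIN-cycle helper */

    #define FA_DATA_TOGGLE 0x24     /* assumption: a reserved feature address */

    /* Emit the FIG. 3 sequence: EFh, feature address FA, then P1..P4. */
    void set_data_toggle_feature(uint8_t p1, uint8_t p2, uint8_t p3, uint8_t p4)
    {
        uint8_t fa = FA_DATA_TOGGLE;
        uint8_t params[4] = { p1, p2, p3, p4 };
        send_cmd(0xEF);              /* set-feature command            */
        send_addr(&fa, 1);           /* feature information FA (ADDR)  */
        for (int i = 0; i < 4; ++i)
            send_data(params[i]);    /* parameter bytes P1..P4 (DIN)   */
    }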
The examples of the corresponding information and descriptions of the feature information FA can be indicated by the following table:

Feature: Description
00h: Reserved
01h: Timing Mode
02h: NV-DDR2/NV-DDR3/NV-LPDDR4 Configuration
03h-0Fh: Reserved
10h: I/O Drive Strength
11h-1Fh: Reserved
20h: DCC, Read, Write Tx Training
21h: Write Training RX
22h: Channel ODT configuration for NV-LPDDR4
23h: Internal VrefQ value
24h-2Fh: Reserved
30h: External Vpp Configuration
31h-4Fh: Reserved
50h: Reserved
51h-57h: Reserved
58h: Volume Configuration
59h-5Fh: Reserved
60h: Reserved
61h: Reserved
62h-7Fh: Vendor specific
80h-FFh: Vendor specific

For example, the reserved entries, e.g., 00h, 03h-0Fh, 11h-1Fh, 24h-2Fh, or other reserved addresses, can be used to implement the feature information FA. The parameter data P1 of the data toggle set-feature signal is used to indicate whether to enable or disable the data toggle operation. When the parameter data P1 is set as a first logic bit such as '1', the data toggle operation, to be performed by the flash memory device 110, is enabled and configured as a sequential data read mode, which is arranged to sequentially transmit block data or page data of all the different planes from the flash memory device 110 to the flash memory controller 105, according to the serial numbers of the different planes, in response to requested address data included in a data read command (or a data toggle command) sent from the flash memory controller 105. When the parameter data P1 is set as a second logic bit such as '0', the data toggle operation of the flash memory device 110 is disabled. In this situation, the execution of the data toggle operation is stopped, and the flash memory controller 105 needs to send a data read command to the flash memory device 110 each time it wants to receive the block data or page data of one plane. The parameter data P2 of the data toggle set-feature signal is used to indicate whether the data toggle operation enters an enhance mode. When the parameter data P2 is set as a first logic bit '1', the data toggle operation, to be performed by the flash memory device 110, is configured as the enhance mode (i.e., a partial selecting mode), which is arranged to transmit a portion of the block data or page data, of a portion of the different planes, from the flash memory device to the flash memory controller 105, according to the bit map information INFO. That is, when the parameter data P1 indicates '1' and the parameter data P2 indicates '1', the control circuit 1103 of the flash memory device 110 is arranged to select and transmit the block data or page data corresponding to specific block/page addresses of one or more planes, based on the bit map information INFO. For example, a block data or page data corresponding to a specific block/page address of a particular plane may not be selected and transmitted by the data toggle operation, and the serial number of the particular plane may be positioned between the serial numbers of two different planes which are to be serviced by the data toggle operation from the flash memory device 110 to the flash memory controller 105. Alternatively, when the parameter data P1 indicates '1' and the parameter data P2 indicates '0', as sent from the flash memory controller 105, the control circuit 1103 may transmit the block data or page data corresponding to specific block/page addresses of all planes. The parameter data P3 of the data toggle set-feature signal is used to indicate whether the data toggle operation is performed in response to a data read command or in response to a data toggle command.
When the parameter data P3 is set as a first logic bit '1', the data toggle operation, to be performed by the flash memory device 110, is configured to transmit block data or page data in response to a data toggle command such as 0xAA sent from the flash memory controller 105. Alternatively, when the parameter data P3 is set as a second logic bit '0', the data toggle operation, to be performed by the flash memory device 110, can be configured to transmit block data or page data in response to a specific data read command such as 0x05 or 0x06 sent from the flash memory controller 105. The parameter data P4 of the data toggle set-feature signal is used to indicate whether the data toggle operation uses a preset mask value stored by the flash memory device 110, or uses an updated mask value sent from the flash memory controller 105, as the mask value VM, to transmit block data or page data corresponding to specific block/page addresses for one or more planes. When the parameter data P4 is set as a first logic bit '1', the data toggle operation, to be performed by the flash memory device 110, is configured to transmit block data or page data in response to a data read command (or a data toggle command) according to the preset mask value stored by the flash memory device 110. Alternatively, when the parameter data P4 is set as a second logic bit '0', the data toggle operation, to be performed by the flash memory device 110, is configured to transmit block data or page data in response to a data read command (or a data toggle command) sent from the flash memory controller 105, according to the updated mask value sent from the flash memory controller 105. Additionally, in another embodiment, the data toggle control information may further comprise a parameter data P5 (not shown in FIG. 3). The parameter data P5 of the data toggle set-feature signal is used to indicate whether the data toggle operation uses updated bit map information sent from the flash memory controller 105, or automatically calculates and obtains the bit map information by itself for the different planes. When the parameter data P5 is set as a first logic bit '1', the data toggle operation, to be performed by the flash memory device 110, is configured to transmit block data or page data in response to a data read command (or a data toggle command) sent from the flash memory controller 105, according to bit map information which is automatically calculated and stored by the flash memory device 110. Alternatively, when the parameter data P5 is set as a second logic bit '0', the data toggle operation, to be performed by the flash memory device 110, is configured to transmit block data or page data in response to a data read command (or a data toggle command) sent from the flash memory controller 105, according to bit map information which is updated by the flash memory controller 105. It should be noted that the bit map information, sent from the flash memory controller 105 to the flash memory device 110, can be transmitted by using the feature information FA, or by using another toggle control information sent from the flash memory controller 105, wherein the transmission of the other toggle control information may follow the transmission of a data read command (or a data toggle command), as will be described later.
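On the device side, the parameter data P1-P5 might steer behavior roughly as below; the flag names and the representation as one byte per parameter are illustrative assumptions.

    #include <stdint.h>

    /* Hypothetical device-side state derived from P1..P5, per the text. */
    static struct {
        uint8_t toggle_enabled;    /* P1: enable/disable the toggle op      */
        uint8_t enhance_mode;      /* P2: serve only planes set in INFO     */
        uint8_t trigger_is_0xAA;   /* P3: 0xAA vs. 0x05/0x06 as trigger     */
        uint8_t use_preset_mask;   /* P4: preset VM vs. controller-sent VM  */
        uint8_t device_owns_info;  /* P5: device-computed vs. host INFO     */
    } toggle_cfg;

    void apply_toggle_params(const uint8_t p[5])
    {
        toggle_cfg.toggle_enabled   = (p[0] == 1);
        toggle_cfg.enhance_mode     = (p[1] == 1);
        toggle_cfg.trigger_is_0xAA  = (p[2] == 1);
        toggle_cfg.use_preset_mask  = (p[3] == 1);
        toggle_cfg.device_owns_info = (p[4] == 1);
    }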
That is, based on the dynamically updated bit map information, the processor 1051 of the flash memory controller 105 can determine in real time whether to ignore the block data or page data corresponding to a specific block/page address of a particular plane, and can notify the flash memory device 110 of further transmitting the block data or page data corresponding to a specific block/page address of a specific plane which was not asked for last time. For instance, originally the bit map information may indicate that the block data or page data corresponding to specific block/page addresses of all planes are transmitted by the data toggle operation, and the updated bit map information can indicate that the data toggle operation does not need to transmit the block data or page data corresponding to the specific block/page address of a particular plane. FIG. 4 is a timing diagram showing an example of the communication of the data toggle operation between the flash memory controller 105 and the flash memory device 110, according to an embodiment of the invention. As shown in FIG. 4, the processor 1051 of the flash memory controller 105 controls the I/O circuit 1052 sequentially sending a specific data read command (or page read command) such as 0x05 (i.e., 05h) or 0x06 (i.e., 06h), toggle control information, and a change read command (or change read column command) such as 0xE0 (i.e., E0h) to the flash memory device 110; hexadecimal numbers can be written with, and indicated by, a leading "0x" or a trailing "h". When the data read command 0x05 is received and the data toggle operation has been enabled by the data toggle set-feature signal mentioned above, the flash memory device 110 or control circuit 1103 can know that the information following the command 0x05, i.e., the toggle control information, is used to configure the parameters of the data toggle operation, wherein the data amount of the toggle control information may, for example, be two bytes (but not limited thereto). Then, once receiving the command 0xE0, the flash memory device 110 or control circuit 1103 can know and confirm that the reception of the toggle control information has been finished, and can start to execute the data toggle operation. The toggle control information may comprise block and page addresses (such as multiple-cycle addresses) of one or more planes, and the toggle control information can be used to specify which one or more block data or page data is to be returned. Then, the flash memory device 110 transmits the block data or page data, to be returned, from the data register 1110 to the I/O control circuit 1101, and the I/O control circuit 1101 can then transmit the block data or page data, to be returned, to the flash memory controller 105 through the pins DQ0-DQ7. For example, a transmission of the change read command 0xE0 is followed by a transmission of a first block data or page data (i.e., a first data unit corresponding to a first plane), which is followed by a transmission of a second block data or page data (i.e., a second data unit corresponding to a second plane), and the first block data or page data and the second block data or page data are respectively associated with different planes of the flash memory device 110. In practice, the control circuit 1103 stores the bit map information predetermined by the flash memory controller 105, and refers to the bit map information to determine which one or more block data or page data should be transferred and moved from the data register 1110 to the I/O control circuit 1101.
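From the controller's point of view, the FIG. 4 exchange might be driven as sketched below; read_byte() is a hypothetical helper that toggles the RE pin and samples DQ0-DQ7, and the two-byte toggle control information follows the example size given above.

    #include <stdint.h>

    void send_cmd(uint8_t op);
    void send_data(uint8_t byte);
    uint8_t read_byte(void);   /* hypothetical: toggles RE, samples DQ0-DQ7 */

    /* FIG. 4 flow: 05h, toggle control information, E0h, then the data of
     * each plane selected by the bit map information INFO. */
    void toggle_read(const uint8_t ctrl[2], uint8_t *dst,
                     uint32_t bytes_per_plane, int planes_selected)
    {
        send_cmd(0x05);                  /* specific data read command     */
        send_data(ctrl[0]);              /* toggle control information     */
        send_data(ctrl[1]);
        send_cmd(0xE0);                  /* change read column: confirm    */
        for (int p = 0; p < planes_selected; ++p)
            for (uint32_t i = 0; i < bytes_per_plane; ++i)
                *dst++ = read_byte();    /* each RE edge clocks one byte   */
    }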
The counter circuit 1104, for example, comprises an AND gate logic circuit (indicated by "AND") and a counter which is used to run and count, initially from zero, to a specific value determined by the control circuit 1103. The determined specific value may be equal to the mask value VM corresponding to the total byte number of a specific data amount, such as one page data amount, and, for example (but not limited thereto), is equal to 16384, i.e., 16×1024, if the mask value VM is associated with one page data amount of 16K bytes. Each time the flash memory controller 105 changes/alters the signal level at the RE pin (i.e., read enable pin), the logic control circuit 1102 is arranged to transmit a trigger signal to notify the counter circuit 1104, and the counter circuit 1104 increments its counter value by one and then compares the counter value with the determined specific value, such as the mask value VM, each time the trigger signal is received; that is, the counter value of the counter is incremented by one. When the counter value becomes equal to the mask value VM, the counter circuit 1104 sends an interrupt signal to the control circuit 1103 to make the control circuit 1103 select and switch to another plane (i.e., a next plane) and then transmit another block data or page data from the data register 1110 to the I/O control circuit 1101, if needed, so as to transmit a second block data or page data from the I/O control circuit 1101 to the flash memory controller 105 through the specific communication interface after the transmission of a first block data or page data is finished. In practice, in one embodiment, the counter, for example, is used for counting and incrementing the counter value by one in response to the trigger signal transmitted from the logic control circuit 1102, and for outputting the incremented counter value to the AND gate logic circuit. The AND gate logic circuit is coupled to the counter and has a first input, a second input, and an output. The first input is coupled to the mask value VM (e.g., 16384 for one page having 16K bytes) determined by the control circuit 1103, and the second input is coupled to an output of the counter to receive the counter value. The AND gate logic circuit performs an AND logic operation upon the counter value and the mask value VM, and generates the interrupt signal to the control circuit 1103 only when the incremented counter value is equal to the mask value VM. When the interrupt signal is sent, the counter value is reset to zero. The circuit structure of the AND gate logic circuit is not intended to be a limitation of the invention. In another embodiment, the initial counter value of the counter circuit 1104 may be set by the control circuit 1103 as the mask value, and the counter is arranged to count down to zero, and the AND gate logic circuit performs the AND logic operation upon the decremented counter value and zero. The AND gate logic circuit generates the interrupt signal to the control circuit 1103 only when the decremented counter value is equal to zero. Thus, the control circuit 1103 can know and confirm that the counted data amount is now equal to a specific data amount, such as one page data or one block data, when receiving such an interrupt signal, and then the control circuit 1103 can control the address register 1105 and the data register 1110 to select and switch to a next plane and transmit a next block data or page data of the next plane to the I/O control circuit 1101, if needed, and also simultaneously reset the counter.
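The counting-and-compare behavior of the counter circuit 1104 can be paraphrased in software as below; this is a functional sketch only, not a description of the gate-level implementation.

    #include <stdint.h>

    /* Functional sketch of counter circuit 1104: each RE toggle increments
     * the counter; reaching the mask value VM raises the "interrupt" that
     * tells the control circuit 1103 to switch to the next plane. */
    static uint32_t counter = 0;

    int on_re_toggle(uint32_t mask_vm)   /* returns 1 when a plane is done */
    {
        if (++counter == mask_vm) {      /* e.g., VM = 16384 for 16K bytes */
            counter = 0;                 /* reset when the interrupt fires */
            return 1;                    /* interrupt to control circuit   */
        }
        return 0;
    }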
For example (but not limited thereto), the bit map information INFO may record four bits to respectively indicate whether the data of the four planes PLN0-PLN3 should be transmitted or not. For example (but not limited thereto), if the four bits are '1101', then this indicates that the block data or page data corresponding to specific block/page addresses of the planes PLN0, PLN2, and PLN3 should be transmitted, while the block data or page data corresponding to the specific block/page address of the plane PLN1 is not transmitted. That is, the serial number of the plane PLN1 is between the serial number of the plane PLN0 and the serial number of the plane PLN2, and the block data or page data corresponding to the specific block/page address of the plane PLN1 is not transmitted from the data register 1110 into the I/O control circuit 1101, in response to the bit map information INFO determined by the flash memory controller 105. Thus, when the control circuit 1103 receives the interrupt signal for the first time, the control circuit 1103 can know and confirm that the transmission of the data amount of the current plane is finished, and it refers to the bit map information INFO and can then know that the next block data or page data to be transmitted is the data corresponding to the specific block/page address of plane PLN2, since the bit map information INFO indicates that the data of plane PLN1 is not transmitted. Similarly, when the control circuit 1103 receives the interrupt signal for the second time, the control circuit 1103 can know and confirm that the transmission of the data amount of the current plane is finished, and it refers to the bit map information INFO and can then know and confirm that the next block data or page data to be transmitted is the data corresponding to the specific block/page address of plane PLN3, since the bit map information INFO indicates that the current plane is plane PLN2. Similarly, when the control circuit 1103 receives the interrupt signal for the third time, the control circuit 1103 can know that the transmission of the data amount of the current plane is finished, and it refers to the bit map information INFO and can then know that the current plane PLN3 is the last plane, and stops the data transmission from the data register 1110 to the I/O control circuit 1101. Thus, by using the mask value VM, the operation of the counter circuit 1104, and the bit map information INFO, which is predetermined, the flash memory device 110 can correctly return whichever one or more block data or page data is asked for by the flash memory controller 105. Accordingly, as shown in FIG. 4, after receiving the confirm command 0xE0, the flash memory device 110 can correctly return or transmit the one or more block data or page data asked for by the flash memory controller 105. In this example, the block data or page data of planes PLN0, PLN2, and PLN3 are sequentially transmitted from the flash memory device 110 to the flash memory controller 105, wherein the data transmission of plane PLN3 follows the data transmission of plane PLN2, which follows the data transmission of plane PLN0. It should be noted that, in the embodiment of FIG. 4, the mask value VM can be different in response to the plane requirements of a flash memory manufacturer, and can be predetermined by using the two parameter data P2 and P3, such as two bytes, included within the data toggle set-feature signal mentioned above. That is, when the flash memory device 110 is supplied with power, the mask value VM can be configured by using the data toggle set-feature signal.
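The plane-advance decision driven by the bit map information INFO might be expressed as follows, using the '1101' example above; mapping bit i of INFO to plane PLNi (with PLN0 at the least significant bit) is an assumption.

    /* Hypothetical next-plane selection from INFO. With PLN0 at bit 0,
     * INFO = '1101' is 0x0D: PLN0, PLN2 and PLN3 set, PLN1 skipped, so
     * the served sequence is PLN0 -> PLN2 -> PLN3. */
    #include <stdint.h>

    int next_plane(uint8_t info, int current, int num_planes)
    {
        for (int p = current + 1; p < num_planes; ++p)
            if (info & (1u << p))
                return p;    /* next plane whose INFO bit is set */
        return -1;           /* the current plane was the last   */
    }

Under these assumptions, with info = 0x0D, next_plane(info, 0, 4) returns 2 and next_plane(info, 2, 4) returns 3, matching the sequence described above.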
In addition, the mask value VM can be dynamically adjusted by the processor 1051 of the flash memory controller 105 respectively for the different planes. Also, the data toggle operation of the flash memory device 110 can be enabled by using the bit of parameter data P1 in the data toggle set-feature signal sent from the flash memory controller 105 when the flash memory device 110 is supplied with power, so that multiple block data or page data corresponding to the specific block/page address of the different planes can be directly and sequentially returned from the flash memory device 110 to the flash memory controller 105 based on a single data read command (e.g. 0x05 or 0x06) sent from the flash memory controller 105. It should be noted that, in another embodiment, the flash memory controller 105 may ask the flash memory device 110 to return the data corresponding to the specific block/page address of all the planes PLN0-PLN3; in this case the bit map information INFO may record ‘1111’, as predetermined by the flash memory controller 105, and the control circuit 1103 in FIG. 1 is arranged to control the address register 1105 to make the data register 1110 sequentially transmit all the block data or page data corresponding to the specific block/page address to the I/O control circuit 1101, so that the I/O control circuit 1101 can sequentially transmit all the block data or page data corresponding to the specific block/page address to the flash memory controller 105. That is, once receiving a data read command from the flash memory controller 105, the flash memory device 110 can return the one or more corresponding block data or page data requested by the flash memory controller 105 back to the flash memory controller 105 without waiting for another data read command from the flash memory controller 105. The data toggle operation performed by the control circuit 1103 can be used to transmit a sequence of block data or page data associated with different planes of the memory cell array 1107 in response to only one specific data read command or only one data toggle command. In other embodiments, the data toggle operation can be dynamically enabled or disabled if needed. FIG. 5 is a timing diagram of the operation of the flash memory controller 105 sending the data toggle command such as 0xAA (but not limited thereto) to the flash memory device 110 to enable the data toggle operation according to an embodiment of the invention. FIG. 6 is a timing diagram of the operation of the flash memory controller 105 sending the data toggle command such as 0xAA to the flash memory device 110 to enable the data toggle operation according to another embodiment of the invention. As shown in FIG. 5, the processor 1051 controls the I/O circuit 1052 to sequentially transmit the data toggle command such as 0xAA, the specific data read command such as 0x05 or 0x06, toggle control information, and a change read command such as 0xE0 to the flash memory device 110 through the specific communication interface, to make the flash memory device 110 return the block data or page data corresponding to the specific block/page address of one or more planes. The toggle control information includes the block/page addresses of the planes. For example, in FIG. 5, the data toggle command 0xAA can be configured to be followed by the data read command 0x05 (or 0x06 in other embodiments). The transmission of the change read command 0xE0 is followed by a transmission of a first block data or page data, which may be followed by a transmission of a second block data or page data.
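For illustration, the FIG. 5 command sequence can be viewed as the following byte-level sketch; send_byte() is a hypothetical stand-in for the I/O circuit 1052 driving the flash interface, and the address-cycle values are placeholders rather than a real address layout.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical stand-in for the controller's I/O circuit 1052. */
static void send_byte(uint8_t b) { printf("-> %02Xh\n", b); }

/* Sketch of the FIG. 5 ordering: data toggle command, data read
 * command, toggle control information, then the change read command. */
static void send_toggle_read(const uint8_t *addr, size_t addr_len)
{
    send_byte(0xAA);            /* data toggle command               */
    send_byte(0x05);            /* data read command (or 0x06)       */
    for (size_t i = 0; i < addr_len; i++)
        send_byte(addr[i]);     /* toggle control information        */
    send_byte(0xE0);            /* change read (confirm) command;    */
                                /* the device then streams the page  */
                                /* data of each plane back-to-back   */
}

int main(void)
{
    const uint8_t addr[] = { 0x00, 0x1E, 0x03 };  /* placeholder cycles */
    send_toggle_read(addr, sizeof addr);
    return 0;
}
```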
As shown in FIG. 6, the data toggle command 0xAA can be configured to follow the command 0xE0. The processor 1051 controls the I/O circuit 1052 to sequentially transmit the data read command 0x05 or 0x06, toggle control information, the change read command 0xE0, and the data toggle command 0xAA to the flash memory device 110 through the specific communication interface, to make the flash memory device 110 return the block data or page data corresponding to the specific block/page address of one or more planes. The toggle control information includes the page addresses of the planes. When receiving the data toggle command 0xAA, the flash memory device 110 can confirm that the data toggle operation or function is enabled, and it is then arranged to transmit one or more corresponding block data or page data to the flash memory controller 105 even though it receives only one data read command 0x05. For example, the transmission of the data toggle command 0xAA is followed by a transmission of a first block data or page data, which is followed by a transmission of a second block data or page data. In another embodiment, the data toggle command 0xAA mentioned above can be used to replace the data read command 0x05 or to replace the confirm command 0xE0. FIG. 7 is a timing diagram of the operation of the flash memory controller 105 sending the data toggle command 0xAA to the flash memory device 110 to enable the data toggle operation without sending the data read command 0x05 according to an embodiment of the invention. As shown in FIG. 7, the data toggle command 0xAA can be configured to replace the function of a data read command 0x05 or 0x06. The processor 1051 controls the I/O circuit 1052 to sequentially transmit the data toggle command 0xAA, toggle control information, and a change read command 0xE0 to the flash memory device 110 through the specific communication interface, to make the flash memory device 110 return the block data or page data corresponding to the specific block/page address of one or more planes. The toggle control information includes specific block/page address information of the different planes or includes only specific block/page address information of a starting plane. The data toggle command 0xAA is followed by the toggle control information, which is followed by the command 0xE0. In FIG. 7, when receiving the data toggle command 0xAA, the flash memory device 110 can know that the data toggle operation or function is enabled and also that the data toggle command (i.e. 0xAA) has been received. For example, the transmission of the change read command 0xE0 (i.e. a confirm command) is followed by a transmission of a first block data or page data at a first plane, which is followed by a transmission of a second block data or page data at a second plane. FIG. 8 is a timing diagram of the operation of the flash memory controller 105 sending a data toggle command 0xAA to the flash memory device 110 to enable the data toggle operation without sending the command 0xE0 according to another embodiment of the invention. As shown in FIG. 8, the data toggle command 0xAA can be configured to replace the function of a change read command 0xE0 and be used as a confirm command.
The processor 1051 controls the I/O circuit 1052 to sequentially transmit the specific data read command such as 0x05 or 0x06, toggle control information, and the data toggle command 0xAA to the flash memory device 110 through the specific communication interface, to make the flash memory device 110 return the block data or page data corresponding to the block/page address information of one or more planes. The toggle control information includes the block/page addresses of the different planes. The data toggle command 0xAA follows the toggle control information, which follows the data read command 0x05. In FIG. 8, once receiving the data toggle command 0xAA, the flash memory device 110 can know that the data toggle operation or function is enabled and can now transmit one or more corresponding block data or page data to the flash memory controller 105 even though it receives only one data read command 0x05. For example, the transmission of the data toggle command 0xAA is followed by a transmission of a first block data or page data at a first plane, which is followed by a transmission of a second block data or page data at a second plane. In another embodiment, the flash memory controller 105 can send only the data toggle command 0xAA to the flash memory device 110 to enable the data toggle operation and indicate a data read, without sending a data read command 0x05, toggle control information, or the command 0xE0. FIG. 9 is a timing diagram of the operation of the flash memory controller 105 sending only the data toggle command 0xAA to the flash memory device 110 to enable the data toggle operation according to an embodiment of the invention. As shown in FIG. 9, the toggle control information can be preset by the flash memory controller 105, when the flash memory device 110 is powered on and supplied with power, by using the data toggle set-feature signal mentioned above. The toggle control information can be sent from the flash memory controller 105 to the flash memory device 110 by using the parameters in the data toggle set-feature signal. In this situation, the processor 1051 controls the I/O circuit 1052 to transmit the data toggle command 0xAA to the flash memory device 110 through the specific communication interface, to make the flash memory device 110 return the block data or page data at one or more planes. Alternatively, in other embodiments, the processor 1051 may control the I/O circuit 1052 to transmit only the data toggle command 0xAA to the flash memory device 110 through the specific communication interface, to make the flash memory device 110 return all the block data or page data corresponding to the specific block/page address information of all the different planes; the specific block/page address information can be set or configured by using a set-feature signal mentioned above. Once receiving the data toggle command 0xAA, the flash memory device 110 can know that a data read command has been received and that the data toggle operation is enabled, and it can now start to execute the data toggle operation to return one or more corresponding block data or page data to the flash memory controller 105. For example, a transmission of the data toggle command 0xAA is followed by a transmission of a first block data or page data at a first plane, which is followed by a transmission of a second block data or page data at a second plane.
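A device-side view of the FIG. 9 case can be sketched as follows; this assumes, for illustration, that the toggle control information was preset at power-up via the data toggle set-feature signal, so that the single byte 0xAA suffices to start the transfer. The names are illustrative, not from the source.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define CMD_DATA_TOGGLE 0xAA

/* Assumed to have been configured at power-up via the data toggle
 * set-feature signal, together with the block/page addresses to use. */
static bool toggle_preconfigured = true;

/* Illustrative device-side decode for the FIG. 9 case: the single byte
 * 0xAA acts as data read command, toggle enable, and confirm at once. */
static void on_command(uint8_t cmd)
{
    if (cmd == CMD_DATA_TOGGLE && toggle_preconfigured)
        printf("stream preset block/page data of all selected planes\n");
}

int main(void)
{
    on_command(CMD_DATA_TOGGLE);
    return 0;
}
```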
If the flash memory controller 105 requests a group of block data or page data, a transmission of only the data toggle command 0xAA is, for example, followed by transmissions of the group of block data or page data. In addition, it should be noted that the transmission of the data toggle command 0xAA in another embodiment can be positioned between the transmission of the data read command (e.g. 0x05) and the transmission of the toggle control information (i.e. block/page address information). Alternatively, the transmission of the data toggle command 0xAA in another embodiment can be positioned between the transmission of the toggle control information and the transmission of the command 0xE0. These modifications also fall within the scope of the invention. In other embodiments, the invention further provides a technical solution capable of simplifying the command sequences sent from a memory controller (e.g. the flash memory controller 105) to a memory device (e.g. a flash memory device) when the memory controller performs a copy back read operation, a write/program operation, and/or an erase operation. It should be noted that the copy back read operation is arranged to copy some data from the memory cell array 1107 into the data register 1110 within the flash memory device 110, which is different from the data read operation that reads data sent from the flash memory device 110 to the flash memory controller 105. More particularly, the provided technical solution can simplify multiple command sequences, respectively issued by a conventional method, into a single simplified command sequence output to the flash memory device, so as to significantly reduce the command/address amounts of the communications between the flash memory device and the flash memory controller 105. FIG. 10 is a block diagram of a storage device 1000 according to another embodiment of the invention. The storage device 1000, for example, is a flash memory storage device (but not limited thereto) and comprises a memory controller such as the flash memory controller 105 and a memory device such as the flash memory device 110 having multiple flash memory chips/dies, wherein each flash memory chip/die may comprise one or more different planes such as four planes. However, the number of planes can be different in different products of the storage device 1000. The processor 1051 controls the I/O circuit 1052 to send read, write/program, and erase commands through the above-mentioned specific communication interface to the flash memory device 110 to control and access the flash memory device 110. The flash memory device 110 comprises the I/O control circuit 1101, logic control circuit 1102, control circuit 1103, counter circuit 1104, address register 1105, command register 1106, memory cell array 1107, row address decoder 1108, column address decoder 1109, data register 1110, and an address control circuit 1112. The operations and functions of the elements having the same reference signs in FIG. 10 are identical or similar to those in FIG. 1, and are not detailed for brevity. For the operation of the address control circuit 1112, the address information, for example, is first address information which is used for a first plane and is carried by a single command sequence.
The address control circuit 1112 is arranged to automatically generate second address information associated with a second plane according to the first address information of the first plane, and then control the address decoder(s) to select multiple data units at the first plane and the second plane based on the first address information and the second address information, in response to the command information buffered in the command register 1106, so as to perform an access operation upon the multiple data units at the first plane and the second plane. The access operation is an erase operation, a write operation, or a copy back read operation. In one embodiment, the address control circuit 1112 may automatically generate the second address information in response to only the received first address information and control the address decoder(s) to transfer the first data unit and the second data unit respectively from the first plane and the second plane to the data register 1110. Further, the address control circuit 1112 may automatically change the first address information into third address information, which indicates a third data unit for the first plane, in response to bit map information or a set-feature signal sent from the flash memory controller 105, and control the at least one address decoder to transfer the third data unit from the first plane to the data register without transmitting the first data unit. Further, the address control circuit 1112 may automatically change the first address information into third address information, which indicates a third data unit at a third plane different from the first plane, in response to bit map information or a set-feature signal sent from the flash memory controller 105, and control the at least one address decoder to transfer the third data unit from the third plane to the data register 1110 without transmitting the first data unit. Further, in one embodiment, the address control circuit 1112 may automatically generate the second address information in response to only the received first address information and control the at least one address decoder to select the first data unit and the second data unit respectively at the first plane and the second plane, to erase the first data unit and the second data unit. Further, the address control circuit 1112 may automatically change the first address information into third address information, which indicates a third data unit for the first plane, in response to bit map information or a set-feature signal sent from the flash memory controller 105, and control the at least one address decoder to select the third data unit at the first plane, to erase the third data unit without selecting the first data unit. Further, the address control circuit 1112 may automatically change the first address information into third address information, which indicates a third data unit for a third plane, in response to bit map information or a set-feature signal sent from the flash memory controller 105, and control the at least one address decoder to select the third data unit at the third plane, to erase the third data unit without selecting the first data unit at the first plane.
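The address generation performed by the address control circuit 1112 can be illustrated by the following sketch, which expands one received plane/block/page address into addresses for every plane. The struct layout and names are illustrative, and the block/page values are kept identical across planes, matching the simplest case described above; the same expansion applies whether the access is a copy back read, an erase, or a write.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_PLANES 4

/* Illustrative model of one data unit's address. */
typedef struct {
    uint8_t  plane;  /* plane address/number */
    uint16_t block;  /* block index number   */
    uint16_t page;   /* page index number    */
} unit_addr_t;

/* Expand the single received address into per-plane addresses. */
static void expand_addresses(unit_addr_t first, unit_addr_t out[NUM_PLANES])
{
    for (int p = 0; p < NUM_PLANES; p++) {
        out[p].plane = (uint8_t)p;   /* automatic plane switching */
        out[p].block = first.block;  /* same block address        */
        out[p].page  = first.page;   /* same page address         */
    }
}

int main(void)
{
    unit_addr_t first = { 1, 30, 3 };  /* received: plane 1, block 30, page 3 */
    unit_addr_t all[NUM_PLANES];
    expand_addresses(first, all);
    for (int p = 0; p < NUM_PLANES; p++)
        printf("plane %d: block %d, page %d\n",
               all[p].plane, all[p].block, all[p].page);
    return 0;
}
```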
Further, the address control circuit 1112 may automatically generate the second address information in response to only the received first address information and control the at least one address decoder to select the first data unit and the second data unit respectively at the first plane and the second plane, to write data into the first data unit and the second data unit. Further, the address control circuit 1112 may automatically change the first address information into third address information, which indicates a third data unit for the first plane, in response to bit map information or a set-feature signal sent from the flash memory controller 105, and control the at least one address decoder to select the third data unit at the first plane, to write data into the third data unit without selecting the first data unit at the first plane. Further, the address control circuit 1112 may automatically change the first address information into third address information, which indicates a third data unit for a third plane different from the first plane, in response to bit map information or a set-feature signal sent from the flash memory controller 105, and control the at least one address decoder to select the third data unit at the third plane, to write data into the third data unit without selecting the first data unit at the first plane. In practice, for a copy back read operation, the processor 1051 of the flash memory controller 105 can send a copy back read command and/or a specific indication command to the flash memory device 110 to make the address control circuit 1112 control the memory cell array 1107 to output block data or page data from one or more blocks/pages of the different planes to the data register 1110. In response to the event of receiving such a copy back read command and/or specific indication command, the address control circuit 1112 can control the row address decoder 1108 and column address decoder 1109 to select corresponding page address(es), block address(es), and plane address(es) according to the address information carried by a command sequence, the default setting, or the configuration dynamically configured by the processor 1051, so as to output the corresponding page data or block data to the data register 1110. It should be noted that the page data may mean one page data unit or more page data units, and the block data may mean one block data unit or more block data units. For example, compared to the four command sequences in the first sub-step of FIG. 2, FIG. 11 shows four examples of the command sequence sent by the flash memory controller 105 for the copy back read operation according to an embodiment of the invention. Each example is used to simplify the command sequences of the first sub-step of FIG. 2 into only one command sequence. As shown in FIG. 11, in the first example, the processor 1051 of the flash memory controller 105 controls the I/O circuit 1052 to sequentially send a specific indication command such as 0xAA (i.e. AAh), a copy back read command such as 0x00 (i.e. 00h), address information comprising only one plane address (e.g. the address of the m-th plane) and the corresponding block/page address, and a confirm command such as another read command 30h (but not limited thereto) to the flash memory device 110; hexadecimal numbers can be written with and indicated by a leading “0x” or a trailing “h”.
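For illustration, the first example of FIG. 11 can be written out as one flat byte sequence; the three address-cycle bytes below are placeholders for one plane address plus its block/page address, and a conventional flow would require one such sequence per plane.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative flat encoding of the first example of FIG. 11: one
 * simplified copy back read command sequence. */
static const uint8_t copy_back_seq[] = {
    0xAA,              /* specific indication command            */
    0x00,              /* copy back read command (00h)           */
    0x01, 0x1E, 0x03,  /* plane m, block/page address (example)  */
    0x30,              /* confirm command (30h): device expands  */
                       /* the address to the other planes        */
};

int main(void)
{
    printf("sequence length: %zu bytes\n", sizeof copy_back_seq);
    return 0;
}
```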
When receiving the specific indication command 0xAA and the copy back read command 0x00, the flash memory device 110 or control circuit 1103 in FIG. 10 can know that the information following the copy back read command 0x00, for example, comprises the address information of a particular plane such as the m-th plane; in addition, when receiving the command 0x30, the flash memory device 110 or control circuit 1103 can know that the first sub-step in FIG. 2 is finished, and the address control circuit 1112 can control the row address decoder 1108 and column address decoder 1109 to automatically switch among the addresses of the different planes based on only the address information of the m-th plane stored in the address register 1105. For example, the address control circuit 1112 can generate the addresses of all the planes and use the generated addresses to control the row address decoder 1108 and column address decoder 1109 to perform automatic address switching to select the block/page units corresponding to the specific block/page address at all the different planes, so that the page data and/or block data corresponding to the specific block/page address at all the different planes can be transmitted from (or copied back from) the memory cell array 1107 into the data register 1110. That is, the copy back operation is performed after the confirm command 0x30 is received. Further, in other embodiments, the address control circuit 1112 can generate the addresses of only some planes based on the default setting or the configuration dynamically determined by the flash memory controller 105, wherein the configuration can be determined by plane bit map information and/or block address information sent from the flash memory controller 105, which can be set by a set-feature signal and will be explained later. The address control circuit 1112 can be arranged to generate the corresponding plane address(es) and select its corresponding address buffer(s) to perform decoding according to the corresponding plane address(es). In the second example of FIG. 11, the specific indication command can be positioned between the copy back read command and the address information. The processor 1051 of the flash memory controller 105 controls the I/O circuit 1052 to sequentially send the copy back read command such as 0x00 (i.e. 00h), the specific indication command such as 0xAA (i.e. AAh), address information comprising only one plane address (e.g. the address of the m-th plane) and the corresponding block/page address information, and the other read command such as 30h to the flash memory device 110. The other operations of the second example are similar to those of the first example and are not detailed. Further, in the third example of FIG. 11, the specific indication command can be positioned between the address information and the command 0x30. The processor 1051 of the flash memory controller 105 controls the I/O circuit 1052 to sequentially send the copy back read command such as 0x00, address information comprising only one plane address (e.g. the address of the m-th plane) and the corresponding block/page address, the specific indication command such as 0xAA, and the other read command such as 30h to the flash memory device 110. The other operations of the third example are similar to those of the first example and are not detailed. Further, in the fourth example of FIG. 11, the specific indication command may be positioned later than the command 0x30.
The processor 1051 of the flash memory controller 105 controls the I/O circuit 1052 to sequentially send the copy back read command such as 0x00 (i.e. 00h), address information comprising only one plane address (e.g. the address of the m-th plane) and the corresponding block/page address, the other read command such as 30h, and the specific indication command such as 0xAA (i.e. AAh) to the flash memory device 110. The other operations of the fourth example are similar to those of the first example and are not detailed. FIG. 12 shows two examples of the command sequence sent by the flash memory controller 105 for the copy back read operation according to an embodiment of the invention. In the first example of FIG. 12, the specific indication command 0xAA is used to replace the function of the copy back read command 0x00; in this situation the flash memory device 110 can confirm that the command sequence is simplified and that the information following the command 0xAA is the address information of a particular plane such as the m-th plane and the block/page address information. In the second example of FIG. 12, the specific indication command 0xAA is used to replace the function of the confirm command 0x30; in this situation the flash memory device 110 can confirm that the command sequence is simplified and can recognize the end of such a command sequence after receiving the specific indication command 0xAA. In either example, after receiving the block/page address(es) of the m-th plane, the flash memory device 110 is arranged to automatically generate the block/page addresses of multiple or all planes according to only the block/page address information of the m-th plane. This effectively simplifies multiple command sequences into only one command sequence. FIG. 13 shows an example of the command sequence sent by the flash memory controller 105 for the copy back read operation according to an embodiment of the invention. In FIG. 13, by the default setting, the flash memory device 110 enables the command sequence simplification operation after the flash memory device 110 is powered on. The setting of the flash memory device 110 can be dynamically adjusted by the flash memory controller 105 through the communication of a set-feature signal. In the example of FIG. 13, the processor 1051 of the flash memory controller 105 controls the I/O circuit 1052 to sequentially send the copy back read command such as 0x00 (i.e. 00h), address information comprising only one plane address (e.g. the address of the m-th plane) and the corresponding block/page address information, and the other read command such as 30h to the flash memory device 110, without sending the specific indication command 0xAA mentioned above. In this situation, the flash memory device 110, after receiving the above-mentioned command sequence, can confirm the execution of the automatic address switching. FIG. 14 shows four examples of the command sequence sent by the flash memory controller 105 for an erase operation according to an embodiment of the invention. Each example is used to simplify the command sequences of the erase operation into only one command sequence. As shown in the first example of FIG. 14, the processor 1051 of the flash memory controller 105 controls the I/O circuit 1052 to sequentially send the specific indication command 0xAA, an erase command such as 0x60 (i.e. 60h), address information such as the block address data/information of one plane such as the m-th plane, and a confirm command such as the command 0xD0 (i.e. D0h) to the flash memory device 110. For example, the block address data may indicate the block address of the n-th block of the m-th plane (but not limited thereto). When receiving the specific indication command 0xAA and the command 0x60, the flash memory device 110 (or control circuit 1103) in FIG. 10 can confirm that the information following the command 0x60 comprises one or more block address information of the m-th plane; in addition, when receiving the command 0xD0, the flash memory device 110 (or control circuit 1103) can be arranged to start executing the erase operation upon the block(s) corresponding to the one or more block address information at the different planes including the m-th plane. In this situation, the address control circuit 1112 can automatically expand the block address information of the m-th plane into the same block address information at the different planes including the m-th plane, e.g. all the planes. Then, the address control circuit 1112 controls the row address decoder 1108 and column address decoder 1109 to automatically and sequentially switch to the corresponding address(es) to erase the corresponding block unit(s) at the different planes based on the expanded block address information at the different planes. For example, in one embodiment, a group of block units corresponding to the same block address information at different planes may form a super block unit. The flash memory controller 105 can send merely a single command sequence, including merely one plane's block address information, to the flash memory device 110 to make the flash memory device 110 erase the super block unit having the corresponding block units at the different planes. This significantly improves the performance of the flash memory device 110. Similarly, in the second example of FIG. 14, the specific indication command may be positioned between the command 0x60 and the address information of the m-th plane. The processor 1051 of the flash memory controller 105 controls the I/O circuit 1052 to sequentially send the erase command 0x60, the specific indication command 0xAA, the block address information of the m-th plane, and the confirm command 0xD0 to the flash memory device 110. The other operations of the second example are similar to those of the first example and are not detailed. Further, in the third example of FIG. 14, the specific indication command 0xAA can be positioned between the block address information and the command 0xD0. The processor 1051 of the flash memory controller 105 controls the I/O circuit 1052 to sequentially send the erase command 0x60, three-cycle block address information, the specific indication command 0xAA, and the command 0xD0 to the flash memory device 110. The other operations of the third example are similar to those of the first example and are not detailed. Further, in the fourth example of FIG. 14, the specific indication command 0xAA may be positioned later than the command 0xD0. The processor 1051 of the flash memory controller 105 controls the I/O circuit 1052 to sequentially send the erase command 0x60, the block address information, the command 0xD0, and the specific indication command 0xAA to the flash memory device 110. The other operations of the fourth example are similar to those of the first example and are not detailed.
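The super block erase described above can be sketched as follows; erase_super_block() is an illustrative model of the device-side expansion of one block address into the same block address at every plane, not the actual decoder logic.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_PLANES 4

/* Illustrative model: one command sequence (60h, address, D0h) names a
 * single block, and the device erases the whole super block unit, i.e.
 * the block with that index at every plane. */
static void erase_super_block(uint16_t block)
{
    for (int plane = 0; plane < NUM_PLANES; plane++) {
        /* the row/column address decoders would be switched here */
        printf("erase block %d of plane %d\n", block, plane);
    }
}

int main(void)
{
    erase_super_block(30);  /* block index 30 at planes 0-3 */
    return 0;
}
```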
FIG. 15 shows two examples of the command sequence sent by the flash memory controller 105 for the erase operation according to an embodiment of the invention. In the first example of FIG. 15, the specific indication command 0xAA is used to replace the function of the erase command 0x60 (i.e. it is not needed to send the erase command 0x60); in this situation the flash memory device 110 can confirm that the command sequence is simplified and that the information following the command 0xAA is the block address information. In the second example of FIG. 15, the specific indication command 0xAA is used to replace the function of the command 0xD0 (i.e. it is not needed to send the command 0xD0); in this situation the flash memory device 110 can confirm that the command sequence is simplified and that the command 0xAA is the end of the simplified command sequence. In either example, after receiving the block address data of the m-th plane, the flash memory device 110 is arranged to automatically expand the block address data of the m-th plane into the same block address data for the different planes. This effectively simplifies multiple command sequences into only one command sequence. FIG. 16 shows an example of the command sequence sent by the flash memory controller 105 for the erase operation according to an embodiment of the invention. In FIG. 16, by the default setting, the flash memory device 110 enables the command sequence simplification operation after the flash memory device 110 is powered on. The setting of the flash memory device 110 can be dynamically adjusted by the flash memory controller 105 through the communication of a set-feature signal. Thus, in the example of FIG. 16, the processor 1051 of the flash memory controller 105 controls the I/O circuit 1052 to sequentially send the erase command 0x60, the block address information of the m-th plane, and the confirm command 0xD0 to the flash memory device 110, without sending the specific indication command 0xAA mentioned above. The flash memory device 110 can recognize the same function as that of the specific indication command 0xAA after receiving the erase command 0x60 or the confirm command 0xD0. Further, in other embodiments, for performing the erase operation, the address control circuit 1112 can select and generate the block address information for partially selected plane(s) according to the block address information of only one plane, based on the default setting or the configuration dynamically determined by the flash memory controller 105, wherein the configuration can be determined by plane bit map information. Additionally, the address control circuit 1112 can change the block address information for one or more different plane(s) based on the plane bit map information and/or block address information sent from the flash memory controller 105, wherein the plane bit map information and/or block address information can be set by a set-feature signal and will be explained later. The address control circuit 1112 can be arranged to generate the corresponding plane address(es) and select the corresponding address buffer(s) in the address decoder(s) to perform decoding according to the corresponding plane address(es).
FIG. 17 shows three examples of the command sequence sent by the flash memory controller 105 for a write operation of SLC mode according to an embodiment of the invention. In the first example of FIG. 17, the flash memory controller 105 sequentially sends an SLC programming instruction/command such as 0xA2 (but not limited thereto), the specific indication command 0xAA, a page program command such as the command 0x80, the block address and page address information of the m-th plane, multiple toggle page data to be programmed, and a write confirm command such as the command 0x10 (but not limited thereto) to the flash memory device 110. The multiple toggle page data to be programmed, for example (but not limited thereto), may comprise a first toggle page data to be programmed into a page of the m-th plane, a second toggle page data to be programmed into a page of the n-th plane, a third toggle page data to be programmed into a page of the o-th plane, and a fourth toggle page data to be programmed into a page of the p-th plane. The m-th plane, n-th plane, o-th plane, and p-th plane, for example, are the planes having the index numbers 0, 1, 2, and 3; however, in other examples, the m-th plane may be one of the planes having the index numbers 1, 2, and 3, and this is not meant to be a limitation. In addition, the number of toggle page data to be programmed is also not intended to be a limitation. In the second example of FIG. 17, the flash memory controller 105 sequentially sends the SLC programming command 0xA2, the page program command 0x80, the specific indication command 0xAA, the block address and page address information of the m-th plane, multiple toggle page data to be programmed, and the write confirm command 0x10 to the flash memory device 110. In the third example of FIG. 17, the flash memory controller 105 sequentially sends the specific indication command 0xAA, the SLC programming command 0xA2, the page program command 0x80, the block address and page address information of the m-th plane, multiple toggle page data to be programmed, and the write confirm command 0x10 to the flash memory device 110. The SLC programming command 0xA2 is used to indicate the SLC program/write mode, and the flash memory device 110, when receiving the command 0xA2, can know that a write operation operates under the SLC mode. The page program command 0x80 is used to indicate a program/write operation. In addition, once receiving the write confirm command 0x10, the flash memory device 110 can start to execute the SLC mode programming. In these examples of FIG. 17, the flash memory device 110 can automatically expand the block address and page address of a page in the m-th plane into the block addresses and page addresses of four pages (but not limited thereto) respectively in the different planes (the m-th plane, n-th plane, o-th plane, and p-th plane). Thus, the address control circuit 1112 can control the row address decoder 1108 and column address decoder 1109 to select the corresponding pages of the corresponding physical blocks in the different planes, so that the multiple toggle page data to be programmed can be respectively and correctly stored into the selected pages of the different planes. This effectively improves the performance of writing a super page data into multiple page units respectively in the different planes of the flash memory device 110.
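For illustration, the first example of FIG. 17 can be sketched as the following controller-side flow; send_byte() and send_page() are hypothetical stand-ins for the I/O circuit 1052, and the address cycles and page size are placeholders.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define NUM_PLANES 4
#define PAGE_BYTES 16384  /* illustrative 16K-byte page */

/* Hypothetical stand-ins for the controller's I/O circuit 1052. */
static void send_byte(uint8_t b) { printf("-> %02Xh\n", b); }
static void send_page(const uint8_t *p)
{
    (void)p;
    printf("-> toggle page data (%d bytes)\n", PAGE_BYTES);
}

/* Sketch of the first example of FIG. 17: one simplified SLC program
 * sequence carrying the toggle page data of all four planes. */
static void slc_program_super_page(const uint8_t *addr, size_t addr_len,
                                   const uint8_t *pages[NUM_PLANES])
{
    send_byte(0xA2);             /* SLC programming command           */
    send_byte(0xAA);             /* specific indication command       */
    send_byte(0x80);             /* page program command              */
    for (size_t i = 0; i < addr_len; i++)
        send_byte(addr[i]);      /* block/page address of plane m     */
    for (int p = 0; p < NUM_PLANES; p++)
        send_page(pages[p]);     /* page data for planes m, n, o, p   */
    send_byte(0x10);             /* write confirm: start programming  */
}

int main(void)
{
    static const uint8_t addr[] = { 0x1E, 0x03 };  /* placeholder cycles  */
    const uint8_t *pages[NUM_PLANES] = { 0 };      /* placeholder buffers */
    slc_program_super_page(addr, sizeof addr, pages);
    return 0;
}
```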
FIG. 18 shows an example of the command sequence sent by the flash memory controller 105 for a write operation of SLC mode according to another embodiment of the invention. In FIG. 18, the specific indication command 0xAA can replace or merge the function of the page program command 0x80. In this example, the flash memory controller 105 sequentially sends the SLC programming command 0xA2, the specific indication command 0xAA, the block address and page address information of a page of the m-th plane, multiple toggle page data to be programmed, and the write confirm command 0x10 to the flash memory device 110. In this example, when receiving the specific indication command 0xAA, the flash memory device 110 can confirm that the write operation corresponding to a page write/program instruction/command is to be executed and that the command sequence is simplified. The other descriptions are similar and are not detailed for brevity. FIG. 19 shows an example of the command sequence sent by the flash memory controller 105 for the write operation of SLC mode according to another embodiment of the invention. In FIG. 19, the flash memory controller 105 sequentially sends the SLC programming command 0xA2, the page program command 0x80, the block address and page address information of a page of the m-th plane, multiple toggle page data to be programmed, and the write confirm command 0x10 to the flash memory device 110, without sending the specific indication command 0xAA. In this example, by the default setting, the flash memory device 110 can confirm that the command sequence is simplified, and thus it is not needed to send the specific indication command 0xAA. Equivalently, the function of the specific indication command 0xAA is merged into and included by the page program command 0x80. The other descriptions are similar and are not detailed for brevity. Additionally, in other embodiments, the command sequences for the write operation of multiple-level programming modes (e.g. MLC mode, TLC mode, QLC mode, and so on) can be simplified into a single command sequence. FIG. 20 shows two examples of the command sequence sent by the flash memory controller 105 for a write operation of TLC mode according to an embodiment of the invention. In the first example of FIG. 20, the flash memory controller 105 sequentially sends the specific indication command 0xAA, the page program command 0x80, the block address and page address information of an LSB/CSB/MSB page of the m-th plane, multiple toggle LSB (least significant bit) page data to be programmed, multiple toggle CSB (center significant bit) page data to be programmed, multiple toggle MSB (most significant bit) page data to be programmed, and the confirm command 0x10 to the flash memory device 110. The multiple toggle LSB page data to be programmed, for example (but not limited thereto), may comprise a first LSB page data to be programmed into an LSB page of the m-th plane, a second LSB page data to be programmed into an LSB page of the n-th plane, a third LSB page data to be programmed into an LSB page of the o-th plane, and a fourth LSB page data to be programmed into an LSB page of the p-th plane. The multiple CSB page data to be programmed, for example (but not limited thereto), may comprise a first CSB page data to be programmed into a CSB page of the m-th plane, a second CSB page data to be programmed into a CSB page of the n-th plane, a third CSB page data to be programmed into a CSB page of the o-th plane, and a fourth CSB page data to be programmed into a CSB page of the p-th plane.
Similarly, the multiple MSB page data to be programmed, for example (but not limited thereto), may comprise a first MSB page data to be programmed into an MSB page of the m-th plane, a second MSB page data to be programmed into an MSB page of the n-th plane, a third MSB page data to be programmed into an MSB page of the o-th plane, and a fourth MSB page data to be programmed into an MSB page of the p-th plane. The number of LSB/CSB/MSB page data to be programmed is not intended to be a limitation. Further, for the write operation under multiple-level modes, in practice, the address control circuit 1112 (or control circuit 1103) can be used to record and count the number of LSB/CSB/MSB page data that have been written into the memory cell array 1107, so as to correctly write data into the corresponding units in the memory cell array 1107. Also, the counter circuit 1104 can be used to count and record the number of data bytes. The corresponding operations are not detailed for brevity. In the second example of FIG. 20, the flash memory controller 105 sequentially sends the page program command 0x80, the specific indication command 0xAA, the block address and page address information of an LSB/CSB/MSB page of the m-th plane, multiple toggle LSB page data to be programmed, multiple toggle CSB page data to be programmed, multiple toggle MSB page data to be programmed, and the write confirm command 0x10 to the flash memory device 110. That is, the position of the specific indication command 0xAA in the simplified command sequence can be changed. In other embodiments, for TLC mode programming, the function of the page program command 0x80 can be replaced by the specific indication command 0xAA. FIG. 21 shows an example of the command sequence sent by the flash memory controller 105 for the write operation of TLC mode according to another embodiment of the invention. In the example of FIG. 21, the flash memory controller 105 sequentially sends the specific indication command 0xAA, the block address and page address information of an LSB/CSB/MSB page of the m-th plane, the multiple toggle LSB page data to be programmed, the multiple toggle CSB page data to be programmed, the multiple toggle MSB page data to be programmed, and the write confirm command 0x10 to the flash memory device 110. When receiving the specific indication command 0xAA in this example, the flash memory device 110 can know that the write operation corresponding to a page program command 0x80 is to be executed under the TLC mode. For example (but not limited thereto), the block address and page address information of an LSB/CSB/MSB page of the m-th plane may comprise a block index number 30 and a page index number 3 for plane number 0, and the address control circuit 1112 of the flash memory device 110, based on the above-mentioned address information, can automatically generate a block index number (i.e. block address) 30 and a page index number (i.e. page address) 3 for plane number (i.e. plane address) 1, a block index number 30 and a page index number 3 for plane number 2, a block index number 30 and a page index number 3 for plane number 3, a block index number 30 and a page index number 4 for plane number 0, a block index number 30 and a page index number 4 for plane number 1, a block index number 30 and a page index number 4 for plane number 2, a block index number 30 and a page index number 4 for plane number 3, a block index number 30 and a page index number 5 for plane number 0, a block index number 30 and a page index number 5 for plane number 1, a block index number 30 and a page index number 5 for plane number 2, and a block index number 30 and a page index number 5 for plane number 3. In another embodiment, the address control circuit 1112 can be arranged to automatically generate the page address information of the pages of an LSB/CSB/MSB super page in the different planes in response to the page address of only one LSB/CSB/MSB page at one plane. FIG. 22 shows two examples of the command sequence sent by the flash memory controller 105 for the write operation of TLC mode according to another embodiment of the invention. In the first example of FIG. 22, the flash memory controller 105 sequentially sends three command sub-sequences. First, the flash memory controller 105 sends the first command sub-sequence including the specific indication command 0xAA, the page program command 0x80, the block address and page address information of an LSB page of the m-th plane, multiple toggle LSB page data to be programmed, and an intermediate confirm command such as a change write command such as the command 0x1A (but not limited thereto). The command 0x1A is used to indicate the end of a command sub-sequence. Then, the flash memory controller 105 sends the second command sub-sequence including the specific indication command 0xAA, the page program command 0x80, the block address and page address information of a CSB page of the m-th plane, multiple toggle CSB page data to be programmed, and the command 0x1A. Finally, the flash memory controller 105 sends the third command sub-sequence including the specific indication command 0xAA, the page program command 0x80, the block address and page address information of an MSB page of the m-th plane, multiple toggle MSB page data to be programmed, and the write confirm command 0x10. It should be noted that the order of the above command sub-sequences can be changed; for example, the command sub-sequence associated with the MSB page data can be transmitted first while the command sub-sequence associated with the LSB page data is transmitted last; this also falls within the scope of the invention. In the second example of FIG. 22, the specific indication command 0xAA can be positioned between the page program command 0x80 and the corresponding address information. For example (but not limited thereto), when receiving the address information of the first command sub-sequence, e.g. a block index number 30 and a page index number 3 for plane number 0, the address control circuit 1112 of the flash memory device 110, based on the above-mentioned address information, can automatically generate a block index number 30 and a page index number 3 for plane number 1, a block index number 30 and a page index number 3 for plane number 2, and a block index number 30 and a page index number 3 for plane number 3; the block/page index number indicates the block/page address.
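The address expansion of the FIG. 21 example above (block 30, pages 3/4/5 for the LSB/CSB/MSB pages, across planes 0-3) can be reproduced by the following illustrative sketch; the enumeration order matches the page-major listing just given.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_PLANES 4
#define TLC_PAGES  3  /* LSB, CSB, MSB */

/* Illustrative expansion: from one received address (block 30, page 3,
 * plane 0), derive block 30 and pages 3/4/5 for every plane 0-3. */
static void expand_tlc_addresses(uint16_t block, uint16_t first_page)
{
    static const char *kind[TLC_PAGES] = { "LSB", "CSB", "MSB" };
    for (int k = 0; k < TLC_PAGES; k++)
        for (int plane = 0; plane < NUM_PLANES; plane++)
            printf("%s page: block %d, page %d, plane %d\n",
                   kind[k], block, first_page + k, plane);
}

int main(void)
{
    expand_tlc_addresses(30, 3);
    return 0;
}
```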
When receiving the command 0x1A of the first command sub-sequence, the flash memory device 110 temporarily stores the multiple toggle LSB page data to be programmed and then waits for the writing of a next page such as a CSB page. Then, the flash memory controller 105 sends the second command sub-sequence including a block index number 30 and a page index number 4 (i.e. the next page) for plane number 0, and the flash memory device 110, for example, automatically generates a block index number 30 and a page index number 4 for plane number 1, a block index number 30 and a page index number 4 for plane number 2, and a block index number 30 and a page index number 4 for plane number 3. This is also similar for the writing of the MSB page and is not detailed for brevity. In another embodiment, the function of the specific indication command 0xAA can be merged into and included by the page program command 0x80, and it is not needed to send the specific indication command 0xAA. FIG. 23 shows an example of the command sequence sent by the flash memory controller 105 for the write operation of TLC mode according to another embodiment of the invention. In the example of FIG. 23, the flash memory controller 105 sequentially sends the page program command 0x80, the block address and page address information of an LSB/CSB/MSB page of the m-th plane, the multiple toggle LSB page data to be programmed, the multiple toggle CSB page data to be programmed, the multiple toggle MSB page data to be programmed, and the write confirm command 0x10 to the flash memory device 110. The other operations and functions are similar to those mentioned in the example of FIG. 21 and are not detailed for brevity. In another embodiment, the function of the command 0x1A can be replaced by the specific indication command 0xAA. Alternatively, in another embodiment, the function of the specific indication command 0xAA can be merged into the command 0x80. FIG. 24 shows an example of the command sequence sent by the flash memory controller 105 for the write operation of TLC mode according to another embodiment of the invention. In the example of FIG. 24, the function and position of the specific indication command 0xAA in the command sub-sequences can be merged into the page program command 0x80. Other descriptions are not detailed again for brevity. Further, the multiple toggle page data sequences to be programmed shown in FIG. 20, FIG. 21, and FIG. 23 can be rearranged in the different orders respectively shown in FIG. 25, FIG. 26, and FIG. 27. The transmission order of the toggle page data sequences can be arranged as the page data of the LSB page of the m-th plane, the page data of the CSB page of the m-th plane, the page data of the MSB page of the m-th plane, the page data of the LSB page of the n-th plane, the page data of the CSB page of the n-th plane, the page data of the MSB page of the n-th plane, the page data of the LSB page of the o-th plane, the page data of the CSB page of the o-th plane, the page data of the MSB page of the o-th plane, the page data of the LSB page of the p-th plane, the page data of the CSB page of the p-th plane, and the page data of the MSB page of the p-th plane. These examples are not intended to be a limitation of the invention. In the above embodiments, the specific indication command, for example, can be implemented by using the command 0xAA, can be configured to be different from a standard command (or a vendor specific command), and may be implemented by using a reserved command such as 0Bh, 12h, 14h, 18h, 1Bh-1Ch, 62h-64h, 76h, 82h-83h, 86h, or 8Eh, wherein ‘h’ means hexadecimal.
The following table shows the different examples of the reserved commands which can be used to implement the data toggle command:

Type: Standard Command Set
Opcode: 00h, 05h-06h, 10h-11h, 15h, 30h-32h, 35h, 3Fh, 60h, 70h, 78h, 80h-81h, 85h, 90h, D0h-D1h, D4h-D5h, D9h, E0h-E2h, ECh-EFh, F1h-F2h, F9h, FAh, FCh, FFh

Type: Vendor Specific
Opcode: 01h-04h, 07h-0Ah, 0Ch-0Fh, 13h, 16h-17h, 19h-1Ah, 1Dh-2Fh, 33h-34h, 36h-3Eh, 40h-5Fh, 61h, 65h-6Fh, 71h-75h, 77h, 79h-7Fh, 84h, 87h-8Dh, 8Fh, 91h-CFh, D2h-D3h, D6h-D8h, DAh-DFh, E3h-EBh, F0h, F3h-F8h, FBh, FDh-FEh

Type: Reserved
Opcode: 0Bh, 12h, 14h, 18h, 1Bh-1Ch, 62h-64h, 76h, 82h-83h, 86h, 8Eh

It should be noted that an example of the specific indication command can be equal to the example of the above-mentioned data toggle command. This is not intended to be a limitation of the invention.
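For illustration, membership in the reserved set listed above can be checked with a simple helper; this is a convenience sketch, not part of any standard's definition.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Convenience sketch reflecting the table above: returns true when an
 * opcode belongs to the reserved set (0Bh, 12h, 14h, 18h, 1Bh-1Ch,
 * 62h-64h, 76h, 82h-83h, 86h, 8Eh), i.e. the candidates listed for
 * carrying the data toggle / specific indication command. */
static bool is_reserved_opcode(uint8_t op)
{
    switch (op) {
    case 0x0B: case 0x12: case 0x14: case 0x18:
    case 0x1B: case 0x1C:
    case 0x62: case 0x63: case 0x64:
    case 0x76:
    case 0x82: case 0x83:
    case 0x86: case 0x8E:
        return true;
    default:
        return false;
    }
}

int main(void)
{
    printf("is 62h reserved? %d\n", is_reserved_opcode(0x62));  /* 1 */
    printf("is 60h reserved? %d\n", is_reserved_opcode(0x60));  /* 0 */
    return 0;
}
```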
Further, the feature information or parameter(s) of the copy back read operation, erase operation, or write operation mentioned above can be determined, enabled, or disabled by the flash memory controller 105 through sending a copy back read set-feature signal, an erase set-feature signal, or a write set-feature signal to the flash memory device 110. FIG. 28 is a diagram showing an example of setting the feature or parameters of the copy back read operation according to an embodiment of the invention. The copy back read set-feature signal may comprise a set-feature command (cycle type indicated by CMD) EFh (but not limited thereto) and corresponding control information which follows the set-feature command EFh. The control information, for example, comprises the feature information FA (cycle type indicated by ADDR) and/or one or more parameter data PD1, PD2, PD3, PD4, and PD5 (cycle type indicated by DIN) sent to the flash memory device 110. The number and data lengths of the parameter data are not intended to be a limitation, and one parameter data can be implemented using one or more bits, or using one or more bytes if the number of totally used parameter data is less than four. The total data length of all the parameter data can be configured to meet or match the requirements specified in the standards of flash memory controller/device products; for example (but not limited thereto), the total data length can be configured as four bytes. For setting the features or parameters of the copy back read operation, the content of the feature information FA is determined by the flash memory controller 105 and is associated with the copy back read operation, and thus, when receiving such feature information FA, the flash memory device 110 can know that the following parameter data/bits is/are used for setting the copy back read operation. For example, the parameter data PD1 is implemented by four bits B0-B3 or more bits such as eight bits. The bit B0 of parameter data PD1 for setting the copy back read operation is used to indicate whether to enable or disable the copy back read operation. When the bit B0 is set as the first logic bit such as ‘1’, the copy back read operation, to be performed by the flash memory device 110, can be enabled and configured as a sequential mode in which the flash memory device 110 is arranged to sequentially transmit block/page data units having the same block/page address information and respectively located in all the different planes from the memory cell array 1107 to the data register 1110. For instance, in this situation, the flash memory controller 105 may send a simplified command sequence carrying the copy back read command (or specific indication command) and the address information which indicates a block index number (i.e. block address) 30 for plane number 1, and the flash memory device 110, after receiving such a simplified command sequence, may automatically switch among the different planes to transmit the block data units having the same block index number 30 and respectively corresponding to all the plane numbers 0-3 if the flash memory device 110 has four planes. In another embodiment, the flash memory controller 105 may send a simplified command sequence carrying the copy back read command (or specific indication command) and the address information which indicates a block index number 30 for plane number 1, and the flash memory device 110, after receiving such a simplified command sequence, may automatically switch among the different planes and transmit the block data units having the same block index number 30 and respectively corresponding to the plane numbers 1-3 as well as a block data unit corresponding to a next block index number 31 and plane number 0. This also falls within the scope of the invention. Alternatively, when the bit B0 is set as the second logic bit such as ‘0’, the copy back read operation of the flash memory device 110 is disabled. In this situation, the execution of the copy back read operation is disabled and stopped, and the flash memory controller 105 needs to send multiple command sequences, which respectively comprise the different plane address/number information, to the flash memory device 110 to make the memory cell array 1107 transmit the corresponding block/page data of the different planes to the data register 1110. The bit B1 of parameter data PD1 for setting the copy back read operation is used to indicate whether the copy back read operation uses the updated bit map information sent from the flash memory controller 105 or automatically calculates and obtains the bit map information by itself for the different planes. When the bit B1 is set as the first logic bit ‘1’, the copy back read operation is performed based on bit map information which is automatically calculated and stored by the flash memory device 110. Alternatively, when the bit B1 is set as the second logic bit ‘0’, the copy back read operation is performed based on bit map information which is updated by the flash memory controller 105. It should be noted that the bit map information, sent from the flash memory controller 105 to the flash memory device 110, can be transmitted by using the feature information FA or by using other control information/signals sent from the flash memory controller 105. FIG. 29 shows an example of the flash memory controller 105 using other control information/signals to send the bit map information used for setting the copy back read operation. In FIG. 29, the flash memory controller 105 sends the specific indication command 0xAA, address information, plane bit map information, and a confirm command such as the command 0x30 (but not limited thereto). The plane bit map information may be positioned between the specific indication command 0xAA and the address information in another embodiment. The plane bit map information, for example, can be implemented by using at least one byte (but not limited thereto) to indicate which plane(s) of the flash memory device 110 is/are to be selected by the copy back read operation triggered by this command sequence.
That is, when the bit B1 of parameter data PD1 is set as ‘0’, the flash memory device 110 can automatically transmit the corresponding block data unit(s) or page data unit(s) for the plane(s) requested/selected by the flash memory controller 105 based on the content of the plane bit map information received in such a command sequence. For example, if the address information indicates the block index number 30 and the plane bit map information indicates ‘1010’, then the flash memory device 110 can confirm that its copy back read operation is arranged to transmit the block data units having the same block index number 30 and corresponding only to the planes having the plane numbers 1 and 3. Further, it should be noted that the address information may also comprise another plane bit information, and the flash memory device 110 can ignore such plane bit information when the bit B1 of parameter data PD1 is set as ‘0’. Further, because of the plane bit map information, the signal length of the command sequence for triggering a copy back read operation when the bit B1 of parameter data PD1 is set as ‘0’ is different from that for triggering the copy back read operation when the bit B1 of parameter data PD1 is set as ‘1’. Refer back to FIG. 28. The bit B2 of parameter data PD1 for setting the copy back read operation is used to indicate whether the copy back read operation is performed in response to a copy back read command or in response to a specific indication command. When the bit B2 is set as a first logic bit ‘1’, the copy back read operation is configured to be performed in response to the specific indication command such as 0xAA. In this situation, when receiving a simplified command sequence carrying the specific indication command, the flash memory device 110 is arranged to automatically switch among and select the different planes even though only one plane number is received. When the specific indication command is not received, the flash memory device 110 is arranged to select only the plane corresponding to the plane number that is received from a command sequence. Alternatively, when the bit B2 is set as a second logic bit ‘0’, the copy back read operation is configured to be performed in response to only the copy back read command such as 0x00. In this situation, the flash memory device 110 does not switch to and select another different plane when receiving a particular plane number; the flash memory device 110 only selects the plane corresponding to the plane number that is received. Further, in another embodiment, when the bit B2 is set as ‘1’, the flash memory device 110, based on a default setting, is arranged to automatically switch among and select the different planes, and the flash memory controller 105 may further send a different specific indication command such as 0xBB (but not limited thereto) to the flash memory device 110 to make the flash memory device 110 not switch to and not select the different planes, in addition to sending the command sequence associated with the copy back read operation. The bit B3 of parameter data PD1 for setting the copy back read operation is used to indicate whether the copy back read operation can change the block/page unit(s) for the different planes. When the bit B3 is set as a first logic bit ‘1’, the copy back read operation can be performed to change a block/page address/number for a different plane in response to a request signal sent from the flash memory controller 105. Alternatively, when the bit B3 is set as a second logic bit ‘0’, the copy back read operation is configured to be performed for the same block/page address/number for the different planes.
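The four PD1 bits described above (B0 enable/sequential mode, B1 bit map source, B2 trigger command, B3 per-plane block/page change) can be decoded as in the following sketch; the struct and field names are illustrative and not from the source, and the bit positions simply follow the B0-B3 ordering of the description.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative decode of parameter data PD1 of the copy back read
 * set-feature signal. */
typedef struct {
    bool enable_sequential;   /* B0: enable copy back read, sequential mode  */
    bool device_bitmap;       /* B1: 1 = bit map calculated by the device    */
    bool indication_trigger;  /* B2: 1 = triggered by the indication command */
    bool per_plane_change;    /* B3: 1 = block/page may change per plane     */
} copy_back_cfg_t;

static copy_back_cfg_t decode_pd1(uint8_t pd1)
{
    copy_back_cfg_t cfg = {
        .enable_sequential  = (pd1 >> 0) & 1,
        .device_bitmap      = (pd1 >> 1) & 1,
        .indication_trigger = (pd1 >> 2) & 1,
        .per_plane_change   = (pd1 >> 3) & 1,
    };
    return cfg;
}

int main(void)
{
    copy_back_cfg_t cfg = decode_pd1(0x05);  /* B0 = 1, B2 = 1 */
    printf("sequential=%d device_bitmap=%d indication=%d per_plane=%d\n",
           cfg.enable_sequential, cfg.device_bitmap,
           cfg.indication_trigger, cfg.per_plane_change);
    return 0;
}
```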
Alternatively, when the bit B3 is set as a second logic bit ‘0’, the copy back read operation is configured to be performed for the same block/page address/number for the different planes. Refer toFIG.30.FIG.30shows an example of the flash memory controller105changing a block/page address/number for a different plane by sending the specific indication command 0xAA according to an embodiment of the invention. InFIG.30, the flash memory controller105sends the specific indication command 0xAA, address information, select information, and the command 0x30. The select information for example can be implemented by using at least three bytes (but not limited) in which one byte can be used to indicate a plane number currently or dynamically selected/modified and the other two bytes can be used to indicate a block/page index number currently or dynamically selected/modified; however, this is not intended to be a limitation of the invention. For example, the address information may indicate only the plane numbers 1 and 3, and the select information can indicate a different plane number 0 to make the flash memory device110switch to and select the plane having the plane number 0. This is similar for the block/page changing. Further, in another embodiment, for the copy back read operation, the select information mentioned above can be applied to and positioned between the address information and the command 0x30 in the examples ofFIG.11,FIG.12, andFIG.13; it can be positioned between the address information and the command 0xAA in another embodiment. In addition, the parameter data PD1 may comprise other bits used for reserved functions of setting the copy back read operation. The other parameter data PD2, PD3, PD4, and PD5 may be reserved for setting the copy back read operation. This is not intended to be a limitation of the invention. In other embodiments, the set-feature signal as shown in FIG.28can be applied to setting the feature or parameters of the erase operation or setting the feature or parameters of the write operation of SLC mode (or MLC/TLC/QLC mode). The set-feature signal ofFIG.28in this situation is for example equivalent to an erase set-feature signal which may comprise the set-feature command EFh and corresponding control information which follows the set-feature command EFh. The control information for example comprises the feature information FA associated with the erase operation and/or one or more parameter data P1, P2, P3, P4, and P5 to the flash memory device110. The number and data lengths of parameter data are not intended to be a limitation. Alternatively, in an example of the write operation of SLC mode (or MLC/TLC/QLC mode and so on), the set-feature signal ofFIG.28in this situation is equivalent to a write set-feature signal which may comprise the set-feature command EFh and corresponding control information which follows the set-feature command EFh. The control information for example comprises the feature information FA associated with the write operation and/or one or more parameter data P1, P2, P3, P4, and P5 to the flash memory device110. For setting the erase operation, the content of feature information FA is determined by the flash memory controller105and is to be associated with the erase operation. When receiving such feature information FA, the flash memory device110can know that the following parameter data/bits is/are used for setting the erase operation. The parameter data PD1 for example are implemented by four bits B0-B3 or more bits.
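As an illustration of this framing, the following is a minimal sketch assuming a hypothetical send_byte() helper and assuming one byte per parameter data field; the actual field widths and ordering are defined by the embodiment, not by this sketch:

#include <stdint.h>

void send_byte(uint8_t b);  /* hypothetical transmit helper, as above */

/* Sends a set-feature signal in the style of FIG. 28: the set-feature
 * command EFh followed by the feature information FA and the parameter
 * data PD1-PD5; PD1 carries the configuration bits (BC, B1, B2, B3). */
void send_set_feature(uint8_t fa, const uint8_t pd[5])
{
    send_byte(0xEF);            /* set-feature command EFh */
    send_byte(fa);              /* feature information FA selects the operation */
    for (int i = 0; i < 5; i++)
        send_byte(pd[i]);       /* parameter data PD1-PD5 */
}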
In this embodiment, the bit BC of parameter data PD1 is used to indicate whether to enable or disable the erase operation for a sequential mode. When the bit BC is set as ‘1’, the erase operation performed by the flash memory device110is enabled and configured as the sequential mode in which the erase operation is arranged to switch to process the different planes having different plane numbers sequentially. For instance, regardless of which plane number is indicated by the address information sent by the flash memory controller105, the address control circuit1112can automatically switch to the different planes having the different plane numbers, and the erase operation can be arranged to sequentially erase the block units having the same block index number such as 30 (but not limited) and respectively corresponding to the different planes having the plane numbers 0-3 if the flash memory device has four planes. In another embodiment, the erase operation may automatically switch to erase block units having the same block index number 30 and respectively corresponding to the plane numbers 1-3 and erase a next block data unit having the block index number 31 at the plane having the plane number 0. This also falls within the scope of the invention. Alternatively, when the bit BC for the erase operation is set as ‘0’, the erase operation is disabled. In this situation, the execution of the erase operation for all the planes in response to one command sequence is disabled and stopped, and the flash memory controller105needs to send multiple command sequences, which respectively comprise the different plane information, to the flash memory device110to erase corresponding block data units of the different planes. The bit B1 of parameter data PD1 for the erase operation is used to indicate whether the erase operation uses the updated bit map information sent from the flash memory controller105or automatically calculates and obtains the plane bit map information by itself for the different planes. When the bit B1 is set as ‘1’, the erase operation is performed based on plane bit map information which is automatically calculated and stored by the flash memory device110. Alternatively, when the bit B1 is set as ‘0’, the erase operation is performed based on plane bit map information which is updated by the flash memory controller105. It should be noted that the plane bit map information, sent from the flash memory controller105to the flash memory device110, can be transmitted by using the feature information FA or by using other control information/signals sent from the flash memory controller105.FIG.31shows an example of the flash memory controller105using other control information/signals to send the plane bit map information used for setting the erase operation according to an embodiment of the invention. InFIG.31, the flash memory controller105sends the specific indication command 0xAA, block/page address information, plane bit map information, and the command 0xD0. The plane bit map information may be positioned between the specific indication command 0xAA and the address information in another embodiment. The plane bit map information for example can be implemented by using at least one byte (but not limited) to indicate which plane(s) of the flash memory device110is/are to be processed by the erase operation triggered by this command sequence.
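On the device side, applying such a plane bit map to one block index can be sketched as follows; the bit ordering (bit i corresponding to plane number i, matching the ‘1010’-selects-planes-1-and-3 example described herein) and the helper erase_block_on_plane() are assumptions for illustration:

#include <stdint.h>

/* Hypothetical per-plane erase primitive of the flash memory device;
 * in the embodiments this work is carried out by the address control
 * circuit 1112 on the memory cell array 1107. */
void erase_block_on_plane(unsigned plane, unsigned block_index);

/* Device-side sketch: when bit B1 of PD1 is ‘0’, apply the received
 * plane bit map to one block index. A bit map of 0x0A (‘1010’) erases
 * block 30 on planes 1 and 3 only. */
void apply_erase_bitmap(uint8_t plane_bitmap, unsigned block_index,
                        unsigned num_planes)
{
    for (unsigned plane = 0; plane < num_planes; plane++) {
        if (plane_bitmap & (1u << plane))
            erase_block_on_plane(plane, block_index);
    }
}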
That is, when the bit B1 of parameter data PD1 is set as ‘0’, the flash memory device110can automatically erase the corresponding block data unit(s) for the plane(s) requested by the flash memory controller105based on the content of the plane bit map information received in such command sequence. For example, if the address information indicates the block number 30 and the plane bit map information indicates ‘1010’, then the flash memory device110can know and confirm that its erase operation is arranged to erase the block data units having the same block index number 30 and only corresponding to the planes having plane numbers 1 and 3. Further, it should be noted that the address information may also comprise another plane bit information, and the flash memory device110can ignore such plane bit information when the bit B1 of parameter data PD1 is set as ‘0’. Further, because of the plane bit map information, the signal length of the command sequence for triggering an erase operation when the bit B1 of parameter data PD1 is set as ‘0’ is different from that for triggering the erase operation when the bit B1 of parameter data PD1 is set as ‘1’. The bit B2 of parameter data PD1 for setting the erase operation is used to indicate whether the erase operation can be performed in response to an erase command or in response to a specific indication command such as 0xAA. When the bit B2 is set as ‘1’, the erase operation is configured to be performed in response to the specific indication command 0xAA. In this situation, the flash memory device110is arranged to automatically switch to and select the different planes even though one plane number is received. Alternatively, when the bit B2 is set as ‘0’, the erase operation is configured to be performed in response to only the erase command 0x60. In this situation, the flash memory device110does not automatically switch to processing the different planes. Further, in another embodiment, when the bit B2 is set as ‘1’, the flash memory device110based on a default setting is arranged to automatically switch to and select the different planes, and the flash memory controller105may further send a different specific indication command such as 0xBB (but not limited) to the flash memory device110to make the flash memory device110not switch to the different planes in addition to sending the command sequence associated with the erase operation. The bit B3 of parameter data PD1 for setting the erase operation is used to indicate whether the erase operation can change block(s) for the different planes. When the bit B3 is set as ‘1’, the erase operation can be performed to change a block address/number for a different plane in response to a request signal sent from the flash memory controller105. Alternatively, when the bit B3 is set as ‘0’, the erase operation is configured to be performed for the same block address/number for the different planes. Refer toFIG.32.FIG.32shows an example of the flash memory controller105changing a block address/number for a different plane by sending the specific indication command 0xAA according to an embodiment of the invention. InFIG.32, the flash memory controller105sequentially sends the specific indication command 0xAA, address information, select information, and an erase confirm command 0xD0.
The select information for example can be implemented by using at least three bytes (but not limited) in which one byte can be used to indicate a plane number currently or dynamically selected/modified and the other two bytes can be used to indicate a block number currently or dynamically selected/modified; however, this is not intended to be a limitation of the invention. For example, the address information may indicate only the plane numbers 1 and 3, and the select information can indicate a different plane number 0 to make the flash memory device110switch to and select the plane having the plane number 0. This is similar for the block changing. In addition, the parameter data PD1 may comprise other bits used for reserved functions of setting the erase operation. The other parameter data PD2, PD3, PD4, and PD5 may be reserved for setting the erase operation. This is not intended to be a limitation of the invention. Similarly, for setting the write operation of SLC/MLC/TLC/QLC mode, the feature information or parameter(s) of the write operation can be determined, enabled, or disabled by the flash memory controller105sending a write set-feature signal to the flash memory device110. The feature information FA of the write set-feature signal is determined by the flash memory controller105to be associated with the write operation under SLC, MLC, TLC, QLC, or other-level modes. The other formats of such write set-feature signal are similar to those of the erase set-feature signal or copy back read set-feature signal, and are not detailed for brevity. In addition, the select information of the write operation, used to change block/page unit for the different planes, may be positioned later than the address information or at other positions in a simplified command sequence used to trigger the write operation under SLC/MLC/TLC/QLC mode. FIG.33,FIG.34, andFIG.35respectively show the examples of changing block address(es)/number(s) of different plane(s) for the erase operation in response to a request signal sent from the flash memory controller105according to different embodiments of the invention. In these examples, when the bit B3 of parameter data PD1 is set as ‘1’, the erase operation can be performed to change a block address/number for a different plane in response to a request signal (e.g. a command sequence which triggers the erase operation) sent from the flash memory controller105. The spirit of the following examples is also suitable for the data read, copy back read, and write operations (under SLC, MLC, TLC, QLC, etc. modes). In other words, the flash memory controller105can send a simplified command sequence, associated with an access operation such as the data read, copy back read, erase, or the write operation, in which plane bit map information and/or block address information is/are added to indicate whether to change some plane/block address information. As shown inFIG.33, in the first example, the plane bit map information for the erase operation can be positioned before the address information in a command sequence; alternatively, the plane bit map information may be positioned after the address information in such command sequence. The flash memory controller105sequentially sends the specific indication command 0xAA, erase command 0x60, the plane bit map information (e.g. four bits ‘0111’), address information such as address data associated with a specific block address/number such as block address ‘A’, and the confirm command 0xD0.
When the erase operation is under the sequential mode, the flash memory device110can know that its erase operation in the default setting is arranged to erase the block data units associated with the specific block address/number for all the different planes, and the flash memory device110can confirm that its erase operation in a modified configuration is arranged to erase the block data units associated with the specific block address/number for the planes having the plane numbers 1-3 and does not erase a block data unit associated with the specific block address/number for the plane having the plane number 0 when receiving the plane bit map information which for example may carry information of ‘0111’ (but not limited). This achieves that the flash memory controller105can directly determine that a portion of planes of the flash memory device110is processed (to be erased) and another portion of plane(s) is/are not processed (not to be erased) by adding the plane bit map information into the simplified command sequence of the erase operation. In the second example ofFIG.33, the flash memory controller105may sequentially send the specific indication command 0xAA, erase command 0x60, the first plane bit map information, the address information such as address data associated with a specific block address/number, the second plane bit map information, the block address information, and the confirm command 0xD0. The block address information indicates which block address(es) is/are selected to be processed (i.e. to be erased). The first plane bit map information and the address information are associated with the sequential mode in which the flash memory device110can know that its erase operation in the default setting is arranged to erase the block data units associated with the specific block address/number for all the different planes. The first plane bit map information may have all bits identical to the logic bit ‘1’. The second plane bit map information and the block address information are associated with a modified configuration in which the flash memory device110can confirm that its erase operation is arranged to erase the block data units associated with the specific block address/number for the plane(s) indicated by the second plane bit map information. All these modifications obey the spirit of the invention. As shown in the first example ofFIG.34, the flash memory controller105sequentially sends the specific indication command 0xAA, erase command 0x60, address information such as address data associated with a specific block address/number ‘A’, plane bit map information, block address information, and the confirm command 0xD0. The plane bit map information for example is implemented by four bits or one byte to indicate which plane(s) is/are selected to be processed (i.e. to be erased), and the block address information for example is implemented by two bytes to indicate which block address(es) is/are selected to be processed (i.e. to be erased). For example, the address information originally may indicate the specific block address/number such as block address ‘A’, and this means that the erase operation is applied for the block data units of the block address ‘A’ for all the different planes if no plane bit map information and no block address information are received.
Then, when receiving the plane bit map information such as bits ‘0010’ respectively for plane numbers 3, 2, 1, and 0 and receiving the block address information such as block address ‘B’, the flash memory device110(or address control circuit1112) may know and confirm that the erase operation is arranged to change the block address from ‘A’ to ‘B’ for the plane having plane number 1 indicated by the plane bit map information. Thus, in the sequential mode for the erase operation, the flash memory device110can erase the block data unit at the block address ‘A’ for the planes having the plane numbers 0, 2, and 3, and erase the block data unit at the block address ‘B’ for the plane having the plane number 1 after receiving the confirm command 0xD0 to start the execution of the erase operation. Similarly, in the second example ofFIG.34, the flash memory controller105sequentially sends the specific indication command 0xAA, erase command 0x60, address information such as address data associated with the block address ‘A’, plane bit map information, block address information, and the confirm command 0xD0 wherein the plane bit map information for example indicates four bits ‘0011’ respectively for plane numbers 3, 2, 1, and 0 to indicate two planes to be selected/processed and the block address information indicates the block address ‘B’ to be selected/processed. Thus, when receiving the plane bit map information such as ‘0011’ and receiving the block address information such as block address ‘B’, the flash memory device110(or address control circuit1112) may know and confirm that the erase operation is arranged to change the block address from ‘A’ to ‘B’ for only the planes having the plane numbers 0 and 1. Thus, in the sequential mode for the erase operation, the flash memory device110can erase the block data units at the block address ‘A’ for the planes having the plane numbers 2 and 3, and erase the block data units at the block address ‘B’ for the planes having the plane numbers 0 and 1 after receiving the confirm command 0xD0 to start the execution of the erase operation. Further, the flash memory controller105can dynamically select and process different blocks for the different planes. As shown in the first example ofFIG.35for the erase operation, the flash memory controller105sequentially sends the specific indication command 0xAA, erase command 0x60, address information such as address data associated with a specific block address/number ‘A’, first plane bit map information, first block address information such as block address ‘B’, second plane bit map information, second block address information such as block address ‘C’, and the confirm command 0xD0. The first plane bit map information for example is implemented by four bits to indicate which plane(s) is/are selected to be processed for the first block address information such as block address ‘B’, and it for example is bits ‘0001’ respectively for plane numbers 3, 2, 1, and 0 to indicate that only the plane having the plane number 0 is selected. The first block address information for example is implemented by two bytes to indicate that the block address ‘B’ is selected to be processed (i.e. to be erased).
The second plane bit map information for example is implemented by four bits to indicate which plane(s) is/are selected to be processed for the second block address information such as block address ‘C’, and it for example is four bits ‘0010’ respectively for plane numbers 3, 2, 1, and 0 to indicate that only the plane having the plane number 1 is selected. The second block address information for example is implemented by two bytes to indicate that the block address ‘C’ is selected to be processed (i.e. to be erased). Thus, in the first example ofFIG.35, after receiving the address information associated with the specific block address/number ‘A’, the flash memory device110(or address control circuit1112) can know that in the default setting its erase operation is arranged to erase the block data units corresponding to the block address ‘A’ for all the different planes. After receiving the first plane bit map information ‘0001’ respectively for plane numbers 3, 2, 1, and 0 and the first block address information, the flash memory device110(or address control circuit1112) can know that in a first modified manner the block address ‘A’ for the plane number 0 is changed and switched to the block address ‘B’ for the plane number 0. Similarly, after receiving the second plane bit map information ‘0010’ respectively for plane numbers 3, 2, 1, and 0 and the second block address information, the flash memory device110(or address control circuit1112) can know that in a second modified manner the block address ‘A’ for the plane number 1 is changed and switched to the block address ‘C’ for the plane number 1. Thus, after receiving the confirm command 0xD0, the flash memory device110(or address control circuit1112) can know and confirm that its erase operation is arranged to erase the block data units corresponding to the block address ‘A’ for the planes having the plane numbers 2-3, erase the block data unit corresponding to the block address ‘B’ for the plane having the plane number 0, and also erase the block data unit corresponding to the block address ‘C’ for the plane having the plane number 1. Thus, this can achieve processing (i.e. erasing) block data units corresponding to different block address information for the different planes by using/sending only one simplified command sequence from the flash memory controller105to the flash memory device110. As shown in the second example ofFIG.35for the erase operation, the flash memory controller105sequentially sends the specific indication command 0xAA, erase command 0x60, address information such as address data associated with a specific block address/number ‘A’, first plane bit map information, first block address information such as block address ‘B’, second plane bit map information, second block address information such as block address ‘C’, and the confirm command 0xD0. The first plane bit map information for example is implemented by one byte to indicate which plane(s) is/are selected to be processed for the first block address information such as block address ‘B’, and it for example is four bits ‘0001’ respectively for plane numbers 3, 2, 1, and 0 to indicate that the plane having the plane number 0 is selected. The first block address information for example is implemented by two bytes to indicate that the block address ‘B’ is selected to be processed (i.e. to be erased).
The second plane bit map information for example is implemented by four bits to indicate which plane(s) is/are selected to be processed for the second block address information such as block address ‘C’, and it for example is bits ‘0001’ respectively for plane numbers 3, 2, 1, and 0 to indicate that the plane having the plane number 0 is selected. The second block address information for example is implemented by two bytes to indicate that the block address ‘C’ is selected to be processed (i.e. to be erased). That is, the first plane bit map information is identical to the second plane bit map information in this example, and the first block address information is not identical to the second block address information. In this situation, after receiving the address information associated with the specific block address/number ‘A’, the flash memory device110(or address control circuit1112) can know that in the default setting its erase operation is arranged to erase the block data units corresponding to the block address ‘A’ for all the different planes. After receiving the first plane bit map information ‘0001’ respectively for plane numbers 3, 2, 1, and 0 and the first block address information, the flash memory device110(or address control circuit1112) can know that in a first modified manner the block address ‘A’ for the plane number 0 is changed and switched to the block address ‘B’ for the plane number 0. Similarly, after receiving the second plane bit map information ‘0001’ respectively for plane numbers 3, 2, 1, and 0 and the second block address information, the flash memory device110(or address control circuit1112) can know that in a second modified manner the block address ‘B’ for the plane number 0 is changed again and switched to the block address ‘C’ for the plane number 0. Thus, after receiving the confirm command 0xD0, the flash memory device110(or address control circuit1112) can know and confirm that its erase operation is arranged to erase the block data units corresponding to the block address ‘A’ for the planes having the plane numbers 1-3 and also erase the block data unit corresponding to the block address ‘C’ for the plane having the plane number 0. Thus, this achieves changing the block address/number for the same plane multiple times by using/sending only one simplified command sequence from the flash memory controller105to the flash memory device110. The method of using plane bit map information and block address information can be applied to a simplified command sequence for the copy back read operation or the write operation. For example, in one embodiment, at least one set of plane bit map information, block address information, and/or page bit map information can be inserted and positioned at any position in a simplified command sequence of the write operation to particularly indicate which plane(s)/block(s)/page(s) are selected to be processed and which plane(s)/block(s)/page(s) are not selected. Similarly, in another embodiment, at least one set of plane bit map information, block address information, and/or page bit map information can be inserted and positioned at any position in a simplified command sequence of the copy back read operation to particularly indicate which plane(s)/block(s)/page(s) are selected to be read from the memory cell array1107into the data register1108and which plane(s)/block(s)/page(s) are not selected. The corresponding operations are not detailed again for brevity.
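The override behavior of these examples can be summarized in a short sketch; the pair-list representation and the bit-i-for-plane-i ordering are assumptions made only for illustration:

#include <stdint.h>

#define MAX_PLANES 8

/* Sketch of resolving per-plane block addresses for one simplified
 * command sequence in the style of FIG. 34 / FIG. 35: a default block
 * address applies to every plane, then each (plane bit map, block
 * address) pair overrides the planes whose bits are set; a later pair
 * may override an earlier one for the same plane. Names are illustrative. */
struct remap_pair {
    uint8_t  plane_bitmap;   /* bit i -> plane i */
    uint16_t block_addr;
};

void resolve_block_addrs(uint16_t default_block,
                         const struct remap_pair *pairs, unsigned npairs,
                         unsigned num_planes,
                         uint16_t out_block[MAX_PLANES])
{
    for (unsigned p = 0; p < num_planes; p++)
        out_block[p] = default_block;            /* default: block ‘A’ everywhere */

    for (unsigned i = 0; i < npairs; i++) {      /* apply overrides in order */
        for (unsigned p = 0; p < num_planes; p++) {
            if (pairs[i].plane_bitmap & (1u << p))
                out_block[p] = pairs[i].block_addr;
        }
    }
}

Under this sketch, a default block ‘A’ with the pairs (‘0001’, ‘B’) and (‘0010’, ‘C’) reproduces the first example ofFIG.35, while the pairs (‘0001’, ‘B’) and (‘0001’, ‘C’) reproduce the second example, where the later pair overrides the earlier one for the plane number 0.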
Further, for a simplified command sequence, the specific indication command can be positioned at a starting position in such command sequence, any intermediate position in the command sequence, or at a last position in the command sequence. These modifications also fall within the scope of the invention. To summarize, the invention provides schemes capable of simplifying multiple command sequences into one command sequence to improve the performance of the communications between a flash memory device and a flash memory controller so as to improve the whole performance of a storage device. Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.
11861213 | DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS The present specification discloses a time-division memory control device. The time-division memory control device can control a content addressable memory (CAM) in a time-division manner and thereby reduce a peak current and mitigate electromigration (EM) and/or voltage variation (IR) problems. FIG.5shows an embodiment of the time-division memory control device of the present disclosure. The time-division memory control device500ofFIG.5is configured to control a CAM cell array50of a CAM, and includes a time-division controller510and multiple peripheral circuits. The CAM cell array50can be included in the time-division memory control device500or be set outside the time-division memory control device500. An embodiment of each CAM cell (TC) of the CAM cell array50is illustrated withFIG.1. An embodiment of the multiple peripheral circuits includes a read/write auxiliary circuit520, a pre-charge circuit530, and an output circuit540. An embodiment of the output circuit540includes a sense amplifier circuit and/or a register circuit such as a latch circuit. The time-division controller510controls at least one peripheral circuit of the multiple peripheral circuits in a time-division manner and thereby makes the at least one peripheral circuit cooperate with the CAM cell array50in the time-division manner. It is noted that each of the read/write auxiliary circuit520, the pre-charge circuit530, and the output circuit540inFIG.5is divided into multiple groups of circuits, but the implementation of the present invention is not limited thereto; in other words, the grouping feature of the present invention can be fulfilled by dividing at least one of the read/write auxiliary circuit520, the pre-charge circuit530, and the output circuit540into multiple groups of circuits. In regard to the embodiment ofFIG.5, the time-division controller510is configured to generate multiple groups of control signals according to a system clock SCLK in a search and compare operation. The multiple groups of control signals include a first group of control signals and a second group of control signals. The time-division controller510outputs the first group of control signals at a first time point and outputs the second group of control signals at a second time point, wherein the second time point is later than the first time point on the timeline. In an exemplary implementation of the embodiment ofFIG.5, the time-division controller510is configured to control at least one of the read/write auxiliary circuit520and the pre-charge circuit530, and the first time point is synchronous with a trigger time point of the system clock SCLK; for example, the voltage levels of the first group of control signals and the voltage level of the system clock SCLK rise synchronously (as shown inFIG.9b). In an exemplary implementation of the embodiment ofFIG.5, the time-division controller510is configured to control the output circuit540, and the first time point is later than a trigger time point of the system clock SCLK; for example, the voltage level of the system clock SCLK rises at the trigger time point, the voltage levels of the first group of control signals rise at the first time point, and the first time point is later than the trigger time point by a predetermined time (as shown inFIG.9c). The predetermined time is determined according to the time for the CAM cell array50performing a search and comparison operation. 
FIG.6shows that the time-division controller510controls the read/write auxiliary circuit520in a time-division manner and thereby makes it cooperate with the CAM cell array50in the time-division manner. As shown inFIG.6, the read/write auxiliary circuit520includes multiple search-bit-line (SBL) drivers. Each SBL driver (SBL_DRV) is a known/self-developed driver and configured to control the search bit lines of a column of CAM cells of the CAM cell array50. The multiple SBL drivers are divided into at least two groups of circuits including a first group of circuits522(e.g., the circuits in the odd column(s) of the auxiliary circuit520, these circuits including one or more SBL drivers) and a second group of circuits524(e.g., the circuits in the even column(s) of the auxiliary circuit520, these circuits including one or more SBL drivers). The first group of circuits522is configured to cooperate with a first group of CAM cells of the CAM cell array50(e.g., the CAM cells in the odd column(s) of the CAM cell array50) according to the aforementioned first group of control signals SE_EN_ODD. The second group of circuits524is configured to cooperate with a second group of CAM cells of the CAM cell array50(e.g., the CAM cells in the even column(s) of the CAM cell array50) according to the aforementioned second group of control signals SE_EN_EVEN. The second group of CAM cells does not include any CAM cell of the first group of CAM cells. In regard to the first group of CAM cells and the second group of CAM cells, although two adjacent CAM cells or two adjacent columns of CAM cells share one ground trace GND, the search and compare operations of the first and second groups of CAM cells are staggered due to the above-mentioned grouping and time-division control, and thus the peak current of each ground trace GND is reduced and the EM/IR problem is mitigated. FIG.7shows that the time-division controller510ofFIG.5controls the pre-charge circuit530in a time-division manner and thereby makes it cooperate with the CAM cell array50in the time-division manner. As shown inFIG.7, the pre-charge circuit530includes multiple pre-charge units. Each pre-charge unit (PR) is a known/self-developed pre-charge unit and configured to charge or discharge the match line (ML) of one row of CAM cells of the CAM cell array50. The multiple pre-charge units are divided into at least two groups of circuits including a first group of circuits532(e.g., the circuits in the odd row(s) of the pre-charge circuit530, these circuits including one or more pre-charge units) and a second group of circuits534(e.g., the circuits in the even row(s) of the pre-charge circuit530, these circuits including one or more pre-charge units). The first group of circuits532is configured to cooperate with a first group of CAM cells of the CAM cell array50(e.g., the CAM cells in the odd row(s) of the CAM cell array50) according to the aforementioned first group of control signals PR_EN_ODD. The second group of circuits534is configured to cooperate with a second group of CAM cells of the CAM cell array50(e.g., the CAM cells in the even row(s) of the CAM cell array50) according to the aforementioned second group of control signals PR_EN_EVEN. The second group of CAM cells does not include any CAM cell of the first group of CAM cells. 
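A minimal behavioral sketch of this odd/even grouping is given below; the indexing convention (odd-numbered rows/columns forming the first group) and the function names are assumptions for illustration only:

#include <stdbool.h>

/* Behavioral sketch of the odd/even grouping of peripheral circuits:
 * odd-numbered rows/columns form the first group (enabled by, e.g.,
 * SE_EN_ODD or PR_EN_ODD) and even-numbered ones form the second group
 * (enabled by SE_EN_EVEN or PR_EN_EVEN), so two adjacent units sharing
 * a GND or VDD trace are never activated at the same time point. */
static bool in_first_group(unsigned index)  { return (index % 2) == 1; }

bool unit_enabled(unsigned index, bool odd_group_ctrl, bool even_group_ctrl)
{
    return in_first_group(index) ? odd_group_ctrl : even_group_ctrl;
}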
In regard to the first group of circuits532and the second group of circuits534, although two adjacent pre-charge units share one power trace VDD, the pre-charge operations of the first group of circuits532and the second group of circuits534are staggered due to the above-mentioned grouping and time-division control, and thus the peak current of each power trace VDD is reduced and the EM/IR problem is mitigated. FIG.8shows that the time-division controller510ofFIG.5controls the output circuit540in a time-division manner and thereby makes it cooperate with the CAM cell array50in the time-division manner. As shown inFIG.8, the output circuit540includes multiple match output units. Each match output unit (MO) is a known/self-developed match output unit (e.g., a sense amplifier and/or a register circuit such as a latch circuit) and configured to output the comparison result of the match line of one row of CAM cells of the CAM cell array50. The multiple match output units are divided into at least two groups of circuits including a first group of circuits542(e.g., the circuits in the odd row(s) of the output circuit540, these circuits including one or more match output units) and a second group of circuits544(e.g., the circuits in the even row(s) of the output circuit540, these circuits including one or more match output units). The first group of circuits542is configured to cooperate with a first group of CAM cells of the CAM cell array50(e.g., the CAM cells in the odd row(s) of the CAM cell array50) according to the aforementioned first group of control signals OE_ODD. The second group of circuits544is configured to cooperate with a second group of CAM cells of the CAM cell array50(e.g., the CAM cells in the even row(s) of the CAM cell array50) according to the aforementioned second group of control signals OE_EVEN. The second group of CAM cells does not include any CAM cell of the first group of CAM cells inFIG.8. In light of the above, the match output operations of the first group of circuits542and the second group of circuits544are staggered due to the above-mentioned grouping and time-division control, and thus the overall match output units of the output circuit540won't be activated simultaneously, the instantaneous maximum power of the output circuit540is reduced, and the EM/IR problem is mitigated. FIG.9ashows an embodiment of the time-division controller510. As shown inFIG.9a, the time-division controller510includes an enablement signal generating circuit910and a delay circuit920. The enablement signal generating circuit910(e.g., a logic circuit implementing a finite state machine (FSM) whose actions are determined according to the demand for implementation) is configured to generate the first group of control signals (i.e., the aforementioned SE_EN_ODD/PR_EN_ODD/OE_ODD) and the second group of control signals (i.e., the aforementioned SE_EN_EVEN/PR_EN_EVEN/OE_EVEN) according to the system clock SCLK in the aforementioned search and compare operation. The first group of control signals and the second group of control signals are used for enabling the aforementioned first group of circuits522/532/542and the aforementioned second group of circuits524/534/544. The delay circuit920is configured to delay the second group of control signals and thereby make the time-division controller510output the second group of control signals at the second time point later than the first time point.
It is noted that the delay amount caused by the delay circuit920can be determined according to a circuit simulation result or in a known/self-developed manner. Please refer toFIGS.5-9a. On condition that the time-division controller510is configured to control at least one of the read/write auxiliary circuit520and the pre-charge circuit530, the first time point is synchronous with a trigger time point of the system clock; for example,FIG.9bshows the timing diagram of the signals inFIG.9aunder the above-mentioned condition, and the first time point (i.e., the time point when the voltage levels of the first group of control signals rise up) is synchronous with the trigger time point of the system clock SCLK (i.e., the time point when the voltage level of the system clock SCLK rises up). On condition that the time-division controller510is configured to control the output circuit540, the first time point is later than a trigger time point of the system clock SCLK; for example,FIG.9cshows the timing diagram of the signals inFIG.9aunder the above-mentioned condition, and the first time point (i.e., the time point when the voltage levels of the first group of control signals rise up) is later than the trigger time point of the system clock SCLK (i.e., the time point when the voltage level of the system clock SCLK rises up), wherein the interval between the two time points is predetermined and is not shorter than the time for the aforementioned first group of CAM cells finishing the operation (i.e., the search and compare operation or the pre-charge operation). It is noted that the second time point (i.e., the time point when the voltage levels of the second group of control signals rise up) is later than the first time point inFIGS.9b-9c. FIG.10shows another embodiment of the time-division controller510. As shown inFIG.10, the time-division controller510includes a first enablement signal generating circuit1010and a second enablement signal generating circuit1020. The first enablement signal generating circuit1010(e.g., a logic circuit implementing a finite state machine (FSM) whose actions are determined according to the demand for implementation) is configured to generate the first group of control signals (i.e., the aforementioned SE_EN_ODD/PR_EN_ODD/OE_ODD) according to the system clock SCLK in the search and compare operation and thereby activate the aforementioned first group of circuits522/532/542. After the first group of circuits522/532/542cooperates with the first group of CAM cells according to the first group of control signals, the second enablement signal generating circuit1020(e.g., a logic circuit implementing a FSM whose actions are determined according to the demand for implementation) is configured to generate the second group of control signals (i.e., the aforementioned SE_EN_EVEN/PR_EN_EVEN/OE_EVEN) and thereby activate the aforementioned second group of circuits524/534/544. The second enablement signal generating circuit1020receives a feedback signal (FB) of the first group of circuits522/532/542and accordingly determines whether the first group of circuits522/532/542has already cooperated with the first group of CAM cells according to the first group of control signals. It is noted that the timing diagrams of the system clock SCLK, the first group of control signals, and the second group of control signals ofFIG.10are illustrated withFIGS.9b-9c. Please refer toFIGS.5-10. 
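The two triggering schemes can be summarized in a short behavioral sketch (software pseudocode rather than circuitry); the tick-based timing model and all names are assumptions for illustration:

#include <stdint.h>
#include <stdbool.h>

/* Behavioral sketch of the two ways the time-division controller 510
 * can derive the second group of control signals: a fixed delay after
 * the first group (FIG. 9a) or a feedback signal from the first group
 * of circuits (FIG. 10). */
struct td_controller {
    bool     use_feedback;    /* false: FIG. 9a delay; true: FIG. 10 feedback */
    uint32_t delay_ticks;     /* delay amount, e.g., from circuit simulation */
};

bool first_group_enable(uint32_t now, uint32_t trigger)
{
    return now >= trigger;    /* first group rises with the SCLK trigger */
}

bool second_group_enable(const struct td_controller *c, uint32_t now,
                         uint32_t trigger, bool feedback_done)
{
    if (c->use_feedback)
        return feedback_done;                   /* FB: first group has finished */
    return now >= trigger + c->delay_ticks;     /* delayed second time point */
}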
In an exemplary implementation, the time-division controller510controls a first peripheral circuit (e.g., the read/write auxiliary circuit520) and a second peripheral circuit (e.g., the pre-charge circuit530) of the multiple peripheral circuits, wherein the control over the first peripheral circuit and the control over the second peripheral circuit are synchronous, the first peripheral circuit is divided into N groups, the second peripheral circuit is divided into N groups, and the N is an integer greater than one. The time-division controller510outputs the first group of control signals at the aforementioned first time point and outputs the second group of control signals at the aforementioned second time point, and thereby controls the first peripheral circuit. The time-division controller510outputs a third group of control signals at the first time point and outputs a fourth group of control signals at the second time point, and thereby controls the second peripheral circuit. To be more specific, the second peripheral circuit includes a third group of circuits and a fourth group of circuits; the third group of circuits is configured to cooperate with the first group of CAM cells of the CAM cell array50according to the third group of control signals; and the fourth group of circuits is configured to cooperate with the second group of CAM cells of the CAM cell array50according to the fourth group of control signals. Please refer toFIGS.5-10. In an exemplary implementation, the time-division controller510controls a first peripheral circuit (e.g., the read/write auxiliary circuit520or the pre-charge circuit530) and a second peripheral circuit (e.g., the output circuit540) of the multiple peripheral circuits, wherein the control over the first peripheral circuit and the control over the second peripheral circuit are asynchronous, the first peripheral circuit is divided into N groups, the second peripheral circuit is divided into N groups, and the N is an integer greater than one. The time-division controller510outputs the first group of control signals at the aforementioned first time point and outputs the second group of control signals at the aforementioned second time point, and thereby controls the first peripheral circuit. The time-division controller510outputs a third group of control signals at a third time point and outputs a fourth group of control signals at a fourth time point, and thereby controls the second peripheral circuit. The third time point is earlier than the fourth time point but later than the first time point, and the fourth time point is later than the second time point. To be more specific, the second peripheral circuit includes a third group of circuits and a fourth group of circuits; the third group of circuits is configured to cooperate with the first group of CAM cells of the CAM cell array50according to the third group of control signals; and the fourth group of circuits is configured to cooperate with the second group of CAM cells of the CAM cell array50according to the fourth group of control signals. Please refer toFIGS.5-10. 
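A compact sketch of the synchronous and asynchronous schedules described above, for two peripherals each divided into two groups, is given below; the tick values and the uniform shift are illustrative assumptions only:

#include <stdint.h>
#include <stdbool.h>

/* In the synchronous case the third/fourth groups share the first/second
 * time points; in the asynchronous case (e.g., for the output circuit)
 * they are shifted later, so that t1 < t3 < t4 and t2 < t4. */
struct schedule { uint32_t t1, t2, t3, t4; };

struct schedule make_schedule(bool synchronous, uint32_t t1, uint32_t t2,
                              uint32_t shift)
{
    struct schedule s = { t1, t2, t1, t2 };
    if (!synchronous) {
        s.t3 = t1 + shift;   /* third time point: later than t1, earlier than t4 */
        s.t4 = t2 + shift;   /* fourth time point: later than t2 */
    }
    return s;
}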
In an exemplary implementation, the time-division controller510controls a first peripheral circuit (e.g., the read/write auxiliary circuit520) and a second peripheral circuit (e.g., the pre-charge circuit530) of the multiple peripheral circuits, wherein the control over the first peripheral circuit and the control over the second peripheral circuit are asynchronous, the first peripheral circuit is divided into N groups of circuits (e.g., two groups of circuits) and the second peripheral circuit is divided into M groups of circuits (e.g., four groups of circuits), wherein both the N and the M are integers greater than one, and the N is equal to or different from the M. The time-division controller510outputs the first group of control signals at the aforementioned first time point and outputs the second group of control signals at the aforementioned second time point, and thereby controls the first peripheral circuit; in addition, the time-division controller510outputs a third group of control signals at a third time point and outputs a fourth group of control signals at a fourth time point, and thereby controls the second peripheral circuit. The third time point is earlier than the fourth time point, and can be synchronous with the first time point; the fourth time point is later than the third time point, and can be equal to or earlier/later than the second time point. To be more specific, the second peripheral circuit includes a third group of circuits and a fourth group of circuits; the third group of circuits is configured to cooperate with the first group of CAM cells of the CAM cell array50according to the third group of control signals; and the fourth group of circuits is configured to cooperate with the second group of CAM cells of the CAM cell array50according to the fourth group of control signals. When the N is different from the M, a number of CAM cells (hereinafter referred to as “the number Y”) of the third/fourth group of CAM cells is different from a number of CAM cells (hereinafter referred to as “the number X”) of the first/second group of CAM cells; for example, if the M is equal to the N multiplied by two, the Y is equal to the X divided by two (i.e., if M=2N, Y=X/2). Please refer toFIGS.5-10. Any of the multiple peripheral circuits can be divided into two or more groups of circuits (e.g., three groups of circuits). In an exemplary implementation, the time-division controller510controls a certain peripheral circuit (e.g., the read/write auxiliary circuit520, the pre-charge circuit530, or the output circuit540), wherein the certain peripheral circuit is divided into N groups of circuits including a first group of circuits, a second group of circuits, and a third group of circuits, and the N is an integer greater than two. The time-division controller510outputs the first group of control signals at the aforementioned first time point and outputs the second group of control signals at the aforementioned second time point, and thereby controls the first group of circuits and the second group of circuits respectively. In addition, the time-division controller510outputs a third group of control signals at a third time point and thereby controls the third group of circuits, wherein the third time point is later than the second time point.
To be more specific, the first group of circuits, the second group of circuits, and the third group of circuits of the certain peripheral circuit are configured to control a first group of CAM cells, a second group of CAM cells, and a third group of CAM cells of the CAM cell array50according to the first group of control signals, the second group of control signals, and the third group of control signals respectively. The three groups of CAM cells do not overlap, which means that any CAM cell in one of the three groups of CAM cells is different from any CAM cell in the other two groups of CAM cells. It should be noted that people having ordinary skill in the art can selectively use some or all of the features of any embodiment in this specification or selectively use some or all of the features of multiple embodiments in this specification to implement the present invention as long as such implementation is practicable; in other words, the way to implement the present invention can be flexible. To sum up, the time-division memory control device of the present disclosure controls a CAM in a time-division manner, and thereby reduces a peak current and solves the EM/IR problems. The aforementioned descriptions represent merely the preferred embodiments of the present invention, without any intention to limit the scope of the present invention thereto. Various equivalent changes, alterations, or modifications based on the claims of the present invention are all consequently viewed as being embraced by the scope of the present invention.
11861214 | DETAILED DESCRIPTION OF EMBODIMENTS Currently, as part of a computer recycle event (also called a recycling event herein), computer systems and accompanying non-volatile memories are sanitized, updated, and/or otherwise prepared for transfer to another user (e.g., a customer). Generally, regions of non-volatile memory devices that are subject to alteration by users are blindly erased (i.e., wiped) and then rewritten with the latest known good data. However, this approach can lead to extended downtimes for the computer systems being recycled. Not only is the wiping and rewriting process inefficient and tedious, but the computer systems are often taken offline to perform the operations. The time during which a computer system is unavailable for use by others is called “downtime” herein. Various embodiments discussed more fully below provide systems, methods, and functionality that enable substantial performance improvements over such approaches, e.g., in part, by interleaving inspection (including forensics analysis) of the non-volatile memories of computers used by prior customers with the simultaneous or concurrent writing of new known good data to alternate non-volatile memory devices during a background update process, which is part of a recycling process or operation, as discussed more fully below. For the purposes of the present discussion, a computing environment may be any collection of computing resources used to perform one or more tasks involving computer processing. A computer may be any processor in communication with a memory. A computing resource may be any component, mechanism, or capability or quantities thereof of a computing environment, including, but not limited to, processors, memories, software applications, user input devices, and output devices, servers, data, and so on. A networked computing environment may be any computing environment that includes intercommunicating computers, i.e., a computer network. Similarly, a networked software application may be computer code that is adapted to facilitate communicating with or otherwise using one or more computing resources, e.g., servers, via a network. For the purposes of the present discussion, a server may be any computing resource, such as a computer and/or software that is adapted to provide content, e.g., data and/or functionality, to another computing resource or entity that requests it, i.e., the client. A client may be any computer or system that is adapted to receive content from another computer or system, called a server. A server system may be any collection of one or more servers and accompanying computing resources. A data center may be any collection of one or more buildings or facilities for accommodating plural computer systems, e.g., servers, and other cloud-based computing resources. Cloud-based computing resources may be any computing resources accommodated by a data center or other collection of one or more intercommunicating servers. A cloud service may be any mechanism (e.g., one or more web services, Application Programming Interfaces (APIs), etc.) for enabling a user and/or other software application to employ data and/or functionality provided via a cloud. A cloud may be any collection of one or more servers. For example, certain clouds are implemented via one or more data centers with servers that may provide data, data storage, and other functionality accessible to client devices.
Certain data centers may provide centralized locations for concentrating computing and networking equipment for users to access, consume, and store large amounts of data. Often in collections of computing systems, e.g. cloud computing systems, common resources such as processors and memory are configured for different users to utilize in turn. Such computing collections utilize rewritable memory, e.g. flash memory, that can be erased once one user is done with it and rewritten for a next user. For example, a cloud service provider must ensure that when a new user begins accessing a cloud resource, the resource is configured properly for a subsequent user and any information from a prior user is unavailable. For clarity, certain well-known components, such as the Internet, hard drives, processors, power supplies, routers, Internet Service Providers (ISPs), Input/Output (I/O) workflow orchestrators, process schedulers, identity management clouds, process clouds, certificate authorities, business process management systems, database management systems, middleware, and so on, are not necessarily explicitly called out in the figures. However, those skilled in the art with access to the present teachings will know which components to implement and how to implement them to meet the needs of a given implementation. FIG.1illustrates a first example system10and accompanying computing environment employing a restoration backend22to facilitate selectively implementing background updating of computing resources, as may be implemented via one or more cloud services and/or Application Programming Interfaces, and/or code libraries, such as Root of Trust (ROT) libraries, which may run among or be included within a control plane cloud services module24. The example system10includes one or more client systems12in communication with a server system14, e.g., via the Internet or other network. The server system14may be implemented via a data center and may represent a cloud. Note that in general, groupings of various modules of the system10are illustrative and may vary, e.g., certain modules may be combined with other modules or implemented inside of other modules, or the modules may otherwise be distributed differently (than shown) among a network or within one or more computing devices or virtual machines, without departing from the scope of the present teachings. For example, while a switch28(as discussed more fully below) is shown included in a restoration backend22, the switch28may be considered outside of the restoration backend22, without departing from the scope of the present teachings. Similarly, a service processor38of a front-end processing module18may be considered part of the one or more server-side computer systems20, as opposed to part of the front-end processing module18, without departing from the scope of the present teachings. Furthermore, an alternative grouping and arrangement of modules of a system, which may be readily adapted for use with the present teachings (and associated embodiments discussed herein) by those skilled in the art, is discussed more fully in the above-identified and incorporated U.S. patent application, entitled CONFIGURABLE MEMORY DEVICE CONNECTED TO A MICROPROCESSOR. In the present example embodiment, the client system12includes client-side software16for facilitating accessing data and functionality provided by the server system14. 
The example server system14includes a front-end processing module18, which may be implemented via one or more web services and/or Application Programming Interfaces (APIs) and associated processors, including a service processor38. For the purposes of the present discussion, software functionality may be any function, capability, or feature, e.g., stored or arranged data, that is provided via computer code, i.e., software. Software functionality may include actions, such as retrieving data pertaining to a computing object (e.g., business object); performing an enterprise-related task, calculating analytics, launching certain dialog boxes, performing searches, implementing forensics analysis algorithms on memory devices, and so on, as discussed more fully below. The front-end processing module18communicates with one or more first computer systems20, resources of which are selectively leased to users, e.g., customers of a proprietor of the server system14. Generally, such a first computer system20includes one or more first memory devices30used to facilitate operations of the first computer system20. Note that the term “memory device” may be used interchangeably with the term “memory” herein. Requests to perform operations (that a user of the client system12directs the first computer system20to perform) are handled by the service processor38of the front-end processing module18, which facilitates interfacing the client system12with server-side computing resources, including the first memory device30(and accompanying firmware, data, etc.) of the first computer20, in addition to other computing resources (e.g., processing, operating system software, application software, and so on) provided by the computer system20. When the service processor38is handling messaging from the client system12, which may thereby affect use of the first memory device30, the first memory device30is said to be online or active, i.e., it is being used or is available for use by the client system12via the front-end processing module18. Similarly, when the first memory device30is electronically isolated (e.g., via a switch28, as discussed more fully below) from the client system12and front-end processing module18, the first memory device30is said to be offline or non-active. Note that the service processor38may include functionality similar to a Baseboard Management Controller (BMC) for monitoring server (computer system) hardware and communicating with various peripherals, e.g., Field-Programmable Gate Arrays (FPGAs), Basic Input/Output Systems (BIOSs), etc. In the present example embodiment, the restoration backend22includes various modules26,28,32-36and associated functionality involved in implementing background updating and associated processing of one or more offline memory devices32-36. The restoration backend22communicates with the service processor38of the front-end processing18and with the first memory device30of the first computer system20. The various modules26,28,32-36of the restoration backend22include a Root of Trust Processor (ROT)26(also called ROT processor herein). The ROT26implements functionality for securely interfacing one or more cloud services of the control plane cloud services module24with one or more of the memory devices32-36of the restoration backend22, via the switch28. The ROT26may issue one or more control signals to the switch28, e.g., to selectively control switching operation of the switch28, as discussed more fully below.
The switch 28 selectively couples (e.g., in response to one or more control signals issued by the ROT processor 26) the service processor 38 with one or more of the memory devices 30-36. In the present embodiment, the service processor 38 is electrically coupled to the first memory device 30 via the switch 28. For illustrative purposes, the first memory device 30 is shown included in the first computer system 20. However, the first memory device 30 may or may not be included within the first computer system 20.

The switch 28 includes functionality for selectively electrically disconnecting communications between the service processor 38 and the first memory device 30, and reconnecting the service processor 38 to one or more other memories, e.g., one of the currently offline memory devices 32-34. This switching and reconnecting is called swapping herein. For instance, if the ROT 26 issues a control signal to the switch 28 to disconnect the service processor 38 from the first memory device 30, electronically place the first memory device 30 offline, and then connect to the third memory device 34, then the third memory device 34 may be considered part of the first computer system 20, to the extent that the third memory device 34 can then be used by the first computer system 20, whereas the prior first memory 30 gets electronically moved into the restoration backend 22 via the switch 28.

Note that the ROT processor 26 also communicates with one or more cloud services (e.g., web services, Application Programming Interfaces (APIs), etc.) of the control plane cloud services module 24. In the present example embodiment, the control plane cloud services module 24 includes modules for implementing functionality (effectuated through the ROT processor 26) to implement forensics analysis and data writing to one or more of the offline memories 32-36 that may be undergoing backend processing, e.g., preparation for being usable to replace the first memory device 30 when another user is assigned the first computer system 20. By selectively using offline backend processing to prepare the memory devices 32-36 for subsequent use in association with the first computer system 20, while the first memory device 30 remains online, downtime for the first computer system 20 can be obviated.

For instance, when the first computer system 20 is relinquished by a first user of the client system(s) 12, after using the first memory device 30, the ROT processor 26 can detect this event, e.g., via signaling from one or more cloud services of the control plane cloud services module 24. Upon detection of such a relinquishing event, the ROT processor 26 may then use the switch 28 to electronically swap out the first memory device 30. For instance, the first memory device 30 may be electronically swapped out and replaced with the third memory device 34 that has been sanitized, updated, and otherwise processed in preparation for use by a subsequent user of the first computer system 20. This swapping time, happening upon turnover of the first computer system 20 from a first user to a second user (or in preparation for turnover to a second user), may be close to zero, such that downtime of the computer system 20 is virtually eliminated. Note that embodiments discussed herein include additional beneficial technology, not just the offline background processing for the purposes of implementing recycling operations.
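By way of non-limiting illustration only, the swapping behavior described above may be sketched as follows in Python. The class and function names (MemorySwitch, swap, on_relinquish_event) and the list-based device model are illustrative assumptions and do not correspond to any interface defined by the present teachings.

    class MemorySwitch:
        """Hypothetical model of switch 28: maps the service processor to one online device."""
        def __init__(self, online_device, offline_devices):
            self.online_device = online_device            # e.g., first memory device 30
            self.offline_devices = list(offline_devices)  # e.g., devices 32 and 34

        def swap(self, replacement):
            """Place `replacement` online; move the old online device to the backend."""
            assert replacement in self.offline_devices, "replacement must be offline and prepared"
            self.offline_devices.remove(replacement)
            old = self.online_device
            self.online_device = replacement   # near-zero downtime: an electrical reconnect
            self.offline_devices.append(old)   # old device is now available for recycling
            return old

    def on_relinquish_event(switch, prepared_device):
        # Models the ROT processor 26 reacting to a relinquish event signaled
        # by the control plane cloud services module 24.
        stale = switch.swap(prepared_device)
        return stale  # handed to the restoration backend for forensics analysis

In this sketch, the swap itself is a constant-time pointer exchange, which is why the turnover time described above can be close to zero.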
For instance, as discussed more fully below, the control plane cloud services module 24 includes one or more cloud services (e.g., web services) or other modules for implementing forensics analysis of one or more of the offline memory modules 32-36, in addition to functionality for writing any requisite new data and/or updates to previously forensically analyzed memory devices.

Note that, after forensics analysis is performed on a given offline memory device 32-36, e.g., so as to confirm that the memory device has not been tampered with (e.g., so as to confirm that data and/or firmware or software thereon has not been modified to be different from data and/or firmware that is indicated via a known source of trust), the memory device need not be wiped. Instead, any new updates, e.g., firmware and/or data updates, can be written to the memory device 32-36, without first requiring a wipe of virtually the entire memory device 32-36. In some cases, e.g., when no firmware and/or data updates are available, no additional data will need to be written to the memory device (e.g., one of the memory devices 32-36) after it passes forensics analysis (e.g., to confirm that no data and/or firmware has changed or is otherwise inappropriate) for the memory device to be considered ready for reuse by a subsequent user of the computer system 20. Use of the forensics analysis may also facilitate detecting any malicious activity on the server system 14, which might otherwise go undetected when relying merely upon memory wipes to prepare memory devices, i.e., to recycle memory devices for subsequent use in association with the first computer system 20 by a subsequent user.

Note that while in the present example embodiment the restoration backend 22 operates on offline memory devices 32-36 to prepare them for subsequent online use on the computer system 20, embodiments are not limited thereto. For example, in certain implementations, one or more cloud services of the control plane cloud services module 24 may employ the ROT processor 26 to inspect, i.e., perform forensics analysis on, an online memory, e.g., the first memory device 30, without departing from the scope of the present teachings.

In the present example implementation, cloud services of the control plane cloud services module 24 are used to forensically inspect the second memory device 32, while any new data (e.g., updates to firmware and/or data) is written to the third memory device 34. At that point, the third memory device 34 had previously been in the (electronic) position of the second memory device 32, such that forensic inspection has already been performed on the third memory device 34 before any new data is written thereto. Note that both the second memory 32, undergoing forensics analysis and/or inspection, and the third memory 34, which may be having new data simultaneously written thereto, are currently offline in the embodiment of FIG. 1. Note, however, that embodiments are not limited to the simultaneous backend processing of multiple memory devices (e.g., the second memory device 32 and the third memory device 34). For instance, in certain scenarios, a single offline memory device may pass through forensics processing and any data (and/or firmware) writing, rewriting, and/or updating, while the first memory device 30 is currently in use by the first computer system 20 and the accompanying first user of the client system 12.
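The wipe-avoidance decision described above may be sketched as follows. This is a minimal, hypothetical illustration in which device.measure(), source_of_truth.expected_digest(), and the other helpers are assumed names, not interfaces defined by the present teachings.

    def prepare_offline_device(device, source_of_truth, updates=None):
        """Sketch: wipe only when forensics analysis fails."""
        if device.measure() == source_of_truth.expected_digest():
            # Integrity confirmed: no wipe needed; apply only pending updates, if any.
            if updates:
                device.write(updates)
            return "ready"
        # Tampering or corruption detected: fall back to a full sanitize and rewrite.
        device.erase_all()
        device.write(source_of_truth.golden_image())
        return "sanitized"

The design point illustrated is that the expensive erase path is taken only on a failed integrity check, rather than unconditionally on every recycle.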
By the time that the user relinquishes the first computer system 20 (and the first memory device 30 must be processed by the restoration backend 22 before use by another customer, and a sanitized and updated second memory must then be placed online for use by the first computer system 20), the background processing of the second memory device 32 will likely have been completed. However, the embodiment discussed more fully below may process and hold two of the memory devices 32-36 (e.g., the second memory device 32 and the third memory device 34) offline until the first computer system 20 and associated first memory device 30 are relinquished by the user.

Accordingly, in the present example embodiment, after any new data is written to the third memory device 34 while forensics analysis is being performed on the second memory device 32, and after forensics analysis is completed on the second memory device 32, a waiting period may begin. The waiting period may involve waiting until the first user of the first computer system 20 relinquishes the first computer system 20. When the first user relinquishes the first computer system 20, the first memory device 30 is taken offline and electronically positioned (via the switch 28, via one or more control signals from the ROT processor 26) in place of the second memory device 32, e.g., in preparation for forensics analysis implemented, at least in part, via one or more cloud services of the control plane cloud services module 24. Simultaneously, the already prepared (background processed in preparation for swapping out the first memory device 30) third memory device 34 is electronically put online for the first computer system 20, while the first memory device 30 is taken offline. In this case, the third memory device 34 is said to take the position of the first memory device 30.

Then, the first memory device 30 will be offline and will take the place of the second memory device 32, and the second memory device 32 will take the place of the third memory device 34. Accordingly, forensics analysis will then be performed on the now offline first memory device 30. Upon the next recycle event (also called a recycling event), the first memory will then move to the next position (corresponding to the position of the third memory device 34 in FIG. 1), and the third memory device (having had any new data written thereto) will advance to online status, thereby electronically replacing the prior memory device used by the first computer system 20.

The cycle of selectively swapping memory devices 30, 32, 34 in a virtual circular queue that includes two processing steps (forensics analysis and new data writing) ensures that efficient forensics analysis and data writing can be performed via the restoration backend while a user may be executing loads on the first computer system 20 and using whichever memory is currently online, e.g., the first memory 30 is shown as currently online in FIG. 1. Once the user relinquishes the first computer system 20, a background processed memory device, e.g., the third memory device 34, may be swapped into position (i.e., placed online) for use by the first computer system 20. Note that while the present example embodiment discussed above uses at least three memory devices as part of a three-stage loop, additional or fewer stages may be employed, without departing from the scope of the present teachings.
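The three-stage virtual circular queue described above may be illustrated with the following minimal Python sketch. The stage names (online, inspect, write) and the rotation-based model are illustrative assumptions; only the advancement of each device by one stage per recycle event is taken from the description above.

    from collections import deque

    # Stage 0 is online, stage 1 is undergoing forensics inspection,
    # stage 2 is having new data written to it.
    positions = deque(["memory_30", "memory_32", "memory_34"])

    def recycle_event(positions):
        """On relinquish: the write-stage device goes online; the others advance."""
        positions.rotate(1)   # [30, 32, 34] -> [34, 30, 32]
        return positions[0]   # memory_34 is now online; memory_30 awaits inspection

Running recycle_event repeatedly walks each device through the loop online -> inspect -> write -> online, matching the position advancement described above.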
FIG. 2 illustrates a second example system 50 and accompanying computing environment that may be implemented by the first example system 10 of FIG. 1, and which further illustrates example details of the control plane cloud services module 24 used to selectively employ forensics and background updating. Note that while the first online memory device 30 of FIG. 1 is not shown in FIG. 2, the first computer system 20 of FIG. 2 also includes (or otherwise communicates with or owns) the first memory device 30 of FIG. 1.

The example control plane cloud services module 24 includes a controller 54 in communication with an inspector service (also called the inspector module herein) 56, a memory rewriter service (also called the memory rewriter module herein) 58, and a source of truth 60. The source of truth 60 includes data, firmware images, and any other information that may be needed for the inspector module 56 to conduct forensics analysis of one or more memory devices, e.g., the second memory device 32 and the third memory device 34, and any information, e.g., data and/or firmware updates, that may be written to one or more of the memory devices 32, 34.

In the present example embodiment, the inspector module 56 and the memory rewriter 58 (and/or writer, as it may also simply perform a write function, without departing from the scope of the present teachings) communicate with the memory devices 32, 34 through a selective memory swapping system 52 that includes the restoration backend 22. The memory swapping system 52 is called a swapping system, as it facilitates (e.g., via the switch 28 of FIG. 1) selectively electronically taking non-volatile memory devices offline (in preparation for recycling via background processing), and placing recycled memory devices online, e.g., when a computer system changes hands, e.g., is passed from use by one user or customer to another user or customer.

Note that, for illustrative purposes, the offline memories, i.e., the second memory device 32 and the third memory device 34, are shown included on the first computer system 20, even though the memory devices 32, 34 are currently offline and electronically disconnected from any processing that can be performed by a user of the first computer system 20. Accordingly, the physical locations of the offline memory devices 32, 34 may vary, without departing from the scope of the present teachings. Furthermore, the memory devices 32, 34 are shown including or representing plural memory devices, e.g., host Network Interface Controller (NIC) memory, a smart NIC, a BIOS memory, a Field Programmable Gate Array memory, and so on. Note that each of the memory devices 32, 34 can represent one or more of, or any combination of, such non-volatile memory devices. In the present example embodiment, the second memory 32, the third memory 34, and the first memory 30 are non-volatile memories, i.e., they retain memory when power is disconnected from the associated integrated circuits.

In the present example embodiment, the inspector service 56 includes code and associated functionality for performing forensics analysis, which may include employing hashes and associated functions, Cyclic Redundancy Checks (CRCs), and/or other forensics techniques to verify the integrity of data and/or firmware maintained on the non-volatile memories 32, 34 being background processed, e.g., by the control plane cloud services module 24 in communication with the restoration backend 22 (e.g., the ROT 26 thereof, as shown in FIG. 1).
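A hash-based integrity check of the kind the inspector service 56 may perform can be sketched as follows. The use of SHA-256 and the record layout of the source of truth are assumed concrete choices for illustration; the description above specifies only "hashes and associated functions."

    import hashlib

    def inspect_firmware(image_bytes, source_of_truth):
        """Sketch of the inspector comparing a device's firmware digest
        against the source of truth 60."""
        measured = hashlib.sha256(image_bytes).hexdigest()
        expected = source_of_truth["firmware_sha256"]  # assumed record layout
        return measured == expected  # True: integrity verified; no wipe required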
FIG. 3 illustrates example components 26, 28, 38 of the memory swapping system 52 of FIG. 2, which may be implemented via the first example system 10 of FIG. 1. In the present example embodiment, the selective memory swapping system 52 is shown including the ROT processor 26, the service processor 38 (of the front-end processing module 18 of FIG. 1), and the switch 28 of the restoration backend 22 of FIG. 1. Note that while the first online memory device 30 of FIG. 1 is not shown in FIG. 3, the selective memory swapping system 52 also communicates with the first memory device 30 of FIG. 1.

In FIG. 3, the control plane cloud services module 24 is shown communicating with the ROT 26 of the selective memory swapping system 52 via a first bus (Bus 1). The ROT 26 communicates with the switch 28 via a second bus (Bus 2), and the service processor 38 communicates with the switch 28 via a third bus (Bus 3). The switch 28 communicates with the second offline non-volatile memory 32 via a fourth bus (Bus 4). The switch 28 further communicates with the third non-volatile memory device 34 via a fifth bus (Bus 5).

Note that the switch 28 may be implemented via various technologies. Those skilled in the art with access to the present teachings may readily determine the appropriate switch architecture to meet the needs of a given implementation, without undue experimentation. In one implementation, the switch 28 can be implemented using one or more crossbar switches and/or other switching mechanism(s) for selectively switching (e.g., connecting and/or disconnecting) one or more input terminals to one or more output terminals.

FIG. 4 is a first sequence flow diagram illustrating an example conventional approach to recycling a memory device 82 in a computer system via a cloud service 80, which may necessitate substantial computer system downtime, e.g., while the associated non-volatile memory 82 is recycled. The example flow includes a vertical time axis 88, where processing time increases in duration further down the axis 88. An initial data-erasing and pushing step includes issuing an erase and push operation control 84 that then launches a memory erasure process and subsequent writing process that takes substantial time, e.g., as indicated by an indicated time duration 86. After new data is received and written to the non-volatile memory 82, the recycle operation (also called the background processing operation) is complete, and the non-volatile memory 82 is ready for reuse, e.g., is ready to be put online as needed. An acknowledgement message 90 may be returned to the cloud service 80 to confirm that the erasure and rewriting operation completed.

However, conventionally, the entire time duration 86 represents time during which the computer system using the non-volatile memory 82 is non-operational. This represents computer downtime. Embodiments not only substantially eliminate such downtime 86, but further provide forensics analysis functionality and more efficient rewriting of data. For instance, depending upon the forensics analysis, not all data and firmware on the non-volatile memory device 82 necessarily must be rewritten, and sometimes no data and/or firmware will need to be written to, or rewritten to, the non-volatile memory 82. This can happen, for instance, when a first user relinquishes a computer system for use by a second user, but no modifications were made to the non-volatile memory 82 and no data and/or firmware updates are yet available for the non-volatile memory 82.
FIG. 5 is a second sequence flow diagram illustrating example communications exchanges 100 (and processing, including forensics analysis) between the cloud service 24 and the second non-volatile memory 32, and communications exchanges 102 (without forensics steps) between the cloud service 24 and the third memory device 34 of FIGS. 1-3. The flow of FIG. 5 further illustrates substantial time savings, including virtual elimination of computer system downtime for recycling events and memory swapping events in accordance with embodiments discussed herein.

Note that the example communications exchange between the control plane cloud services module 24 and the third memory device 34 involves the pushing of any new data (e.g., updates, if available), and does not necessarily need to involve the erasure of any data (as performed via the approach shown in FIG. 4), as the integrity of the existing data of the non-volatile memory device 34 may have already been confirmed via prior forensics analysis, via embodiments discussed herein. Note that, generally, if the integrity of the existing non-volatile memory device 34 has already been confirmed to be good (and sufficiently recent for the purposes of a particular implementation), and no software or firmware update is available, then no data needs to be erased therefrom or written thereto.

In the present example embodiment, the forensics processing 100 includes the control plane cloud services module 24 issuing a request message 104 asking the second memory device 32 for a hash measurement. The second memory device 32 then processes the hash measurement request during a hash-processing step 106. After completing the hash-processing step 106 responsive to the hash request 104, the second memory device 32 then issues a responsive hash measurement 108 back to the control plane cloud services module 24. The control plane cloud services module 24 then processes the returned hash measurement 108 in a hash-measurement processing step 110.

After completing the hash-measurement processing step 110, the control plane cloud services module 24 asks, via a first data-requesting message 112, the second memory device 32 for any additional data needed based on the hash-measurement processing 110. The second memory device 32 then processes the first data-request message 112 during a data-request processing step 114, and then sends data back to the control plane cloud services module 24 via a data-sending message 116. The control plane cloud services module 24 then processes the retrieved data sent in the data-sending message 116, in a retrieved-data processing step 118.

After completing the retrieved-data processing step 118, the control plane cloud services module 24 issues a new-data request message 120 to the second memory device 32. The second memory device 32 then processes the new-data request message 120 in an associated new-data request processing step 122. In the present example scenario, the second memory device 32 determines, in the new-data request processing step 122, that no new data needs to be sent, or that it has already been sent. If so, then forensics analysis for the second memory device 32 is complete. A completion acknowledgement message (not shown) may then be sent from the second memory device 32 back to the control plane cloud services module 24.
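The exchange 100 of FIG. 5 may be summarized in code as follows. This is a non-authoritative sketch: the helper methods on the cloud and device objects are assumed, and the comments merely key each call to the message and step numerals described above.

    def forensics_exchange(cloud, device):
        """Sketch of forensics exchange 100 between module 24 and memory 32."""
        measurement = device.compute_hash()                    # steps 104, 106, 108
        needed_regions = cloud.check_measurement(measurement)  # step 110
        data = device.read_regions(needed_regions)             # steps 112, 114, 116
        cloud.verify_data(data)                                # step 118
        if not device.has_pending_new_data():                  # steps 120, 122
            return True   # forensics complete; an acknowledgement may follow
        return False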
Note that simultaneously or concurrently with the control plane cloud services module 24 issuing the initial hash measurement request message 104 to the second memory device 32, the control plane cloud services module 24 begins pushing new data to the previously forensically analyzed third memory device 34, e.g., via an initial push message 124. The push message 124 is then processed by the third memory device 34 in a push-message processing step 126. Note that the push-message processing step 126 is of relatively short duration (e.g., relative to the erasing and pushing duration 86 shown in the approach of FIG. 4), as it does not also require the erasing of any data that has already passed forensics inspection.

Furthermore, note that, while not shown in FIG. 5, communications between the control plane cloud services module 24 and the non-volatile memories 32, 34 may occur through the ROT 26 and switch 28, e.g., as shown in FIG. 1. In addition, note that the forensics processing 100 portion implemented by the control plane cloud services module 24 of FIG. 5 may be implemented by the accompanying inspector service or module 56 of FIG. 2. Similarly, the pushing (e.g., issuance of the push message 124) can be implemented via the memory rewriter (or writer) 58 of the control plane cloud services module 24, as shown in FIG. 2.

In summary, the second non-volatile memory 32 is checked (via forensics processing 100) for any corruption, and is validated. The validated portions need not be erased to prepare the memory for pushing new data. This may save additional time, in that the pushing of new data onto the second memory device 32 can be minimized, i.e., less data may need to be pushed, as the forensically checked areas can be preserved. The pushed new data may include updates, and so on, to certain regions of the already forensically validated third memory device 34.

In an alternative implementation and accompanying scenario, a user or customer may have just completed executing off of (i.e., using) the second memory device 32. In this alternative scenario, the third memory device 34 had already been staged with new data (e.g., and forensically analyzed) during the customer's prior use of the second memory device 32. In this case, the third memory 34 will be ready for use by a new customer, i.e., will be ready to be placed online via the restoration backend 22 and accompanying switch 28 and ROT 26 of FIG. 1. A new customer can then immediately begin executing off the third memory device 34 after it is switched online, while inspection and forensics are being performed on the second memory device 32. In this way, only two memories need to be included in the swapping operation. Note, however, that this alternative approach is not shown in FIG. 5, which instead shows simultaneous offline steps 104, 124 performed on the second memory device 32 and the third memory device 34. In the alternative scenario, the third memory device 34 will have already had data pushed thereto, e.g., data obtained from the source of truth 60 of FIG. 2 and then written as needed (e.g., via the memory rewriter or writer 58 of FIG. 2), after forensics analysis, e.g., corresponding to the forensics processing steps 100, as may be controlled via the control plane cloud services module 24.

FIG. 6 is a flow diagram of a first example method 130 suitable for use with various embodiments discussed herein.
The first example method 130 facilitates selectively verifying/validating (e.g., via forensics analysis) and/or updating one or more memory devices of a computer system of a computing environment in preparation for use by a subsequent user.

The first example method 130 includes a first step 132, which includes determining that a current first user of a first computer system (e.g., the first computer system 20 of FIG. 1) that is employing a first memory device (e.g., the first memory device 30 of FIG. 1) is slated to relinquish the first memory device at a future time. This can be done via a detection signal sent from one or more services running on the control plane cloud services module 24 of FIG. 1 to the ROT processor 26 of FIG. 1. The control plane cloud services module 24 of FIG. 1 may include one or more cloud services that are configured to detect when a given user has vacated or is scheduled to vacate a particular computer system (e.g., the first computer system 20 of FIG. 1), which can thereby alert one or more additional modules, e.g., the restoration backend 22, of the server system 14 of FIG. 1.

A second step 134 includes preparing a second memory device (e.g., the second memory device 32 of FIG. 1) to be interchanged with the first memory device (e.g., the first memory device 30 of FIG. 1) in preparation for use thereof by the first computer system and a second, subsequent user. The preparing, implemented via the second step 134, may further include performing forensics analysis (e.g., via the ROT 26 of FIG. 1, at the direction of one or more signals from the control plane cloud services module 24 of FIGS. 1 and 2, e.g., from the inspector module 56 of FIG. 2) on the second memory device (e.g., the second memory device 32 of FIG. 1), and then selectively updating or altering data or code on the second memory device in response to the performing.

A third step 136 includes detecting that the first computer system has been relinquished by the first user. This detection may also be obtained by the restoration backend 22 of FIG. 1 via signaling from the control plane cloud services module 24, which may incorporate one or more cloud services for performing such detection.

A fourth step 138 includes employing a memory swapping system (e.g., the swapping system 52 of FIG. 3) to electronically position the second memory device in place of the first memory device in response to the detecting, thereby enabling a second user to use the computer system in communication with the second memory device.

Note that the first example method 130 may be modified, without departing from the scope of the present teachings, e.g., additional steps may be added, modified, swapped with other steps, and so on. For example, the first example method 130 may be modified to further specify a step of employing a cloud service to communicate with the memory swapping system to facilitate preparing the second memory device to be interchanged with the first memory device, including employing the cloud service to communicate with a Root of Trust (ROT) processor (e.g., corresponding to the ROT processor 26 of FIG. 1) of the memory swapping system to facilitate the forensics analysis and the selective updating or altering of data. The first example method 130 may further specify employing the ROT processor in communication with a switch (e.g., the switch 28 of FIG. 1) to facilitate interchanging the first memory device with the second memory device.
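For illustration only, the four steps 132-138 described above may be summarized in the following Python sketch. Every helper name (await_relinquish_notice, forensics_pass, apply_updates, await_relinquished, swap) is an assumption introduced here; only the ordering of the steps is taken from the method 130.

    def method_130(control_plane, rot, swapping_system, first_dev, second_dev):
        """Sketch of example method 130 (steps 132-138)."""
        control_plane.await_relinquish_notice(first_dev)   # step 132: future relinquish detected
        if rot.forensics_pass(second_dev):                 # step 134: verify integrity, then
            rot.apply_updates(second_dev)                  # selectively update data or code
        control_plane.await_relinquished(first_dev)        # step 136: relinquish event occurs
        swapping_system.swap(first_dev, second_dev)        # step 138: second device goes online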
The first example method 130 may further specify use of a service processor (e.g., the service processor 38 of FIG. 1) in communication with the switch to facilitate interfacing one or more operations of the computer system, initiated by a user of the computer system, with the first memory device or the second memory device, depending upon whether the first memory device or the second memory device, respectively, has been prepared for use by the computer system. This happens when the first memory device or the second memory device is placed online, e.g., via the ROT processor and the accompanying switch of a restoration backend (e.g., the restoration backend 22 of FIG. 1).

The first example method 130 may further specify, in response to the detecting: taking the first memory device offline, while placing the second memory device online; conducting forensics analysis on the first memory device, and producing forensics results in response thereto; using the forensics results to determine data to subsequently write to the first memory device; selectively writing new data to a third memory device that has undergone forensics processing to determine the new data that should be written to the third memory device, so as to prepare the third memory device for electronic positioning in place of the second memory device when the second memory device and accompanying computer system are relinquished by a second user; detecting that the second user has relinquished the computer system; and electronically positioning the third memory device in place of the second memory device, such that: the second memory device is taken offline in preparation for forensics analysis; the third memory device is placed online for use by the computer system and a third user; and the first memory device is positioned in preparation for rewriting or updating data thereto, so as to prepare the first memory device for use by a fourth user, upon relinquishing of the computer system and the third memory device used by the third user.

The first memory device and the second memory device may represent non-volatile memories. The computing environment may be a networked computing environment, such as a cloud-based computing environment. The forensics analysis may include employing one or more hashes, Cyclic Redundancy Checks (CRCs), or other codes to ascertain an indication as to whether or not a set of data and/or computer code has been modified or otherwise tampered with, replaced, or augmented on the first memory device or the second memory device (an illustrative CRC sketch is provided below).

FIG. 7 is a general block diagram of a system 900 and accompanying computing environment usable to implement the embodiments of FIGS. 1-6. Embodiments may be implemented as standalone applications (for example, residing in a user device) or as web-based applications implemented using a combination of client-side and server-side code. The general system 900 includes user devices 960-990, including desktop computers 960, notebook computers 970, smartphones 980, mobile phones 985, and tablets 990. The general system 900 can interface with any type of user device, such as a thin-client computer, Internet-enabled mobile telephone, mobile Internet access device, tablet, electronic book, or personal digital assistant, capable of displaying and navigating web pages or other types of electronic documents and UIs, and/or executing applications. Although the system 900 is shown with five user devices, any number of user devices can be supported.
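Referring back to the forensics analysis options noted above, a CRC-based integrity check may be sketched as follows. The use of CRC-32 via Python's zlib module is an assumed concrete choice for illustration; the present teachings do not mandate a particular code.

    import zlib

    def crc_check(region_bytes, expected_crc):
        """Sketch: return True if a memory region's CRC-32 matches the
        value recorded for it (e.g., in the source of truth 60)."""
        return (zlib.crc32(region_bytes) & 0xFFFFFFFF) == expected_crc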
A web server 910 is used to process requests from web browsers and standalone applications for web pages, electronic documents, enterprise data or other content, and other data from the user computers. The web server 910 may also provide push data or syndicated content, such as RSS feeds, of data related to enterprise operations.

An application server 920 operates one or more applications. The applications can be implemented as one or more scripts or programs written in any programming language, such as Java, C, C++, C#, or any scripting language, such as JavaScript or ECMAScript (European Computer Manufacturers Association Script), Perl, PHP (Hypertext Preprocessor), Python, Ruby, or TCL (Tool Command Language). Applications can be built using libraries or application frameworks, such as Rails, Enterprise JavaBeans, or .NET. Web content can be created using HTML (HyperText Markup Language), CSS (Cascading Style Sheets), and other web technology, including templating languages and parsers.

The data applications running on the application server 920 are adapted to process input data and user computer requests and can store or retrieve data from data storage device or database 930. Database 930 stores data created and used by the data applications. In an embodiment, the database 930 includes a relational database that is adapted to store, update, and retrieve data in response to SQL format commands or other database query languages. Other embodiments may use unstructured data storage architectures and NoSQL (Not Only SQL) databases.

In an embodiment, the application server 920 includes one or more general-purpose computers capable of executing programs or scripts. In an embodiment, the web server 910 is implemented as an application running on the one or more general-purpose computers. The web server 910 and application server 920 may be combined and executed on the same computers.

An electronic communication network 940-950 enables communication between user computing devices 960-990, web server 910, application server 920, and database 930. In an embodiment, networks 940-950 may further include any form of electrical or optical communication devices, including wired network 940 and wireless network 950. Networks 940-950 may also incorporate one or more local-area networks, such as an Ethernet network; wide-area networks, such as the Internet; cellular carrier data networks; and virtual networks, such as a virtual private network.

The system 900 is one example for executing applications according to an embodiment of the invention. In another embodiment, application server 920, web server 910, and optionally database 930 can be combined into a single server computer application and system. In a further embodiment, virtualization and virtual machine applications may be used to implement one or more of the application server 920, web server 910, and database 930. In still further embodiments, all or a portion of the web and application serving functions may be integrated into an application running on each of the user computers. For example, a JavaScript application on the user computer may be used to retrieve or analyze data and display portions of the applications.

With reference to FIGS. 1 and 7, the client system(s) 12 of FIG. 1 may be implemented via one or more of the desktop computer 960, tablet 990, smartphone 980, notebook computer 970, and/or mobile phone 985 of FIG. 7. The server system 14 of FIG. 1 and accompanying modules 18-28 may be implemented via the web server 910 and/or application server 920 of FIG. 7.
The source of truth 60 of FIG. 2 may be implemented using the data storage device 930 of FIG. 7.

FIG. 8 illustrates a block diagram of an example computing device or system 1000, which may be used for implementations described herein. For example, the computing device 1000 may be used to implement the server devices 910, 920 of FIG. 7, as well as to perform the method implementations described herein. In some implementations, the computing device 1000 may include a processor 1002, an operating system 1004, a memory 1006, and an input/output (I/O) interface 1008. In various implementations, the processor 1002 may be used to implement various functions and features described herein, as well as to perform the method implementations described herein. While the processor 1002 is described as performing implementations described herein, any suitable component or combination of components of the computing device 1000, or any suitable processor or processors associated with the device 1000, or any suitable system may perform the steps described. Implementations described herein may be carried out on a user device, on a server, or a combination of both.

The example computing device 1000 also includes a software application 1010, which may be stored on memory 1006 or on any other suitable storage location or computer-readable medium. The software application 1010 provides instructions that enable the processor 1002 to perform the functions described herein and other functions. The components of computing device 1000 may be implemented by one or more processors or any combination of hardware devices, as well as any combination of hardware, software, firmware, etc.

For ease of illustration, FIG. 8 shows one block for each of processor 1002, operating system 1004, memory 1006, I/O interface 1008, and software application 1010. These blocks 1002, 1004, 1006, 1008, and 1010 may represent multiple processors, operating systems, memories, I/O interfaces, and software applications. In various implementations, the computing device 1000 may not have all of the components shown and/or may have other elements, including other types of components, instead of, or in addition to, those shown herein.

Although the description has been described with respect to particular embodiments thereof, these particular embodiments are merely illustrative, and not restrictive. For instance, although features may be described with respect to specific types of resources or operations, e.g., non-volatile memory, the features described herein may be applicable to other cloud computing resources and operations. Furthermore, while cloud computing is one example of a computing system described, where the memory restoration system may be implemented by a motherboard, the present memory restoration system may be employed in other computing environments in which a memory device or other electronic hardware is updated in the background. For example, network cards, hard drives, etc., may be updated without interfering with currently executing software.

Any suitable programming language can be used to implement the routines of particular embodiments, including C, C++, Java, assembly language, etc. Different programming techniques can be employed, such as procedural or object oriented. The routines can execute on a single processing device or multiple processors. Although the steps, operations, or computations may be presented in a specific order, this order may be changed in different particular embodiments.
In some particular embodiments, multiple steps shown as sequential in this specification can be performed at the same time.

Particular embodiments may be implemented in a computer-readable storage medium for use by or in connection with an instruction execution system, apparatus, system, or device. Particular embodiments can be implemented in the form of control logic in software or hardware or a combination of both. The control logic, when executed by one or more processors, may be operable to perform that which is described in particular embodiments. For example, a non-transitory medium such as a hardware storage device can be used to store the control logic, which can include executable instructions.

Particular embodiments may be implemented by using a programmed general purpose digital computer, by using application specific integrated circuits, programmable logic devices, field programmable gate arrays, optical, chemical, biological, quantum or nanoengineered systems, etc. Other components and mechanisms may be used. In general, the functions of particular embodiments can be achieved by any means as is known in the art. Distributed, networked systems, components, and/or circuits can be used. Cloud computing or cloud services can be employed. Communication, or transfer, of data may be wired, wireless, or by any other means.

It will also be appreciated that one or more of the elements depicted in the drawings/figures can also be implemented in a more separated or integrated manner, or even removed or rendered as inoperable in certain cases, as is useful in accordance with a particular application. It is also within the spirit and scope to implement a program or code that can be stored in a machine-readable medium to permit a computer to perform any of the methods described above.

A "processor" includes any suitable hardware and/or software system, mechanism or component that processes data, signals or other information. A processor can include a system with a general-purpose central processing unit, multiple processing units, dedicated circuitry for achieving functionality, or other systems. Processing need not be limited to a geographic location, or have temporal limitations. For example, a processor can perform its functions in "real time," "offline," in a "batch mode," etc. Portions of processing can be performed at different times and at different locations, by different (or the same) processing systems. Examples of processing systems can include servers, clients, end user devices, routers, switches, networked storage, etc. A computer may be any processor in communication with a memory. The memory may be any suitable processor-readable storage medium, such as random-access memory (RAM), read-only memory (ROM), magnetic or optical disk, or other non-transitory media suitable for storing instructions for execution by the processor.

As used in the description herein and throughout the claims that follow, "a," "an," and "the" include plural references unless the context clearly dictates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise.
Thus, while particular embodiments have been described herein, latitudes of modification, various changes, and substitutions are intended in the foregoing disclosures, and it will be appreciated that in some instances some features of particular embodiments will be employed without a corresponding use of other features without departing from the scope and spirit as set forth. Therefore, many modifications may be made to adapt a particular situation or material to the essential scope and spirit. | 51,606 |
11861215 | DETAILED DESCRIPTION OF THE EMBODIMENTS

The following discussion of the embodiments of the disclosure directed to an electronics assembly including a plurality of distributed midplanes, each connecting electronic components to a controller, is merely exemplary in nature, and is in no way intended to limit the disclosure or its applications or uses.

FIG. 1 is an isometric view of a known 2U data storage system 10 including a frame 12, where an outer chassis of the system 10 has been removed to expose the components therein. The storage system 10 is intended to represent any suitable data storage system consistent with the discussion herein and operates using any suitable protocol, such as peripheral component interconnect (PCI) express (PCIe), serial attached SCSI (SAS), open coherent accelerator processor interface (OpenCAPI), Gen-Z, cache coherent interconnect for accelerators (CCIX), and compute express link (CXL). The system 10 includes a plurality of aligned storage drives 14, for example, a row of twenty-four U2 drives, provided at a front of the system 10. The system 10 also includes a pair of stacked storage controllers 16 and 18 positioned at a rear of the system 10 and being operable to store and extract data among and between the drives 14, as is well known by those skilled in the art. The system 10 further includes a pair of PSUs 20 and 22 positioned adjacent to the controllers 16 and 18 and providing power for the system 10, and a number of fans 24 and heat sinks 26 for cooling purposes.

The system 10 also includes a midplane 30 positioned between the drives 14 and the controllers 16 and 18 and providing electrical connections therebetween in a known manner, where the midplane 30 includes a PCBA 32 having electrical traces to send signals among and between the drives 14 and the storage controllers 16 and 18. FIG. 2 is a front isometric view of the midplane 30 separated from the system 10 and showing a number of drive connectors 34 for connecting the drives 14 to the midplane 30. FIG. 3 is a rear isometric view of the midplane 30 separated from the system 10 and showing a number of controller connectors 36 for connecting the storage controllers 16 and 18 to the midplane 30. A special configuration of slots 38 and other openings extend through the PCBA 32 to allow airflow through the midplane 30 for cooling purposes. A PSU connector 28 provides connection to the PSUs 20 and 22. The electrical traces on the PCBA 32 provide signal paths between the connectors 34 and 36. It is often necessary to route the traces around the slots 38, which increases their length.

Thus, as discussed above, the electrical traces in the midplane 30 and the known configuration of the slots 38 are often not conducive to the higher signal speeds and cooling requirements that are being developed in the art. For example, if greater cooling is required, the slots 38 may need to be larger, which likely will increase the length of the traces. Further, some of the slots 38 often need to be larger than other of the slots 38 for cooling purposes. Because of this, some of the controller connectors 36 are electrically coupled to several of the drive connectors 34, which also requires increased trace length.

As will be discussed in detail below, this disclosure proposes replacing the single piece midplane 30 with a plurality of spaced apart distributed midplanes that allow for shorter signal traces between the connectors that connect to the drives 14 and the connectors that connect to the storage controllers 16 and 18, allow for the flow of air between the midplanes, and establish redundant communications paths between them.
The traces on the distributed midplanes can be very short to improve signal quality. In addition, the distributed midplanes reduce the total product cost compared to a traditional single piece midplane because they use a standard card edge connector that is low cost, instead of a pair of ultra-high-speed backplane connectors between the midplane and the storage controller; have less PCB manufacturing cost due to less complexity of the small midplane; have lower cost PCB raw material and fewer layer counts; and eliminate the need for re-timers.

FIG. 4 is a top view and FIG. 5 is a side view of a simplified illustration of a data storage system 40 of the type shown in FIG. 1. The system 40 includes two stacked rows 42 and 44 of twenty-four storage drives 46, for example, E3 drives in a 4U data storage system or E1 drives in a 2U data storage system, provided at a front of the system 40. The system 40 also includes a pair of stacked storage controllers 50 and 52 positioned at a rear of the system 40 and being operable to control data flow among and between the drives 46. The system 40 further includes a pair of PSUs 54 and 56 positioned adjacent to the controllers 50 and 52 and providing power for the system 40, where an input/output (I/O) area 58 is provided between the PSUs 54 and 56. The system 40 also includes twenty-four midplanes 60 distributed and spaced apart to provide spaces 62 therebetween. The system 40 further includes a number of fans 64 that provide airflow between the midplanes 60.

Each midplane 60 includes a connector 70 that is coupled to a connector 72 in one of the drives 46 in the top row 42 and a connector 76 that is coupled to a connector 74 in one of the drives 46 in the bottom row 44. Each midplane 60 also includes a connector 78 that is coupled to a connector 80 in the top storage controller 50 and a connector 82 that is coupled to a connector 84 in the bottom storage controller 52. FIG. 6 is a side view of one of the midplanes 60 separated from the system 40 and showing a number of signal traces 86 formed in a PCB 88 between the connectors 70, 76, 78 and 82. Diodes 88 control the flow of power from the controllers 50 and 52 to the drives 46. In one non-limiting embodiment, the connectors 72, 74, 78 and 82 are PCB golden fingers.

FIG. 7 is a side view of a 2U data storage system 90 similar to the system 40, where there is only a single row of the drives 46, and where like elements are defined by the same reference number. The system 90 includes modified distributed midplanes 92 where the connector 76 has been removed. FIG. 8 is a side view of one of the midplanes 92 separated from the system 90 and showing a number of signal traces 94 formed in a PCB 96 between the connectors 70, 78 and 82.

The foregoing discussion discloses and describes merely exemplary embodiments of the present disclosure. One skilled in the art will readily recognize from such discussion and from the accompanying drawings and claims that various changes, modifications and variations can be made therein without departing from the spirit and scope of the disclosure as defined in the following claims. | 6,560
11861216 | DETAILED DESCRIPTION

A memory system may transfer data stored in, for example, a buffer to a memory device in accordance with a set of barrier commands, where each barrier command may be associated with respective portions of the data stored in the buffer. For example, the memory system may transfer all of the data associated with one barrier command before transferring the data associated with a next barrier command, and so on. Barrier commands may be used to support recovery procedures performed by the memory system in the event of a data loss event (e.g., an asynchronous power loss). For example, the memory system may determine whether to retain or discard a set of data transferred from, for example, a buffer to a memory device (e.g., where the storage of the data has not yet been confirmed by higher-level processes) based on or in response to determining whether the set of data is associated with a barrier command for which all of the data has been written to the memory device. If the memory system determines that all of the data for the barrier command associated with the set of data has been correctly written, the memory system may retain the set of data. Otherwise, the memory system may discard the set of data.

When a memory system uses a first buffer (which may also be referred to as a cursor) for data having a first characteristic (e.g., single-level data) and a second buffer (which may also be referred to as a cursor) for data having a different characteristic (e.g., multi-level data), the techniques for using barrier commands to recover data after a data loss event may be insufficient, e.g., because data that is associated with a subsequent barrier command in the first buffer may be written before all of the data that is associated with the prior barrier command in the second buffer is written. Thus, the memory system may be unable to confirm that all of the data has been written for a prior barrier command based merely on identifying an index of a subsequent barrier command. Accordingly, the memory system may use other, less-efficient techniques to confirm whether sets of data stored in a memory device are valid when multiple buffers are used, e.g., the memory system may anticipate flushing in both buffers.

To increase performance of recovery operations when multiple storage locations, such as buffers, are used, enhanced techniques may be implemented to support data recovery using barrier commands. For example, a technique may be used that keeps track of the barrier commands associated with sets of data and of a last barrier command for which all of the associated data has been written (which may be referred to as the "last barrier command" or the "last flushed barrier command"). When recovering sets of data, the barrier commands associated with the sets of data may be compared against the last barrier command to determine whether a complete set of data associated with a barrier command has been written, for example, by determining that an index of the barrier command associated with a set of data is less than an index of the last barrier command.

Features of the disclosure are initially described in the context of systems, devices, and circuits. Features of the disclosure are also described in the context of a process flow and operational diagrams. These and other features of the disclosure are further illustrated by and described in the context of an apparatus diagram and flowchart that relate to data recovery using barrier commands.
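The recovery decision described above may be sketched as follows. The (data, barrier_index) pair structure is an illustrative assumption about how sets of data and their barrier commands might be recorded; the comparison itself mirrors the index check described above.

    def recover_sets(written_sets, last_flushed_barrier_index):
        """Sketch: after a data loss event, retain only sets of data whose
        barrier command precedes the last flushed barrier command."""
        retained, discarded = [], []
        for data, barrier_index in written_sets:
            # Per the comparison described above, a set is confirmed complete
            # if its barrier index is less than the last flushed barrier index.
            if barrier_index < last_flushed_barrier_index:
                retained.append(data)
            else:
                discarded.append(data)
        return retained, discarded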
FIG. 1 illustrates an example of a system 100 that supports data recovery using barrier commands in accordance with examples as disclosed herein. The system 100 includes a host system 105 coupled with a memory system 110.

A memory system 110 may be or include any device or collection of devices, where the device or collection of devices includes at least one memory array. For example, a memory system 110 may be or include a Universal Flash Storage (UFS) device, an embedded Multi-Media Controller (eMMC) device, a flash device, a universal serial bus (USB) flash device, a secure digital (SD) card, a solid-state drive (SSD), a hard disk drive (HDD), a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), or a non-volatile DIMM (NVDIMM), among other possibilities.

The system 100 may be included in a computing device such as a desktop computer, a laptop computer, a network server, a mobile device, a vehicle (e.g., airplane, drone, train, automobile, or other conveyance), an Internet of Things (IoT) enabled device, an embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or any other computing device that includes memory and a processing device.

The system 100 may include a host system 105, which may be coupled with the memory system 110. In some examples, this coupling may include an interface with a host system controller 106, which may be an example of a controller or control component configured to cause the host system 105 to perform various operations in accordance with examples as described herein. The host system 105 may include one or more devices, and in some cases may include a processor chipset and a software stack executed by the processor chipset. For example, the host system 105 may include an application configured for communicating with the memory system 110 or a device therein. The processor chipset may include one or more cores, one or more caches (e.g., memory local to or included in the host system 105), a memory controller (e.g., an NVDIMM controller), and a storage protocol controller (e.g., a peripheral component interconnect express (PCIe) controller, a serial advanced technology attachment (SATA) controller). The host system 105 may use the memory system 110, for example, to write data to the memory system 110 and read data from the memory system 110. Although one memory system 110 is shown in FIG. 1, the host system 105 may be coupled with any quantity of memory systems 110.

The host system 105 may be coupled with the memory system 110 via at least one physical host interface. The host system 105 and the memory system 110 may in some cases be configured to communicate via a physical host interface using an associated protocol (e.g., to exchange or otherwise communicate control, address, data, and other signals between the memory system 110 and the host system 105). Examples of a physical host interface may include, but are not limited to, a SATA interface, a UFS interface, an eMMC interface, a PCIe interface, a USB interface, a Fiber Channel interface, a Small Computer System Interface (SCSI), a Serial Attached SCSI (SAS), a Double Data Rate (DDR) interface, a DIMM interface (e.g., a DIMM socket interface that supports DDR), an Open NAND Flash Interface (ONFI), and a Low Power Double Data Rate (LPDDR) interface. In some examples, one or more such interfaces may be included in or otherwise supported between a host system controller 106 of the host system 105 and a memory system controller 115 of the memory system 110.
In some examples, the host system 105 may be coupled with the memory system 110 (e.g., the host system controller 106 may be coupled with the memory system controller 115) via a respective physical host interface for each memory device 130 included in the memory system 110, or via a respective physical host interface for each type of memory device 130 included in the memory system 110.

The memory system 110 may include a memory system controller 115 and one or more memory devices 130. A memory device 130 may include one or more memory arrays of any type of memory cells (e.g., non-volatile memory cells, volatile memory cells, or any combination thereof). Although two memory devices 130-a and 130-b are shown in the example of FIG. 1, the memory system 110 may include any quantity of memory devices 130. Further, if the memory system 110 includes more than one memory device 130, different memory devices 130 within the memory system 110 may include the same or different types of memory cells.

The memory system controller 115 may be coupled with and communicate with the host system 105 (e.g., via the physical host interface) and may be an example of a controller or control component configured to cause the memory system 110 to perform various operations in accordance with examples as described herein. The memory system controller 115 may also be coupled with and communicate with memory devices 130 to perform operations such as reading data, writing data, erasing data, or refreshing data at a memory device 130, among other such operations, which may generically be referred to as access operations. In some cases, the memory system controller 115 may receive commands from the host system 105 and communicate with one or more memory devices 130 to execute such commands (e.g., at memory arrays within the one or more memory devices 130). For example, the memory system controller 115 may receive commands or operations from the host system 105 and may convert the commands or operations into instructions or appropriate commands to achieve the desired access of the memory devices 130. In some cases, the memory system controller 115 may exchange data with the host system 105 and with one or more memory devices 130 (e.g., in response to or otherwise in association with commands from the host system 105). For example, the memory system controller 115 may convert responses (e.g., data packets or other signals) associated with the memory devices 130 into corresponding signals for the host system 105.

The memory system controller 115 may be configured for other operations associated with the memory devices 130. For example, the memory system controller 115 may execute or manage operations such as wear-leveling operations, garbage collection operations, error control operations such as error-detecting operations or error-correcting operations, encryption operations, caching operations, media management operations, background refresh, health monitoring, and address translations between logical addresses (e.g., logical block addresses (LBAs)) associated with commands from the host system 105 and physical addresses (e.g., physical block addresses) associated with memory cells within the memory devices 130.

The memory system controller 115 may include hardware such as one or more integrated circuits or discrete components, a buffer memory, or a combination thereof. The hardware may include circuitry with dedicated (e.g., hard-coded) logic to perform the operations ascribed herein to the memory system controller 115.
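The logical-to-physical address translation mentioned above may be illustrated with a minimal sketch. A dictionary-based table is an illustrative simplification of the mapping structures a real controller would maintain, and the (block, page) tuple format is an assumption introduced here.

    # Sketch of an L2P (logical-to-physical) table: LBA -> (block, page).
    l2p_table = {}

    def record_write(lba, block, page):
        # Record where the host-visible LBA physically landed.
        l2p_table[lba] = (block, page)

    def translate_read(lba):
        # Translate the LBA in a host read command into a physical location.
        return l2p_table.get(lba)  # None if the LBA has never been written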
The memory system controller115may be or include a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), a digital signal processor (DSP)), or any other suitable processor or processing circuitry. The memory system controller115may also include a local memory120. In some cases, the local memory120may include read-only memory (ROM) or other memory that may store operating code (e.g., executable instructions) executable by the memory system controller115to perform functions ascribed herein to the memory system controller115. In some cases, the local memory120may additionally or alternatively include static random-access memory (SRAM) or other memory that may be used by the memory system controller115for internal storage or calculations, for example, related to the functions ascribed herein to the memory system controller115. A memory device130may include one or more arrays of non-volatile memory cells. For example, a memory device130may include NAND (e.g., NAND flash) memory, ROM, phase change memory (PCM), self-selecting memory, other chalcogenide-based memories, ferroelectric random access memory (RAM) (FeRAM), magneto RAM (MRAM), NOR (e.g., NOR flash) memory, Spin Transfer Torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), electrically erasable programmable ROM (EEPROM), or any combination thereof. Additionally, or alternatively, a memory device130may include one or more arrays of volatile memory cells. For example, a memory device130may include RAM memory cells, such as dynamic RAM (DRAM) memory cells and synchronous DRAM (SDRAM) memory cells. In some examples, a memory device130may include (e.g., on a same die or within a same package) a local controller135, which may execute operations on one or more memory cells of the respective memory device130. A local controller135may operate in conjunction with a memory system controller115or may perform one or more functions ascribed herein to the memory system controller115. For example, as illustrated inFIG.1, a memory device130-amay include a local controller135-aand a memory device130-bmay include a local controller135-b. In some cases, a memory device130may be or include a NAND device (e.g., NAND flash device). A memory device130may be or include a memory die160. For example, in some cases, a memory device130may be a package that includes one or more dies160. A die160may, in some examples, be a piece of electronics-grade semiconductor cut from a wafer (e.g., a silicon die cut from a silicon wafer). Each die160may include one or more planes165, and each plane165may include a respective set of blocks170, where each block170may include a respective set of pages175, and each page175may include a set of memory cells. In some cases, a NAND memory device130may include memory cells configured to each store one bit of information, which may be referred to as single level cells (SLCs). Additionally or alternatively, a NAND memory device130may include memory cells configured to each store multiple bits of information, which may be referred to as multi-level cells (MLCs) if configured to each store two bits of information, as tri-level cells (TLCs) if configured to each store three bits of information, as quad-level cells (QLCs) if configured to each store four bits of information, or more generically as multiple-level memory cells. 
Multiple-level memory cells may provide greater density of storage relative to SLC memory cells but may, in some cases, involve narrower read or write margins or greater complexities for supporting circuitry. In some cases, planes165may refer to groups of blocks170, and in some cases, concurrent operations may take place within different planes165. For example, concurrent operations may be performed on memory cells within different blocks170so long as the different blocks170are in different planes165. In some cases, an individual block170may be referred to as a physical block, and a virtual block180may refer to a group of blocks170within which concurrent operations may occur. For example, concurrent operations may be performed on blocks170-a,170-b,170-c, and170-dthat are within planes165-a,165-b,165-c, and165-d, respectively, and blocks170-a,170-b,170-c, and170-dmay be collectively referred to as a virtual block180. In some cases, a virtual block may include blocks170from different memory devices130(e.g., including blocks in one or more planes of memory device130-aand memory device130-b). In some cases, the blocks170within a virtual block may have the same block address within their respective planes165(e.g., block170-amay be “block0” of plane165-a, block170-bmay be “block0” of plane165-b, and so on). In some cases, performing concurrent operations in different planes165may be subject to one or more restrictions, such as concurrent operations being performed on memory cells within different pages175that have the same page address within their respective planes165(e.g., related to command decoding, page address decoding circuitry, or other circuitry being shared across planes165). In some cases, a block170may include memory cells organized into rows (pages175) and columns (e.g., strings, not shown). For example, memory cells in a same page175may share (e.g., be coupled with) a common word line, and memory cells in a same string may share (e.g., be coupled with) a common digit line (which may alternatively be referred to as a bit line). For some NAND architectures, memory cells may be read and programmed (e.g., written) at a first level of granularity (e.g., at the page level of granularity) but may be erased at a second level of granularity (e.g., at the block level of granularity). That is, a page175may be the smallest unit of memory (e.g., set of memory cells) that may be independently programmed or read (e.g., programmed or read concurrently as part of a single program or read operation), and a block170may be the smallest unit of memory (e.g., set of memory cells) that may be independently erased (e.g., erased concurrently as part of a single erase operation). Further, in some cases, NAND memory cells may be erased before they can be re-written with new data. Thus, for example, a used page175may in some cases not be updated until the entire block170that includes the page175has been erased. In some cases, to update some data within a block170while retaining other data within the block170, the memory device130may copy the data to be retained to a new block170and write the updated data to one or more remaining pages of the new block170.
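For example, the update-by-copy behavior described above may be sketched as follows (a hypothetical Python illustration; the names update_page and PAGES_PER_BLOCK and the dict-based block pool are assumptions made for this sketch, not part of the examples disclosed herein):

    # Hypothetical model: pages are programmed individually, but erasure
    # happens only at block granularity, so updating one page means
    # copying the retained pages to a fresh block.
    PAGES_PER_BLOCK = 4

    def update_page(blocks, free_blocks, old_block_id, page_index, new_data):
        """Copy retained pages to a new block, writing the updated data in place."""
        new_block_id = free_blocks.pop()  # take an already-erased block
        old_block = blocks[old_block_id]
        blocks[new_block_id] = [
            new_data if i == page_index else old_block[i]
            for i in range(PAGES_PER_BLOCK)
        ]
        # The old block now holds only obsolete data; it can be erased later
        # and returned to the free pool.
        return new_block_id

    blocks = {0: ["a", "b", "c", "d"]}
    free_blocks = [1]
    new_id = update_page(blocks, free_blocks, 0, 2, "c2")
    assert blocks[new_id] == ["a", "b", "c2", "d"]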
The memory device130(e.g., the local controller135) or the memory system controller115may mark or otherwise designate the data that remains in the old block170as invalid or obsolete and may update a logical-to-physical (L2P) mapping table to associate the logical address (e.g., LBA) for the data with the new, valid block170rather than the old, invalid block170. In some cases, such copying and remapping may be performed instead of erasing and rewriting the entire old block170due to latency or wearout considerations, for example. In some cases, one or more copies of an L2P mapping table may be stored within the memory cells of the memory device130(e.g., within one or more blocks170or planes165) for use (e.g., reference and updating) by the local controller135or memory system controller115. In some cases, L2P mapping tables may be maintained and data may be marked as valid or invalid at the page level of granularity, and a page175may contain valid data, invalid data, or no data. Invalid data may be data that is outdated due to a more recent or updated version of the data being stored in a different page175of the memory device130. Invalid data may have been previously programmed to the invalid page175but may no longer be associated with a valid logical address, such as a logical address referenced by the host system105. Valid data may be the most recent version of such data being stored on the memory device130. A page175that includes no data may be a page175that has never been written to or that has been erased. The system100may include any quantity of non-transitory computer readable media that support data recovery using barrier commands. For example, the host system105, the memory system controller115, or a memory device130(e.g., a local controller135) may include or otherwise may access one or more non-transitory computer readable media storing instructions (e.g., firmware) for performing the functions ascribed herein to the host system105, memory system controller115, or memory device130. For example, such instructions, if executed by the host system105(e.g., by the host system controller106), by the memory system controller115, or by a memory device130(e.g., by a local controller135), may cause the host system105, memory system controller115, or memory device130to perform one or more associated functions as described herein. In some cases, a memory system110may utilize a memory system controller115to provide a managed memory system that may include, for example, one or more memory arrays and related circuitry combined with a local (e.g., on-die or in-package) controller (e.g., local controller135). An example of a managed memory system is a managed NAND (MNAND) system. A controller (such as a host system controller106or a memory system controller115) may write data for a set of commands that is associated with a barrier command to a buffer. The controller may also perform a flushing operation that transfers at least a portion of the data from the buffer to a memory device130. Based on (e.g., before or in response to) transferring the at least the portion of the data, the controller may determine whether to update an indication of a last barrier command for which all of the associated data has been written to the memory device130.
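The write-and-flush behavior of such a controller may be sketched as follows (a hypothetical Python illustration; the Controller class, its flush policy, and the last_complete_barrier field are assumptions made for this sketch):

    # Hypothetical sketch: each buffered write is tagged with the index of
    # its associated barrier command; before a flush, the controller decides
    # whether the flush completes all data for some barrier command and, if
    # so, advances the indication of the last fully written barrier.
    class Controller:
        def __init__(self):
            self.buffer = []                # pending (barrier_index, data) pairs
            self.device = []                # pairs already flushed to the device
            self.last_complete_barrier = 0  # persisted indication

        def write(self, barrier_index, data):
            self.buffer.append((barrier_index, data))

        def flush(self, count):
            portion, self.buffer = self.buffer[:count], self.buffer[count:]
            flushed = {b for b, _ in portion}
            remaining = {b for b, _ in self.buffer}
            # Advance the indication only past barriers with no data left behind.
            for b in sorted(flushed):
                if b == self.last_complete_barrier + 1 and b not in remaining:
                    self.last_complete_barrier = b
            self.device.extend(portion)

    c = Controller()
    c.write(1, "A"); c.write(1, "B"); c.write(2, "C")
    c.flush(2)  # flushes all barrier-1 data; barrier-2 data remains buffered
    assert c.last_complete_barrier == 1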
During a recovery operation, the controller may validate the portion of the data stored in the memory device130based on or in response to determining that the barrier command is associated with the portion of the data and on updating the indication of the last barrier command to indicate the barrier command. FIG.2illustrates an example of a system200that supports data recovery using barrier commands in accordance with examples as disclosed herein. The system200may be an example of a system100as described with reference toFIG.1or aspects thereof. The system200may include a memory system210configured to store data received from the host system205and to send data to the host system205, if requested by the host system205using access commands (e.g., read commands or write commands). The system200may implement aspects of the system100as described with reference toFIG.1. For example, the memory system210and the host system205may be examples of the memory system110and the host system105, respectively. The memory system210may include memory devices240to store data transferred between the memory system210and the host system205, e.g., in response to receiving access commands from the host system205, as described herein. The memory devices240may include one or more memory devices as described with reference toFIG.1. For example, the memory devices240may include NAND memory, PCM, self-selecting memory, 3D cross point, other chalcogenide-based memories, FERAM, MRAM, NOR (e.g., NOR flash) memory, STT-MRAM, CBRAM, RRAM, or OxRAM. The memory system210may include a storage controller230for controlling the passing of data directly to and from the memory devices240, e.g., for storing data, retrieving data, and determining memory locations in which to store data and from which to retrieve data. The storage controller230may communicate with memory devices240directly or via a bus (not shown) using a protocol specific to each type of memory device240. In some cases, a single storage controller230may be used to control multiple memory devices240of the same or different types. In some cases, the memory system210may include multiple storage controllers230, e.g., a different storage controller230for each type of memory device240. In some cases, a storage controller230may implement aspects of a local controller135as described with reference toFIG.1. The memory system210may additionally include an interface220for communication with the host system205and a buffer225for temporary storage of data being transferred between the host system205and the memory devices240. The interface220, buffer225, and storage controller230may be for translating data between the host system205and the memory devices240, e.g., as shown by a data path250, and may be collectively referred to as data path components. Using the buffer225to temporarily store data during transfers may allow data to be buffered as commands are being processed, thereby reducing latency between commands and allowing arbitrary data sizes associated with commands. This may also allow bursts of commands to be handled, and the buffered data may be stored or transmitted (or both) once a burst has stopped. The buffer225may include relatively fast memory (e.g., some types of volatile memory, such as SRAM or DRAM) or hardware accelerators or both to allow fast storage and retrieval of data to and from the buffer225. The buffer225may include data path switching components for bi-directional data transfer between the buffer225and other components. 
The temporary storage of data within a buffer225may refer to the storage of data in the buffer225during the execution of access commands. That is, upon completion of an access command, the associated data may no longer be maintained in the buffer225(e.g., may be overwritten with data for additional access commands). In addition, the buffer225may be a non-cache buffer. That is, data may not be read directly from the buffer225by the host system205. For example, read commands may be added to a queue without an operation to match the address to addresses already in the buffer225(e.g., without a cache address match or lookup operation). The memory system210may additionally include a memory system controller215for executing the commands received from the host system205and controlling the data path components in the moving of the data. The memory system controller215may be an example of the memory system controller115as described with reference toFIG.1. A bus235may be used to communicate between the system components. In some cases, one or more queues (e.g., a command queue260, a buffer queue265, and a storage queue270) may be used to control the processing of the access commands and the movement of the corresponding data. This may be beneficial, e.g., if more than one access command from the host system205is processed concurrently by the memory system210. The command queue260, buffer queue265, and storage queue270are depicted at the interface220, memory system controller215, and storage controller230, respectively, as examples of a possible implementation. However, queues, if used, may be positioned anywhere within the memory system210. Data transferred between the host system205and the memory devices240may take a different path in the memory system210than non-data information (e.g., commands, status information). For example, the system components in the memory system210may communicate with each other using a bus235, while the data may use the data path250through the data path components instead of the bus235. The memory system controller215may control how and if data is transferred between the host system205and the memory devices240by communicating with the data path components over the bus235(e.g., using a protocol specific to the memory system210). If a host system205transmits access commands to the memory system210, the commands may be received by the interface220, e.g., according to a protocol (e.g., a UFS protocol or an eMMC protocol). Thus, the interface220may be considered a front end of the memory system210. Upon receipt of each access command, the interface220may communicate the command to the memory system controller215, e.g., via the bus235. In some cases, each command may be added to a command queue260by the interface220to communicate the command to the memory system controller215. The memory system controller215may determine that an access command has been received based on or in response to the communication from the interface220. In some cases, the memory system controller215may determine the access command has been received by retrieving the command from the command queue260. The command may be removed from the command queue260after it has been retrieved therefrom, e.g., by the memory system controller215. In some cases, the memory system controller215may cause the interface220, e.g., via the bus235, to remove the command from the command queue260. Upon the determination that an access command has been received, the memory system controller215may execute the access command. 
For a read command, this may mean obtaining data from the memory devices240and transmitting the data to the host system205. For a write command, this may mean receiving data from the host system205and moving the data to the memory devices240. In either case, the memory system controller215may use the buffer225for, among other things, temporary storage of the data being received from or sent to the host system205. The buffer225may be considered a middle end of the memory system210. In some cases, buffer address management (e.g., pointers to address locations in the buffer225) may be performed by hardware (e.g., dedicated circuits) in the interface220, buffer225, or storage controller230. To process a write command received from the host system205, the memory system controller215may first determine if the buffer225has sufficient available space to store the data associated with the command. For example, the memory system controller215may determine, e.g., via firmware (e.g., controller firmware), an amount of space within the buffer225that may be available to store data associated with the write command. In some cases, a buffer queue265may be used to control a flow of commands associated with data stored in the buffer225, including write commands. The buffer queue265may include the access commands associated with data currently stored in the buffer225. In some cases, the commands in the command queue260may be moved to the buffer queue265by the memory system controller215and may remain in the buffer queue265while the associated data is stored in the buffer225. In some cases, each command in the buffer queue265may be associated with an address at the buffer225. That is, pointers may be maintained that indicate where in the buffer225the data associated with each command is stored. Using the buffer queue265, multiple access commands may be received sequentially from the host system205and at least portions of the access commands may be processed concurrently. If the buffer225has sufficient space to store the write data, the memory system controller215may cause the interface220to transmit an indication of availability to the host system205(e.g., a “ready to transfer” indication), e.g., according to a protocol (e.g., a UFS protocol or an eMMC protocol). As the interface220subsequently receives from the host system205the data associated with the write command, the interface220may transfer the data to the buffer225for temporary storage using the data path250. In some cases, the interface220may obtain from the buffer225or buffer queue265the location within the buffer225to store the data. The interface220may indicate to the memory system controller215, e.g., via the bus235, if the data transfer to the buffer225has been completed. Once the write data has been stored in the buffer225by the interface220, the data may be transferred out of the buffer225and stored in a memory device240. This may be done using the storage controller230. For example, the memory system controller215may cause the storage controller230to retrieve the data out of the buffer225using the data path250and transfer the data to a memory device240. The storage controller230may be considered a back end of the memory system210. The storage controller230may indicate to the memory system controller215, e.g., via the bus235, that the data transfer to a memory device of the memory devices240has been completed. In some cases, a storage queue270may be used to aid with the transfer of write data. 
For example, the memory system controller215may push (e.g., via the bus235) write commands from the buffer queue265to the storage queue270for processing. The storage queue270may include entries for each access command. In some examples, the storage queue270may additionally include a buffer pointer (e.g., an address) that may indicate where in the buffer225the data associated with the command is stored and a storage pointer (e.g., an address) that may indicate the location in the memory devices240associated with the data. In some cases, the storage controller230may obtain from the buffer225, buffer queue265, or storage queue270the location within the buffer225from which to obtain the data. The storage controller230may manage the locations within the memory devices240to store the data (e.g., performing wear-leveling, garbage collection, and the like). The entries may be added to the storage queue270, e.g., by the memory system controller215. The entries may be removed from the storage queue270, e.g., by the storage controller230or memory system controller215upon completion of the transfer of the data. To process a read command received from the host system205, the memory system controller215may again first determine if the buffer225has sufficient available space to store the data associated with the command. For example, the memory system controller215may determine, e.g., via firmware (e.g., controller firmware), an amount of space within the buffer225that may be available to store data associated with the read command. In some cases, the buffer queue265may be used to aid with buffer storage of data associated with read commands in a similar manner as discussed above with respect to write commands. For example, if the buffer225has sufficient space to store the read data, the memory system controller215may cause the storage controller230to retrieve the data associated with the read command from a memory device240and store the data in the buffer225for temporary storage using the data path250. The storage controller230may indicate to the memory system controller215, e.g., via the bus235, when the data transfer to the buffer225has been completed. In some cases, the storage queue270may be used to aid with the transfer of read data. For example, the memory system controller215may push the read command to the storage queue270for processing. In some cases, the storage controller230may obtain from the buffer225or storage queue270the location within the memory devices240from which to retrieve the data. In some cases, the storage controller230may obtain from the buffer queue265the location within the buffer225to store the data. In some cases, the storage controller230may obtain from the storage queue270the location within the buffer225to store the data. In some cases, the memory system controller215may move the command processed by the storage queue270back to the command queue260. Once the data has been stored in the buffer225by the storage controller230, the data may be transferred out of the buffer225and sent to the host system205. For example, the memory system controller215may cause the interface220to retrieve the data out of the buffer225using the data path250and transmit the data to the host system205, e.g., according to a protocol (e.g., a UFS protocol or an eMMC protocol). For example, the interface220may process the command from the command queue260and may indicate to the memory system controller215, e.g., via the bus235, that the data transmission to the host system205has been completed. 
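One way to picture the storage queue entries described above is the following sketch (hypothetical Python; the field names are assumptions made for illustration):

    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class StorageQueueEntry:
        command_id: int
        buffer_address: int   # where in the buffer the data currently sits
        storage_address: int  # destination location in the memory devices

    storage_queue = deque()
    # The memory system controller pushes one entry per write command ...
    storage_queue.append(StorageQueueEntry(7, 0x100, 0x8000))
    # ... and the storage controller pops entries as it performs transfers.
    entry = storage_queue.popleft()
    print(f"move buffer[{entry.buffer_address:#x}] -> device[{entry.storage_address:#x}]")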
The memory system controller215may execute received commands according to an order (e.g., a first-in, first-out order, according to the order of the command queue260). For each command, the memory system controller215may cause data corresponding to the command to be moved into and out of the buffer225, as discussed above. As the data is moved into and stored within the buffer225, the command may remain in the buffer queue265. A command may be removed from the buffer queue265, e.g., by the memory system controller215, if the processing of the command has been completed (e.g., if data corresponding to the access command has been transferred out of the buffer225). If a command is removed from the buffer queue265, the address previously storing the data associated with that command may be available to store data associated with a new command. The memory system controller215may additionally be configured for operations associated with the memory devices240. For example, the memory system controller215may execute or manage operations such as wear-leveling operations, garbage collection operations, error control operations such as error-detecting operations or error-correcting operations, encryption operations, caching operations, media management operations, background refresh, health monitoring, and address translations between logical addresses (e.g., LBAs) associated with commands from the host system205and physical addresses (e.g., physical block addresses) associated with memory cells within the memory devices240. That is, the host system205may issue commands indicating one or more LBAs and the memory system controller215may identify one or more physical block addresses indicated by the LBAs. In some cases, one or more contiguous LBAs may correspond to noncontiguous physical block addresses. In some cases, the storage controller230may be configured to perform one or more of the above operations in conjunction with or instead of the memory system controller215. In some cases, the memory system controller215may perform the functions of the storage controller230and the storage controller230may be omitted. A memory system controller215may store data in a buffer (e.g., buffer225) before storing the data in a memory device of the memory devices240(which may be referred to as a transferring operation or a flushing operation). In some examples, a buffer may also be referred to as a cursor. In some examples, the memory system controller215stores metadata (e.g., an associated command index) with a set of data that is transferred from the buffer225to the memory devices240. The memory system controller215may update information, such as an L2P table, before the data in the buffer225is written from the buffer225to one or more of memory devices240. The updated L2P table may reflect where the data in the buffer225will be or is stored within the memory system210. When “Write Cache” is enabled by the host system205, the memory system controller215may further inform the host system205that an operation for executing a programming command received from the host system205has been completed when the data is written to the buffer225—rather than informing the host system205only when the data is written to the memory devices240, which may increase latency from the perspective of the host system205.
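A minimal sketch of the write-cache completion behavior described above, assuming a hypothetical handle_write helper and an ack callback (neither is named in the disclosure):

    # Hypothetical sketch: with the write cache enabled, the command is
    # acknowledged to the host as soon as its data lands in the buffer,
    # rather than after the slower transfer to the memory devices.
    def handle_write(buffer, command, write_cache_enabled, ack):
        buffer.append(command)  # data lands in the (volatile) buffer
        if write_cache_enabled:
            ack(command)        # host sees completion at buffer-write time
        # Otherwise the acknowledgment would be deferred until a flushing
        # operation moves the data to the memory devices, possibly out of order.

    completed = []
    buf = []
    handle_write(buf, "cmd-1", write_cache_enabled=True, ack=completed.append)
    assert completed == ["cmd-1"]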
Accordingly, if the data is successfully transferred from the buffer225to one or more of the memory devices240, a host system205may subsequently access the stored data at a physical address using a corresponding logical address that is known to the host system205(at least) as well as is known to the memory system controller215in some examples. In some examples, the memory system controller215may execute the received programming commands, for example, in the order in which the programming commands are received. For example, this may include writing the data associated with the programming commands to the buffer in a corresponding order. However, the data stored in the buffer225may not necessarily be written to the memory devices240in the same order—for example, sets of data corresponding to the commands may be written to the memory devices in a different order than the order in which the corresponding commands were received. In some examples, the memory system controller215writes the data stored in the buffer225to the memory devices240in an order that increases a performance of the memory system210. For example, the memory system controller215may delay writing a large set of data to the memory devices until a background operation is completed, which may decrease power consumption, among other advantages. Data stored in the buffer225may be stored in volatile memory cells. In some examples, the buffer225may be or include a DRAM or SRAM device. Thus, data stored in the buffer225may be lost in the event of a fault at memory system210(e.g., an unexpected power loss or surge). In some examples, an event at the memory system210causes the data stored in the buffer225to be lost before all of the data stored in the buffer225can be written to one of memory devices240. Because the sets of data corresponding to received programming commands and stored in the buffer225may be written to the memory devices240in an indeterminate order, the memory system210may be unable to determine for which programming commands the corresponding sets of data were fully written to the memory devices240prior to the occurrence of the event. In some examples, a cache synchronization command (which may be referred to as a SYNC CACHE command) or a forced unit access write command (which may be referred to as a FUA WRITE command) causes data stored in the buffer225to be written to the memory devices240before data for other commands received before or after the cache synchronization or forced unit access write command will be written to the memory devices. Accordingly, after completing the execution of a cache synchronization or forced unit access write command, the memory system controller215may determine that all of the data stored in the buffer225up to a certain command (e.g., the command that precedes the synchronization command) has been written to memory devices240. Also, in the event that data stored in buffer225is lost, the memory system controller215may only recover data for a received command that was already written to memory devices240. Recovery may start from an established checkpoint after the cache synchronization or forced unit access write commands are completed. In some examples, the checkpoint indicates that the data in an L2P table prior to the creation of the checkpoint is valid and the memory system controller215may revert to the pre-checkpoint version of the L2P table after an occurrence of an event that causes the data stored in buffer225to be lost.
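The checkpoint-and-revert behavior may look like the following sketch (hypothetical Python; copy.deepcopy stands in for whatever persistence mechanism a real controller would use):

    import copy

    # Hypothetical sketch: a SYNC CACHE / FUA WRITE completion establishes a
    # checkpoint of the L2P table; after a data loss event, the controller
    # reverts to the checkpointed (known-valid) version.
    l2p = {"lba-0": "phys-10", "lba-1": "phys-11"}
    checkpoint = copy.deepcopy(l2p)      # taken once the sync/FUA write completes

    l2p["lba-1"] = "phys-27"             # later writes update the live table
    l2p["lba-2"] = "phys-28"

    power_loss = True                    # buffered data is lost
    if power_loss:
        l2p = copy.deepcopy(checkpoint)  # pre-checkpoint mappings remain valid
    assert "lba-2" not in l2p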
Additionally, or alternatively, a barrier command (which may be referred to as a BARRIER command) may cause data for certain commands to be written before data for other commands, may be used to keep track of the barrier command for which all of the associated commands have been written, or both. Particularly, barrier commands may be transmitted between sets of commands, where a set of data for a set of commands that occurs prior to a first barrier command may be written to the memory devices240before a set of data for a second set of commands that occurs after the first barrier command, and so on. In some examples, an index of an associated barrier command may be stored with a set of data. In the event that data stored in buffer225is lost, the memory system controller215may determine, during a recovery operation, whether data for a received command was written to memory devices240, for example based on or in response to establishing a checkpoint after all of the data for the commands associated with a barrier command are written. If, during a recovery process, the memory system controller215determines that less than all of the data for a barrier command was written to the memory devices240, the memory system controller215may discard all of the data associated with the barrier command (and may revert a portion of a current L2P table to a corresponding portion of a prior version of the L2P table). In comparison to a cache synchronization or forced unit access write command, the barrier command, when received, may not force data in buffer225to be written to the memory devices240, but instead may enforce a group-based ordering for the recovery process. In some examples, after recovering from an event that causes the data stored in buffer225to be lost, the memory system controller215performs a recovery operation to determine which physical addresses of the memory devices are storing valid information. The recovery operation may involve identifying a difference between an earlier version and a current version of an L2P table and further determining whether the disparate physical addresses addressed by the current version and not the earlier version of the L2P table are storing valid data—e.g., by reading metadata stored at the physical addresses. In some examples, the memory system controller215determines that the data stored at a portion (or all) of the physical addresses is valid based on or in response to determining that the data is associated with a fully written barrier command—e.g., based on or in response to identifying a set of data associated with a subsequent barrier command. One or more of the memory devices240may include memory cells that can be used to store single-level or multi-level data. For example, a multi-level memory cell (such as a tri-level or quad-level cell) may be capable of storing a single bit of data (which may correspond to single-level data) if a single-level programming operation is used or multiple bits of data (which may correspond to multi-level data) if a multi-level programming operation is used. Based on concurrently supporting the storage of single-level data and multi-level data, the memory system controller215may manage virtual blocks that are used to store single-level data (which may be referred to as single-level virtual blocks) and virtual blocks that are used to store multi-level data (which may be referred to as multi-level virtual blocks).
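The table-comparison step of such a recovery operation reduces to a dictionary difference, as in this hypothetical sketch (the LBA and physical-address strings are illustrative placeholders):

    earlier = {"lba-0": "phys-10", "lba-1": "phys-11"}  # checkpointed L2P table
    current = {"lba-0": "phys-10", "lba-1": "phys-27", "lba-2": "phys-28"}

    # Physical addresses referenced by the current table but not the earlier
    # one are the candidates whose stored metadata is read to decide validity.
    candidates = [(lba, phys) for lba, phys in current.items()
                  if earlier.get(lba) != phys]
    assert candidates == [("lba-1", "phys-27"), ("lba-2", "phys-28")]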
In some examples, the memory system210may include two buffers (e.g., cursors) to manage single-level and multi-level data—e.g., a first buffer for single-level data and a second buffer for multi-level data. When a memory system210uses a first buffer for single-level data and a second buffer for multi-level data, the techniques for using barrier commands to recover data after a data loss event may be insufficient—e.g., because all of the data in the first buffer and associated with a barrier command may be written (which may also be referred to as “flushed”) prior to all of the data in the second buffer and associated with the barrier command. In such cases, data in the first buffer that is associated with a subsequent barrier command may be written before all of the data in the second buffer that is associated with the prior barrier command is written. Thus, the memory system210may be unable to confirm that all of the data has been written for a prior barrier command based merely on identifying an index of a subsequent barrier command. In some examples, to recover from an event that causes data in multiple buffers to be lost, the memory system controller215may use a synchronization caching or forced writing operation that causes data across all of the buffers to be transferred to the memory device. In such a case, the memory system controller215may maintain a global forced writing index that is used to identify data that has been written to the memory devices240. In such cases, the memory system controller may recover the data only if all of the data associated with a global forced writing index can be recovered—e.g., based on or in response to identifying a subsequent global forced writing index. However, forcing data to be transferred from the buffers may decrease a performance of memory system210—e.g., by causing operations to be performed earlier than desired. To increase write performance without impacting the correctness of the recovery required by the barrier command when multiple buffers are used, enhanced techniques may be used to support data recovery. For example, a technique that keeps track of the barrier commands associated with sets of data and a last barrier command for which all of the associated data has been written may be used. When recovering sets of data, the barrier commands associated with the sets of data may be compared against the last barrier command to determine whether a complete set of data associated with a barrier command has been written—e.g., if it is determined that an index of the barrier command associated with a set of data is less than or equal to an index of the last barrier command. In some examples, a controller (e.g., a memory system controller215or a controller at host system205) may receive or generate a set of commands (e.g., programming commands, read commands, barrier commands) to access the memory devices240. The controller may write sets of data associated with the sets of commands to the buffer225. In some examples, the controller may write a first portion of the sets of data to a first buffer in buffer225that is used for data having a first characteristic (e.g., data associated with single-level operations) and a second portion of the sets of data to a second buffer in buffer225that is used for data having a second characteristic (e.g., data associated with multi-level operations). The controller may also store, with the sets of data (e.g., as metadata), an index of a barrier command associated with the sets of data.
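The comparison described above can be reduced to a single predicate, sketched here under the assumption that each flushed set of data carries its barrier index as metadata and that the index of the last fully flushed barrier is available at recovery time (the function name is hypothetical):

    def is_recoverable(data_barrier_index, last_flushed_barrier_index):
        # Data is retained only if its barrier command's data was fully flushed
        # from both buffers, i.e., its index does not exceed the last flushed one.
        return data_barrier_index <= last_flushed_barrier_index

    last_flushed = 1                 # barrier 1 fully flushed from both buffers
    assert is_recoverable(1, last_flushed)      # retained
    assert not is_recoverable(2, last_flushed)  # discarded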
The controller may write the sets of data (and metadata) stored in the buffer225to the memory devices240—e.g., as part of one or more flushing operations. Prior to the one or more flushing operations (e.g., prior to each flushing operation), the controller may determine whether to update a field used to keep track of the last barrier command for which all of the associated data has been written—e.g., the controller may update the field to indicate a new barrier command based on or in response to determining that all of the data associated with the barrier command in the first buffer and the second buffer is to be flushed to memory devices240in a subsequent flushing operation. At some point during operation, the memory system210may experience a data loss event that causes data in the buffer225to be corrupted or erased. During a recovery operation, the memory system210may attempt to determine which portions of the data stored in the buffer225(relative to an initial point in time) were successfully written to the memory devices240prior to the data loss event. The memory system210may further attempt to determine whether the portions of the data successfully written to the memory devices240should be discarded or recovered. In some examples, if the memory system210determines that precedent data (to data written to the memory devices240) was not written to the memory devices240or that only a portion of a set of data was written to the memory devices240, the memory system may discard (e.g., invalidate, erase) the identified data that was written to the memory devices240. The memory system210may discard data based on or in response to determining that an index of a barrier command associated with the data is greater than an index of a last flushed barrier command. Also, the memory system210may retain data based on or in response to determining that an index of a barrier command associated with the data is less than or equal to an index of a last flushed barrier command. By keeping track of the barrier commands associated with sets of data and a last flushed barrier command, a portion (or all) of data that is flushed, after a checkpoint and prior to a data loss event, from multiple buffers to a memory device may be recovered. FIG.3illustrates an example of a set of operations for data recovery using barrier commands in accordance with examples as disclosed herein. Process flow300may be performed by controller302, which may be an example of or within a host system controller or memory system controller described herein. Process flow300may also be performed by buffer304or memory device306, which may be respective examples of a buffer and memory device described herein. Buffer304may include multiple buffers—e.g., a first buffer for single-level data and a second buffer for multi-level data. In some examples, one or more different components may perform different operations than as depicted, such as the buffer304or the memory device306performing operations otherwise shown as being performed by controller302. In some examples, process flow300illustrates an example set of operations performed to support data recovery using barrier commands.
For example, process flow300may include operations for associating data with barrier commands, maintaining an indication of a last barrier command for which all of the associated data has been flushed (which may be referred to as the last flushed barrier command), and using the associated barrier commands and last flushed barrier command for data recovery after a data loss event. Aspects of the process flow300may be implemented by a controller, among other components. Additionally, or alternatively, aspects of the process flow300may be implemented as instructions stored in memory (e.g., firmware stored in a memory coupled with a controller). For example, the instructions, when executed by a controller (e.g., controller302), may cause the controller to perform the operations of the process flow300. One or more of the operations described in process flow300may be performed earlier or later, omitted, replaced, supplemented, or combined with another operation. Also, additional operations described herein may replace, supplement or be combined with one or more of the operations described in process flow300. At310, a set of commands may be obtained at controller302. The set of commands may include access commands and barrier commands. The access commands may include programming commands, read commands, remapping commands, and unmapping commands. In some examples, the access commands include single-level access commands and multi-level access commands. Controller302may receive the commands from a host system. In some examples, the commands may be generated at controller302—e.g., if controller302is located at a host system. In some examples, the commands may include logical addresses and correspond to sets of data to be written to a memory system. At315, a barrier index associated with the obtained commands may be determined. In some examples, controller302determines the barrier index associated with the commands based on a position of the commands relative to obtained barrier commands. For example, controller302may determine that a first set of commands is associated with a first barrier command based on the first set of commands occurring prior to the first barrier command, a second set of commands is associated with a second barrier command based on the second set of commands occurring after the first barrier command and before the second barrier command, and so on. At320, controller302may identify data associated with the obtained commands. Controller302may identify, for example, sets of data corresponding to respective commands of the obtained commands. In some examples, controller302determines that sets of data are associated with first characteristics (e.g., single-level operations, hot data, sequential data, metadata, database data, etc.) and other sets of data are associated with second characteristics (e.g., multi-level operations, cold data, random data, application data, media data, etc.). For example, controller302may determine that a set of data is associated with single-level operations if a logical address associated with the set of data corresponds to a single-level virtual block or is associated with a single-level command. Controller302may also determine that another set of data is associated with multi-level operations if a logical address associated with the other set of data corresponds to a multi-level virtual block or is associated with a multi-level command. At325, the sets of data may be written to buffer304. In some examples, buffer304may include one or more buffers.
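Steps 310 and 315 may be sketched as follows (hypothetical Python; modeling the command stream as a flat list in which the string "BARRIER" marks a barrier command is an assumption made for illustration):

    # Hypothetical sketch: commands received before the first barrier are
    # associated with barrier index 1, commands between the first and second
    # barriers with index 2, and so on.
    def assign_barrier_indices(commands):
        index, tagged = 1, []
        for cmd in commands:
            if cmd == "BARRIER":
                index += 1  # subsequent commands belong to the next barrier
            else:
                tagged.append((cmd, index))
        return tagged

    stream = ["w1", "w2", "BARRIER", "w3", "BARRIER", "w4"]
    assert assign_barrier_indices(stream) == [("w1", 1), ("w2", 1), ("w3", 2), ("w4", 3)]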
In some examples, controller302writes sets of data associated with single-level storage techniques to a first buffer of buffer304(which may be referred to as buffer1) and sets of data associated with multi-level storage techniques to a second buffer of buffer304(which may be referred to as buffer2). Controller302may write the sets of data to buffer304in a same order in which the corresponding commands were obtained. Alternatively, controller302may write the sets of data to buffer304in a different order than an order in which the corresponding commands were obtained. Controller302may write, with the sets of data, an indication of the barrier commands associated with the sets of data (e.g., as metadata). In some examples, controller302may also write, with the sets of data, an indication of a last flushed barrier command at the time the sets of data are written to the buffer. At330, information, such as an L2P table, that indicates a mapping between logical addresses and physical addresses within a memory system may be updated. In some examples, controller302updates (or initiates the update of) the L2P table after writing the sets of data to buffer304. In other examples, controller302updates (or initiates the update of) the L2P table when the sets of data are transferred from buffer304to memory device306. In yet other examples, controller302updates (or initiates the update of) the L2P table after the sets of data are successfully written to a physical location of memory device306. At335, a last flushed barrier command may be determined based on determining a set of data to be flushed in a subsequent flushing operation from buffer304. In some examples, controller302determines the last flushed barrier command based on or in response to scheduling a flushing operation from the buffers at buffer304. For example, prior to a flushing operation, controller302may identify the barrier commands associated with the sets of data to be flushed and determine whether there are any remaining sets of data to be flushed for the barrier commands apart from the sets of data to be flushed. If there are no remaining sets of data to be flushed for a barrier command and there are no preceding barrier commands for which data remains to be flushed, controller302may determine that the barrier command is the last flushed barrier command. In some examples, controller302stores the indication of the last flushed barrier command (e.g., an index of the last flushed barrier command) in nonvolatile memory. Additionally, controller302may update metadata of the sets of data to be flushed to indicate the last flushed barrier command—e.g., controller302may update a metadata field in each of the sets of data to indicate the last flushed barrier command prior to performing the flushing operation. In some examples, controller302may designate each barrier command as a node and create a list of the commands associated with each node. Controller302may further order the nodes so that the earliest received node is positioned at a beginning of the nodes. Each time a command is executed, controller302may remove the command from a respective list. Once all of the commands in a list have been removed, controller302may remove the node associated with the list. Accordingly, as nodes are removed, controller302may determine that the barrier command associated with the first node of the nodes is the last flushed barrier command.
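The node-and-list bookkeeping described above may be sketched as follows (hypothetical Python; OrderedDict stands in for the ordered collection of nodes, and the command labels mirror the FIG. 4A example):

    from collections import OrderedDict

    # Hypothetical sketch: one node per barrier command, each holding the set
    # of commands still to be executed; when the oldest node empties, the
    # last flushed barrier advances past it.
    nodes = OrderedDict([(1, {"A", "a", "B"}), (2, {"C", "b", "c", "D"})])
    last_flushed_barrier = 0

    def complete(command, barrier):
        global last_flushed_barrier
        nodes[barrier].discard(command)
        # Pop empty nodes from the front; the most recently popped node's
        # barrier index becomes the last flushed barrier command.
        while nodes and not next(iter(nodes.values())):
            barrier_index, _ = nodes.popitem(last=False)
            last_flushed_barrier = barrier_index

    for cmd, barrier in (("a", 1), ("b", 2), ("A", 1), ("B", 1)):
        complete(cmd, barrier)
    assert last_flushed_barrier == 1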
In some examples, controller302may determine the last flushed barrier command based on the barrier commands associated with the sets of data stored in the buffers of buffer304, as described in more detail herein and with reference toFIG.5. At340, data may be flushed from buffer304—e.g., the sets of data for which the metadata has been updated. In some examples, controller302initiates a series of flush operations to move data from buffer304to memory device306. In some examples, controller302initiates a first set of flush operations to move data from a first buffer of buffer304to memory device306(e.g., to single-level virtual blocks) and a second set of flush operations to move data from a second buffer of buffer304to memory device306(e.g., to multi-level virtual blocks). In some examples, the performance of the first set of flush operations and the second set of flush operations may be intermixed with one another—e.g., performed in an alternating pattern. At345, the sets of data may be transferred from buffer304to memory device306. In some examples, the sets of data are transferred to memory device306as part of flushing operations performed by controller302. The sets of data may be transferred to memory device306in an order that is different than the order with which the sets of data were written to buffer304. For example, sets of data that were stored in buffer304after other sets of data may be written to memory device306before the other sets of data. In some examples, the order for transferring the sets of data to memory device306is selected to increase a performance of a memory system, where certain sets of data may be transferred at certain times to decrease power consumption, increase bandwidth, etc. An indication of the barrier command associated with the sets of data may be transferred with the sets of data. In some examples, an indication of the last flushed barrier command (e.g., in a corresponding metadata field) associated with the sets of data at the time of the transfer may also be transferred with the set of data. In some examples, the sets of data are stored in pages of a virtual block, where each page can be used to store a set of data and metadata for the set of data. The metadata may include an index of the barrier command associated with the set of data and, in some examples, an index of the last flushed barrier command at a time immediately prior to flushing the set of data to memory device306. At350, a data loss event may occur at controller302. The data loss event may include or be based on an asynchronous power loss, a power surge, interference, etc., that is detectable by controller302. As a result of the data loss event, at least some data stored in buffer304may be lost. At355, a recovery procedure may be initiated. As part of the recovery procedure, controller302may determine whether sets of data that were stored in buffer304around the time of the data loss event (e.g., up to a threshold duration before the data loss event, after a latest checkpoint) were successfully transferred to memory device306prior to the occurrence of the data loss event. As part of the recovery procedure, controller302may read data stored in memory device306(based on a latest version of an L2P table) to determine whether the data stored at locations of the memory device should be retained or discarded.
In some examples, controller302identifies the physical addresses of memory device306to read based on a previous version of the L2P table that was stored at a checkpoint—e.g., based on differences between the previous and latest versions of the L2P table. In some examples, controller302may discard sets of data that were written to memory device306based on or in response to determining that precedent sets of data (e.g., sets of data stored in buffer304prior to the written sets of data) were not written to memory device306. In some examples, controller302may discard sets of data that were written to memory device306based on or in response to determining that other sets of data associated with a same barrier command as the written sets of data were not written to memory device306. At360, one or more sets of data may be read from memory device306. In some examples, controller302reads memory cells stored at physical locations of memory device306that were written with data after a checkpoint. Controller302may also read, with the sets of data, indications of barrier commands associated with the sets of data and, in some examples, indications of a last flushed barrier command associated with storing the sets of data—e.g., based on or in response to reading metadata stored with the sets of data. At365, a last flushed barrier command may be determined. In some examples, controller302determines the last flushed barrier command based on an indication of the last flushed barrier command (e.g., an index of the last flushed barrier command) stored in nonvolatile memory. In some examples, controller302determines the last flushed barrier command based on the indications of the last flushed barrier command stored with the data (e.g., as metadata) read from memory device306—e.g., controller302may determine the last flushed barrier command based on the highest last flushed barrier command index obtained with the read sets of data. At370, barrier commands associated with the sets of data read from memory device306may be determined. Controller302may determine the barrier commands associated with the sets of data based on corresponding metadata read with the sets of data. In some examples, controller302may determine an index of a barrier command associated with each of the sets of data. At375, the sets of data read from memory device306may be validated. In some examples, validating the data may include determining that the data is valid and should be retained or that the data is invalid and should be discarded. To validate the sets of data, controller302may, for each of the sets of data read from memory device306, compare a barrier command index associated with a set of data against an index of the determined last flushed barrier command. If the barrier command index associated with the set of data is less than or equal to the determined last flushed barrier command, the controller302may retain the set of data. Otherwise, if the barrier command index is greater than the determined last flushed barrier command, the controller302may discard the set of data. FIG.4Aillustrates an example of an operational diagram that supports data recovery using barrier commands in accordance with examples as disclosed herein. 
Operational diagram400-adepicts a set of commands received at a memory system, including buffer1 commands405that are associated with a first buffer (which may also be referred to as a first cursor or buffer1), buffer2 commands410that are associated with a second buffer (which may also be referred to as a second cursor or buffer2), and barrier commands415. The buffer1 commands405and buffer2 commands410may be programming commands, remapping commands, other commands, or any combination thereof. In some examples, an increased or decreased quantity of commands may be included between the barrier commands415(e.g., a different quantity of commands may be included between two barrier commands415compared to a quantity of commands between two other barrier commands415). Buffer1 commands405may be associated with a first buffer at a memory system. In some examples, buffer1 commands405may be used to access (e.g., read from or write to) a memory device in accordance with single-level operations. The data accessed using buffer1 commands405may be stored in single-level virtual blocks at the memory device. Buffer2 commands410may be associated with a second buffer at a memory system. In some examples, buffer2 commands410may be used to access (e.g., read from or write to) a memory device in accordance with multi-level access operations. The data accessed using buffer2 commands410may be stored in multi-level virtual blocks at the memory device. Each of the buffer commands may be associated with one of the barrier commands415. For example, first buffer1 command405-1, second buffer1 command405-2, and first buffer2 command410-1may be associated with first barrier command415-1. Third buffer1 command405-3, fourth buffer1 command405-4, second buffer2 command410-2, and third buffer2 command410-3may be associated with second barrier command415-2. And fourth buffer2 command410-4, fifth buffer2 command410-5, and fifth buffer1 command405-5may be associated with a subsequent barrier command. Each of the buffer commands may also be associated with different sets of data that may be stored in respective buffers—e.g., the sets of data associated with buffer1 commands405may be stored in buffer1 and the sets of data associated with buffer2 commands410may be stored in buffer2. When a memory system stores a set of data in a buffer, the memory device, or both, the memory system may store an indication of the barrier command associated with the set of data. For example, when storing, in buffer1, a set of data corresponding to first buffer1 command405-1, the memory system may store an indication that the set of data is associated with first barrier command415-1. Similarly, when storing, in buffer2, a set of data corresponding to first buffer2 command410-1, the memory system may store an indication that the set of data is associated with first barrier command415-1. Additionally, the memory system may store an indication that the set of data corresponding to third buffer1 command405-3is associated with second barrier command415-2. And so on. FIG.4Billustrates an example of an operational diagram for data recovery using barrier commands in accordance with examples as disclosed herein. Operational diagram400-bdepicts a set of flush operations420executed by a memory system. The flush operations420may be used to transfer data corresponding to the buffer1 commands405and buffer2 commands410from respective buffers to a memory device.
In some examples, the data in buffer1 and buffer2 may be independently flushed in an order that is based on the barrier commands. For example, for buffer1, all of the data associated with first barrier command415-1may be flushed before any data associated with the subsequent barrier commands. Similarly, for buffer2, all of the data associated with first barrier command415-1may be flushed before any data associated with the subsequent barrier commands. That said, data in buffer1 that is associated with second barrier command415-2may be flushed before all of the data in buffer2 that is associated with first barrier command415-1is flushed. When executing first flush operation420-1, a memory system may write sets of data associated with first buffer2 command410-1(which may be represented as a(1,0)) and second buffer2 command410-2(which may be represented as b(2,0)) to the memory device. For the sets of data, the first value within the parentheses may indicate an index of the barrier command associated with a set of data and the second value may indicate an index of the last flushed barrier command—e.g., a(1,0) may represent that the corresponding data is associated with first barrier command415-1and that a 0th barrier command is the last flushed barrier command (which may indicate that all the data belonging to commands associated with a 0th barrier command have been flushed or that no barrier command has been received since the occurrence of a last checkpoint). The memory system may also determine whether all of the data associated with a barrier command is to be flushed based on (e.g., prior to) executing first flush operation420-1. In some examples, the memory system makes the determination prior to first flush operation420-1or as part of first flush operation420-1(but prior to transferring data from the buffer to the memory device). Based on (e.g., prior to or at a beginning of) performing first flush operation420-1, the memory system may determine that there is remaining data to be flushed for a current barrier command (e.g., first barrier command415-1) and, thus, may maintain the value of the last flushed index. Similarly, when executing second flush operation420-2, the memory system may write sets of data associated with first buffer1 command405-1(which may be represented as A (1,1)), second buffer1 command405-2(which may be represented as B (1,1)), and third buffer1 command405-3(which may be represented as C (2,1)). Based on (e.g., prior to or at a beginning of) executing second flush operation420-2, the memory system may update the last flushed index based on (e.g., in response to) determining that all of the sets of data (e.g., A, a, and B) associated with first barrier command415-1are to be flushed. Particularly, the memory system may update the last flushed index to reflect the index of first barrier command415-1. Similarly, when executing third flush operation420-3, the memory system may write sets of data associated with third buffer2 command410-3(which may be represented as c(2,1)), fourth buffer2 command410-4(which may be represented as d(3,1)), and fifth buffer2 command410-5(which may be represented as e(3,1)). Based on (e.g., prior to or at a beginning of) executing the third flush operation420-3, the memory system may maintain the last flushed index—e.g., because the data associated with fourth buffer1 command405-4may not yet be scheduled for flushing by third flush operation420-3.
And similarly, when executing fourth flush operation420-4, the memory system may write sets of data associated with fourth buffer1 command405-4(which may be represented as D (2,2)) and fifth buffer1 command405-5(which may be represented as E (3,2)). Based on (e.g., prior to or at a beginning of) executing fourth flush operation420-4, the memory system may update the last flushed index to reflect the index of second barrier command415-2. In case of a data loss event, a memory system may perform a recovery process that uses the barrier-related indices to determine whether to maintain or discard data flushed to a memory device.FIG.4Bdepicts the result of the recovery process as the flush operations420are completed, where the dotted lines depict a completion of respective flush operations. For example, if the data loss event occurs after a completion of first flush operation420-1, the memory system may fail to recover any of the flushed data. Particularly, during the recovery operation, the memory system may read the sets of data (a(1,0) & b(2,0)) from a memory device along with the associated barrier indices. The memory system may further compare the barrier indices with a last flushed index (which may be stored in non-volatile memory or obtained from other sets of data stored in the memory device) and determine that the barrier indices associated with the sets of data exceed the last flushed index. If the data loss event occurs after a completion of second flush operation420-2, the memory system may recover a portion of the sets of data written to one or more memory devices. Particularly, the memory system may compare the barrier indices associated with the sets of data (a(1,1), b(2,1), A (1,1), B (1,1), and C (2,1)) with the last flushed index (which may be associated with first barrier command415-1and represented by the value one (1)) and determine that the sets of data associated with first barrier command415-1(a, A, & B) are recoverable. The memory system may obtain the last flushed index from non-volatile memory or from metadata stored with the sets of data flushed during the execution of first flush operation420-1. In some examples, the data loss event may occur during the execution of second flush operation420-2. In such cases, the memory system may use the barrier indices to recover the set of data associated with buffer2 (a(1,1)) based on or in response to identifying a second set of data associated with buffer2 (b(2,1)) that is associated with a subsequent barrier command. The memory system may discard the sets of data associated with buffer1 (e.g., A (1,1) and B (1,1))—e.g., because the memory system may be unable to determine which barrier command the lost data of buffer1 was associated with. If the data loss event occurs after a completion of third flush operation420-3, the memory system may recover the same portion as after the completion of second flush operation420-2—e.g., because all of the data associated with second barrier command415-2may not yet be written. If the data loss event occurs after a completion of fourth flush operation420-4, the memory system may recover the data associated with first barrier command415-1and second barrier command415-2(e.g., a(1,2), b(2,2), c(2,2), A (1,2), B (1,2) & C (2,2))—e.g., based on the last flushed index being associated with second barrier command415-2and represented by the value two (2).
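The retain-or-discard rule applied in the recovery examples above may be summarized with a short sketch. This is a simplified model with hypothetical names, not the claimed implementation: each set of data read from the memory device is represented as a tuple of a label, its barrier index, and the last flushed index stored with it as metadata, and the last flushed index is assumed to be derived as the highest last flushed value read with the data, as described with reference toFIG.3.

def recover(read_sets):
    # read_sets: (label, barrier_index, last_flushed_index_at_write).
    # When the last flushed index is not held in non-volatile memory, it
    # can be derived as the highest last flushed value stored with the data.
    last_flushed = max(lf for _, _, lf in read_sets)
    retained = []
    for label, barrier_index, _ in read_sets:
        if barrier_index <= last_flushed:
            retained.append(label)  # all data for this barrier was flushed
        # otherwise the set of data is discarded: its barrier command may
        # have associated data that never reached the memory device
    return retained

# State after second flush operation420-2 ofFIG.4B:
read_sets = [("a", 1, 0), ("b", 2, 0),               # first flush operation
             ("A", 1, 1), ("B", 1, 1), ("C", 2, 1)]  # second flush operation
print(recover(read_sets))  # ['a', 'A', 'B'], matching the example above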
In some examples, instead of storing the last flushed index, a memory system may use the barrier indices stored with the sets of data to determine the last flushed index as described in more detail herein and with reference toFIG.5. FIG.5illustrates an example of an operational diagram that supports data recovery using barrier commands in accordance with examples as disclosed herein. Operational diagram500depicts volatile buffers505used to store data before the data is transferred to one or more non-volatile memory devices. Operational diagram500further depicts flushing operations as well as aspects of an example technique for managing barrier command information associated with the flushing operations. As described herein, data stored in the volatile buffers505may be chronologically ordered in accordance with the sequence of barriers, which may simplify the tracking of a last flushed barrier. In some examples, the correspondence between data and barriers can be tracked with different and dedicated data structures. Operational diagram500, for example, begins with respective sets of data associated with a first, second, fourth, and sixth barrier command (B1, B2, B4, and B6) in first buffer505-1and respective sets of data associated with a third, fourth, fifth, and seventh barrier command (B3, B4, B5, and B7) in second buffer505-2. The greyed-out portions of first buffer505-1and second buffer505-2may represent unused portions of the buffers505. As depicted in operational diagram500, sets of data for a barrier command may be written in one or both of first buffer505-1and second buffer505-2. As also depicted in operational diagram500, once a controller begins writing data for a particular barrier command to the buffers505, no additional data for prior barrier commands may be written to the buffers505. At515, an additional set of data may be added to second buffer505-2. In some examples, the barrier command associated with the set of data may be unknown and may be represented as “B?.” At520, a flushing operation may be initiated for first buffer505-1and all of the data in first buffer505-1may be transferred to a memory device. After completing the flushing operation, at least some if not all of the entries in first buffer505-1may be unused. In some examples, the association between the set of data and the unknown barrier command may not be determined—e.g., because the flushing operation is for first buffer505-1and only the barrier command associated with the first set of data stored in second buffer505-2need be known. In some examples, if the association between the first entry of second buffer505-2and a barrier command is not known (e.g., has not been determined), the association may be determined as part of the flushing operation. As part of the flushing operation (e.g., at a beginning of the flushing operation before data is transferred from first buffer505-1), the barrier commands associated with each set of data to be flushed may be determined (e.g., based on mapping information stored by the controller, metadata stored with the sets of data, etc.) and the last flushed barrier command may be determined. To determine the last flushed barrier command, a minimum of an index of the barrier command associated with the last set of data stored in the buffer to be flushed (e.g., B7of first buffer505-1) and one less than the index of the barrier command associated with the first set of data stored in the other buffer (e.g., B3) may be used.
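The minimum computation described above may be expressed compactly. The sketch below is illustrative only and uses hypothetical parameter names; it simply encodes the stated rule.

def last_flushed_index(last_barrier_in_flushed_buffer, first_barrier_in_other_buffer):
    # The buffer being flushed contributes the barrier index of its last
    # set of data; the other buffer still holds data for its first barrier
    # index, so only barriers before that index can be fully flushed.
    return min(last_barrier_in_flushed_buffer, first_barrier_in_other_buffer - 1)

print(last_flushed_index(7, 3))  # 2, the flushing operation at520
print(last_flushed_index(8, 7))  # 6, the flushing operation at535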
At520, the index of the last flushed barrier command may be equal to min(7, 3−1), which may be equal to two (2). The determined indices of the barrier commands associated with sets of data being flushed from first buffer505-1may be stored in first flushing buffer510-1. For example, the first entry in first flushing buffer510-1(which may correspond to the first entry of, and the first set of data stored in, first buffer505-1) may store the index of the first barrier command (B1), the second entry in first flushing buffer510-1may store the index of the second barrier command (B2), the third entry in first flushing buffer510-1may store the index of the fourth barrier command (B4), and so on. All of the entries in first flushing buffer510-1may also store the index of the last flushed barrier command (e.g., B2). In some examples, the data stored in first flushing buffer510-1may be stored in a memory device with the corresponding sets of data (e.g., as metadata). At525, additional data may be written to the buffers. In some examples, sets of data that are associated with an unknown barrier command may be written to second buffer505-2. In some examples, sets of data that are associated with an unknown barrier command may be written to another location. At530, a flushing operation may be initiated. Thus, the association between the sets of data and the unknown barrier commands may be determined—e.g., to determine the index of the last barrier command in second buffer505-2. At535, after identifying the association between the sets of data and the unknown barrier commands, all of the data in second buffer505-2may be transferred to the memory device. Also, prior to the flushing operation or at a beginning of the flushing operation prior to transferring data from second buffer505-2, the barrier commands associated with each set of flushed data may be determined and the last flushed barrier command may be determined as similarly described with reference to520. At535, the index of the last flushed barrier command may be equal to min(8, 7−1), which may be equal to six (6). As similarly described at520, the determined indices of the barrier commands associated with sets of data being flushed from second buffer505-2and the index of the last flushed barrier command may be stored in second flushing buffer510-2. In some examples, the data stored in second flushing buffer510-2may be stored in the memory device with the corresponding sets of data (e.g., as metadata). During a recovery operation, a controller may use the barrier command indices associated with data read from the memory device and the last flushed indices to determine whether to retain or discard data as described herein and with reference toFIG.3. FIG.6shows a block diagram600of a memory system620that supports data recovery using barrier commands in accordance with examples as disclosed herein. The memory system620may be an example of aspects of a memory system as described with reference toFIGS.2through6. The memory system620, or various components thereof, may be an example of means for performing various aspects of data recovery using barrier commands as described herein. For example, the memory system620may include a buffer component625, a flushing component630, a barrier monitoring component635, a recovery component640, or any combination thereof. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses).
The buffer component625may be configured as or otherwise support a means for writing, to a buffer, data for a set of commands that is associated with a barrier command. The barrier monitoring component635may be configured as or otherwise support a means for determining, based at least in part on a portion of the data to be flushed from the buffer, whether to update an indication of a last barrier command for which all of the associated data has been written to a memory device. The flushing component630may be configured as or otherwise support a means for performing a flushing operation based at least in part on determining whether to update the indication of the last barrier command, the flushing operation comprising transferring the portion of the data from the buffer to the memory device. The recovery component640may be configured as or otherwise support a means for validating, during a recovery operation, the portion of the data stored in the memory device based at least in part on determining the barrier command is associated with the portion of the data and on updating the indication of the last barrier command to indicate the barrier command. In some examples, the flushing component630may be configured as or otherwise support a means for storing, based at least in part on the flushing operation, an index of the barrier command with the portion of the data, where the indication includes the index of the barrier command. In some examples, the buffer component625may be configured as or otherwise support a means for writing, to the buffer, second data associated with a second set of commands that is associated with a second barrier command. In some examples, the flushing component630may be configured as or otherwise support a means for writing, from the buffer to the memory device as part of the flushing operation, a portion of the second data. In some examples, the flushing component630may be configured as or otherwise support a means for storing, based at least in part on the flushing operation, an index of the second barrier command with the portion of the second data. In some examples, the flushing component630may be configured as or otherwise support a means for writing, from the buffer to the memory device as part of a second flushing operation, a second portion of the data. In some examples, the flushing component630may be configured as or otherwise support a means for storing, based at least in part on the second flushing operation, an index of the barrier command with the second portion of the data. In some examples, the barrier monitoring component635may be configured as or otherwise support a means for maintaining, based at least in part on the flushing operation, a state of the indication of the last barrier command based at least in part on determining that less than all of the data associated with the barrier command has been written to the memory device. In some examples, the barrier monitoring component635may be configured as or otherwise support a means for updating, based at least in part on the second flushing operation, the indication of the last barrier command to indicate the barrier command. In some examples, the recovery component640may be configured as or otherwise support a means for performing the recovery operation after the second flushing operation, where the portion of the data and the second portion of the data are validated during the recovery operation based at least in part on updating the indication of the last barrier command. 
In some examples, to support determining whether to update the indication of the last barrier command to indicate the barrier command, the barrier monitoring component635may be configured as or otherwise support a means for determining whether all of the data associated with the barrier command is to be written to the memory device. In some examples, the barrier monitoring component635may be configured as or otherwise support a means for updating the indication of the last barrier command to indicate the barrier command based at least in part on determining that all of the data associated with the barrier command has been written to the memory device. In some examples, to support validating the portion of the data stored in the memory device, the recovery component640may be configured as or otherwise support a means for determining that the barrier command is associated with the portion of the data based at least in part on an index of the barrier command stored with the portion of the data. In some examples, to support validating the portion of the data stored in the memory device, the recovery component640may be configured as or otherwise support a means for determining an index of the last barrier command based at least in part on the indication of the last barrier command. In some examples, to support validating the portion of the data stored in the memory device, the recovery component640may be configured as or otherwise support a means for determining that the index of the barrier command is less than or equal to the index of the last barrier command, where the portion of the data is validated based at least in part on the index of the barrier command associated with the portion of the data being less than or equal to the index of the last barrier command. In some examples, the flushing component630may be configured as or otherwise support a means for writing a second portion of the data to a second buffer as part of a second flushing operation. In some examples, the barrier monitoring component635may be configured as or otherwise support a means for maintaining, based at least in part on the flushing operation, a state of the indication of the last barrier command based at least in part on determining that less than all of the data associated with the barrier command has been written to the memory device. In some examples, the barrier monitoring component635may be configured as or otherwise support a means for updating, based at least in part on the second flushing operation, the indication of the last barrier command to indicate the barrier command. In some examples, the recovery component640may be configured as or otherwise support a means for performing the recovery operation after the second flushing operation, where the portion of the data and the second portion of the data are validated during the recovery operation based at least in part on updating the indication of the last barrier command. In some examples, the recovery operation occurs after the flushing operation. In some examples, the buffer includes a first buffer associated with single-level operations and a second buffer associated with multi-level operations, and the buffer component625may be configured as or otherwise support a means for writing, to the second buffer, second data for a second set of commands that is associated with a second barrier command and, to the first buffer, third data for a third set of commands that is associated with a third barrier command.
In some examples, the buffer includes a first buffer associated with single-level operations and a second buffer associated with multi-level operations, and the flushing component630may be configured as or otherwise support a means for performing the flushing operation for the first buffer, where the portion of the data and a portion of the third data are written to the memory device based at least in part on performing the flushing operation. In some examples, the buffer includes a first buffer associated with single-level operations and a second buffer associated with multi-level operations, and the barrier monitoring component635may be configured as or otherwise support a means for obtaining an index of the third barrier command based at least in part on the third data being flushed from the first buffer at an end of the flushing operation. In some examples, the buffer includes data that is to be flushed to different (e.g., two or more) virtual blocks, where the virtual blocks may have different characteristics—e.g., one virtual block may be associated with single-level programming operations and another virtual block may be associated with multi-level programming operations. In some examples, the virtual blocks are dedicated to data having certain characteristics. For example, the virtual blocks may be associated with data for different logic units. In some examples, the virtual blocks may be associated with different data having different stream identifiers—e.g., one virtual block may be associated with hot data and the other virtual block may be associated with cold data, one virtual block may be associated with sequential data and the other virtual block may be associated with random data, one virtual block may be associated with metadata and the other virtual block may be associated with application data, one virtual block may be associated with multi-media content, one virtual block may be associated with database files, etc. In some examples, the barrier monitoring component635may be configured as or otherwise support a means for obtaining, based at least in part on performing the flushing operation, an index of the second barrier command based at least in part on an initial entry of the second buffer storing the second data. In some examples, the barrier monitoring component635may be configured as or otherwise support a means for determining the indication of the last barrier command based at least in part on a difference between the index of the third barrier command and the index of the second barrier command, where validating the portion of the data stored in the memory device is based at least in part on determining the indication of the last barrier command. FIG.7shows a flowchart illustrating a method700that supports data recovery using barrier commands in accordance with examples as disclosed herein. The operations of method700may be implemented by a memory system or its components as described herein. For example, the operations of method700may be performed by a memory system as described with reference toFIGS.2through6. In some examples, a memory system may execute a set of instructions to control the functional elements of the device to perform the described functions. Additionally, or alternatively, the memory system may perform aspects of the described functions using special-purpose hardware. At705, the method may include writing, to a buffer, data for a set of commands that is associated with a barrier command.
The operations of705may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of705may be performed by a buffer component625as described with reference toFIG.6. At710, the method may include determining, based at least in part on a portion of the data to be flushed from the buffer, whether to update an indication of a last barrier command for which all of the associated data has been written to a memory device. The operations of710may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of710may be performed by a barrier monitoring component635as described with reference toFIG.6. At715, the method may include performing a flushing operation based at least in part on determining whether to update the indication of the last barrier command, the flushing operation comprising transferring the portion of the data from the buffer to the memory device. The operations of715may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of715may be performed by a flushing component630as described with reference toFIG.6. At720, the method may include validating, during a recovery operation, the portion of the data stored in the memory device based at least in part on determining the barrier command is associated with the portion of the data and on updating the indication of the last barrier command to indicate the barrier command. The operations of720may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of720may be performed by a recovery component640as described with reference toFIG.6. In some examples, an apparatus or electronic device as described herein may perform a method or methods, such as the method700. The apparatus or electronic device may include features, circuitry, logic, means, or instructions (e.g., a non-transitory, computer-readable medium storing instructions executable by a processor), or any combination thereof for performing the following aspects of the present disclosure: Aspect 1: The apparatus, including features, circuitry, logic, means, or instructions, or any combination thereof for writing, to a buffer, data for a set of commands that is associated with a barrier command; determining, based at least in part on a portion of the data to be flushed from the buffer, whether to update an indication of a last barrier command for which all of the associated data has been written to a memory device; performing a flushing operation based at least in part on determining whether to update the indication of the last barrier command, the flushing operation comprising transferring the portion of the data from the buffer to the memory device; and validating, during a recovery operation, the portion of the data stored in the memory device based at least in part on determining the barrier command is associated with the portion of the data and on updating the indication of the last barrier command to indicate the barrier command. Aspect 2: The apparatus of aspect 1, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for storing, based at least in part on the flushing operation, an index of the barrier command with the portion of the data, where the indication includes the index of the barrier command.
Aspect 3: The apparatus of any of aspects 1 through 2, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for writing, to the buffer, second data associated with a second set of commands that is associated with a second barrier command; writing, from the buffer to the memory device as part of the flushing operation, a portion of the second data; and storing, based at least in part on the flushing operation, an index of the second barrier command with the portion of the second data. Aspect 4: The apparatus of any of aspects 1 through 3, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for writing, from the buffer to the memory device as part of a second flushing operation, a second portion of the data and storing, based at least in part on the second flushing operation, an index of the barrier command with the second portion of the data. Aspect 5: The apparatus of aspect 4, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for maintaining, based at least in part on the flushing operation, a state of the indication of the last barrier command based at least in part on determining that less than all of the data associated with the barrier command has been written to the memory device and updating, based at least in part on the second flushing operation, the indication of the last barrier command to indicate the barrier command. Aspect 6: The apparatus of aspect 5, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for performing the recovery operation after the second flushing operation, where the portion of the data and the second portion of the data are validated during the recovery operation based at least in part on updating the indication of the last barrier command. Aspect 7: The apparatus of any of aspects 1 through 6 where determining whether to update the indication of the last barrier command to indicate the barrier command, further includes operations, features, circuitry, logic, means, or instructions, or any combination thereof for determining whether all of the data associated with the barrier command is to be written to the memory device. Aspect 8: The apparatus of any of aspects 1 through 7, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for updating the indication of the last barrier command to indicate the barrier command based at least in part on determining that all of the data associated with the barrier command has been written to the memory device. 
Aspect 9: The apparatus of any of aspects 1 through 8 where validating the portion of the data stored in the memory device, further includes operations, features, circuitry, logic, means, or instructions, or any combination thereof for determining that the barrier command is associated with the portion of the data based at least in part on an index of the barrier command stored with the portion of the data; determining an index of the last barrier command based at least in part on the indication of the last barrier command; and determining that the index of the barrier command is less than or equal to the index of the last barrier command, where the portion of the data is validated based at least in part on the index of the barrier command associated with the portion of the data being less than or equal to the index of the last barrier command. Aspect 10: The apparatus of any of aspects 1 through 9, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for writing a second portion of the data to a second buffer as part of a second flushing operation. Aspect 11: The apparatus of aspect 10, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for maintaining, based at least in part on the flushing operation, a state of the indication of the last barrier command based at least in part on determining that less than all of the data associated with the barrier command has been written to the memory device and updating, based at least in part on the second flushing operation, the indication of the last barrier command to indicate the barrier command. Aspect 12: The apparatus of aspect 11, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for performing the recovery operation after the second flushing operation, where the portion of the data and the second portion of the data are validated during the recovery operation based at least in part on updating the indication of the last barrier command. Aspect 13: The apparatus of any of aspects 1 through 12, where the recovery operation occurs after the flushing operation. Aspect 14: The apparatus of any of aspects 1 through 13 where the buffer includes a first buffer associated with single-level operations and a second buffer associated with multi-level operations, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for writing, to the second buffer, second data for a second set of commands that is associated with a second barrier command and, to the first buffer, third data for a third set of commands that is associated with a third barrier command; performing the flushing operation for the first buffer, where the portion of the data and a portion of the third data are written to the memory device based at least in part on performing the flushing operation; and obtaining an index of the third barrier command based at least in part on the third data being flushed from the first buffer at an end of the flushing operation.
Aspect 15: The apparatus of aspect 14, further including operations, features, circuitry, logic, means, or instructions, or any combination thereof for obtaining, based at least in part on performing the flushing operation, an index of the second barrier command based at least in part on an initial entry of the second buffer storing the second data and determining the indication of the last barrier command based at least in part on a difference between the index of the third barrier command and the index of the second barrier command, where validating the portion of the data stored in the memory device is based at least in part on determining the indication of the last barrier command. It should be noted that the methods described above describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Further, portions from two or more of the methods may be combined. Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Some drawings may illustrate signals as a single signal; however, the signal may represent a bus of signals, where the bus may have a variety of bit widths. The terms “electronic communication,” “conductive contact,” “connected,” and “coupled” may refer to a relationship between components that supports the flow of signals between the components. Components are considered in electronic communication with (or in conductive contact with or connected with or coupled with) one another if there is any conductive path between the components that can, at any time, support the flow of signals between the components. At any given time, the conductive path between components that are in electronic communication with each other (or in conductive contact with or connected with or coupled with) may be an open circuit or a closed circuit based on the operation of the device that includes the connected components. The conductive path between connected components may be a direct conductive path between the components or the conductive path between connected components may be an indirect conductive path that may include intermediate components, such as switches, transistors, or other components. In some examples, the flow of signals between the connected components may be interrupted for a time, for example, using one or more intermediate components such as switches or transistors. The term “coupling” refers to a condition of moving from an open-circuit relationship between components in which signals are not presently capable of being communicated between the components over a conductive path to a closed-circuit relationship between components in which signals are capable of being communicated between components over the conductive path. If a component, such as a controller, couples other components together, the component initiates a change that allows signals to flow between the other components over a conductive path that previously did not permit signals to flow. The term “isolated” refers to a relationship between components in which signals are not presently capable of flowing between the components. 
Components are isolated from each other if there is an open circuit between them. For example, two components separated by a switch that is positioned between the components are isolated from each other if the switch is open. If a controller isolates two components, the controller effects a change that prevents signals from flowing between the components using a conductive path that previously permitted signals to flow. The terms “if,” “when,” “based on,” or “based at least in part on” may be used interchangeably. In some examples, if the terms “if,” “when,” “based on,” or “based at least in part on” are used to describe a conditional action, a conditional process, or connection between portions of a process, the terms may be interchangeable. The term “in response to” may refer to one condition or action occurring at least partially, if not fully, as a result of a previous condition or action. For example, a first condition or action may be performed and a second condition or action may at least partially occur as a result of the previous condition or action occurring (whether directly after or after one or more other intermediate conditions or actions occurring after the first condition or action). Additionally, the terms “directly in response to” or “in direct response to” may refer to one condition or action occurring as a direct result of a previous condition or action. In some examples, a first condition or action may be performed and a second condition or action may occur directly as a result of the previous condition or action occurring independent of whether other conditions or actions occur. In some examples, a first condition or action may be performed and a second condition or action may occur directly as a result of the previous condition or action occurring, such that no other intermediate conditions or actions occur between the earlier condition or action and the second condition or action or a limited quantity of one or more intermediate steps or actions occur between the earlier condition or action and the second condition or action. Any condition or action described herein as being performed “based on,” “based at least in part on,” or “in response to” some other step, action, event, or condition may additionally or alternatively (e.g., in an alternative example) be performed “in direct response to” or “directly in response to” such other condition or action unless otherwise specified. The devices discussed herein, including a memory array, may be formed on a semiconductor substrate, such as silicon, germanium, silicon-germanium alloy, gallium arsenide, gallium nitride, etc. In some examples, the substrate is a semiconductor wafer. In some other examples, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOP), or epitaxial layers of semiconductor materials on another substrate. The conductivity of the substrate, or sub-regions of the substrate, may be controlled through doping using various chemical species including, but not limited to, phosphorus, boron, or arsenic. Doping may be performed during the initial formation or growth of the substrate, by ion-implantation, or by any other doping means. A switching component or a transistor discussed herein may represent a field-effect transistor (FET) and comprise a three-terminal device including a source, drain, and gate. The terminals may be connected to other electronic elements through conductive materials, e.g., metals.
The source and drain may be conductive and may comprise a heavily-doped, e.g., degenerate, semiconductor region. The source and drain may be separated by a lightly-doped semiconductor region or channel. If the channel is n-type (i.e., majority carriers are electrons), then the FET may be referred to as an n-type FET. If the channel is p-type (i.e., majority carriers are holes), then the FET may be referred to as a p-type FET. The channel may be capped by an insulating gate oxide. The channel conductivity may be controlled by applying a voltage to the gate. For example, applying a positive voltage or negative voltage to an n-type FET or a p-type FET, respectively, may result in the channel becoming conductive. A transistor may be “on” or “activated” if a voltage greater than or equal to the transistor's threshold voltage is applied to the transistor gate. The transistor may be “off” or “deactivated” if a voltage less than the transistor's threshold voltage is applied to the transistor gate. The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “exemplary” used herein means “serving as an example, instance, or illustration” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details to provide an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form to avoid obscuring the concepts of the described examples. In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a hyphen and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label. The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. For example, the various illustrative blocks and components described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine.
A processor may be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). As used herein, including in the claims, “or” as used in a list of items (for example, a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.” Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, non-transitory, computer-readable media can comprise RAM, ROM, electrically erasable programmable read-only memory (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media. The description herein is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one embodiment may be beneficially utilized on other embodiments without specific recitation. DETAILED DESCRIPTION In the following, reference is made to embodiments of the disclosure. However, it should be understood that the disclosure is not limited to specific described embodiments. Instead, any combination of the following features and elements, whether related to different embodiments or not, is contemplated to implement and practice the disclosure. Furthermore, although embodiments of the disclosure may achieve advantages over other possible solutions and/or over the prior art, whether or not a particular advantage is achieved by a given embodiment is not limiting of the disclosure. Thus, the following aspects, features, embodiments and advantages are merely illustrative and are not considered elements or limitations of the appended claims except where explicitly recited in a claim(s). Likewise, reference to “the disclosure” shall not be construed as a generalization of any inventive subject matter disclosed herein and shall not be considered to be an element or limitation of the appended claims except where explicitly recited in a claim(s). The present disclosure generally relates to aborting a command efficiently using the host memory buffer (HMB). A data storage device, which may be DRAM-less, includes one or more memory devices and a controller coupled to the one or more memory devices. The controller is configured to receive a command from a host device, begin execution of the command, and receive an abort request command for the command. The command includes pointers that direct the data storage device to various locations on the data storage device where relevant content is located. Once the abort command is received, the content of the host pointers stored in the data storage device RAM is changed to point to the HMB. The data storage device then waits until any already started transactions over the interface bus that are associated with the command have been completed. Thereafter, a failure completion command is posted to the host device. FIG.1Ais a schematic block diagram illustrating a storage system100in which data storage device106may function as a storage device for a host device104, according to certain embodiments. For instance, the host device104may utilize a non-volatile memory (NVM)110included in data storage device106to store and retrieve data. The host device104comprises a host DRAM138, where a portion of the host DRAM138is allocated as a host memory buffer (HMB)140. The HMB140may be used by the data storage device106as an additional working area or an additional storage area. The HMB140may be inaccessible by the host device in some examples. In some examples, the storage system100may include a plurality of storage devices, such as the data storage device106, which may operate as a storage array. For instance, the storage system100may include a plurality of data storage devices106configured as a redundant array of inexpensive/independent disks (RAID) that collectively function as a mass storage device for the host device104. The host device104may store and/or retrieve data to and/or from one or more storage devices, such as the data storage device106.
As illustrated inFIG.1A, the host device104may communicate with the data storage device106via an interface114. The host device104may comprise any of a wide range of devices, including computer servers, network attached storage (NAS) units, desktop computers, notebook (i.e., laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called “smart” phones, so-called “smart” pads, televisions, cameras, display devices, digital media players, video gaming consoles, video streaming devices, or other devices capable of sending or receiving data from a data storage device. The data storage device106includes a controller108, NVM110, a power supply111, volatile memory112, an interface114, and a write buffer116. In some examples, the data storage device106may include additional components not shown inFIG.1Afor the sake of clarity. For example, the data storage device106may include a printed circuit board (PCB) to which components of the data storage device106are mechanically attached and which includes electrically conductive traces that electrically interconnect components of the data storage device106, or the like. In some examples, the physical dimensions and connector configurations of the data storage device106may conform to one or more standard form factors. Some example standard form factors include, but are not limited to, 3.5″ data storage device (e.g., an HDD or SSD), 2.5″ data storage device, 1.8″ data storage device, peripheral component interconnect (PCI), PCI-extended (PCI-X), PCI Express (PCIe) (e.g., PCIe x1, x4, x8, x16, PCIe Mini Card, MiniPCI, etc.). In some examples, the data storage device106may be directly coupled (e.g., directly soldered) to a motherboard of the host device104. The interface114of the data storage device106may include one or both of a data bus for exchanging data with the host device104and a control bus for exchanging commands with the host device104. The interface114may operate in accordance with any suitable protocol. For example, the interface114may operate in accordance with one or more of the following protocols: advanced technology attachment (ATA) (e.g., serial-ATA (SATA) and parallel-ATA (PATA)), Fibre Channel Protocol (FCP), small computer system interface (SCSI), serially attached SCSI (SAS), PCI, PCIe, non-volatile memory express (NVMe), OpenCAPI, GenZ, Cache Coherent Interconnect for Accelerators (CCIX), Open Channel SSD (OCSSD), or the like. The electrical connection of the interface114(e.g., the data bus, the control bus, or both) is electrically connected to the controller108, providing electrical connection between the host device104and the controller108, allowing data to be exchanged between the host device104and the controller108. In some examples, the electrical connection of the interface114may also permit the data storage device106to receive power from the host device104. For example, as illustrated inFIG.1A, the power supply111may receive power from the host device104via the interface114. The NVM110may include a plurality of memory devices or memory units. NVM110may be configured to store and/or retrieve data. For instance, a memory unit of NVM110may receive data and a message from the controller108that instructs the memory unit to store the data. Similarly, the memory unit of NVM110may receive a message from the controller108that instructs the memory unit to retrieve data. In some examples, each of the memory units may be referred to as a die.
In some examples, a single physical chip may include a plurality of dies (i.e., a plurality of memory units). In some examples, each memory unit may be configured to store relatively large amounts of data (e.g., 128 MB, 256 MB, 512 MB, 1 GB, 2 GB, 4 GB, 8 GB, 16 GB, 32 GB, 64 GB, 128 GB, 256 GB, 512 GB, 1 TB, etc.). In some examples, each memory unit of NVM110may include any type of non-volatile memory devices, such as flash memory devices, phase-change memory (PCM) devices, resistive random-access memory (ReRAM) devices, magnetoresistive random-access memory (MRAM) devices, ferroelectric random-access memory (F-RAM), holographic memory devices, and any other type of non-volatile memory devices. The NVM110may comprise a plurality of flash memory devices or memory units. NVM Flash memory devices may include NAND or NOR based flash memory devices and may store data based on a charge contained in a floating gate of a transistor for each flash memory cell. In NVM flash memory devices, the flash memory device may be divided into a plurality of dies, where each die of the plurality of dies includes a plurality of blocks, which may be further divided into a plurality of pages. Each block of the plurality of blocks within a particular memory device may include a plurality of NVM cells. Rows of NVM cells may be electrically connected using a word line to define a page of a plurality of pages. Respective cells in each of the plurality of pages may be electrically connected to respective bit lines. Furthermore, NVM flash memory devices may be 2D or 3D devices and may be single level cell (SLC), multi-level cell (MLC), triple level cell (TLC), or quad level cell (QLC). The controller108may write data to and read data from NVM flash memory devices at the page level and erase data from NVM flash memory devices at the block level. The data storage device106includes a power supply111, which may provide power to one or more components of the data storage device106. When operating in a standard mode, the power supply111may provide power to one or more components using power provided by an external device, such as the host device104. For instance, the power supply111may provide power to the one or more components using power received from the host device104via the interface114. In some examples, the power supply111may include one or more power storage components configured to provide power to the one or more components when operating in a shutdown mode, such as where power ceases to be received from the external device. In this way, the power supply111may function as an onboard backup power source. Some examples of the one or more power storage components include, but are not limited to, capacitors, supercapacitors, batteries, and the like. In some examples, the amount of power that may be stored by the one or more power storage components may be a function of the cost and/or the size (e.g., area/volume) of the one or more power storage components. In other words, as the amount of power stored by the one or more power storage components increases, the cost and/or the size of the one or more power storage components also increases. The data storage device106also includes volatile memory112, which may be used by controller108to store information. Volatile memory112may include one or more volatile memory devices. In some examples, the controller108may use volatile memory112as a cache. 
For instance, the controller 108 may store cached information in the volatile memory 112 until the cached information is written to the non-volatile memory 110. As illustrated in FIG. 1A, the volatile memory 112 may consume power received from the power supply 111. Examples of the volatile memory 112 include, but are not limited to, random-access memory (RAM), static RAM (SRAM), flip-flops, and latches.

The data storage device 106 includes a controller 108, which may include the volatile memory 112. For example, the controller 108 may include SRAM. Furthermore, the controller 108 may manage one or more operations of the data storage device 106. For instance, the controller 108 may manage the reading of data from and/or the writing of data to the NVM 110. In some embodiments, when the data storage device 106 receives a write command from the host device 104, the controller 108 may initiate a data storage command to store data to the NVM 110 and monitor the progress of the data storage command. The controller 108 may determine at least one operational characteristic of the storage system 100 and store the at least one operational characteristic to the NVM 110. In some embodiments, when the data storage device 106 receives a write command from the host device 104, the controller 108 temporarily stores the data associated with the write command in the internal memory or write buffer 116 before sending the data to the NVM 110. In some other embodiments, the HMB 140 may be utilized.

FIG. 1B is a schematic block diagram illustrating a storage system 150 in which a data storage device 152 may function as a storage device for a host device 104, according to certain embodiments. For simplification purposes, common elements between the storage system 100 of FIG. 1A and the storage system 150 may be referenced by the same reference numeral. The storage system 150 is similar to the storage system shown in FIG. 1A. However, in the storage system 150, the data storage device 152 includes dynamic RAM (DRAM) 154, whereas in the storage system 100, the data storage device 106 is DRAM-less. In embodiments described herein where the DRAM 154 is present, such as in the storage system 150, data and pointers corresponding to one or more commands may be temporarily stored in the DRAM 154 prior to being processed. However, in embodiments described herein where the DRAM 154 is not present, such as in the storage system 100 where the data storage device 106 is DRAM-less, data and pointers corresponding to one or more commands may be temporarily stored in the HMB 140 or in other volatile memory, such as SRAM. Furthermore, examples of the volatile memory 112 may further include, but are not limited to, DRAM and synchronous dynamic RAM (SDRAM) (e.g., DDR1, DDR2, DDR3, DDR3L, LPDDR3, DDR4, LPDDR4, and the like).

FIG. 2 is a schematic illustration of an abort request, according to certain embodiments. Aspects of FIG. 2 may be similar to the storage system 100 of FIG. 1A. For example, the host 220 may be the host device 104, the controller 202 may be the controller 108, and the NVM 222 may be the NVM 110. During operation of a data storage device, such as the data storage device 106 of FIG. 1A, a pending command, such as a host generated read command or a host generated write command, may be aborted by either the host 220 or the controller 202. An abort command may be issued by the host 220 or by the controller 202. For example, the abort command may be generated by a main processor 204 of the controller 202, where the main processor 204 sends the abort command to one or more processors 206a-206n of the controller 202.
When the abort command is received by the one or more processors 206a-206n, the one or more processors may either terminate all tasks associated with the abort command by scanning the pending commands, or wait to terminate all pending commands not yet started, where the pending commands that have started are allowed to complete prior to terminating all other pending commands. After terminating the relevant pending commands, a completion message is issued to the host 220.

Regarding FIG. 2, the main processor 204 issues an abort command request to the one or more processors 206a-206n. The one or more processors 206a-206n utilize the hardware (HW) accelerators 208 to scan each pending command and terminate the relevant pending commands. After terminating the relevant pending commands, the one or more processors 206a-206n post a completion message, which may be a failure completion message if the abort command is initiated by the data storage device, to the data path 210, where the data path 210 transmits the completion message to the host 220. In regular operation, the data path 210 may be utilized to transfer data to and from the NVM 222 by utilizing the direct memory access (DMA) modules 212, encode/decode error correction code (ECC) using an ECC engine 218, generate security protocols by the security engine 214, and manage the storage of the data by the RAID module 216.

The abort command operation may have a high latency before posting the completion message or the failure completion message to the host 220. Because of the high latency, the buffers and the resources of the data storage device may be utilized inefficiently. Furthermore, certain cases of abort command operations may have to be performed separately or have a separate procedure to complete those abort command operations.

FIG. 3 is a flowchart illustrating an abort request process 300, according to certain embodiments. At block 302, an abort request or an abort command is received by the one or more processors, such as the one or more processors 206a-206n of FIG. 2, where the one or more processors may be a component of the controller, such as the controller 202 of FIG. 2. In some embodiments, the abort request may be generated by the host, such as the host 220, and transferred to the controller via a data bus. In other embodiments, the abort request may be generated by the main processor, such as the main processor 204 of FIG. 2, where the main processor sends the abort request to the relevant processor of the one or more processors. At block 304, the controller modifies the content of the buffer pointers that reside in an internal copy of the command. In embodiments where the data storage device includes a DRAM, such as the data storage device 152 including the DRAM 154 of FIG. 1B, the internal copy of the command may be the command stored in the DRAM 154 of the data storage device 152. However, in embodiments where the data storage device is DRAM-less, such as the data storage device 106 of FIG. 1A, the internal copy of the command may be stored in an HMB, such as the HMB 140 of FIG. 1A, or in an internal cache, such as the SRAM of the controller 108. The buffer pointers may be pointing to the HMB, such as the HMB 140 of FIG. 1A. In some embodiments, the HMB includes two 4 KB HMB buffers. The previously listed values are not intended to be limiting, but to provide an example of a possible embodiment. At block 306, the controller determines if all the current transfers are complete. The current transfers may be commands that have been executed, but not yet completed.
If the current transfers are not yet complete, then the controller waits for the current transfers to be completed. However, if the current transfers are completed at block 306, then at block 308, the controller posts a completion message or a failure completion message to the host device.

FIG. 4 is a timing diagram of processing an abort request, according to certain embodiments. Aspects of FIG. 4 may be similar to those described in FIG. 3. At time 1, the host device, such as the host device 104 of FIG. 1A, issues a command to the data storage device, such as the data storage device 106 of FIG. 1A. The command may be a read command, a write command, or the like. At some time after the host issues the command to the data storage device, due to transfer latencies and the like, such as at time 2, the controller, such as the controller 202 of FIG. 2, initiates the data transfer operation. While the data transfer operation is executed, the data storage device receives an abort command at time 3. In one embodiment, the abort command may be generated by the host device. In another embodiment, the abort command may be generated by the data storage device, where the abort command is generated by the controller or the main processor, such as the main processor 204 of FIG. 2. At time 4, the data storage device modifies the one or more pointers associated with the abort command that reside in the data storage device. In embodiments where the data storage device includes a DRAM, such as the data storage device 152 of FIG. 1B, the one or more pointers may be stored in the DRAM 154 and the controller 108 may modify the one or more pointers associated with the abort command stored in the DRAM 154. However, in embodiments where the data storage device does not include the DRAM 154, such as the data storage device 106 of FIG. 1A, the one or more pointers may be temporarily stored in SRAM or stored in the HMB and modified in either the SRAM or the HMB. At time 5, the data storage device sends a failure completion message to the host device, which occurs after the data transfer operation at time 2. At time 6, the data transfer operation has stopped and the data storage device drains a set of data associated with the abort request command to the HMB, such as the HMB 140 of FIG. 1A. In some embodiments, the draining of the set of data begins prior to posting the failure completion message to the host. In other embodiments, the failure completion message is posted before the data transfer to the HMB operation is aborted. In embodiments where the data storage device includes a DRAM, such as the data storage device 152 of FIG. 1B, the set of data may be drained to the DRAM, such as the DRAM 154 of FIG. 1B. A short sketch of this abort-handling sequence is provided below, after the introduction of the PRP list of FIG. 5.

FIG. 5 is a schematic illustration of a PRP list described in the NVMe standard, according to certain embodiments. The command 502 includes a plurality of physical region page (PRP) pointers, such as a first PRP1 504 and a second PRP2 506, where each PRP pointer points to a buffer of a plurality of buffers. The plurality of buffers may be a portion of the HMB, such as the HMB 140 of FIG. 1A. Furthermore, in FIG. 5, each page, page0 518, page1 520, page2 522, and page3 524, represents a different buffer. In one example, each of the buffers may have a size aligned to the size of a command or a dummy command, such as about 4 KB. A dummy command may be a data storage device generated command to set parameters of the size of the buffers in the HMB. The first PRP1 504 and the second PRP2 506 include an offset of "xx", where the offset is a pointer offset from a location, such as a header.
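The abort-handling sequence of FIG. 3 and FIG. 4 may be illustrated by the following minimal sketch in C. It is an illustration only, not the actual firmware of the embodiments: the type and function names (nvme_cmd_ctx, wait_one_transfer, post_completion) and the HMB addresses are hypothetical, and hardware interactions are reduced to stubs.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical internal copy of a host command (block 304 of FIG. 3). */
    struct nvme_cmd_ctx {
        uint64_t prp1;            /* first buffer pointer (PRP1) */
        uint64_t prp2;            /* second buffer pointer (PRP2) */
        uint32_t inflight_xfers;  /* transfers issued but not yet complete */
    };

    /* Hypothetical addresses of the two 4 KB HMB buffers used for draining. */
    static const uint64_t HMB_DRAIN_BUF = 0x100000;  /* drain target        */
    static const uint64_t HMB_PTR_LIST  = 0x101000;  /* buffer pointer list */

    /* Stand-ins for firmware services; real code would talk to hardware. */
    static void wait_one_transfer(struct nvme_cmd_ctx *cmd) { cmd->inflight_xfers--; }
    static void post_completion(uint16_t id, bool failure)
    {
        printf("cmd %u: %s completion posted\n",
               (unsigned)id, failure ? "failure" : "success");
    }

    /* Abort flow sketched from blocks 302-308 of FIG. 3. */
    static void handle_abort(struct nvme_cmd_ctx *cmd, uint16_t cmd_id)
    {
        /* Block 304: redirect the internal command copy so any further
         * data transfer lands in the HMB drain buffers, not the original
         * host buffers. */
        cmd->prp1 = HMB_DRAIN_BUF;
        cmd->prp2 = HMB_PTR_LIST;

        /* Block 306: transfers already issued with the original pointers
         * are allowed to run to completion before signalling the host. */
        while (cmd->inflight_xfers > 0)
            wait_one_transfer(cmd);

        /* Block 308: post the (failure) completion message to the host. */
        post_completion(cmd_id, true);
    }

    int main(void)
    {
        struct nvme_cmd_ctx cmd = { .prp1 = 0xA000, .prp2 = 0xB000,
                                    .inflight_xfers = 3 };
        handle_abort(&cmd, 7);
        return 0;
    }

The key design point reflected here is that the completion is deferred, not the transfers: in-flight transfers finish against the original buffers while everything after the pointer swap is harmlessly diverted.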
Each PRP pointer may either be a pointer pointing to a buffer or a pointer pointing to a list of entries. For example, the first PRP1 504 includes a first pointer 526 that points to the first page0 518. The second PRP2 506 includes a second pointer 528 that points to the first entry, PRP entry0 510, of the PRP list 508. The PRP list 508 has an offset of 0, such that the PRP list 508 is aligned with the size of the buffer. For example, a first PRP entry0 510 includes a third pointer 530 pointing to a second page1 520, a second PRP entry1 512 includes a fourth pointer 532 pointing to a third page2 522, and a third PRP entry2 514 includes a fifth pointer 534 pointing to a fourth page3 524. The last entry of the PRP list 508 may include a pointer pointing to a subsequent or new PRP list.

FIG. 6 is a schematic illustration of two host memory buffers (HMBs) used for command draining, according to certain embodiments. The NVMe command 602 is a stored copy of the commands received by the controller, where the NVMe command 602 may be stored in a volatile memory or a non-volatile memory of the data storage device. A first PRP1 604a may be the first PRP1 504 and a second PRP2 604b may be the second PRP2 506 of FIG. 5. The value of the first PRP1 604a is overwritten to point to a first HMB buffer 606a. The second PRP2 604b points to a second HMB buffer 606b. The HMB, such as the HMB 140, includes the first HMB buffer 606a and the second HMB buffer 606b. The first HMB buffer 606a and the second HMB buffer 606b may each have a size of about 4 KB. The first HMB buffer 606a may be utilized as a drain buffer, to which the data associated with the abort command will be drained or transferred in both read operations and write operations. The second HMB buffer 606b is a list of a plurality of buffer pointers 608a-608n. The second HMB buffer 606b may be initialized by the controller, such as the controller 202 of FIG. 2, of the data storage device, such as the data storage device 106 of FIG. 1A, at the initialization phase. The initialization phase may be during the wake up operations of the data storage device, such as when power is supplied to the data storage device. Each pointer of the plurality of buffer pointers 608a-608n of the second HMB buffer 606b points to the first HMB buffer 606a. Furthermore, rather than the last pointer 608n pointing to a subsequent or next buffer list, the last pointer 608n points to the first pointer 608a of the same HMB buffer. By pointing each pointer of the second HMB buffer 606b to the first HMB buffer 606a, pointing the last pointer 608n of the second HMB buffer 606b to the first pointer 608a, and pointing the first PRP1 to the first HMB buffer 606a, the relevant data associated with the read operations or the write operations will be drained to the first HMB buffer 606a when an abort command is received. A sketch of this drain-buffer initialization is provided below, after the introduction of the ACR flow of FIG. 7.

FIG. 7 is a flowchart 700 illustrating advanced command retry (ACR), according to certain embodiments. When the data storage device receives a command that includes an ACR request, one or more HMBs may be allocated to hold the set of data of the command. When a failed command has the ACR, the host, such as the host device 104 of FIG. 1A, is notified of the failed command, and the host may re-queue the failed command in the command buffer after a delay, such as about 10 seconds. The delay times may be published by the data storage device, such as the data storage device 106 of FIG. 1A, via an identify controller command.
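The FIG. 6 pointer layout can be made concrete with the following minimal sketch in C. It assumes, purely for illustration, a 4 KB page size and 64-bit PRP entries as in NVMe; the hmb_region model (plain local memory standing in for host-resident HMB pages) and all names are hypothetical.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define HMB_PAGE_SIZE    4096u
    #define ENTRIES_PER_PAGE (HMB_PAGE_SIZE / sizeof(uint64_t))  /* 512 */

    /* Hypothetical model of the HMB region: page 0 is the drain buffer
     * (first HMB buffer 606a), page 1 holds the pointer list (second
     * HMB buffer 606b). Real firmware would use host memory addresses. */
    static uint8_t hmb_region[2][HMB_PAGE_SIZE];

    /* One-time setup at the initialization phase, per FIG. 6. */
    static void init_drain_buffers(void)
    {
        uint64_t drain_addr = (uint64_t)(uintptr_t)&hmb_region[0][0];
        uint64_t *list      = (uint64_t *)&hmb_region[1][0];

        /* Every entry in the list points at the single drain buffer... */
        for (size_t i = 0; i + 1 < ENTRIES_PER_PAGE; i++)
            list[i] = drain_addr;

        /* ...except the last entry, which, instead of chaining to a new
         * PRP list, points back into the same list, so a PRP walk never
         * leaves these two pages. */
        list[ENTRIES_PER_PAGE - 1] = (uint64_t)(uintptr_t)&list[0];
    }

    /* On abort, the internal command copy is redirected (FIG. 3/FIG. 4):
     * PRP1 -> drain buffer, PRP2 -> pointer list. */
    static void redirect_command(uint64_t *prp1, uint64_t *prp2)
    {
        *prp1 = (uint64_t)(uintptr_t)&hmb_region[0][0];
        *prp2 = (uint64_t)(uintptr_t)&hmb_region[1][0];
    }

    int main(void)
    {
        uint64_t prp1 = 0, prp2 = 0;
        init_drain_buffers();
        redirect_command(&prp1, &prp2);
        printf("PRP1 -> %#llx, PRP2 -> %#llx\n",
               (unsigned long long)prp1, (unsigned long long)prp2);
        return 0;
    }

The effect is that a transfer of any length is absorbed by just two 4 KB host pages: every data page lands on the same drain buffer, and the self-referencing last entry keeps the list walk circular.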
Rather than re-queueing the data associated with the failed command in the host buffer, such as the host DRAM 138 of FIG. 1A, outside of the HMB, such as the HMB 140 of FIG. 1A, the data associated with the failed command is queued by the data storage device in the HMB. The flowchart 700 is initiated at block 702 when the ACR request for a command is received. At block 704, the HMB buffers are allocated. The HMB buffers include a first HMB buffer, such as the first HMB buffer 606a of FIG. 6, and a second HMB buffer, such as the second HMB buffer 606b of FIG. 6, where the first HMB buffer is the drain buffer and the second HMB buffer is a list of buffer pointers pointing to the first HMB buffer. At block 706, the internal versions of the pointers (i.e., PRP1 and PRP2) are modified to point to the allocated HMB buffers. For example, the PRP1 pointer may point to the first HMB buffer and the PRP2 pointer may point to the second HMB buffer. At block 708, the controller determines if all the current transfers for commands that have already started with an associated target host buffer are completed. If the current transfers are not yet completed, then the controller waits for the commands to be completed. At block 710, after all the current commands are completed, the controller posts a failure completion message to the host with the ACR indication for the command that has failed. At block 712, the one or more HMBs are accessed, such that the data of the failed command is transferred to a location in the one or more HMBs. A representation of the series of transfers is issued on the interface of the host device, where the series of transfers is stored in the one or more HMBs. When the HMB buffers are accessed, the data associated with the failed command is transferred to the first HMB buffer (i.e., the drain HMB buffer). At block 714, the host device re-queues the command to the data storage device, where the re-queued command is the original command that failed. At block 716, the data associated with the re-queued command is copied from the relevant location in the HMB, or in some embodiments, the one or more HMBs, to a host buffer. The re-queued command is executed by the controller utilizing the data stored in the host buffer. A sketch of this ACR flow is provided below, following the embodiment that summarizes it.

By changing the content of command pointers, abort commands can be processed much more efficiently, leading to improved storage device performance. Aborting a command in this simple way, without the latency of the complex, high latency flows that exist today, improves efficiency. Additionally, using the HMB as a cache buffer for an ACR failed command speeds up processing.

In one embodiment, a data storage device comprises: one or more memory devices; and a controller coupled to the one or more memory devices, wherein the data storage device is DRAM-less, and wherein the controller is configured to: receive an original command from a host device; begin execution of the original command; receive an abort request command to abort the original command, wherein the abort request command is either received from the host device or generated by the data storage device; modify one or more pointers of the original command that reside in a host memory buffer (HMB); drain a set of data associated with the original command to the HMB; and return a failure completion message to the host device, wherein the failure completion message is returned to the host device after already issued data transfers using original command pointers are completed.
The controller is further configured to continue to process data transfers associated with the original command after receiving the abort request. The processing of data transfers continues after completion of the modifying of the one or more pointers. Draining the set of data occurs after the failure completion message is returned, begins prior to the failure completion message being returned, or a combination thereof. The failure completion message is delivered while data transfer associated with the original command is still processing, wherein the data transfers occurring after the failure completion message is delivered utilize the modified one or more pointers. Draining the set of data comprises pointing each pointer to a drain buffer. A last pointer points to a same buffer list in which the last pointer resides.

In another embodiment, a data storage device comprises: one or more memory devices; and a controller coupled to the one or more memory devices, wherein the data storage device is DRAM-less, and wherein the controller is configured to: receive an original command from a host device; determine to complete the original command with advanced command retry (ACR); allocate one or more host memory buffers (HMBs) for holding a set of data associated with the original command; return a completion message to the host device, wherein the completion message requests the host device to re-try the original command; execute the original command while transferring data to the allocated one or more buffers within the HMBs; receive a reissued original command from the host device; and copy data for the reissued original command from the allocated one or more buffers within the HMBs. When the controller returns the completion message to the host device: a representation of the data is issued on an interface of the host device; and the data is stored in the HMBs, wherein the HMBs are not used for draining data, and wherein the HMBs comprise a plurality of buffers of sufficient size to maintain the data to ensure the data storage device can copy the data from the HMB to the host device upon receiving a command from the host device to retrieve the data. The controller is further configured to receive a re-issue command of the original command from the host device. The controller is further configured to copy data from the one or more HMBs. The copying comprises copying the series of transfers from the one or more HMBs to a host buffer for the re-issued command. The controller is configured to wait for completion of current transfers associated with the original command that have already started prior to returning the completion message, wherein after the controller returns the completion message the data storage device does not access the original buffers of the original command, and wherein after the controller returns the completion message the data storage device can access the one or more HMBs. During the waiting and prior to returning the completion message, the data storage device may access the original buffers and the one or more HMBs in parallel.
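The ACR handling summarized in the preceding embodiment (blocks 702-716 of FIG. 7) may be sketched as follows, again only as an illustration: the acr_ctx structure, the function names, and the reduction of HMB traffic to plain memory copies are hypothetical simplifications, not the embodiments' actual implementation.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BUF_SIZE 4096u

    /* Hypothetical per-command ACR context. */
    struct acr_ctx {
        uint8_t hmb_drain[BUF_SIZE];  /* first HMB buffer: drain target (704) */
        uint8_t host_buf[BUF_SIZE];   /* host buffer for the re-queued command */
    };

    /* Block 712: the failed command's data is transferred into the HMB
     * drain buffer instead of the original host buffers. */
    static void drain_to_hmb(struct acr_ctx *ctx, const uint8_t *data, size_t len)
    {
        memcpy(ctx->hmb_drain, data, len < BUF_SIZE ? len : BUF_SIZE);
    }

    /* Blocks 714-716: when the host re-queues the original command, the
     * data is copied from the HMB back to a host buffer, and the command
     * is executed from there. */
    static void serve_requeued_command(struct acr_ctx *ctx)
    {
        memcpy(ctx->host_buf, ctx->hmb_drain, BUF_SIZE);
    }

    int main(void)
    {
        static struct acr_ctx ctx;   /* zero-initialized */
        const uint8_t payload[] = "data of the failed command";

        /* Block 710 would post the failure completion with the ACR
         * indication here, after in-flight transfers to the original
         * host buffers have finished. */
        drain_to_hmb(&ctx, payload, sizeof(payload));

        /* Block 714: the host re-queues the original command after the
         * published delay; block 716: serve it from the cached copy. */
        serve_requeued_command(&ctx);
        printf("re-queued command sees: %s\n", ctx.host_buf);
        return 0;
    }

The point of the sketch is the caching role of the HMB: the retried command does not need to be re-executed against the NVM, because its data already sits in host memory and only has to be copied to the host buffer.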
In another embodiment, a data storage device comprises: one or more memory means; and a controller coupled to the one or more memory means, wherein the data storage device is DRAM-less, and wherein the controller is configured to: receive an abort command request from a host device; allocate a first host memory buffer (HMB) and a second HMB for holding a series of data associated with the abort command request, wherein: the first HMB is configured to drain the series of data associated with the abort command request; and the second HMB is configured to point to a drain buffer; and return a completion message to the host device. The first HMB is the drain buffer. Data associated with the abort command are drained to the drain buffer in read and write operations. The second HMB is configured to contain a buffer pointer list. All but a last pointer in the buffer pointer list point to the drain buffer. The last pointer in the buffer pointer list points to a different pointer in the buffer pointer list.

While the foregoing is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.